Compare commits

...

703 Commits

Author SHA1 Message Date
Stefan Prodan 27daa2ca46
Merge pull request #1803 from alex-souslik-hs/main
loadtester: add pod security context
2025-04-21 09:23:30 +02:00
Alex ed38a79545 add pod security context
Signed-off-by: Alex <alex.souslik@workday.com>
2025-04-20 19:15:37 +03:00
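For reference, the change above adds a pod security context to the loadtester Helm chart. A minimal sketch of what such chart values might look like, assuming a `podSecurityContext` key (the actual key name is not shown in this log):

```yaml
# Hypothetical loadtester Helm values; the podSecurityContext key name is an assumption.
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 10001
  fsGroup: 10001
```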
Sanskar Jaiswal 6f165a10de
Merge pull request #1788 from fluxcd/release-v1.41.0
Release v1.41.0
2025-04-02 12:40:43 +01:00
Sanskar Jaiswal 89c1ddee79
Release v1.41.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-04-02 12:32:23 +01:00
Sanskar Jaiswal 1b8e7653d3
Merge pull request #1787 from fluxcd/update-deps
update Go dependencies
2025-03-31 18:09:11 +05:30
Stefan Prodan 98f8514258
Merge pull request #1786 from fluxcd/fix-session-affinity-e2e
update webhook host in session affinity e2e test
2025-03-30 14:04:02 +01:00
Sanskar Jaiswal d9c8a09d3e
update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-03-29 18:00:26 +00:00
Sanskar Jaiswal 2ac22f831f
update webhook host in session affinity e2e test
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-03-28 12:30:23 +05:30
Sanskar Jaiswal e0de40dcb0
Merge pull request #1757 from otternq/metric-provider-headers
allow headers to be added to prometheus requests
2025-03-28 12:26:38 +05:30
Nick Otter 8f9bb5b1bc
allow headers to be added to prometheus requests
Signed-off-by: Nick Otter <otternq@gmail.com>
Co-authored-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-03-27 01:03:23 +05:30
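PR #1757 above allows custom headers on the Prometheus requests made by the metrics provider. A hedged sketch of a MetricTemplate using such a provider; only the feature itself comes from the commit message, the `headers` field shape shown here is an assumption:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: error-rate
  namespace: test
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090
    # Assumed shape: extra headers sent with every query, e.g. for multi-tenant setups.
    headers:
      X-Scope-OrgID: tenant-1
  query: |
    rate(http_requests_total{status=~"5.."}[1m])
```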
Sanskar Jaiswal f21bc1de3e
Merge pull request #1783 from fluxcd/session-affinity-primary-cookie
feat: add support for primary backend cookies in session affinity (Gateway API)
2025-03-24 16:02:30 +05:30
Sanskar Jaiswal 1fc7ac5847
add docs for primary stickiness for session affinity
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>

2025-03-24 13:08:29 +05:30
Sanskar Jaiswal 1dc270c2e6
feat: add support for primary backend cookies in session affinity
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-03-24 13:08:27 +05:30
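PR #1783 above adds session-affinity cookies for the primary backend in Gateway API canaries. A hedged sketch of the analysis section; `cookieName` matches Flagger's existing session-affinity API, while `primaryCookieName` is an assumed name for the new field:

```yaml
analysis:
  interval: 1m
  iterations: 10
  sessionAffinity:
    cookieName: flagger-cookie        # existing canary affinity cookie
    primaryCookieName: primary-cookie # assumed field name for the primary backend cookie
```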
Stefan Prodan 50d1331ba6
Merge pull request #1785 from fluxcd/loadtester-0.35.0
Release loadtester 0.35.0
2025-03-23 10:14:29 +02:00
Stefan Prodan bc78156535
Release loadtester 0.35.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-03-23 09:31:44 +02:00
Stefan Prodan 0df8af8d04
Merge pull request #1771 from fluxcd/dependabot/github_actions/ci-8077bd6f50
build(deps): bump the ci group across 1 directory with 2 updates
2025-03-23 09:29:16 +02:00
dependabot[bot] 633f639383
build(deps): bump the ci group across 1 directory with 2 updates
Bumps the ci group with 2 updates in the / directory: [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) and [slsa-framework/slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator).


Updates `sigstore/cosign-installer` from 3.7.0 to 3.8.1
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.7.0...v3.8.1)

Updates `slsa-framework/slsa-github-generator` from 2.0.0 to 2.1.0
- [Release notes](https://github.com/slsa-framework/slsa-github-generator/releases)
- [Changelog](https://github.com/slsa-framework/slsa-github-generator/blob/main/CHANGELOG.md)
- [Commits](https://github.com/slsa-framework/slsa-github-generator/compare/v2.0.0...v2.1.0)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
- dependency-name: slsa-framework/slsa-github-generator
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-23 07:19:45 +00:00
Stefan Prodan d03cc73386
Merge pull request #1784 from fluxcd/go-1.24
Build with Go 1.24
2025-03-23 09:17:55 +02:00
Stefan Prodan eaf5bb992c
Ensure constant format strings in fmt calls
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-03-23 08:59:45 +02:00
Stefan Prodan 22618ccb11
Build with Go 1.24
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2025-03-23 08:50:59 +02:00
Stefan Prodan f5af225ffc
Merge pull request #1776 from fluxcd/dependabot/go_modules/golang.org/x/net-0.36.0
build(deps): bump golang.org/x/net from 0.33.0 to 0.36.0
2025-03-23 08:45:20 +02:00
dependabot[bot] 40a34199fe
build(deps): bump golang.org/x/net from 0.33.0 to 0.36.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.33.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.33.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-23 06:40:43 +00:00
Stefan Prodan d7357a7377
Merge pull request #1682 from tombanksme/knative-support
Add support for Knative
2025-03-23 08:39:29 +02:00
Sanskar Jaiswal 12ee6cbc86
add docs for knative
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-03-22 01:02:32 +05:30
Thomas Banks f1c8807c0d
feat: add knative integration
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
Co-authored-by: Thomas Banks
2025-03-22 01:02:30 +05:30
Stefan Prodan 8276bfa5a5
Merge pull request #1763 from easimon/fix/datadog-provider
fix: do not evaluate incomplete samples from datadog
2025-02-22 10:36:22 +02:00
Markus Dobel 2c4b7a69a2 fix: do not evaluate incomplete samples from datadog
Signed-off-by: Markus Dobel <markus.dobel@epicompany.eu>
2025-02-12 18:12:38 +01:00
Sanskar Jaiswal 660ed7486b
Merge pull request #1677 from jdgeisler/keda-scaled-object-hpa-migration
Prevent primary hpa collision for keda scaled objects when migrating from an hpa
2025-02-11 22:49:54 +05:30
James Geisler 21acd7e3d6 Allow the migration from an HPA to a KEDA ScaledObject
Signed-off-by: James Geisler <geislerjamesd@gmail.com>
2025-02-10 10:24:58 -06:00
Stefan Prodan 40e2802c3d
Merge pull request #1707 from quintonm/main
chart: add support for deploymentLabels
2025-01-26 09:44:37 +02:00
Sanskar Jaiswal d99d37b219
Merge pull request #1755 from fluxcd/headless-svc
2025-01-14 15:02:59 +05:30
Sanskar Jaiswal 45618b90db
feat: add option to generate headless services
Add a new field `.spec.service.headless` which, if set to true, results in
Flagger generating headless Services, i.e. with the Service's
`.spec.clusterIP` set to None.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-01-14 14:09:12 +05:30
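The commit above introduces `.spec.service.headless`. A minimal sketch of a Canary using it, based on the field described in the commit message:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
    # Per the commit message: the generated Services are headless, i.e. spec.clusterIP is None.
    headless: true
```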
Sanskar Jaiswal ff4051f728
Merge pull request #1756 from fluxcd/bump-go-net
chore: bump golang.org/x/net to v0.33.0
2025-01-14 13:18:03 +05:30
Sanskar Jaiswal 2ea13a477b
chore: bump golang.org/x/net to v0.33.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2025-01-14 00:18:27 +05:30
quintonm 03d4acc77f add support for deploymentLabels
Signed-off-by: quintonm <quinton.mccombs@gmail.com>
2025-01-13 07:56:30 -06:00
Stefan Prodan 16a607549e
Merge pull request #1751 from fluxcd/dependabot/github_actions/ci-cba3cefdba
Bump helm/kind-action from 1.11.0 to 1.12.0 in the ci group
2024-12-23 14:12:52 +02:00
dependabot[bot] b57afd3b0f
Bump helm/kind-action from 1.11.0 to 1.12.0 in the ci group
Bumps the ci group with 1 update: [helm/kind-action](https://github.com/helm/kind-action).


Updates `helm/kind-action` from 1.11.0 to 1.12.0
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.11.0...v1.12.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-23 11:28:50 +00:00
Stefan Prodan 9000136233
Merge pull request #1749 from fluxcd/release-1.40.0
Release v1.40.0
2024-12-17 11:42:02 +02:00
Stefan Prodan 14543cc8bf
Release v1.40.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-12-17 11:14:49 +02:00
Stefan Prodan 642ef6bb7d
Merge pull request #1745 from fluxcd/dependabot/github_actions/ci-f4c9def711
Bump helm/kind-action from 1.10.0 to 1.11.0 in the ci group
2024-12-17 10:06:24 +02:00
Stefan Prodan 3ebbfb0a54
Merge pull request #1747 from fluxcd/loadtester-0.34.0
Release loadtester 0.34.0
2024-12-17 10:05:54 +02:00
Stefan Prodan a52f497370
Release loadtester 0.34.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-12-16 19:27:19 +02:00
Stefan Prodan 64b50813ff
Merge pull request #1744 from fluxcd/update-deps-alpine
Update dependencies
2024-12-16 19:09:11 +02:00
Stefan Prodan 9244d6de65
Merge pull request #1746 from fluxcd/fix-drift-aws-gateway
Preserve HTTPRoute annotations injected by AWS Gateway API
2024-12-16 19:08:56 +02:00
Sanskar Jaiswal 3b6b550d64
Add tests for annotations preservation in Gateway API router
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-12-16 22:16:08 +05:30
Stefan Prodan 282f2b36f0
Preserve HTTPRoute annotations injected by AWS Gateway API
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-12-16 16:35:41 +02:00
dependabot[bot] 0a76f808b8
Bump helm/kind-action from 1.10.0 to 1.11.0 in the ci group
Bumps the ci group with 1 update: [helm/kind-action](https://github.com/helm/kind-action).


Updates `helm/kind-action` from 1.10.0 to 1.11.0
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.10.0...v1.11.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-16 11:48:55 +00:00
Stefan Prodan a85887de3c
Merge pull request #1735 from kahirokunn/add-helper-gen
Automate zz_generated.deepcopy.go updates with make codegen
2024-12-13 18:52:29 +02:00
kahirokunn febc327673
chore(codegen): add helper generation to codegen script
chore(gatewayapi/v1beta1): add deepcopy-gen annotations
run `make codegen`

Signed-off-by: kahirokunn <okinakahiro@gmail.com>
2024-12-14 00:10:24 +09:00
Stefan Prodan 6d5aabff05
Update loadtester tools
- helm 3.16.3
- kubectl 1.31.3
- grpc probe 0.4.35
- bash 5.2.37
- bats 1.1.1

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-12-13 16:23:12 +02:00
Stefan Prodan 51d0bb2c92
Update Alpine to 3.21
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-12-13 16:21:33 +02:00
Stefan Prodan dc947fb164
Update Go dependencies
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-12-13 16:21:12 +02:00
Stefan Prodan 0138e2e6c4
Merge pull request #1733 from kane8n/add-splunk-provider
Add Splunk as a metrics provider
2024-12-13 16:11:25 +02:00
kane8n d4bd0f2ef8 add splunk provider
Signed-off-by: kane8n <takumi.kaneda@zozo.com>
2024-12-13 22:22:25 +09:00
Stefan Prodan 30f4b25925
Merge pull request #1731 from fluxcd/fix-changelog-date
Fix changelog date for 1.39 release
2024-11-26 12:38:33 +00:00
Stefan Prodan 25fd9be1db
Fix changelog date for 1.39 release
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-11-26 12:12:17 +00:00
Stefan Prodan 4d497b2a9d
Merge pull request #1730 from fluxcd/xx-build
Optimize multi-arch build with XX
2024-11-26 12:03:12 +00:00
Stefan Prodan 0ef356706a
Optimize build with XX
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-11-26 11:56:09 +00:00
Sanskar Jaiswal ebf43ef104
Merge pull request #1728 from fluxcd/release-v1.39.0
Release v1.39.0
2024-11-26 13:08:53 +05:30
Sanskar Jaiswal 7754cdb89a
Release v1.39.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-11-25 23:26:36 +05:30
Stefan Prodan c6b5b39187
Merge pull request #1721 from swimablefish/main
fix(helm): podinfo fails to create the hpa object
2024-11-25 14:51:22 +00:00
Stefan Prodan a6a7a20737
Merge pull request #1727 from fluxcd/dependabot/github_actions/ci-b81aef8ad7
Bump the ci group across 1 directory with 4 updates
2024-11-25 14:49:23 +00:00
dependabot[bot] c04ff05aa4
Bump the ci group across 1 directory with 4 updates
Bumps the ci group with 4 updates in the / directory: [codecov/codecov-action](https://github.com/codecov/codecov-action), [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer), [docker/build-push-action](https://github.com/docker/build-push-action) and [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action).


Updates `codecov/codecov-action` from 4 to 5
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

Updates `sigstore/cosign-installer` from 3.5.0 to 3.7.0
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.5.0...v3.7.0)

Updates `docker/build-push-action` from 5 to 6
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

Updates `goreleaser/goreleaser-action` from 5 to 6
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-25 14:43:40 +00:00
Stefan Prodan b4bc93d0a8
Merge pull request #1726 from fluxcd/go-1.23
Build with Go 1.23
2024-11-25 14:41:19 +00:00
Stefan Prodan 6ee00e14f9
Build with Go 1.23
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-11-25 14:33:38 +00:00
Stefan Prodan a7d90c227f
Merge pull request #1725 from fluxcd/regen-1.31
Update generated client for Kubernetes 1.31
2024-11-25 14:24:48 +00:00
Sanskar Jaiswal 4c0a26b675
gatewayapi: return early after creating new http routes
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-11-24 19:04:27 +05:30
Stefan Prodan d4f766285d
Update generated client for Kubernetes 1.31
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-11-23 20:50:47 +02:00
Stefan Prodan 66fcea7581
Merge pull request #1724 from fluxcd/fix-codegen
fix: fix codegen script and update generated code
2024-11-23 20:35:40 +02:00
Stefan Prodan 9bfc531da0
Merge pull request #1723 from fluxcd/k8s-1.31.3
Update dependencies to Kubernetes v1.31.3
2024-11-23 15:59:22 +02:00
Sanskar Jaiswal 398fc90cc0
fix: fix codegen script and update generated code
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-11-23 18:12:23 +05:30
Stefan Prodan 682230e8c0
Update dependencies to Kubernetes v1.31.3
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-11-23 11:31:29 +02:00
Stefan Prodan 92daf5174c
Merge pull request #1702 from aufarg/add-autoscaler-ref-validation
Add validation for `primaryScalerReplicas` field in the CRD
2024-11-23 11:16:22 +02:00
Stefan Prodan 2ba00a33a7
Merge pull request #1709 from juparog/juparog/webhook-disabletls
feat: add `disableTLS` option for webhook requests
2024-11-23 11:13:25 +02:00
Juan Rodriguez 8f838388e8
feat: add disableTls option for webhook requests
Signed-off-by: Juan Rodriguez <engineer.jrg@gmail.com>
2024-11-21 19:22:18 +05:30
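PR #1709 above adds a `disableTLS` option for webhook requests. A hedged sketch of an analysis webhook using it; the placement under `analysis.webhooks` follows Flagger's usual webhook layout and is an assumption here:

```yaml
analysis:
  webhooks:
    - name: smoke-test
      type: pre-rollout
      url: https://flagger-loadtester.test/
      # Assumed usage: skip TLS verification for this webhook's HTTPS endpoint.
      disableTLS: true
```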
Sanskar Jaiswal 7cd14761d5
Merge pull request #1713 from mingjie-li/main
Gateway API: Sort header filters to avoid canary restarts
2024-11-21 19:06:21 +05:30
swimablefish e99add460f fix(helm): podinfo fails to create the hpa object
Signed-off-by: swimablefish <swimablefish@gmail.com>
2024-11-11 15:24:28 +08:00
Mingjie Li b88e080a66 add test back and use slices.SortFunc
Signed-off-by: Mingjie Li <mli@liveperson.com>
2024-10-26 16:50:51 +02:00
Mingjie Li 9941843385 fix #1712: sort gateway api header filter to fix canary restart
Signed-off-by: Mingjie Li <mli@liveperson.com>
2024-10-26 16:50:51 +02:00
Stefan Prodan a159421290
Merge pull request #1711 from fluxcd/update-codeowners
add @aryan9600 to CODEOWNERS
2024-10-07 23:49:57 +03:00
Sanskar Jaiswal 43cb4bc8e9
add @aryan9600 to CODEOWNERS
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-10-08 00:47:33 +05:30
Aufar Gilbran b719427337 Add validation for primaryScalerReplicas counts
Signed-off-by: Aufar Gilbran <aufargilbran@gmail.com>
2024-09-12 16:50:10 +08:00
Sanskar Jaiswal b6ac5e19aa
Merge pull request #1691 from fluxcd/release-v1.38.0
Release v1.38.0
2024-07-30 19:00:43 +05:30
Sanskar Jaiswal 6a090bca51
Release v1.38.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-07-30 13:06:44 +05:30
Stefan Prodan e07a2618c2
Merge pull request #1676 from defenestration/add-podMonitor.honorLabels
Helm - Add podMonitor.honorLabels
2024-07-29 12:50:11 +03:00
Stefan Prodan 9fcb6e9c93
Merge pull request #1690 from fluxcd/loadtester-0.33.0
Release loadtester 0.33.0
2024-07-29 12:48:47 +03:00
Stefan Prodan a88e06db17
Release loadtester v0.33.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-07-29 12:12:00 +03:00
Stefan Prodan 401d0490da
Update Kubernetes to v1.30.3
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-07-29 12:11:29 +03:00
Sanskar Jaiswal 3d1aedeb44
Merge pull request #1683 from fluxcd/fix-kuma
kuma: bump e2e version to 2.7.5
2024-07-26 15:51:33 +05:30
Sanskar Jaiswal 4015103815
kuma: disable daemonset for e2e
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-07-26 15:27:48 +05:30
Sanskar Jaiswal 74b98dab00
kuma: add ingress annotations as custom metadata
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-07-26 15:27:47 +05:30
Sanskar Jaiswal 01dfa06891
kuma: update default namespace to kong-mesh-system
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-07-26 15:27:47 +05:30
Sanskar Jaiswal 90054b3b27
kuma: bump e2e version to 2.7.5
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-07-26 15:27:46 +05:30
Sanskar Jaiswal cff2032ac0
Merge pull request #1686 from driv/fix_nginx_query
Fix Nginx request-duration query
2024-07-26 15:27:24 +05:30
Federico Nafria 2d5e289142 Fix Nginx request-duration query
`nginx_ingress_controller_ingress_upstream_latency_seconds_sum` measures the connection latency, not the time it takes the backend to respond.

Fixes #1685

Signed-off-by: Federico Nafria <federiconafria@gmail.com>
2024-07-19 17:08:02 +00:00
Sanskar Jaiswal f38183bfd1
Merge pull request #1675 from fluxcd/dependabot/go_modules/google.golang.org/grpc-1.64.1
Bump google.golang.org/grpc from 1.64.0 to 1.64.1
2024-07-17 15:07:26 +05:30
Alan B c09a61a198
add podMonitor.honorLabels
Signed-off-by: Alan B <961130+defenestration@users.noreply.github.com>
2024-07-10 10:53:58 -04:00
Alan B 417f035afb
Update values.yaml
add honorLabels to default values.yaml

Signed-off-by: Alan B <961130+defenestration@users.noreply.github.com>
2024-07-10 10:51:48 -04:00
Alan B 28f2ab7bdb
add honorLabels to PodMonitor
Signed-off-by: Alan B <961130+defenestration@users.noreply.github.com>
2024-07-10 10:26:54 -04:00
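The three commits above wire `honorLabels` into the chart's PodMonitor. A sketch of the corresponding Helm values, assuming the usual `podMonitor` block in the Flagger chart:

```yaml
# Flagger Helm chart values; keys other than honorLabels are assumptions.
podMonitor:
  enabled: true
  interval: 15s
  # honorLabels is the option added by the commits above.
  honorLabels: true
```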
dependabot[bot] d6433a16b5
Bump google.golang.org/grpc from 1.64.0 to 1.64.1
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.64.0 to 1.64.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.64.0...v1.64.1)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-09 21:51:38 +00:00
Sanskar Jaiswal 9b39cf16f1
Merge pull request #1666 from olivierlemasle/doc/keda
doc: fix KEDA doc regarding namespaces
2024-07-01 17:17:26 +05:30
Olivier Lemasle d2cfcbde1a doc: fix KEDA doc regarding namespaces
Fix KEDA tutorial regarding namespaces

Signed-off-by: Olivier Lemasle <olivier.lemasle@apalia.net>
2024-06-24 15:47:37 +02:00
Sanskar Jaiswal 133fdecf56
Merge pull request #1657 from shivamnarula/fix/empty-annotations-and-volumes
Fix removal of empty keys from flagger chart
2024-06-12 13:19:35 +05:30
Shivam Narula 3490d60e89
Fix removal of empty keys from flagger chart
Signed-off-by: Shivam Narula <shivamnarula@sharechat.co>
2024-06-12 12:49:35 +05:30
Sanskar Jaiswal 97d1ef0f18
Merge pull request #1630 from bacherfl/poc/keptn-provider
feat: implement a Keptn metrics provider
2024-06-12 12:31:49 +05:30
Florian Bacher ce976e28f0
feat: implement a Keptn metrics provider
Add a Keptn metrics provider for two resources:
* KeptnMetric: Verify the value of a single metric.
* Analysis (via AnalysisDefinition): Run a Keptn analysis over an
  interval validating SLOs.

Signed-off-by: Florian Bacher <florian.bacher@dynatrace.com>
2024-06-11 19:23:03 +05:30
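PR #1630 above adds a Keptn metrics provider covering KeptnMetric and Analysis resources. A heavily hedged sketch of a MetricTemplate for it; the query format shown is illustrative only and not taken from this log:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: keptn-response-time
  namespace: test
spec:
  provider:
    type: keptn
  # Illustrative query referencing a KeptnMetric resource; the real format may differ.
  query: keptnmetric/test/response-time/2m
```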
Sanskar Jaiswal adc60596f5
Merge pull request #1656 from fluxcd/update-deps
Update Go dependencies and Alpine
2024-05-30 23:09:04 +05:30
Sanskar Jaiswal cf04e28774
Update Alpine to 3.20
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-05-30 21:55:44 +05:30
Sanskar Jaiswal ba29384dd4
Update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-05-30 21:55:42 +05:30
Sanskar Jaiswal 86a4514932
Merge pull request #1653 from pazmd/bump-deps-golang.org/x/net-v0.25.0
Bump golang.org/x/net to v0.25.0 and other deps.
2024-05-30 21:34:35 +05:30
pazmd 61d81ff35a Tidy dependencies.
Signed-off-by: pazmd <171067554+pazmd@users.noreply.github.com>
2024-05-29 09:50:52 +01:00
pazmd 588f91ab7b Bump golang.org/x/net to v0.25.0.
Signed-off-by: pazmd <171067554+pazmd@users.noreply.github.com>
2024-05-29 09:50:52 +01:00
Stefan Prodan 24b968029e
Merge pull request #1649 from sm43/sa-annotation-support
loadtester: add support for annotation on service account
2024-05-24 18:44:29 +03:00
Shivam Mukhade 0ab3c07017
loadtester: add support for annotation on service account
This adds support for setting annotations on the ServiceAccount when RBAC is enabled.

Signed-off-by: Shivam Mukhade <shivam.mukhade@wooga.net>
2024-05-24 16:00:06 +02:00
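The commit above lets the loadtester chart annotate its ServiceAccount when RBAC is enabled. A sketch of plausible chart values; the exact key layout is an assumption:

```yaml
# Hypothetical loadtester Helm values.
rbac:
  create: true
serviceAccount:
  # Annotations added to the ServiceAccount, e.g. for IAM role binding.
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/loadtester
```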
Stefan Prodan 2d89870b14
Merge pull request #1648 from fluxcd/dependabot/github_actions/ci-610874c938
build(deps): bump the ci group across 1 directory with 2 updates
2024-05-23 10:24:47 +03:00
Sanskar Jaiswal 8e86366484
Merge pull request #1637 from ta924/matrixpanic
block panic when prom returns range vector
2024-05-23 10:21:54 +05:30
Tanner Altares e5dfbf4adc
block panic when prom returns range vector
Signed-off-by: Tanner Altares <ta924@yahoo.com>
2024-05-22 18:51:04 +05:30
Sanskar Jaiswal 04f5c68a83
Merge pull request #1634 from relu/patch-deployment-fix
Use `Patch` instead of `Update` for Deployment scaling
2024-05-22 18:32:24 +05:30
dependabot[bot] 52293a35ad
---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
- dependency-name: slsa-framework/slsa-github-generator
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-22 11:48:16 +00:00
Aurel Canciu 553184b82b
Use Patch instead of Update in the deployment_controller when scaling
This should avoid frequent "Operation cannot be fulfilled" errors from
polluting Canary resource events and logs.

Signed-off-by: Aurel Canciu <aurel.canciu@nexhealth.com>
2024-05-22 17:16:16 +05:30
Sanskar Jaiswal 6289f8e371
Merge pull request #1638 from relu/k8s-1.30
Update dependencies to Kubernetes 1.30
2024-05-22 17:15:56 +05:30
Aurel Canciu 5e6815d531
Update e2e kind-related versions
Signed-off-by: Aurel Canciu <aurel.canciu@nexhealth.com>
2024-05-15 10:05:19 +03:00
Aurel Canciu 66d69f3d22
Update dependencies to Kubernetes 1.30
Signed-off-by: Aurel Canciu <aurel.canciu@nexhealth.com>
2024-05-02 12:52:01 +02:00
Stefan Prodan 9a0c6e7e54
Merge pull request #1628 from cyc0l4b/main
Bumps golang.org/x/net to v0.23.0
2024-04-12 09:50:12 +03:00
cyc0l4b 2ddbaf3324 chore: bumps golang.org/x/net to v0.23.0
Signed-off-by: cyc0l4b <cyc0l4b@proton.me>
2024-04-11 15:41:15 -03:00
Stefan Prodan ab68d18230
Merge pull request #1624 from fluxcd/fix-release-workflow
Setup Go toolchain for release workflow
2024-03-26 15:11:52 +02:00
Stefan Prodan 214022ce7b
Setup Go toolchain for release workflow
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-03-26 15:11:25 +02:00
Stefan Prodan 9acc70efc3
Merge pull request #1623 from fluxcd/release-1.37.0
Release 1.37.0
2024-03-26 14:20:43 +02:00
Stefan Prodan 4a12fc8499
Release v1.37.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-03-26 12:47:51 +02:00
Stefan Prodan 407e28e632
Release loadtester v0.32.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-03-26 12:27:11 +02:00
Stefan Prodan 82589a525d
Merge pull request #1622 from fluxcd/go-1.22
Update dependencies (Go 1.22)
2024-03-26 11:38:44 +02:00
Stefan Prodan 6651751fbe
Update dependencies (Go 1.22)
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-03-26 11:02:19 +02:00
Stefan Prodan 62fd5d2f77
Merge pull request #1620 from fluxcd/dependabot/github_actions/ci-ce785973a7
build(deps): bump the ci group with 1 update
2024-03-26 10:33:22 +02:00
Stefan Prodan 0a616df01e
Merge pull request #1602 from benoitg31/main
Migrate Istio VirtualService/DestinationRule to API version v1beta1 (currently v1alpha3)
2024-03-26 10:26:30 +02:00
Stefan Prodan f3be47d90b
Merge pull request #1621 from hernit/main
Add omitempty to statuses to allow better marshalling
2024-03-26 00:59:11 +02:00
Henry Tam 935d6f9746
Add omitempty to statuses to allow better marshalling.
Signed-off-by: Henry Tam <Henry.Tam@anz.com>
2024-03-25 23:17:22 +11:00
dependabot[bot] ded722fb2d
build(deps): bump the ci group with 1 update
Bumps the ci group with 1 update: [slsa-framework/slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator).


Updates `slsa-framework/slsa-github-generator` from 1.9.0 to 1.10.0
- [Release notes](https://github.com/slsa-framework/slsa-github-generator/releases)
- [Changelog](https://github.com/slsa-framework/slsa-github-generator/blob/main/CHANGELOG.md)
- [Commits](https://github.com/slsa-framework/slsa-github-generator/compare/v1.9.0...v1.10.0)

---
updated-dependencies:
- dependency-name: slsa-framework/slsa-github-generator
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-25 11:40:20 +00:00
Stefan Prodan a437af030a
Merge pull request #1617 from sopida-chotwanwirach/gloo-reconcile-upstream-spec-change
fix(gloo): Update reconciler to detect change in gloo upstream spec
2024-03-25 09:45:12 +02:00
sopida-chotwanwirach e3a529e1c8 switch to use patch
Signed-off-by: sopida-chotwanwirach <sopida.chotwanwirach@offerup.com>
2024-03-18 19:37:08 -07:00
sopida-chotwanwirach e153b8a3df fix(gloo): Update reconciler to detect change in gloo upstream spec
Signed-off-by: sopida-chotwanwirach <sopida.chotwanwirach@offerup.com>
2024-03-14 14:13:02 -07:00
Stefan Prodan c45be96f73
Merge pull request #1614 from cyc0l4b/main
Updates google.golang.org/protobuf to v1.33.0
2024-03-14 11:22:20 +02:00
cyc0l4b dfa403705d chore: bumps google.golang.org/protobuf to v1.33
Signed-off-by: cyc0l4b <cyc0l4b@proton.me>
2024-03-12 16:40:19 -03:00
Stefan Prodan 9a0f01079f
Merge pull request #1611 from LiZhenCheng9527/fix-metricTemplate
Fixed bug where query with no metric template returned an error
2024-03-07 12:03:38 +02:00
LiZhenCheng9527 b778013e07 Fixed issue where query with no metric template returned an error
Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
2024-03-07 16:06:23 +08:00
Stefan Prodan 29576900df
Merge pull request #1610 from andrew-demb/patch-2
Fix link to alerting docs in changelog
2024-03-07 09:45:57 +02:00
Andrii Dembitskyi 5c70efb124
Fix link to alerting docs in changelog
Signed-off-by: Andrii Dembitskyi <andrew.dembitskiy@gmail.com>
2024-03-06 18:23:33 +02:00
Sanskar Jaiswal 9a224a0c90
Merge pull request #1608 from fluxcd/release-v1.36.1
Release v1.36.1
2024-03-06 13:04:41 +05:30
Sanskar Jaiswal bd3249feae
Release v1.36.1
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-03-06 11:27:05 +05:30
Sanskar Jaiswal 740477a757
Merge pull request #1607 from fluxcd/update-deps
Update Go dependencies
2024-03-05 22:01:32 +05:30
Sanskar Jaiswal 27967d7780
Update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-03-05 20:14:40 +05:30
Sanskar Jaiswal 0e6c88261f
Merge pull request #1598 from fluxcd/dependabot/github_actions/ci-d63955f9e9
build(deps): bump the ci group with 1 update
2024-03-05 20:13:46 +05:30
dependabot[bot] 7780a85bfa
build(deps): bump the ci group with 1 update
Bumps the ci group with 1 update: [helm/kind-action](https://github.com/helm/kind-action).


Updates `helm/kind-action` from 1.8.0 to 1.9.0
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.8.0...v1.9.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-05 13:54:22 +00:00
Sanskar Jaiswal 1f073843bf
Merge pull request #1606 from andrew-demb/patch-2
Actualize link to flux in-depth guide
2024-03-05 19:21:31 +05:30
Sanskar Jaiswal 5b03840db6
Merge pull request #1603 from fluxcd/fix-deploy-progress
scheduler: fail canary according to progress deadline
2024-03-05 19:19:03 +05:30
Andrii Dembitskyi fb4af8217d
Actualize link to flux in-depth guide
Signed-off-by: Andrii Dembitskyi <andrew.dembitskiy@gmail.com>
2024-03-04 23:10:41 +02:00
Sanskar Jaiswal 757d90121b
scheduler: fail canary according to progress deadline
Modify `canary.IsPrimaryReady()` and `canary.Initialize()` to return a
boolean indicating if the error is retriable. Modify the scheduler to
rollback the analysis and mark the Canary object as failed if the above
two functions or `canary.IsCanaryReady()` returns false along with an
error.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-03-05 00:10:30 +05:30
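The scheduler change above fails the canary according to the progress deadline instead of retrying indefinitely. The deadline itself is the existing `progressDeadlineSeconds` field on the Canary spec; a minimal reminder of how it is set:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  # Seconds to wait for the workload to become ready before the rollout is marked as failed.
  progressDeadlineSeconds: 60
```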
Benoit Gaillard 217db66a5e
make flagger use API version v1beta1 for Istio VirtualService and DestinationRule instead of v1alpha3
Signed-off-by: Benoit Gaillard <benoit.gaillard@continental-corporation.com>
2024-02-28 15:09:40 +01:00
Stefan Prodan 1a27295728
Merge pull request #1599 from worldtiki/readme
Fix broken link in readme
2024-02-20 13:40:59 +02:00
Daniel Albuquerque 285ee6eee7
Fix broken link in readme
Signed-off-by: Daniel Albuquerque <daniel.albuquerque@teya.com>
2024-02-20 10:05:10 +00:00
Stefan Prodan 613f532b0d
Merge pull request #1597 from fluxcd/fix-changelog
Fix Istio link in changelog
2024-02-08 19:24:34 +02:00
Stefan Prodan 619253ebce
Fix Istio link in changelog
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-02-08 18:01:55 +02:00
Stefan Prodan 3f06a0b344
Merge pull request #1596 from fluxcd/release-1.36.0
Release 1.36.0
2024-02-07 21:03:54 +02:00
Stefan Prodan cf6e241fa5
Release v1.36.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-02-07 20:35:09 +02:00
Stefan Prodan 8128ab3785
Release loadtester 0.31.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-02-07 18:47:23 +02:00
Stefan Prodan c0a00e6970
Merge pull request #1595 from fluxcd/update-deps-k8s
Update dependencies
2024-02-07 18:43:31 +02:00
Stefan Prodan 2c3259bdb3
Merge pull request #1511 from chrisminton/pdb/fix
fix(pdb): use the full capabilities comparison for PDBs
2024-02-07 18:26:33 +02:00
Stefan Prodan 3d40ee1242
Update dependencies
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-02-07 18:21:32 +02:00
Stefan Prodan dd89cd5625
Merge pull request #1594 from fluxcd/go1.21
Build with Go 1.21 and Alpine 3.19
2024-02-07 18:03:26 +02:00
Stefan Prodan af1e210f08
Merge pull request #1564 from kubroid/istio-tcp-canary
Istio Canary TCP service support
2024-02-07 18:03:06 +02:00
Alexey Kubrinsky 4932527464 Istio Canary TCP service support
Signed-off-by: Alexey Kubrinsky <akubrinsky@zetaglobal.com>
2024-02-07 14:51:30 +01:00
Stefan Prodan 862c63e8c3
Build with Go 1.21 and Alpine 3.19
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-02-07 15:50:10 +02:00
Stefan Prodan f946e0e9e8
Merge pull request #1590 from fluxcd/dependabot/github_actions/ci-02cc0d4dbf
build(deps): bump the ci group with 3 updates
2024-02-07 13:35:24 +02:00
Sanskar Jaiswal 0a2169965a
Merge pull request #1593 from fluxcd/lfx-trademark
docs: add lfx trademark disclaimer
2024-02-07 16:11:06 +05:30
Sanskar Jaiswal 169aea200c
docs: add lfx trademark disclaimer
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-02-07 13:04:23 +05:30
dependabot[bot] 785db00796
build(deps): bump the ci group with 3 updates
Bumps the ci group with 3 updates: [actions/cache](https://github.com/actions/cache), [codecov/codecov-action](https://github.com/codecov/codecov-action) and [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer).


Updates `actions/cache` from 3.3.2 to 4.0.0
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.3.2...v4.0.0)

Updates `codecov/codecov-action` from 3 to 4
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v3...v4)

Updates `sigstore/cosign-installer` from 3.3.0 to 3.4.0
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.3.0...v3.4.0)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-05 11:16:40 +00:00
Stefan Prodan a88c056e04
Merge pull request #1589 from fluxcd/stefan-affiliation
Change Stefan Prodan's affiliation to ControlPlane
2024-02-05 11:36:54 +02:00
Stefan Prodan 6584f452b7
Change Stefan Prodan's affiliation to ControlPlane
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-02-05 11:27:04 +02:00
Stefan Prodan 16f8e15c98
Merge pull request #1582 from LiZhenCheng9527/fix-metric-bug
return an error for missing metric templates
2024-01-30 14:17:14 +02:00
LiZhenCheng9527 5f8aeb878b add ut for function runMetricChecks
Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
2024-01-18 18:14:44 +08:00
LiZhenCheng9527 d618cfcedd fix ut failed
Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
2024-01-18 16:52:18 +08:00
LiZhenCheng9527 471da0abba return an error for missing metric templates and count that towards the failure threshold
Signed-off-by: LiZhenCheng9527 <lizhencheng6@huawei.com>
2024-01-18 09:51:31 +08:00
Sanskar Jaiswal b562ddd3e2
Merge pull request #1576 from fluxcd/update-sanskar-affiliation
Change Sanskar Jaiswal's affiliation to Independent
2024-01-08 15:35:54 +05:30
Sanskar Jaiswal 82e5e3ad93
Change Sanskar Jaiswal's affiliation to Independent
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2024-01-08 15:24:37 +05:30
Stefan Prodan fa30864580
Merge pull request #1574 from fluxcd/stefanprodan-affiliation
Change Stefan Prodan's affiliation to independent
2024-01-03 15:12:46 +02:00
Stefan Prodan dc9fc923e4
Change Stefan Prodan's affiliation to independent
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2024-01-03 14:22:59 +02:00
Stefan Prodan 6f42af4ade
Merge pull request #1570 from fluxcd/dependabot/github_actions/ci-49e604c729
build(deps): bump the ci group with 3 updates
2024-01-02 11:12:25 +02:00
Stefan Prodan 28afd0acd6
Merge pull request #1572 from fluxcd/dependabot/go_modules/golang.org/x/crypto-0.17.0
build(deps): bump golang.org/x/crypto from 0.15.0 to 0.17.0
2024-01-02 11:11:59 +02:00
dependabot[bot] 0810972d31
build(deps): bump the ci group with 3 updates
Bumps the ci group with 3 updates: [actions/setup-go](https://github.com/actions/setup-go), [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `actions/setup-go` from 4 to 5
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

Updates `sigstore/cosign-installer` from 3.2.0 to 3.3.0
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.2.0...v3.3.0)

Updates `github/codeql-action` from 2 to 3
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-25 11:52:08 +00:00
dependabot[bot] 64f393fd60
build(deps): bump golang.org/x/crypto from 0.15.0 to 0.17.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-19 00:02:01 +00:00
Sanskar Jaiswal a2d147387c
Merge pull request #1571 from fluxcd/istio-retries
istio: make retry attempts a mandatory field
2023-12-18 21:22:17 +05:30
Sanskar Jaiswal 3a887bd79a
istio: make retry attempts a mandatory field
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-12-18 17:36:49 +05:30
Stefan Prodan 9f5ad2ec23
Merge pull request #1555 from fluxcd/dependabot/github_actions/ci-bb4b00b6aa
build(deps): bump the ci group with 1 update
2023-11-30 17:26:06 +02:00
Sanskar Jaiswal d1de1d788d
Merge pull request #1559 from fluxcd/release-v1.35.0
Release v1.35.0
2023-11-30 20:52:16 +05:30
Sanskar Jaiswal 7e95b1a8a5
Release v1.35.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-30 20:16:00 +05:30
Sanskar Jaiswal 83b5800009
Merge pull request #1560 from fluxcd/release-ld-v0.30.0
Release loadtester v0.30.0
2023-11-30 20:15:41 +05:30
Sanskar Jaiswal daab49730e
Release loadtester v0.30.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-30 19:46:21 +05:30
Sanskar Jaiswal 825b5d103a
Merge pull request #1558 from fluxcd/update-deps
Update Go dependencies
2023-11-30 15:57:57 +05:30
Sanskar Jaiswal eb8026e22b
Update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-30 15:07:42 +05:30
Sanskar Jaiswal e9b8dee726
Merge pull request #1557 from fluxcd/gatewayapi-v1
gatewayapi: add support for `v1`
2023-11-30 15:06:31 +05:30
Sanskar Jaiswal 37621dead8
gatewayapi: modify tutorial to use istio and v1
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-30 13:17:58 +05:30
Sanskar Jaiswal 1f2c464b45
gatewayapi: add support for timeouts
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-29 20:56:33 +05:30
Sanskar Jaiswal 09b0937e18
gatewayapi: bump e2e tests to v1
Bump Gateway API E2E tests to v1 and switch to Istio from Contour.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-29 20:56:33 +05:30
Sanskar Jaiswal 0d0d0ef811
gatewayapi: add support for v1 and drop v1alpha2
Add support for v1 of Gateway API `HTTPRoute`. Drop support for v1alpha2
as it was deprecated almost a year ago.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-29 20:56:30 +05:30
Sanskar Jaiswal 8935ef5e6a
Merge pull request #1541 from Kwasniewski/webhook_retries
feat: Webhook retries
2023-11-28 19:31:39 +05:30
dependabot[bot] 61eee5750b
build(deps): bump the ci group with 1 update
Bumps the ci group with 1 update: [fossa-contrib/fossa-action](https://github.com/fossa-contrib/fossa-action).

- [Release notes](https://github.com/fossa-contrib/fossa-action/releases)
- [Changelog](https://github.com/fossa-contrib/fossa-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fossa-contrib/fossa-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: fossa-contrib/fossa-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-27 11:42:34 +00:00
Joseph Kwasniewski ad8e7d613a
feat: add support for webhook retries
Add a new field `.spec.webhooks[].retries` to specify the number of
retries when calling a webhook.

Signed-off-by: Joseph Kwasniewski <kwasniewski@gmail.com>
2023-11-27 13:57:08 +05:30
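PR #1541 above adds `.spec.webhooks[].retries`. A sketch of how it might appear under a canary's analysis webhooks; the surrounding fields are the usual loadtester example and are not part of this change:

```yaml
analysis:
  webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      # Number of times Flagger retries the webhook call, per the commit above.
      retries: 3
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```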
Stefan Prodan 3e87c153db
Merge pull request #1549 from fluxcd/dependabot/github_actions/ci-121354bc3a
build(deps): bump the ci group with 1 update
2023-11-20 13:43:16 +02:00
dependabot[bot] 9189f17ff8
build(deps): bump the ci group with 1 update
Bumps the ci group with 1 update: [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer).

- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.1.2...v3.2.0)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-20 11:14:05 +00:00
Sanskar Jaiswal 749e099ff0
Merge pull request #1552 from fluxcd/canary-dep-finalize
controller: wait for canary deployment to be ready before removing finalizers
2023-11-20 15:51:44 +05:30
Sanskar Jaiswal 63ec848b38
controller: wait for canary deployment to be ready before removing finalizers
Fix the waiting logic to actually wait for the canary deployment to be
ready before continuing with the rest of the finalization logic.
Previously, the canary deployment was not being checked for a ready
status due to the absence of the `Steps` field in the specified
backoff.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-11-17 18:03:57 +05:30
Stefan Prodan 8c078e898b
Merge pull request #1545 from fluxcd/dependabot/go_modules/google.golang.org/grpc-1.58.3
build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3
2023-10-26 09:28:09 +03:00
dependabot[bot] e784f88045
build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.58.2 to 1.58.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.58.2...v1.58.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-25 21:35:07 +00:00
Sanskar Jaiswal c5369e9113
Merge pull request #1540 from S-mishina/feature/support-istio-destinationrule-warmupdurationsecs
Support Istio DestinationRule WarmupDurationSecs
2023-10-17 18:52:50 +05:30
S-mishina d196fae71c support WarmupDurationSecs
Signed-off-by: S-mishina <seiryu.mishina@zozo.com>

Delete unwanted descriptions

Signed-off-by: S-mishina <seiryu.mishina@zozo.com>

Add kustomize/base/flagger/crd.yaml WarmupDurationSecs field 

Signed-off-by: S-mishina <seiryu.mishina@zozo.com>

fix warmupDurationSecs description

Signed-off-by: S-mishina <seiryu.mishina@zozo.com>
2023-10-17 21:50:11 +09:00
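The commits above add support for Istio DestinationRule `warmupDurationSecs`. A hedged sketch of a Canary service traffic policy carrying it; the nesting under `trafficPolicy.loadBalancer` mirrors the Istio API and is an assumption here:

```yaml
spec:
  service:
    port: 9898
    trafficPolicy:
      loadBalancer:
        # Assumed placement: gradually ramps traffic to new endpoints over this duration.
        warmupDurationSecs: 60s
```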
Sanskar Jaiswal d7bf6a2474
Merge pull request #1537 from rye-sw/fix-issue-1534
set original node selector value when finalizing service
2023-10-17 12:14:47 +05:30
rye-sw d796c206d3 Set original node selector value when finalizing service
Signed-off-by: rye-sw <rye@stairwell.com>
2023-10-16 10:44:18 -07:00
Stefan Prodan 55db424082
Merge pull request #1538 from fluxcd/dependabot/go_modules/golang.org/x/net-0.17.0
build(deps): bump golang.org/x/net from 0.15.0 to 0.17.0
2023-10-12 09:22:19 +03:00
dependabot[bot] 750a1e53aa
build(deps): bump golang.org/x/net from 0.15.0 to 0.17.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.15.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.15.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 23:18:39 +00:00
Sanskar Jaiswal 450abb60b9
Merge pull request #1529 from fluxcd/release-v1.34.0
Release v1.34.0
2023-10-04 15:41:32 +05:30
Sanskar Jaiswal ce70a50047
Release v1.34.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-10-04 14:56:51 +05:30
Sanskar Jaiswal f465a6cdda
Merge pull request #1528 from fluxcd/update-deps
Update Go dependencies
2023-10-04 13:33:51 +05:30
Sanskar Jaiswal c1f39443d6
Update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-10-04 12:47:15 +05:30
Sanskar Jaiswal 6112ad9c54
Merge pull request #1525 from fluxcd/gw-mirror
gatewayapi: add support for b/g mirroring
2023-10-04 11:28:48 +05:30
Sanskar Jaiswal 8dbc72d7ff
gatewayapi: add docs for b/g mirroring
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-10-03 14:41:38 +05:30
Sanskar Jaiswal dc6dd0661a
gatewayapi: add support for b/g mirroring
Add support for mirroring requests while performing B/G deployments with
Gateway API. A `RequestMirror` filter pointing to the canary service is
added to the HTTPRoute during a Canary run. During the Canary run, drift
correction for `.spec.rules[].filters` is disabled to avoid removing the
mirror filter.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-10-03 14:41:37 +05:30
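The commit above enables request mirroring during Gateway API blue/green runs. A hedged sketch of an analysis section using mirroring; `mirror` and `mirrorWeight` follow Flagger's existing Istio B/G mirroring fields and are assumed to apply here as well:

```yaml
analysis:
  interval: 1m
  iterations: 10
  # Mirror live traffic to the canary service during the B/G analysis.
  mirror: true
  mirrorWeight: 100
```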
Stefan Prodan 475aff81ae
Merge pull request #1524 from Megum1n/main
Change Gloo Duration type to string
2023-10-03 12:11:20 +03:00
Megum1n 968a193f42
Fix typo in the test script
Signed-off-by: Megum1n <misaka@pantsu.moe>
2023-10-03 10:05:45 +02:00
Megum1n 22a9fd3d12
Add connectionTimeout configuration to gloo canary test
Signed-off-by: Megum1n <misaka@pantsu.moe>
2023-10-03 10:05:45 +02:00
Megum1n 8ada61edd1
Use strings for gloo duration configuration
Signed-off-by: Megum1n <misaka@pantsu.moe>
2023-10-03 10:05:45 +02:00
Stefan Prodan cadce1a2c2
Merge pull request #1522 from fluxcd/enterprise-runners
ci: Use GitHub larger runners
2023-09-25 10:49:20 +03:00
Stefan Prodan 3a7fd48d3a
Merge pull request #1518 from mumubin/docs_deployment_strategies_fix
docs: fix error example in deployment strategies
2023-09-22 17:52:47 +03:00
Sanskar Jaiswal 15ef64eb14
Merge pull request #1512 from fluxcd/gw-filters
gatewayapi: add support for route rule filters
2023-09-22 16:39:07 +05:30
Sanskar Jaiswal c0e2096f92
gatewayapi: add support for route rule filters
Add support for [`Filters`](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPRouteFilter)
in the HTTPRoute API. We reuse most of the existing fields used for
Istio to construct the appropriate filter. A new API
`.spec.service.mirror` is added to allow for request mirroring. The
`.spec.service.rewrite` API has been changed to a custom `HTTPRewrite`
API instead of importing it from Istio, to allow covering all features
that Gateway API provides.

Support for the [`RequestRedirect`](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPRequestRedirectFilter)
Filter has been left out on purpose, since it's not possible to specify
it if the same rule also specifies `.backendRefs` (which Flagger does).

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-22 16:10:09 +05:30
Stefan Prodan e4c05c3034
ci: Use GitHub larger runners
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-09-19 17:16:01 +03:00
Stefan Prodan 794fea8cc6
Merge pull request #1521 from bigkevmcd/hook-versions
Add Checksum field to the Webhook payload to distinguish canary runs
2023-09-19 16:54:12 +03:00
Kevin McDermott 56b6339f8c Add Canary Webhook checksum.
This adds a new Checksum field to the canary webhook body, which is a
hash of the LastAppliedSpec and TrackedConfigs.

This can be used to identify the rollout of a specific configuration,
and differentiate between webhooks being sent for different
configuration and deployment versions.

Signed-off-by: Kevin McDermott <kevin@weave.works>
2023-09-19 12:50:37 +01:00
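The commit above adds a Checksum field to the webhook payload, hashed from the LastAppliedSpec and TrackedConfigs. A hedged sketch of such a payload; the field casing and the other fields shown are assumptions:

```yaml
# Illustrative webhook body, not taken verbatim from the codebase.
name: podinfo
namespace: test
phase: Progressing
# Hash of LastAppliedSpec and TrackedConfigs, identifying this configuration revision.
checksum: "3e9a0cfdc5"
```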
Stefan Prodan 788e692e90
Merge pull request #1517 from fluxcd/dependabot/github_actions/ci-86952151d7
build(deps): bump the ci group with 9 updates
2023-09-19 12:19:27 +03:00
Stefan Prodan a517309557
Merge pull request #1516 from adleong/alex/linkerd
Update Linkerd tutorial to use Kubernetes Gateway API
2023-09-19 12:19:01 +03:00
bin.hu ecdde862bf docs: fix error example in deployment strategies
Signed-off-by: bin.hu <bin.hu@ringcentral.com>
2023-09-18 14:08:36 +08:00
dependabot[bot] 0bcc814154
build(deps): bump the ci group with 9 updates
Bumps the ci group with 9 updates:

| Package | From | To |
| --- | --- | --- |
| [actions/checkout](https://github.com/actions/checkout) | `3` | `4` |
| [actions/cache](https://github.com/actions/cache) | `3.3.1` | `3.3.2` |
| [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) | `3.1.1` | `3.1.2` |
| [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) | `2` | `3` |
| [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) | `2` | `3` |
| [docker/login-action](https://github.com/docker/login-action) | `2` | `3` |
| [docker/metadata-action](https://github.com/docker/metadata-action) | `4` | `5` |
| [docker/build-push-action](https://github.com/docker/build-push-action) | `4` | `5` |
| [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) | `4` | `5` |


Updates `actions/checkout` from 3 to 4
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

Updates `actions/cache` from 3.3.1 to 3.3.2
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.3.1...v3.3.2)

Updates `sigstore/cosign-installer` from 3.1.1 to 3.1.2
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.1.1...v3.1.2)

Updates `docker/setup-qemu-action` from 2 to 3
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v2...v3)

Updates `docker/setup-buildx-action` from 2 to 3
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

Updates `docker/login-action` from 2 to 3
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

Updates `docker/metadata-action` from 4 to 5
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v4...v5)

Updates `docker/build-push-action` from 4 to 5
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v4...v5)

Updates `goreleaser/goreleaser-action` from 4 to 5
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: ci
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: ci
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-16 07:12:30 +00:00
Stefan Prodan aaafdca6ca
Merge pull request #1513 from fluxcd/dependabot-group
ci: group dependabot updates
2023-09-16 10:10:35 +03:00
Alex Leong 22c96c5af5
Fix threshold ranges
Signed-off-by: Alex Leong <alex@buoyant.io>
2023-09-15 15:36:19 -07:00
Alex Leong 04a1f2fa68
Add metrics templates
Signed-off-by: Alex Leong <alex@buoyant.io>
2023-09-15 15:08:32 -07:00
Alex Leong efc588001f
Include gatewayRefs in Linkerd Canary resources
Signed-off-by: Alex Leong <alex@buoyant.io>
2023-09-15 14:05:14 -07:00
Alex Leong d543c8ef95
Update test
Signed-off-by: Alex Leong <alex@buoyant.io>
2023-09-15 12:21:46 -07:00
Alex Leong da7015397c
Update Linkerd tutorial
Signed-off-by: Alex Leong <alex@buoyant.io>
2023-09-15 12:08:23 -07:00
Sanskar Jaiswal fe32b2162d
ci: group dependabot updates
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-15 12:47:29 +05:30
Chris Minton 68d306ca83
fix(pdb): use the full capabilities comparison for PDBs
Signed-off-by: Chris Minton <chris.minton@sainsburys.co.uk>
2023-09-11 22:35:49 +01:00
Sanskar Jaiswal 7ab0eb14ea
Merge pull request #1507 from fluxcd/gw-session-affinity
gatewayapi: add support for session affinity
2023-09-11 18:54:54 +05:30
Sanskar Jaiswal 0eaf054e8b
remove all usages of autoscaling/v2beta2 from docs
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-11 13:57:26 +05:30
Sanskar Jaiswal a312f6a5e1
e2e: add tests for canary releases with session affinity
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-11 13:57:26 +05:30
Sanskar Jaiswal 00fcf991a6
gatewayapi: add support for session affinity
Add support for Canary releases with session affinity for Gateway API.
This enables any Gateway API implementation that supports
[`ResponseHeaderModifier`](3d22aa5a08/apis/v1beta1/httproute_types.go (L651))
to be used with session affinity.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-11 13:57:23 +05:30
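PR #1507 above adds session affinity for Gateway API canaries via `ResponseHeaderModifier`. A sketch of the analysis configuration, using the cookie-based fields Flagger documents for session affinity (assumed to match this change):

```yaml
analysis:
  interval: 1m
  maxWeight: 50
  stepWeight: 10
  sessionAffinity:
    # Name of the cookie that pins a client to the canary backend.
    cookieName: flagger-cookie
    # Cookie lifetime in seconds.
    maxAge: 21600
```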
Sanskar Jaiswal 8dbd8d509b
Merge pull request #1505 from sonbui00/fix-880
fix: Support for queryParams in canary match condition #880
2023-09-07 13:16:48 +05:30
Son Bui ff25d1ee92 fix: Support for queryParams in canary match condition #880
Signed-off-by: Son Bui <sonbv00@gmail.com>
2023-09-07 11:59:03 +08:00
Sanskar Jaiswal 2d3f039d80
Merge pull request #1506 from fluxcd/update-k8s
Update Kubernetes to v1.27
2023-09-06 18:59:25 +05:30
Sanskar Jaiswal 69cb3cd881
run k8s 1.24 in ci for skipper
Skipper's installation requires the creation of a PodSecurityPolicy
object. Since PSP was removed in k8s 1.25, we need to run the tests for
Skipper on k8s 1.24.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-06 18:34:33 +05:30
Sanskar Jaiswal 225e968288
ci: update kubernetes to v1.27
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-06 16:37:27 +05:30
Sanskar Jaiswal f0ffb67cff
update kubernetes to v1.27
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-09-06 15:49:25 +05:30
Sanskar Jaiswal dc8fe81c91
Merge pull request #1502 from sonbui00/doc-incorrect-name
chore: fix incorrect canary name on document
2023-09-06 00:33:52 +05:30
Son Bui f29c74b957 chore: fix incorrect canary name on document
Signed-off-by: Son Bui <sonbv00@gmail.com>
2023-09-01 13:08:24 +08:00
Stefan Prodan dfc0c96824
Merge pull request #1499 from fluxcd/fix-cosign-goreleaser
ci: Fix goreleaser signatures
2023-08-29 17:08:10 +03:00
Stefan Prodan 1093c64d5a
ci: Fix goreleaser signatures
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-08-29 17:06:15 +03:00
Stefan Prodan 993385036c
Merge pull request #1498 from fluxcd/fix-flux-push
ci: Fix flux push artifact
2023-08-29 16:10:52 +03:00
Stefan Prodan 0c8b5048dd
ci: Fix flux push artifact
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-08-29 16:09:46 +03:00
Stefan Prodan c90da790c5
Merge pull request #1497 from fluxcd/fix-cosign
ci: Fix cosign signatures
2023-08-29 15:27:35 +03:00
Stefan Prodan cef1bb8e67
ci: Fix cosign signatures
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-08-29 15:19:42 +03:00
Stefan Prodan 34b544bb47
Merge pull request #1496 from fluxcd/fix-release
ci: fix release workflow
2023-08-29 14:09:28 +03:00
Sanskar Jaiswal 2fd45cd0d8
ci: fix release workflow
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-29 16:09:24 +05:30
Sanskar Jaiswal aec43794d8
Merge pull request #1493 from fluxcd/release-v1.33.0
Release v1.33.0
2023-08-29 15:42:31 +05:30
Sanskar Jaiswal d35ecbeba8
Release v1.33.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-29 14:32:06 +05:30
Stefan Prodan e1b9d64379
Merge pull request #1495 from fluxcd/slsa3
ci: Generate SLSA provenance for release artifacts
2023-08-29 12:01:52 +03:00
Stefan Prodan 137d31ac79
ci: Generate SLSA provenance for release artifacts
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-08-29 11:34:15 +03:00
Sanskar Jaiswal b30855480f
Merge pull request #1494 from fluxcd/kubectl-docs
add docs for kubectl in loadtester
2023-08-29 14:04:08 +05:30
Sanskar Jaiswal cc08d31622
add docs for kubectl in loadtester
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-29 13:50:50 +05:30
Sanskar Jaiswal d021e25757
Merge pull request #1446 from miguelvalerio/fix-traefik-request-duration
Fix Traefik request-duration metric
2023-08-28 20:58:48 +05:30
Sanskar Jaiswal 7a0e95b498
update traefik version to 24.0.0 in e2e tests
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-28 20:21:17 +05:30
miguelvalerio c812dcffc1
fix traefik request-duration metric
Signed-off-by: miguelvalerio <miguelgomes.valerio@gmail.com>
2023-08-28 20:14:53 +05:30
Sanskar Jaiswal e1c4257b68
Merge pull request #1483 from tyagian/helm_chart_fix
Helm: Allow custom labels for servicemonitor
2023-08-28 20:12:35 +05:30
Anuj Tyagi a31a46f375
Add labels and namespace to servicemonitor chart
Signed-off-by: Anuj Tyagi <tyagi.an@husky.neu.edu>
2023-08-28 19:37:52 +05:30
Stefan Prodan a41a7bb6a4
Merge pull request #1442 from hsolberg/feature/make-honorLabels-configurable
Helm: Add option to configure honorLabels for serviceMonitor
2023-08-28 16:24:46 +03:00
Stefan Prodan 2992a99bbc
Merge pull request #1443 from RobinNil/fix-typos
fix: typo on "Parase", should be "Parse".
2023-08-28 16:23:13 +03:00
Sanskar Jaiswal eb302fe16e
Merge pull request #1489 from sonbui00/fix-1104
Update Istio Gateway reference format
2023-08-28 18:51:09 +05:30
Son Bui 2e4fe73d34 fix: Incorrect format for istio gateways #1104
Signed-off-by: Son Bui <sonbv00@gmail.com>
2023-08-28 20:46:30 +08:00
Stefan Prodan aad4f54afa
Merge pull request #1492 from sonbui00/upgrade-istio
e2e: Update Istio to v1.18
2023-08-28 15:25:29 +03:00
Sanskar Jaiswal 311cb5f2fd
Merge pull request #1490 from fluxcd/release-ld-v0.29.0
Release loadtester v0.29.0
2023-08-28 17:49:26 +05:30
Son Bui 388c0ef344 upgrade istio version
Signed-off-by: Son Bui <sonbv00@gmail.com>
2023-08-28 19:29:46 +08:00
Sanskar Jaiswal 3feaabea76
Release loadtester v0.29.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-28 16:08:44 +05:30
Sanskar Jaiswal ddc337b01a
Update Helm, grpc-health-probe and ghz in loadtester
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-28 16:08:44 +05:30
Sanskar Jaiswal 29c94d5f5e
Update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-28 16:08:44 +05:30
Sanskar Jaiswal 7af4498fd4
Merge pull request #1491 from fluxcd/cosign
ci: update cosign signing
2023-08-28 15:43:02 +05:30
Sanskar Jaiswal 7cce4fd6d8
ci: update cosign signing
Bypass prompt confirmation and switch to signing digests instead of
tags.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-08-28 14:51:47 +05:30
Sanskar Jaiswal 5a809d7b3a
Merge pull request #1485 from mumubin/support-kubectl
feat: loadtester support kubectl type
2023-08-28 13:27:08 +05:30
bin.hu 1802c4b7be feat: kubectl support kustomize remote git repo
Signed-off-by: bin.hu <bin.hu@ringcentral.com>
2023-08-24 09:56:37 +08:00
bin.hu 084daaf3f9 feat: loadtester support kubectl type
Signed-off-by: bin.hu <bin.hu@ringcentral.com>
2023-08-22 10:36:24 +08:00
Sanskar Jaiswal 7fc007a123
Merge pull request #1452 from fluxcd/dependabot/github_actions/sigstore/cosign-installer-3.1.1
build(deps): bump sigstore/cosign-installer from 2.8.1 to 3.1.1
2023-08-22 00:12:28 +05:30
dependabot[bot] 6359d5ea19
build(deps): bump sigstore/cosign-installer from 2.8.1 to 3.1.1
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 2.8.1 to 3.1.1.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v2.8.1...v3.1.1)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-21 17:27:50 +00:00
Sanskar Jaiswal 1de46a0bd3
Merge pull request #1477 from xtineskim/main
podinfo: Update hpa version from autoscaling/v2beta2 to autoscaling/v2
2023-08-21 22:56:47 +05:30
Christine Kim a8769f8cf5 Kustomize update hpa version
Signed-off-by: Christine Kim <xtineskim@gmail.com>
2023-08-21 12:12:03 -04:00
Sanskar Jaiswal 67bc27f515
Merge pull request #1466 from arukiidou/patch-1
Update doc.go
2023-08-21 14:45:56 +05:30
arukiidou dfb5d0847a
Update gatewayapi v1beta1 doc.go
Signed-off-by: arukiidou <arukiidou@yahoo.co.jp>
2023-08-21 13:49:34 +05:30
Sanskar Jaiswal ee37069385
Merge pull request #1456 from kellyfj/fix-faq
Fix FAQ templating format and change reference of $workload to $target.
2023-08-21 12:51:44 +05:30
Frank Kelly e5c0ffb693
Fix FAQ templating format and change workload to target.
Signed-off-by: Frank Kelly <kellyfj@gmail.com>
2023-08-21 11:36:55 +05:30
Sanskar Jaiswal ba7fedf762
Merge pull request #1451 from miguelvalerio/fix-initialization-downtime
Fix initial deployment downtime
2023-08-18 16:33:21 +05:30
miguelvalerio b25e12d45d
fix initial deployment downtime
Signed-off-by: miguelvalerio <miguelgomes.valerio@gmail.com>
2023-08-18 12:40:56 +05:30
Stefan Prodan 2944581a70
Merge pull request #1470 from ta924/main
Avoid running traffic increase hooks when waiting for promotion or promoting
2023-08-14 18:30:22 +03:00
Stefan Prodan 45038cbf9f
Merge pull request #1476 from bdols/pdb-versions
Helm: Use PodDisruptionBudget API policy/v1 if available
2023-08-14 18:29:36 +03:00
Brian Dols 0bdffc9e10 use PodDisruptionBudget API policy/v1 if available
Signed-off-by: Brian Dols <brian.dols@inky.com>
2023-08-10 23:43:34 -05:00
ta924@yahoo.com ca6867a6b1 fix trafficIncrease calls when using confirmPromotion
Signed-off-by: ta924@yahoo.com <ta924@yahoo.com>
2023-08-03 11:16:25 -05:00
Stefan Prodan eee3607ab7
Merge pull request #1461 from fluxcd/dependabot/github_actions/helm/kind-action-1.8.0
build(deps): bump helm/kind-action from 1.7.0 to 1.8.0
2023-07-17 18:25:18 +03:00
dependabot[bot] 17075e9006
build(deps): bump helm/kind-action from 1.7.0 to 1.8.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.7.0 to 1.8.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.7.0...v1.8.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-17 11:57:06 +00:00
Sanskar Jaiswal cf037c60ab
Merge pull request #1460 from fluxcd/release-v1.32.0
Release v1.32.0
2023-07-14 15:17:07 +05:30
Sanskar Jaiswal 27f354cc24
Release v1.32.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-07-14 12:57:25 +05:30
Sanskar Jaiswal 3d583068aa
Merge pull request #1459 from fluxcd/update-deps
Update Go dependencies
2023-07-13 21:28:32 +05:30
Sanskar Jaiswal 00fd1f93a9
Update Go dependencies
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-07-13 20:27:17 +05:30
Sanskar Jaiswal 440b88103a
Merge pull request #1455 from nickcaballero/feat/gloo-lb-slowstart
feat: Copy slowStartConfig for Gloo upstreams
2023-07-13 18:56:40 +05:30
Nick Caballero 8747d15417
feat: Copy slowStartConfig for Gloo upstreams
Signed-off-by: Nick Caballero <nick.caballero@offerup.com>
2023-07-13 18:25:02 +05:30
Stefan Prodan ac3140b8a2
Merge pull request #1458 from steve-fraser/main
Fixing namespace of HelmRepository in installation docs
2023-07-13 10:17:10 +03:00
Steven Fraser 310ca7eae8
Fixing namespace of HelmRepository
Signed-off-by: Steven Fraser <steve.fraser@weave.works>
2023-07-12 17:27:04 -04:00
Sanskar Jaiswal 6d6fc94855
Merge pull request #1439 from Codasquieves/main
Add support for istio LEAST_REQUEST destination rule load balancing
2023-07-07 20:45:23 +05:30
Ivan Lopes 7d29af4f41 Add support for istio LEAST_REQUEST destination rule load balancing algorithm
Signed-off-by: Ivan Lopes <ivanckp@gmail.com>
2023-07-06 10:11:00 -03:00
Sanskar Jaiswal 2b80c4756c
Merge pull request #1453 from adleong/alex/parent-port
Add gatewayRef port to Canary CRD
2023-07-05 12:20:36 +05:30
Alex Leong 879ea26cf6
Add gatewayRef port to Canary CRD
Signed-off-by: Alex Leong <alex@buoyant.io>
2023-07-04 13:22:24 -07:00
Robin 7b7cdcf7cd fix: typo on "Parase", should be "Parse".
title says it all.

Signed-off-by: Robin <330836+RobinNil@users.noreply.github.com>
2023-06-23 08:04:24 -04:00
Henrik Solberg 93a3aaa86f Helm: Add option to configure honorLabels for serviceMonitor.
Signed-off-by: Henrik Solberg <henrik.solberg@sparebank1.no>
2023-06-16 09:41:35 +02:00
Sanskar Jaiswal d960666b68
Merge pull request #1437 from pinkavaj/pi-fix-nil
Fix panic when annotation of ingress is empty
2023-05-30 16:09:25 +05:30
Jiří Pinkava d2564874ab Fix panic when annotation of ingress is empty
When the annotation of ingress is not set, the returned value is nil
(not empty map). Trying to assign to this map leads to panic.

Signed-off-by: Jiří Pinkava <j-pi@seznam.cz>
2023-05-29 11:27:28 +02:00
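A minimal Go illustration of the failure mode described in the commit above (the variable is a stand-in, not Flagger's actual code): reading annotations from an object that has none returns a nil map, and assigning into a nil map panics, so the map must be initialized first.

```go
package main

import "fmt"

func main() {
	// GetAnnotations() on an object without annotations returns nil, not an empty map.
	var annotations map[string]string // nil, as returned for an ingress with no annotations

	// annotations["kubernetes.io/ingress.class"] = "nginx" // would panic: assignment to entry in nil map

	// Fix: initialize the map before assigning to it.
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations["kubernetes.io/ingress.class"] = "nginx"
	fmt.Println(annotations)
}
```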
Stefan Prodan e71ce18b9d
Merge pull request #1436 from fluxcd/dependabot/github_actions/helm/kind-action-1.7.0
build(deps): bump helm/kind-action from 1.5.0 to 1.7.0
2023-05-22 16:33:21 +03:00
dependabot[bot] 59849f6c05
build(deps): bump helm/kind-action from 1.5.0 to 1.7.0
Bumps [helm/kind-action](https://github.com/helm/kind-action) from 1.5.0 to 1.7.0.
- [Release notes](https://github.com/helm/kind-action/releases)
- [Commits](https://github.com/helm/kind-action/compare/v1.5.0...v1.7.0)

---
updated-dependencies:
- dependency-name: helm/kind-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-22 11:59:49 +00:00
Sanskar Jaiswal 5a6e2d165b
Merge pull request #1431 from fluxcd/suspend
Add `spec.suspend` to allow suspending canary
2023-05-17 16:13:39 +05:30
Sanskar Jaiswal 6384bfb4a2
add spec.suspend to allow suspending canary
Suspend, if set to true, will suspend the Canary, disabling any canary runs
regardless of any changes to its target, services, etc. Note that if the
Canary is suspended during an analysis, it stays paused until the Canary is unsuspended.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-05-17 13:43:23 +05:30
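A hedged sketch of the semantics described above (not Flagger's actual reconciler; the struct and field names are illustrative stand-ins for `spec.suspend`): a suspended canary short-circuits the scheduler tick, so no new runs start and an in-progress analysis stays paused until the field is cleared.

```go
package main

import "fmt"

// Canary is an illustrative stand-in for the resource; only the field that
// matters for this sketch is shown.
type Canary struct {
	Name    string
	Suspend bool // corresponds to spec.suspend in the commit above
}

// advance sketches one scheduler tick: when Suspend is true the canary is
// skipped entirely, regardless of changes to its target or services.
func advance(c Canary) {
	if c.Suspend {
		fmt.Printf("canary %s is suspended, skipping analysis\n", c.Name)
		return
	}
	fmt.Printf("canary %s: running analysis step\n", c.Name)
}

func main() {
	advance(Canary{Name: "podinfo", Suspend: true})
	advance(Canary{Name: "podinfo", Suspend: false})
}
```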
Sanskar Jaiswal 4303f8edfd
Merge pull request #1429 from fluxcd/finalize-keda
Resume target scaler during finalization
2023-05-17 13:42:50 +05:30
Sanskar Jaiswal 25754a3f03
resume target scaler during finalization
Resume the target scaler during finalization so that the targetRef deployment
does not get stuck at 0 replicas after the canary has been deleted.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-05-17 12:26:06 +05:30
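A brief sketch of the cleanup order described above (hypothetical helper, not Flagger's code): before the canary is removed, the target's replica count is restored so the workload is not left parked at zero.

```go
package main

import "fmt"

// finalize sketches the rule from the commit above: if finalization finds the
// targetRef deployment scaled down to 0, resume it at its original replica
// count before the canary object is deleted.
func finalize(currentReplicas, originalReplicas int32) int32 {
	if currentReplicas == 0 {
		currentReplicas = originalReplicas // resume the scaler
	}
	return currentReplicas
}

func main() {
	fmt.Println("replicas after finalization:", finalize(0, 2))
}
```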
Stefan Prodan b71f0ce721
Merge pull request #1426 from fluxcd/update-alpine
Update Alpine to 3.18
2023-05-12 16:58:28 +03:00
Hidde Beydals 9055f96eeb
Update Alpine to 3.18
Signed-off-by: Hidde Beydals <hiddeco@users.noreply.github.com>
2023-05-12 15:34:38 +02:00
Stefan Prodan 073ac2206f
Merge pull request #1425 from eabykov/main
Helm: Add option to create service and serviceMonitor
2023-05-11 14:06:10 +03:00
Eugene Bykov 3814e8f19d Added servicemonitor
Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Update README.md

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Update values.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Update service.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Update values.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Update service.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Create servicemonitor.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Update service.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>

Create service.yaml

Signed-off-by: Eugene Bykov <44170496+eabykov@users.noreply.github.com>
2023-05-11 13:38:57 +03:00
Stefan Prodan 1e5d83ad21
Merge pull request #1424 from fluxcd/release-v1.31.0
Release v1.31.0
2023-05-10 18:42:30 +03:00
Sanskar Jaiswal 68f0920548
Release v1.31.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-05-10 19:10:58 +05:30
Stefan Prodan 0d25d84230
Merge pull request #1384 from to266/fix-docs
Fix the loadtester install with flux documentation
2023-05-10 11:30:31 +03:00
Sanskar Jaiswal 15a6f742e0
Merge pull request #1414 from fluxcd/confirm-rollout
Run `confirm-rollout` checks only before scaling up deployment
2023-05-08 23:39:54 +05:30
Sanskar Jaiswal 495a5b24f4
run confirm-rollout checks only before scaling up
Run the `confirm-rollout` webhook check only right before scaling up the
deployment, instead of running it on every loop.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-05-08 20:15:42 +05:30
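A hedged sketch of the ordering change described above (function names are hypothetical, not Flagger's scheduler): the `confirm-rollout` gate is consulted once, immediately before the canary deployment is scaled up, rather than on every reconcile loop.

```go
package main

import "fmt"

// confirmRollout stands in for the confirm-rollout webhook call; here it is a
// stub that always approves.
func confirmRollout(canary string) bool { return true }

// scaleUpIfConfirmed sketches the new ordering: the gate runs only right
// before the canary deployment is scaled up, not on every loop iteration.
func scaleUpIfConfirmed(canary string, needsScaleUp bool) {
	if !needsScaleUp {
		fmt.Println("no scale-up pending, webhook not called")
		return
	}
	if !confirmRollout(canary) {
		fmt.Println("confirm-rollout gate closed, waiting")
		return
	}
	fmt.Printf("scaling up canary deployment %s\n", canary)
}

func main() {
	scaleUpIfConfirmed("podinfo", false)
	scaleUpIfConfirmed("podinfo", true)
}
```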
Stefan Prodan 956daea9dd
Merge pull request #1423 from fluxcd/remove-osm-e2e
e2e: Remove OSM tests
2023-05-08 17:41:31 +03:00
Stefan Prodan 7b17286b96
e2e: Remove OSM tests
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-05-08 17:09:20 +03:00
Sanskar Jaiswal e535b01de1
Merge pull request #1417 from alpeb/linkerd-2.13
Add support for Linkerd 2.13
2023-05-08 18:01:16 +05:30
Alejandro Pedraza d151a1b5e4
Update linkerd tutorial
Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
2023-05-08 06:33:00 -05:00
Alejandro Pedraza 7242fa7d5c
Add support for Linkerd 2.13
In Linkerd 2.13 the Prometheus instance in
the `linkerd-viz` namespace is now locked behind an
[_AuthorizationPolicy_](https://github.com/linkerd/linkerd2/blob/stable-2.13.1/viz/charts/linkerd-viz/templates/prometheus-policy.yaml)
that only allows access to the `metrics-api` _ServiceAccount_.

This adds an extra _AuthorizationPolicy_ to authorize the `flagger`
_ServiceAccount_. It's created by default when using Kustomize, but
needs to be opted-in when using Helm via the new
`linkerdAuthPolicy.create` value. This also implies that the Flagger
workload has to be injected by the Linkerd proxy, and that can't happen
in the same `linkerd` namespace where the control plane lives, so we're
moving Flagger into the new injected `flagger-system` namespace.

The `namespace` field in `kustomization.yml` was resetting the namespace
for the new _AuthorizationPolicy_ resource, so that gets restored back
to `linkerd-viz` using a `patchesJson6902` entry. A better way to do
this would have been to use the `unsetOnly` field in a
_NamespaceTransformer_ (see kubernetes-sigs/kustomize#4708) but for
the life of me I couldn't make that work...

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>
2023-05-08 06:33:00 -05:00
Sanskar Jaiswal 9d4ebd9ddd
Merge pull request #1413 from fluxcd/release-v1.30.0
Release v1.30.0
2023-04-12 20:09:44 +05:30
Sanskar Jaiswal b2e713dbc1
Release v1.30.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-04-12 19:44:59 +05:30
Sanskar Jaiswal 8bcc7bf9af
Merge pull request #1412 from fluxcd/update-deps
Update dependencies
2023-04-12 19:28:06 +05:30
Sanskar Jaiswal 3078f96830
update dependencies
* cloud.google.com/go/monitoring => v1.13.0
* github.com/Masterminds/semver/v3 => v3.2.1
* github.com/aws/aws-sdk-go => v1.44.241
* github.com/googleapis/gax-go/v2 => v2.8.0
* github.com/influxdata/influxdb-client-go/v2 => v2.12.3
* google.golang.org/api => v0.117.0
* google.golang.org/genproto => v0.0.0-20230410155749-daa745c078e1
* google.golang.org/grpc => v1.54.0
* google.golang.org/protobuf => v1.30.0
* k8s.io/klog/v2 => v2.90.1

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-04-12 18:57:14 +05:30
Sanskar Jaiswal 8708e35287
Merge pull request #1411 from clux/main
2023-04-12 18:21:46 +05:30
clux a8b96f053d Allow configuring deployment annotations
Signed-off-by: clux <sszynrae@gmail.com>
2023-04-12 13:27:31 +01:00
Sanskar Jaiswal a487357bd5
Merge pull request #1392 from jonnylangefeld/jlf/update-apex-labels-annotations
Enable updates for labels and annotations
2023-04-12 16:47:28 +05:30
jonny.langefeld e8aba087ac
Enable updates for labels and annotations
Fix #1386

Signed-off-by: jonny.langefeld <jonnylangefeld@gmail.com>
Signed-off-by: Jonny Langefeld <jonnylangefeld@gmail.com>
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-04-12 16:04:28 +05:30
Sanskar Jaiswal 5b7a679944
Merge pull request #1408 from fluxcd/helm-drift
Disable Flux helm drift detection for managed resources
2023-04-10 18:26:20 +05:30
Sanskar Jaiswal 8229852585
disable flux helm drift detection for managed resources
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-04-10 17:28:39 +05:30
Sanskar Jaiswal f1def19f25
Merge pull request #1405 from ta924/main
avoid copying canary labels to primary on promotion
2023-04-10 17:12:03 +05:30
ta924@yahoo.com 44363d5d99 address issue with all canary labels copied to primary on promote
address issue with all canary labels copied to primary on promote

Signed-off-by: ta924@yahoo.com <ta924@yahoo.com>

address review comments
2023-04-09 22:08:09 -05:00
Stefan Prodan f3f62667bf
Merge pull request #1385 from fluxcd/dependabot/github_actions/actions/cache-3.3.1
build(deps): bump actions/cache from 3.2.5 to 3.3.1
2023-04-07 14:05:13 +03:00
Sanskar Jaiswal 3d8615735b
Merge pull request #1394 from fluxcd/dependabot/github_actions/actions/setup-go-4
build(deps): bump actions/setup-go from 3 to 4
2023-04-07 15:57:07 +05:30
dependabot[bot] d1b6b36bcd
build(deps): bump actions/setup-go from 3 to 4
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-07 10:00:46 +00:00
Sanskar Jaiswal e4755a4567
Merge pull request #1393 from gdasson/main
helm: Added the option to supply additional volumes
2023-04-06 16:55:49 +05:30
Gaurav Dasson 2ced721cf1
Added the option to supply additional volumes to the flagger pod.

Signed-off-by: Gaurav Dasson <gaurav.dasson@gmail.com>
2023-04-06 16:01:38 +05:30
Stefan Prodan cf267d0bbd
Merge pull request #1402 from johnharris85/update-kuma
update Kuma version and docs
2023-04-06 13:29:10 +03:00
John Harris 49d59f3b45
update Kuma version and docs
Signed-off-by: John Harris <john@johnharris.io>
2023-04-06 14:27:19 +05:30
Sanskar Jaiswal 699ea2b8aa
Merge pull request #1406 from aryan9600/bump-k8s
ci: bump k8s to 1.24 and kind to 1.18
2023-04-06 13:23:29 +05:30
Sanskar Jaiswal 064d867510
ci: bump k8s to 1.24 and kind to 1.18
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-04-06 12:35:09 +05:30
Stefan Prodan 5f63c4ae63
Merge pull request #1398 from kwongtn/patch-1
Update flagger-install-with-flux.md
2023-03-25 10:11:37 +02:00
KwongTN 4ebb38743d
Update flagger-install-with-flux.md
Signed-off-by: KwongTN <5886584+kwongtn@users.noreply.github.com>
2023-03-25 00:20:17 +08:00
dependabot[bot] 01a7f3606c
build(deps): bump actions/cache from 3.2.5 to 3.3.1
Bumps [actions/cache](https://github.com/actions/cache) from 3.2.5 to 3.3.1.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.2.5...v3.3.1)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-13 12:03:11 +00:00
Tomas Ostasevicius 699c577fa6 Fix the install documentation so that it works
Signed-off-by: Tomas Ostasevicius <t.ostasevicius@gmail.com>
2023-03-08 10:01:37 +01:00
Sanskar Jaiswal 6879038a63
Merge pull request #1375 from fluxcd/release-v1.29.0
Release v1.29.0
2023-02-21 13:53:20 +05:30
Sanskar Jaiswal cc2f9456cf Release v1.29.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-02-21 12:22:14 +05:30
Sanskar Jaiswal 7994989b29
Merge pull request #1374 from aryan9600/update-deps
update dependencies
2023-02-20 22:40:03 +05:30
Sanskar Jaiswal 1206132e0c update dependencies
* github.com/aws/aws-sdk-go => v1.44.204
* github.com/influxdata/influxdb-client-go/v2 => v2.12.2
* google.golang.org/api => v0.110.0
* google.golang.org/genproto => v0.0.0-20230216225411-c8e22ba71e44
* google.golang.org/grpc => v1.53.0

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-02-20 21:27:02 +05:30
Sanskar Jaiswal 74cfbda40c
Merge pull request #1373 from fluxcd/dependabot/go_modules/golang.org/x/net-0.7.0
build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
2023-02-20 21:15:19 +05:30
dependabot[bot] 1266ff48d8
build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.4.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.4.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-20 13:35:22 +00:00
Sanskar Jaiswal b1315679b8
Merge pull request #1372 from fluxcd/dependabot/github_actions/fossa-contrib/fossa-action-2
build(deps): bump fossa-contrib/fossa-action from 1 to 2
2023-02-20 19:04:21 +05:30
Sanskar Jaiswal 859fb7e160
Merge pull request #1371 from thechristschn/helm-chart-allow-custom-affinities
Allow custom affinities for flagger deployment in helm chart
2023-02-20 18:52:44 +05:30
dependabot[bot] 32077636ff
build(deps): bump fossa-contrib/fossa-action from 1 to 2
Bumps [fossa-contrib/fossa-action](https://github.com/fossa-contrib/fossa-action) from 1 to 2.
- [Release notes](https://github.com/fossa-contrib/fossa-action/releases)
- [Changelog](https://github.com/fossa-contrib/fossa-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fossa-contrib/fossa-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: fossa-contrib/fossa-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-20 12:06:29 +00:00
Stefan Prodan e263d6a169
Merge pull request #1370 from thechristschn/add-namespaces-in-helm-chart
Add namespace to namespaced resources in helm chart
2023-02-20 12:39:34 +02:00
Christian Baumann 8d517799b5 Allow custom affinities in helm chart
Signed-off-by: Christian Baumann <thechristschn@gmail.com>
2023-02-18 23:45:46 +01:00
Christian Baumann a89cd6d3ba Add namespace to namespaced resources in helm chart
Signed-off-by: Christian Baumann <thechristschn@gmail.com>
2023-02-18 14:50:02 +01:00
Stefan Prodan 4c2de0c716
Merge pull request #1366 from fluxcd/dependabot/github_actions/actions/cache-3.2.5
build(deps): bump actions/cache from 3.2.4 to 3.2.5
2023-02-13 15:37:13 +02:00
dependabot[bot] 059a304a07
build(deps): bump actions/cache from 3.2.4 to 3.2.5
Bumps [actions/cache](https://github.com/actions/cache) from 3.2.4 to 3.2.5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.2.4...v3.2.5)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-13 12:07:09 +00:00
Sanskar Jaiswal 317d53a71f
Merge pull request #1364 from fluxcd/session-affinity-regex
use regex to match against headers in istio
2023-02-08 22:38:47 +05:30
Sanskar Jaiswal 202b6e7eb1 use regex to match against headers in istio
Use regex filtering to match against session affinity cookie headers
when using Istio instead of an exact match.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-02-08 21:41:19 +05:30
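To illustrate why a regex is needed for the change above: a browser sends all of its cookies in a single `Cookie` header, so an exact match on the session-affinity cookie fails as soon as any other cookie is present. A small Go sketch of the difference (the cookie name is made up):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The Cookie header carries every cookie, so exact matching on one cookie breaks.
	header := "session=abc123; flagger-cookie=canary; theme=dark"
	want := "flagger-cookie=canary"

	exact := header == want                               // false: other cookies are present
	re := regexp.MustCompile(`.*flagger-cookie=canary.*`) // regex match tolerates surrounding cookies
	fmt.Println("exact match:", exact, "regex match:", re.MatchString(header))
}
```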
Sanskar Jaiswal e59e3aedd4
Merge pull request #1355 from njohnstone2/metric_template_vars
Add support for custom variables in metric templates
2023-02-08 18:12:42 +05:30
Nelson Johnstone 2b45c2013c
metric variables documentation and e2e tests
Signed-off-by: Nelson Johnstone <93178586+njohnstone2@users.noreply.github.com>
2023-02-08 19:49:22 +10:00
Nelson Johnstone 6786668684
updated canary CRD and query rendering
Signed-off-by: Nelson Johnstone <93178586+njohnstone2@users.noreply.github.com>
2023-02-08 11:41:58 +10:00
Nelson Johnstone 27eb21ecc8
Support custom variables on metric templates
Signed-off-by: Nelson Johnstone <93178586+njohnstone2@users.noreply.github.com>
2023-02-08 11:41:52 +10:00
Stefan Prodan e7d8adecb4
Merge pull request #1362 from fluxcd/dependabot/github_actions/actions/cache-3.2.4
build(deps): bump actions/cache from 3.2.3 to 3.2.4
2023-02-06 17:19:24 +02:00
Stefan Prodan aa574d469e
Merge pull request #1361 from fluxcd/dependabot/github_actions/docker/build-push-action-4
build(deps): bump docker/build-push-action from 3 to 4
2023-02-06 17:18:58 +02:00
dependabot[bot] 5ba20c254a
build(deps): bump actions/cache from 3.2.3 to 3.2.4
Bumps [actions/cache](https://github.com/actions/cache) from 3.2.3 to 3.2.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.2.3...v3.2.4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-06 11:09:53 +00:00
dependabot[bot] bb2cf39393
build(deps): bump docker/build-push-action from 3 to 4
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3 to 4.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-02-06 11:09:45 +00:00
Sanskar Jaiswal 05d08d3ff1
Merge pull request #1359 from fluxcd/release-workflow-dispatch
modify release workflow to publish rc images
2023-02-05 18:10:27 +05:30
Sanskar Jaiswal 7ec3774172 modify release workflow to publish rc images
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2023-02-05 17:38:41 +05:30
Sanskar Jaiswal e9ffef29f6
Merge pull request #1346 from lloydchang/patch-2
docs(readme.md): add additional tutorial
2023-02-05 00:37:55 +05:30
Stefan Prodan 64035b4942
Merge pull request #1356 from fluxcd/docker-sbom
build: Enable SBOM and SLSA Provenance
2023-01-31 18:25:16 +02:00
Stefan Prodan 925cc37c8f
build: Enable SBOM and SLSA Provenance
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-01-31 15:11:28 +02:00
Stefan Prodan 1574e29376
Merge pull request #1354 from fluxcd/release-1.28.0
Release v1.28.0
2023-01-26 13:33:54 +02:00
Stefan Prodan 4a34158587
Release v1.28.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-01-26 12:35:51 +02:00
Stefan Prodan 534196adde
Merge pull request #1352 from fluxcd/kube-1.26.1
Update Kubernetes packages to v1.26.1
2023-01-26 11:54:08 +02:00
Stefan Prodan 57bf2ab7d1
Update Kubernetes packages to v1.26.1
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2023-01-25 13:47:38 +02:00
Sanskar Jaiswal c65d072249
Merge pull request #1338 from wwadge/add-prometheus-bearer-auth
Allow access to Prometheus in OpenShift via SA token
2023-01-18 19:32:49 +05:30
Wallace Wadge a50d7de86d Allow access to Prometheus in OpenShift via SA token
Fixes: https://github.com/fluxcd/flagger/issues/1064

Signed-off-by: Wallace Wadge <wwadge@gmail.com>
2023-01-18 08:56:53 +01:00
Stefan Prodan e365c21322
Merge pull request #1343 from relu/autoscaler-replicas-overrides
Support for overriding primary scaler replicas
2023-01-17 16:01:14 +02:00
Stefan Prodan 7ce679678f
Merge pull request #1340 from fluxcd/dependabot/github_actions/actions/cache-3.2.3
build(deps): bump actions/cache from 3.0.11 to 3.2.3
2023-01-17 15:59:42 +02:00
lloydchang 685c816a12
docs(readme.md): add additional tutorial
AWS App Mesh: Canary Deployment Using Flagger
https://www.eksworkshop.com/advanced/340_appmesh_flagger/

Signed-off-by: lloydchang <lloydchang@gmail.com>
2023-01-16 15:28:22 -08:00
Aurel Canciu 5d3ab056f0
Support for overriding primary scaler replicas
Adding support for overriding the primary scaler replica count via
.spec.autoscalerRef.primaryScalerReplicas, a feature which would enable
users to define a different scaling configuration for the primary.

This can be useful in the situation where the user does not want to
scale the canary workload to the exact same size as the primary,
especially when opting for a canary deployment pattern where only a
small portion of traffic is routed to the canary workload pods.

Signed-off-by: Aurel Canciu <aurelcanciu@gmail.com>
2023-01-16 18:47:14 +01:00
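A hedged sketch of the override described above (the struct shape is illustrative; only the field path `.spec.autoscalerRef.primaryScalerReplicas` comes from the commit): when the override is set, the primary's scaler gets its own replica bounds instead of inheriting the canary's.

```go
package main

import "fmt"

// PrimaryScalerReplicas is an illustrative shape for the override; the real
// CRD field is .spec.autoscalerRef.primaryScalerReplicas.
type PrimaryScalerReplicas struct {
	MinReplicas *int32
	MaxReplicas *int32
}

// primaryBounds returns the replica bounds for the primary scaler: the
// canary's bounds by default, overridden where the user set explicit values.
func primaryBounds(canaryMin, canaryMax int32, override *PrimaryScalerReplicas) (int32, int32) {
	lo, hi := canaryMin, canaryMax
	if override != nil {
		if override.MinReplicas != nil {
			lo = *override.MinReplicas
		}
		if override.MaxReplicas != nil {
			hi = *override.MaxReplicas
		}
	}
	return lo, hi
}

func main() {
	three, ten := int32(3), int32(10)
	lo, hi := primaryBounds(1, 4, &PrimaryScalerReplicas{MinReplicas: &three, MaxReplicas: &ten})
	fmt.Println(lo, hi) // 3 10: the primary scales independently of the canary bounds
}
```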
dependabot[bot] 2587a3d3f1
build(deps): bump actions/cache from 3.0.11 to 3.2.3
Bumps [actions/cache](https://github.com/actions/cache) from 3.0.11 to 3.2.3.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3.0.11...v3.2.3)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-09 11:09:47 +00:00
Stefan Prodan 58270dd4b9
Merge pull request #1332 from fluxcd/dependabot/github_actions/goreleaser/goreleaser-action-4
Bump goreleaser/goreleaser-action from 3 to 4
2022-12-19 14:39:12 +02:00
dependabot[bot] 86081708a4
Bump goreleaser/goreleaser-action from 3 to 4
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 3 to 4.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-12-19 11:17:57 +00:00
Stefan Prodan 686e8a3e8b
Merge pull request #1331 from fluxcd/release-v1.27.0
Release v1.27.0
2022-12-16 11:32:16 +02:00
Sanskar Jaiswal 0aecddb00e Release v1.27.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-12-16 14:37:38 +05:30
Stefan Prodan 26518cecbf
Merge pull request #1328 from fluxcd/loadtester-v0.28.0
Release loadtester v0.28.0
2022-12-09 11:14:50 +02:00
Stefan Prodan 9d1db87592
Release loadtester 0.28.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-12-08 17:43:47 +02:00
Stefan Prodan e352010bfd
Update Helm to 3.10.2
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-12-08 17:42:28 +02:00
Stefan Prodan 58267752b1
Update dependencies
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-12-08 17:41:27 +02:00
Sanskar Jaiswal 2dd48c6e79
Merge pull request #1281 from Gallardot/apisix
[apisix] Implement router interface and observer interface
2022-12-07 18:27:28 +05:30
Gallardot 6c29c21184 add apisix docs
Signed-off-by: Gallardot <tttick@163.com>
2022-12-07 11:57:21 +08:00
Gallardot 85fe251991 create canary apisix object only with the related http route
Signed-off-by: Gallardot <tttick@163.com>
2022-12-07 11:56:57 +08:00
Gallardot 69861e0c8a chore: add kustomize. fix: e2e test
Signed-off-by: Gallardot <tttick@163.com>
2022-12-06 17:56:00 +05:30
Gallardot e440be17ae add e2e tests and helper functions for router
Signed-off-by: Gallardot <tttick@163.com>
2022-12-06 17:54:03 +05:30
Gallardot ce52408bbc improve apisix router and metric observer
Signed-off-by: Gallardot <tttick@163.com>
2022-12-06 17:52:38 +05:30
Gallardot badf7b9a4f chore: add UT, add DIFF
Signed-off-by: Gallardot <tttick@163.com>
2022-12-06 14:46:43 +05:30
Gallardot 3e9fe97ba3 [apisix] Implement router interface and observer interface
Signed-off-by: Gallardot <tttick@163.com>
2022-12-06 14:46:42 +05:30
Stefan Prodan ec7066b31b
Merge pull request #1326 from fluxcd/dependabot/github_actions/stefanprodan/helm-gh-pages-1.7.0
Bump stefanprodan/helm-gh-pages from 1.6.0 to 1.7.0
2022-12-05 12:36:22 +02:00
dependabot[bot] ec74dc5a33
Bump stefanprodan/helm-gh-pages from 1.6.0 to 1.7.0
Bumps [stefanprodan/helm-gh-pages](https://github.com/stefanprodan/helm-gh-pages) from 1.6.0 to 1.7.0.
- [Release notes](https://github.com/stefanprodan/helm-gh-pages/releases)
- [Commits](https://github.com/stefanprodan/helm-gh-pages/compare/v1.6.0...v1.7.0)

---
updated-dependencies:
- dependency-name: stefanprodan/helm-gh-pages
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-28 11:21:21 +00:00
Stefan Prodan cbdc2c5a7c
Merge pull request #1324 from fluxcd/update-release-docs
Update release docs
2022-11-23 17:14:13 +02:00
Stefan Prodan 228fbeeda4
Update release docs
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-11-23 16:40:46 +02:00
Stefan Prodan b0d9825afa
Merge pull request #1323 from fluxcd/release-1.26.0
Release v1.26.0
2022-11-23 16:18:58 +02:00
Stefan Prodan 7509264d73
Release v1.26.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-11-23 15:45:54 +02:00
Stefan Prodan f08725d661
Merge pull request #1322 from fluxcd/update-deps
Update dependencies
2022-11-23 14:54:39 +02:00
Stefan Prodan f2a9a8d645
Update dependencies
- cloud.google.com/go/monitoring v1.9.0
- github.com/aws/aws-sdk-go v1.44.144
- github.com/googleapis/gax-go/v2 v2.7.0
- github.com/influxdata/influxdb-client-go/v2 v2.12.0
- github.com/prometheus/client_golang v1.14.0
- github.com/stretchr/testify v1.8.1
- k8s.io/* v0.25.4

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-11-23 14:30:46 +02:00
Sanskar Jaiswal e799030ae3
Merge pull request #1321 from aryan9600/gw-api-docs
update gateway api docs to v1beta1
2022-11-23 17:05:14 +05:30
Sanskar Jaiswal ec0657f436 update gateway api docs to v1beta1
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-11-23 15:30:42 +05:30
Stefan Prodan fce46e26d4
Merge pull request #1319 from cgrotz/main
Updated Gateway API from v1alpha2 to v1beta1
2022-11-23 10:50:19 +02:00
Christoph Grotz e015a409fe Added support for Gateway API v1beta1
Signed-off-by: Christoph Grotz <grotz@google.com>
2022-11-22 15:51:16 +01:00
Sanskar Jaiswal 82ff90ce26
Merge pull request #1316 from kingdonb/fix-linkerd-install-crds
Add linkerd install --crds to Linkerd tutorial
2022-11-17 10:32:24 +05:30
Kingdon Barrett 63edc627ad
Fix #1315
Signed-off-by: Kingdon Barrett <kingdon@weave.works>
2022-11-16 21:14:30 -05:00
Sanskar Jaiswal c2b4287ce1
Merge pull request #1313 from aryan9600/release-v1.25.0
Release v1.25.0
2022-11-16 15:28:51 +05:30
Sanskar Jaiswal 5b61f15f95 Release v1.25.0
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-11-16 15:06:33 +05:30
Sanskar Jaiswal 9c815f2252
Merge pull request #1280 from aryan9600/sticky-istio
Add support for session affinity during weighted routing with Istio
2022-11-15 18:21:24 +05:30
Sanskar Jaiswal d16c9696c3 add docs for istio sticky canary releases
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-11-15 17:55:52 +05:30
Sanskar Jaiswal 14ccda5506 add unit tests for session affinity in istio router
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-11-10 13:17:16 +05:30
Sanskar Jaiswal a496b99d6e add session affinity support for weighted routing with istio
Add `.spec.analysis.sessionAffinity` to configure session affinity for
weighted routing. Add support for session affinity in the Istio router,
using the `Set-Cookie` and `Cookie` headers.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-11-10 13:17:16 +05:30
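To illustrate the mechanism named above in plain Go (stdlib `net/http` only; the cookie name and max age are made up, and this is not the Istio router code): the first response served by the canary sets a cookie, and subsequent requests presenting that cookie stay pinned to the canary for the duration of the analysis.

```go
package main

import (
	"fmt"
	"net/http"
)

const affinityCookie = "flagger-cookie" // made-up name for the sketch

// routeToCanary pins a request to the canary when it already carries the
// affinity cookie; otherwise routing falls back to the weighted split.
func routeToCanary(r *http.Request) bool {
	c, err := r.Cookie(affinityCookie)
	return err == nil && c.Value == "canary"
}

// markCanaryResponse sets the affinity cookie on a response served by the
// canary, so the same client keeps hitting the canary afterwards.
func markCanaryResponse(w http.ResponseWriter) {
	http.SetCookie(w, &http.Cookie{Name: affinityCookie, Value: "canary", MaxAge: 300})
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "http://podinfo.test/", nil)
	fmt.Println(routeToCanary(req)) // false: no cookie yet
	req.AddCookie(&http.Cookie{Name: affinityCookie, Value: "canary"})
	fmt.Println(routeToCanary(req)) // true: sticky to the canary
}
```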
Stefan Prodan 882286dce7
Merge pull request #1306 from ashokhein/fix-cluster-name
Fix cluster name inclusion in alerts metadata
2022-11-09 12:52:11 +02:00
ashokhein 8aa9ca92e3 Fixing cluster name
Signed-off-by: ashokhein <ashokhein@gmail.com>
2022-10-28 11:56:06 +00:00
Stefan Prodan 52dd8f8c14
Merge pull request #1302 from mdolinin/fix/faq-doc-zero-downtime
fix(faq): Update FAQ about zero downtime with correct values
2022-10-27 10:11:47 +03:00
mdolinin 4d28b9074b
fix(faq): Update FAQ about zero downtime with correct values
Signed-off-by: mdolinin <dmo.builder@gmail.com>
2022-10-26 20:56:30 -04:00
Stefan Prodan 381c19b952
Merge pull request #1301 from fluxcd/release-v1.24.1
Release v1.24.1
2022-10-26 17:09:26 +03:00
Stefan Prodan 50f9255af2
Release v1.24.1
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-26 16:44:26 +03:00
Stefan Prodan a7df3457ad
Merge pull request #1300 from mdolinin/fix/gloo-non-default-service-name-corretly-get-routes
fix(gloo): Use correct route table name in case service name was overwritten
2022-10-26 15:32:32 +03:00
mdolinin 647f624554
fix(gloo): Update tests to not check gateway deployment. Was removed from >1.12.x
Signed-off-by: mdolinin <dmo.builder@gmail.com>
2022-10-25 11:18:26 -04:00
mdolinin 3d3e051f03
fix(gloo): Update Gloo to the latest stable version
Signed-off-by: mdolinin <dmo.builder@gmail.com>
2022-10-25 07:52:15 -04:00
mdolinin 4c0b2beb63
fix(gloo): Use correct route table name in case service name was overwritten
Signed-off-by: mdolinin <dmo.builder@gmail.com>
2022-10-24 21:38:08 -04:00
Stefan Prodan ec44f64465
Merge pull request #1298 from fluxcd/dependabot/github_actions/goreleaser/goreleaser-action-3
Bump goreleaser/goreleaser-action from 2 to 3
2022-10-24 17:25:52 +03:00
Stefan Prodan 19d4e521a3
Merge pull request #1297 from fluxcd/dependabot/github_actions/docker/login-action-2
Bump docker/login-action from 1 to 2
2022-10-24 17:25:34 +03:00
Stefan Prodan 85a3b7c388
Merge pull request #1296 from fluxcd/dependabot/github_actions/github/codeql-action-2
Bump github/codeql-action from 1 to 2
2022-10-24 17:25:15 +03:00
Stefan Prodan 26ec719c67
Merge pull request #1295 from fluxcd/dependabot/github_actions/actions/checkout-3
Bump actions/checkout from 2 to 3
2022-10-24 17:24:57 +03:00
Stefan Prodan 66364bb2c9
Merge pull request #1299 from fluxcd/dependabot/github_actions/docker/build-push-action-3
Bump docker/build-push-action from 2 to 3
2022-10-24 17:24:34 +03:00
dependabot[bot] f9f8d7e71e
Bump docker/build-push-action from 2 to 3
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-24 11:36:21 +00:00
dependabot[bot] bdbd1fb1f0
Bump goreleaser/goreleaser-action from 2 to 3
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 2 to 3.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-24 11:36:16 +00:00
dependabot[bot] b3112a53f1
Bump docker/login-action from 1 to 2
Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-24 11:36:13 +00:00
dependabot[bot] f1f4e68673
Bump github/codeql-action from 1 to 2
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 1 to 2.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-24 11:36:10 +00:00
dependabot[bot] 9b56445621
Bump actions/checkout from 2 to 3
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-24 11:36:04 +00:00
Stefan Prodan f5f3d92d3d
ci: Pin Helm and Cosign action version
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-23 15:38:36 +03:00
Stefan Prodan 4d074799ca
ci: Use Helm action from fluxcd org
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-23 13:36:32 +03:00
Stefan Prodan d38a2406a7
Release v1.24.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-23 13:07:10 +03:00
Stefan Prodan 25ccfca835
Merge pull request #1294 from fluxcd/install-with-flux-oci
docs: Add guide on how to install Flagger with Flux OCI
2022-10-23 12:43:40 +03:00
Stefan Prodan 487b6566ee
docs: Add guide on how to install Flagger with Flux OCI
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-23 12:15:57 +03:00
Stefan Prodan 14caeb12ad
Merge pull request #1293 from fluxcd/push-flux-oci
ci: Publish signed Helm charts and manifests to GHCR
2022-10-22 15:45:16 +03:00
Stefan Prodan cf8fcd0539
ci: Publish signed Helm charts and manifests to GHCR
- Push Flagger Helm chart to `ghcr.io/fluxcd/charts/flagger`
- Sign Flagger Helm chart with Cosign and GitHub OIDC
- Push install manifests and overlays from `./kustomize` with Flux CLI to `ghcr.io/fluxcd/flagger-manifests`
- Sign Flagger manifests with Cosign and GitHub OIDC

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-22 14:46:20 +03:00
Stefan Prodan d8387a351e
Merge pull request #1292 from fluxcd/cosign-keyless
ci: Sign release and containers with Cosign and GitHub OIDC
2022-10-22 14:33:59 +03:00
Stefan Prodan 300cd24493
ci: Sign release and containers with Cosign and GitHub OIDC
- Replace the Cosign static key with GitHub Actions OIDC when signing the flagger container image
- Sign the GitHub release assets checksums with Cosign keyless
- Sign the load-tester container image with Cosign keyless

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-21 16:39:29 +03:00
Stefan Prodan fb66d24f89
Merge pull request #1288 from fluxcd/dependabot/github_actions/stefanprodan/helm-gh-pages-1.6.0
Bump stefanprodan/helm-gh-pages from 1.3.0 to 1.6.0
2022-10-21 12:33:36 +03:00
Stefan Prodan f1fc8c067e
Merge pull request #1287 from fluxcd/dependabot/github_actions/actions/cache-3.0.11
Bump actions/cache from 1 to 3.0.11
2022-10-21 12:33:16 +03:00
Stefan Prodan da1ee05c0a
Merge pull request #1290 from fluxcd/dependabot/github_actions/docker/metadata-action-4
Bump docker/metadata-action from 3 to 4
2022-10-21 12:32:53 +03:00
Stefan Prodan 57099ecd43
Merge pull request #1291 from fluxcd/dependabot/github_actions/codecov/codecov-action-3
Bump codecov/codecov-action from 1 to 3
2022-10-21 12:32:30 +03:00
Stefan Prodan 8c5b41bbe6
Merge pull request #1289 from fluxcd/dependabot/github_actions/actions/setup-go-3
Bump actions/setup-go from 2 to 3
2022-10-21 12:32:07 +03:00
dependabot[bot] 7bc716508c
Bump codecov/codecov-action from 1 to 3
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-21 09:30:38 +00:00
dependabot[bot] d82d9765e1
Bump docker/metadata-action from 3 to 4
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 3 to 4.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-21 09:30:33 +00:00
dependabot[bot] 74e570c198
Bump actions/setup-go from 2 to 3
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2 to 3.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-21 09:30:27 +00:00
dependabot[bot] 6adf51083e
Bump stefanprodan/helm-gh-pages from 1.3.0 to 1.6.0
Bumps [stefanprodan/helm-gh-pages](https://github.com/stefanprodan/helm-gh-pages) from 1.3.0 to 1.6.0.
- [Release notes](https://github.com/stefanprodan/helm-gh-pages/releases)
- [Commits](https://github.com/stefanprodan/helm-gh-pages/compare/v1.3.0...v1.6.0)

---
updated-dependencies:
- dependency-name: stefanprodan/helm-gh-pages
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-21 09:30:22 +00:00
dependabot[bot] a5be82a7d3
Bump actions/cache from 1 to 3.0.11
Bumps [actions/cache](https://github.com/actions/cache) from 1 to 3.0.11.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v1...v3.0.11)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-10-21 09:30:16 +00:00
Stefan Prodan 83693668ed
Merge pull request #1286 from fluxcd/ci-perms
ci: Adjust GitHub workflow permissions
2022-10-21 12:29:33 +03:00
Stefan Prodan c2929694a6
ci: Enable Dependabot for GH Actions
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-21 12:11:33 +03:00
Stefan Prodan 82db9ff213
ci: Adjust GitHub workflow permissions
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-21 11:55:37 +03:00
Stefan Prodan 5e853bb589
Merge pull request #1285 from fluxcd/governance-doc
Add link to Flux governance document
2022-10-21 11:26:46 +03:00
Stefan Prodan 9e1fad3947
Add link to Flux governance document
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-21 11:24:09 +03:00
Stefan Prodan a4f5a983ba
Merge pull request #1284 from fluxcd/release-1.23.0
Release v1.23.0
2022-10-20 14:05:25 +03:00
Stefan Prodan 08d7520458
Release v1.23.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-20 12:59:54 +03:00
Stefan Prodan 283de16660
Merge pull request #1283 from fluxcd/kubernetes-1.25.3
Update Kubernetes packages to v1.25.3
2022-10-20 12:46:36 +03:00
Stefan Prodan 5e47ae287b
Update Kubernetes packages to v1.25.3
Update dependencies and fix CVE-2022-32149 of `golang.org/x/text`

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-20 12:20:32 +03:00
Stefan Prodan e7e155048d
Merge pull request #1265 from ebar0n/patch-1
Use Helm to install loadtester in kubernetes-blue-green docs
2022-10-18 17:49:44 +03:00
Stefan Prodan 8197073cf0
Merge pull request #1270 from RicardoLorenzo/slack_bot_token_authentication
Slack bot token authentication
2022-10-18 17:48:00 +03:00
Stefan Prodan 310111bb8d
Merge pull request #1282 from fluxcd/contour-1.22
Bump Contour to v1.22 in e2e tests
2022-10-18 17:28:06 +03:00
Ricardo Lorenzo 3dd667f3b3 Slack bot token authentication
Signed-off-by: Ricardo Lorenzo <rlorenzo@payfone.com>
2022-10-18 14:56:08 +01:00
Stefan Prodan e06334cd12
Bump Contour to v1.22 in e2e tests
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-10-18 16:46:01 +03:00
Sanskar Jaiswal 8d8b99dc78
Merge pull request #1279 from aryan9600/go-1.19
update Go to 1.19
2022-10-12 14:03:12 +05:30
Sanskar Jaiswal 3418488902 update Go to 1.19
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-10-12 12:40:15 +05:30
Sanskar Jaiswal b96f6f0920
Merge pull request #1276 from aryan9600/fix-hostnames
gatewayapi: fix reconciliation of nil hostnames
2022-10-10 17:00:10 +05:30
Sanskar Jaiswal e593f2e258 gatewayapi: fix reconciliation of nil hostnames
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-10-10 16:29:21 +05:30
Stefan Prodan 7b6c37ea1f
Merge pull request #1275 from ashokhein/include-cluster-name
Include cluster name in all alerts
2022-10-06 15:31:56 +02:00
ashokhein 4dbeec02c8 Include cluster name in all alerts
Signed-off-by: ashokhein <ashokhein@gmail.com>
2022-10-05 12:16:57 +00:00
Sanskar Jaiswal 1b2df99799
Merge pull request #1267 from oistein/log-cmd-output-to-log
logCmdOutput to logger instead of stdout
2022-09-27 14:39:21 +05:30
Øistein Sletten Løvik 6d72050e81 logCmdOutput to logger instead of stdout
Signed-off-by: Øistein Sletten Løvik <oistein@oistein.org>
2022-09-26 13:52:59 +02:00
Edwar Baron b97a87a1b4
Update kubernetes-blue-green.md
Signed-off-by: Edwar Baron <edwar.baron@gmail.com>
2022-09-02 10:10:06 -05:00
Sanskar Jaiswal 89b0487376
Merge pull request #1264 from glindstedt/patch-1
Add `app.kubernetes.io/version` label to chart
2022-09-01 15:41:59 +05:30
Gustaf Lindstedt 0ae53e415c
Add `app.kubernetes.io/version` label to chart
Add `app.kubernetes.io/version` label as described in https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/

This is useful if you have many deployments in different clusters and want to be able to monitor what versions you have deployed using something like `kube_pod_labels` from kube-state-metrics.

Signed-off-by: Gustaf Lindstedt <gustaf.lindstedt@embark-studios.com>
2022-08-30 14:22:32 +02:00
Sanskar Jaiswal 915c200c7b
Merge pull request #1262 from fluxcd/release-v1.22.2
Release v1.22.2
2022-08-29 20:18:23 +05:30
Sanskar Jaiswal a4941bd764 Release v1.22.2
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-29 18:37:38 +05:30
Sanskar Jaiswal 5123cbae00
Merge pull request #1261 from fluxcd/release-ld-0.24.0
Release loadtester v0.24.0
2022-08-29 17:55:02 +05:30
Sanskar Jaiswal 135f96d507 Release loadtester v0.24.0
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-29 15:27:01 +05:30
Sanskar Jaiswal aa08ea9160
Merge pull request #1259 from aryan9600/update-deps
Update dependencies
2022-08-29 15:07:38 +05:30
Sanskar Jaiswal fb80eea144 update helm and grpc-health-probe for loadtester
Update Helm to v3.9.4
Update grpc-health-probe to v0.4.12

Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-26 22:35:08 +05:30
Sanskar Jaiswal bebcf1c7d4 update go.mod deps
Update Kubernetes packages to v0.25.0
Update github.com/emicklei/go-restful to fix CVE-2022-1996

Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-26 22:32:58 +05:30
Sanskar Jaiswal f39f0ef101
Merge pull request #1258 from aryan9600/knative-roadmap
docs: add knative support to roadmap
2022-08-26 12:51:54 +05:30
Sanskar Jaiswal f2f4c8397d docs: add knative support to roadmap
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-26 12:47:43 +05:30
Sanskar Jaiswal ae4613fa76
Merge pull request #1253 from andylibrian/scale-from-zero-use-hpa-minreplicas
If HPA is set, it uses HPA minReplicas when scaling up the canary
2022-08-24 13:51:22 +05:30
Andy Librian 8b1155123d
use min replicas set by autoscaler in ScaleFromZero if autoscaler is specified
Without this, the canary replicas are updated twice:
first to 1 replica and then, after a few seconds, to the HPA's minReplicas value.

In some cases, while the canary is still at 1 replica (before the HPA
controller scales it to minReplicas), it's considered ready: 1 of 1 (readyThreshold 100%),
and the canary weight is advanced, so the canary receives traffic with less capacity
than expected.

Co-Authored-By: Joshua Gibeon <joshuagibeon7719@gmail.com>
Co-authored-by: Sanskar Jaiswal <hey@aryan.lol>

Signed-off-by: Andy Librian <andylibrian@gmail.com>
2022-08-18 13:23:46 +07:00
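A hedged sketch of the scale-from-zero decision described above (field and helper names are illustrative, not Flagger's actual implementation): when an autoscaler is referenced, the canary is scaled straight to the HPA's minReplicas instead of 1, so readiness and weight progression see the expected capacity.

```go
package main

import "fmt"

// initialCanaryReplicas sketches the rule from the commit above: without an
// autoscaler the canary is scaled up to 1 replica; with an HPA it is scaled
// straight to the HPA's minReplicas, so it is never briefly "ready" at a
// lower capacity than the autoscaler would enforce.
func initialCanaryReplicas(hasAutoscaler bool, hpaMinReplicas *int32) int32 {
	if hasAutoscaler && hpaMinReplicas != nil && *hpaMinReplicas > 1 {
		return *hpaMinReplicas
	}
	return 1
}

func main() {
	minReplicas := int32(3)
	fmt.Println(initialCanaryReplicas(false, nil))          // 1: no autoscaler referenced
	fmt.Println(initialCanaryReplicas(true, &minReplicas))  // 3: HPA minReplicas wins
}
```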
Sanskar Jaiswal e65dfbb659
Merge pull request #1254 from aryan9600/verify-crds
Add target and script to keep crds in sync
2022-08-11 15:27:13 +05:30
Sanskar Jaiswal fe37bdd9c7 add target and script to keep crds in sync
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-11 14:23:32 +05:30
Stefan Prodan f449ee1878
Merge pull request #1246 from fluxcd/loadtester-0.23.0
Release loadtester v0.23.0
2022-08-01 16:31:44 +03:00
Stefan Prodan 47b6807471
Release loadtester v0.23.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-08-01 13:49:24 +03:00
Stefan Prodan f93708e90f
Merge pull request #1244 from aryan9600/release-v1.22.1
Release v1.22.1
2022-08-01 13:04:52 +03:00
Sanskar Jaiswal 5285b76746 Release v1.22.1
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-08-01 15:03:33 +05:30
Stefan Prodan 1a4d8b965a
Merge pull request #1243 from fluxcd/update-go-alpine
Update Go to 1.18 and Alpine to 3.16
2022-07-29 16:22:36 +03:00
Stefan Prodan 11209fe05d
Update Go to 1.18 and Alpine to 3.16
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-07-29 15:36:25 +03:00
Stefan Prodan 09c1eec8f3
Merge pull request #1233 from ImOwly/main
Update README
2022-07-29 15:20:54 +03:00
Stefan Prodan d3373447c3
Merge pull request #1239 from sympatheticmoose/patch-1
Clarify HPA API requirement
2022-07-29 15:19:57 +03:00
Stefan Prodan d4e54fe966
Merge pull request #1242 from aryan9600/fix-hpa-fallback
Fix fallback logic for HPAv2 to v2beta2
2022-07-29 15:19:06 +03:00
Sanskar Jaiswal a5c284cabb fix fallback logic for HPAv2 to v2beta2
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-07-29 16:33:30 +05:30
Sanskar Jaiswal 80bae41df4
Merge pull request #1241 from vidhartbhatia/fixKEDASO
Update KEDA ScaledObject API to include MetricType for Triggers
2022-07-29 16:33:06 +05:30
Sanskar Jaiswal f5c267144e fix KEDA version typo in tutorial
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-07-29 16:03:10 +05:30
Vidhart Bhatia 25a33fe58f Update ScaledObject API to KEDA 2.7.1
Signed-off-by: Vidhart Bhatia <vidhartbhatia@hotmail.com>
Co-authored-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-07-29 14:50:29 +05:30
David Harris bab12dc99b
clarify HPA API requirement
Signed-off-by: David Harris <david.harris@weave.works>
2022-07-20 17:25:08 +01:00
Owly 1abb1f16d4
Update README
Signed-off-by: Owly <59724243+ImOwly@users.noreply.github.com>
2022-07-12 18:14:33 -07:00
Stefan Prodan 7cf843d6f4
Merge pull request #1228 from fluxcd/release-v1.22.0
Release v1.22.0
2022-07-12 14:01:38 +03:00
Sanskar Jaiswal a8444a6328 Release v1.22.0
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-07-11 17:34:25 +05:30
Sanskar Jaiswal ca044d3577
Merge pull request #1223 from Mpluya/patch-1
include Contour retryOn in the sample canary
2022-07-11 15:26:04 +05:30
Sanskar Jaiswal 76bac5d971
Merge pull request #1216 from aryan9600/keda-scaled-objects
Add support for KEDA ScaledObjects as an auto scaler
2022-07-08 19:21:22 +05:30
Sanskar Jaiswal f68f291b3d update rbac for helm chart
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-07-01 10:17:37 +05:30
Sanskar Jaiswal b108672fad use a better query to test primary scaledobject reconciliation
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-07-01 09:43:46 +05:30
Sanskar Jaiswal 377a8f48e2 add tutorial for scaledobjects
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-30 17:15:15 +05:30
Sanskar Jaiswal a098d04d64 update primary scaler query handling to consider multiple triggers
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-30 17:06:45 +05:30
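As a rough sketch of what the ScaledObjectReconciler enables (the ScaledObject name is illustrative and the rest of the spec is omitted), a Canary can now reference a KEDA ScaledObject instead of an HPA:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    name: podinfo-so   # Flagger creates and reconciles a primary copy of this ScaledObject
  # service and analysis sections omitted for brevity
```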
Sanskar Jaiswal 5e4b70bd51
Merge pull request #1222 from yokoyang/doc-update-for-flagger-install-on-asm
update guide for flagger on aliyun ASM
2022-06-23 12:34:44 +05:30
Sanskar Jaiswal 9ce931abb4
Merge pull request #1224 from Freydal/fix-optional-metric-template-namespace
Reintroducing empty check for metric template references.
2022-06-23 12:31:51 +05:30
Nick Freyaldenhoven 072d9b9850 Removing stray blank line.
Signed-off-by: Nick Freyaldenhoven <freyaldenhovennc@gmail.com>
2022-06-22 08:41:45 -05:00
Mae Large 1bb4afaeac updated retryOn supported values link to point to contour's api doc
Signed-off-by: Mae Large <mlarge@vmware.com>
2022-06-22 06:10:42 -05:00
Mae Anne Large 4dd6102a0f include Contour retryOn in the sample canary
Without this change the HTTPProxy podinfo.test was not getting created, due to the following warning:
```
test               4m11s       Warning   Synced                         canary/podinfo                               HTTPProxy podinfo.test create error: HTTPProxy.projectcontour.io "podinfo" is invalid: spec.routes.retryPolicy.retryOn: Unsupported value: "": supported values: "5xx", "gateway-error", "reset", "connect-failure", "retriable-4xx", "refused-stream", "retriable-status-codes", "retriable-headers", "cancelled", "deadline-exceeded", "internal", "resource-exhausted", "unavailable"
```

Signed-off-by: Mae Anne Large <Mpluya@users.noreply.github.com>
Signed-off-by: Mae Large <mlarge@vmware.com>
2022-06-22 06:10:42 -05:00
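For illustration, a minimal sketch of a Canary service section that satisfies the Contour validation quoted above (the attempt count and timeout are assumptions, not the exact values from the PR):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  service:
    port: 80
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: "5xx"   # must be one of the values Contour accepts for spec.routes.retryPolicy.retryOn
```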
奇方 4f64377480 update install guide on alibaba service mesh
Signed-off-by: 奇方 <qifang.ly@alibaba-inc.com>
2022-06-22 17:56:27 +08:00
Nick Freyaldenhoven 31856a2f46 Reintroducing the old empty check for metric template references. Reverting removal in commit 7df1beef85 to support the optional namespace. Adding test for future validation.
Signed-off-by: Nick Freyaldenhoven <freyaldenhovennc@gmail.com>
2022-06-21 10:28:54 -05:00
Sanskar Jaiswal 358391bfde
Merge pull request #1204 from shipt/contour-service-metric-fix
fix contour prom query for when service name is overwritten
2022-06-21 14:54:15 +05:30
Sanskar Jaiswal 7b2c005d9b
Merge pull request #1205 from shipt/bugfix-contour-annotation-override
fix contour httproxy annotations overwrite
2022-06-21 13:51:40 +05:30
Stefan Prodan c31ef8a788
Merge pull request #1221 from sympatheticmoose/patch-1
typo: controller
2022-06-17 17:55:04 +01:00
brandoncate e1bd004683 fix contour prom query when service name is specified
Signed-off-by: brandoncate <brandon.cate@shipt.com>
2022-06-17 10:07:00 -05:00
brandoncate 0cecab530f fix contour httproxy annotations overwrite
Signed-off-by: brandoncate <brandon.cate@shipt.com>
2022-06-17 10:02:02 -05:00
David Harris 844090f842
typo: controller
Signed-off-by: David Harris <david.harris@weave.works>
2022-06-17 10:23:50 +01:00
Stefan Prodan aa48ad45b7
Merge pull request #1219 from vbelouso/canaries-finalizers
fix: add finalizers to canaries
2022-06-14 16:20:52 +03:00
Daniel Holbach 1967e4857b
Merge pull request #1220 from dholbach/fix-typo
typo: boostrap -> bootstrap
2022-06-14 14:22:43 +02:00
Vladimir Belousov 21923d6f87 fix: add finalizers to canaries
Signed-off-by: Vladimir Belousov <vbelouso@redhat.com>
2022-06-14 15:18:38 +03:00
Daniel Holbach a5912ccd89 typo: boostrap -> bootstrap
Signed-off-by: Daniel Holbach <daniel@weave.works>
2022-06-14 13:57:33 +02:00
Sanskar Jaiswal e4252d8cbd
Merge pull request #1210 from aufarg/add-namespace-to-table
charts: Add namespace parameter to parameters table
2022-06-10 18:44:38 +05:30
Sanskar Jaiswal b01e4cf9ec add e2e tests for KEDA ScaledObjects
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-10 15:32:04 +05:30
Aufar Gilbran 703cfd50b2 charts: Add namespace parameter to parameters table
Signed-off-by: Aufar Gilbran <aufargilbran@gmail.com>
2022-06-10 15:15:25 +05:30
Sanskar Jaiswal 6a1b765a77 add unit tests for ScaledObjectReconciler
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-09 21:38:10 +05:30
Sanskar Jaiswal b2dc762937 add support for KEDA ScaledObjects via ScaledObjectReconciler
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-09 21:36:57 +05:30
Stefan Prodan 498f065dea
Merge pull request #1215 from aryan9600/scaler-reconciler
Fix primary HPA label reconciliation
2022-06-09 19:04:12 +03:00
Sanskar Jaiswal 9d8941176b fix primary hpa label reconciliation
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-09 20:30:53 +05:30
Stefan Prodan 4d2a03c0b2
Merge pull request #1211 from aryan9600/scaler-reconciler
Introduce `ScalerReconciler` and refactor HPA reconciliation
2022-06-08 10:21:02 +03:00
Sanskar Jaiswal e0e2d5c0e6 refactor hpa reconcile logic to be generic for both versions
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>

for objectmeta as well
2022-06-08 12:25:35 +05:30
Sanskar Jaiswal 9b97bff7b1 add e2e tests for hpa reconciler
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-07 13:43:11 +05:30
Sanskar Jaiswal f23be1d0ec add unit tests for hpa reconciler
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-07 13:43:11 +05:30
Sanskar Jaiswal fa595e160c add ScalerReconciler to canary and refactor hpa out of deployment controller
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-07 13:43:11 +05:30
Stefan Prodan 4ea5a48f43
Merge pull request #1212 from aryan9600/update-e2e
e2e: Update providers and Kubernetes to v1.23
2022-06-07 10:58:55 +03:00
Sanskar Jaiswal 6dd8a755c8 bump provider versions in e2e tests
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-07 12:09:19 +05:30
Sanskar Jaiswal 063d38dbd2 upgrade k8s in CI to 1.23
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-06-06 12:06:38 +05:30
Stefan Prodan 165c953239
Merge pull request #1208 from fluxcd/kubernetes-v1.24.1
Update Kubernetes packages to v1.24.1
2022-05-31 14:01:16 +03:00
Stefan Prodan a0fae153cf
Use leases for leader election
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-05-31 13:01:01 +03:00
Stefan Prodan bfcf288561
Update Kubernetes packages to v1.24.1
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-05-31 13:00:16 +03:00
Sanskar Jaiswal 560f884cc0
Merge pull request #1185 from philnichol/adding-appprotocol
feat: Add an optional `appProtocol` field to `spec.service`
2022-05-19 19:26:49 +05:30
Phil Nichol d79898848e
feat: Added the optional appProtocol field to Canary.Service
Signed-off-by: Phil Nichol <35630607+philnichol@users.noreply.github.com>
2022-05-15 19:07:18 +01:00
Stefan Prodan c03d138cd0
Merge pull request #1191 from aryan9600/maintainer-request
Add Sanskar Jaiswal (@aryan9600) as a maintainer
2022-05-10 16:19:50 +03:00
Sanskar Jaiswal 22d192e7e3 add Sanskar Jaiswal (@aryan9600) to MAINTAINERS
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-05-09 15:21:12 +05:30
Stefan Prodan a4babd6fc4
Merge pull request #1189 from fluxcd/release-1.21.0
Release v1.21.0
2022-05-06 19:15:41 +03:00
Stefan Prodan edd5515bd7
Release v1.21.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-05-06 18:25:07 +03:00
Stefan Prodan 00dde2358a
Merge pull request #1188 from fluxcd/helm-kubeconfig
Rename kubeconfig section in helm values
2022-05-06 18:04:02 +03:00
Stefan Prodan 8e84262a32
Update the Helm chart kubeVersion to 1.19
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-05-06 17:02:12 +03:00
Stefan Prodan 541696f3f7
Rename kubeconfig section in helm values
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-05-06 17:01:07 +03:00
Stefan Prodan 8051d03f08
Merge pull request #1187 from fluxcd/update-digram
Update Flagger overview diagram
2022-05-06 16:48:27 +03:00
Stefan Prodan a78d273aeb
Update Flagger overview diagram
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-05-06 16:25:08 +03:00
Stefan Prodan 07bd3563cd
Merge pull request #1183 from aryan9600/multi-cluster
Avoid setting owner refs if the service mesh/ingress is on a different cluster
2022-05-06 08:50:06 +03:00
Sanskar Jaiswal 8c690d1b21 avoid setting owner refs if the service mesh cluster is different
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-05-06 01:06:03 +05:30
Stefan Prodan a8b4e9cc6d
Merge pull request #1181 from aryan9600/no-cross-ns-refs
Add flag to disable cross namespace refs to Custom Resources
2022-05-03 11:53:38 +03:00
Sanskar Jaiswal 30ed9fb75c verify canary spec before syncing
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-04-29 13:53:14 +05:30
Sanskar Jaiswal 0382d9c1ca Add no cross-namespace refs to FAQ and helm chart
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-04-29 13:50:01 +05:30
Stefan Prodan 95381e1892
Merge pull request #1150 from cdlliuy/be_honor_to_skip_analysis_new
ignore FailedCheck result when skipAnalysis is defined and honor skipAnalysis when an internal error happens
2022-04-28 11:19:05 +03:00
Sanskar Jaiswal 7df1beef85 Add flag to disable cross namespace refs to AlertProviders and MetricTemplates
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-04-27 17:09:07 +05:30
Stefan Prodan a1e519b352
Merge pull request #1172 from fluxcd/release-1.20.0
Release v1.20.0
2022-04-15 13:24:01 +03:00
Stefan Prodan e7f16a8c06
Release v1.20.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-04-15 12:49:17 +03:00
Stefan Prodan a3adae4af0
Merge pull request #1171 from aryan9600/fix-primary-restart
Fix canary rollback behaviour
2022-04-15 12:18:51 +03:00
Sanskar Jaiswal c7c0c76bd3 fix canary rollback behaviour
Prevents the canary from getting triggered, when a canary deploy is
updated to match the primary deploy after an analysis fails.

Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-04-15 13:49:05 +05:30
Stefan Prodan 67cc965d31
Merge pull request #1164 from shipt/contour-retryon-support
Contour: Update the httproxy API and enable RetryOn
2022-04-12 10:32:53 +03:00
brandoncate d09969e3b4 update httpproxy
Signed-off-by: brandoncate <brandon.cate@shipt.com>
2022-04-08 09:58:23 -05:00
brandoncate 41904b42f8 add assertion to retryOn field
Signed-off-by: brandoncate <brandon.cate@shipt.com>
2022-04-08 09:58:23 -05:00
brandoncate f638410782 remove custom.sh test file
Signed-off-by: brandoncate <brandon.cate@shipt.com>
2022-04-08 09:58:23 -05:00
brandoncate 48cc7995d7 adding retryon support
Signed-off-by: brandoncate <brandon.cate@shipt.com>
2022-04-08 09:58:23 -05:00
Stefan Prodan 793b93c665
Merge pull request #1148 from cdlliuy/add_canary_analysis_result_as_metric
Add canary analysis result as Prometheus metrics
2022-04-06 07:57:23 +03:00
Ying Liu e0186cbe2a update docs for metrics part
Signed-off-by: Ying Liu <ying.liu.lying@gmail.com>
2022-04-06 09:57:46 +08:00
Stefan Prodan 2cc2b5dce8
Merge pull request #1162 from denvernyaw/wrong_unit_of_time_for_duration_panels_in_grafana_dashboard
Fix unit of time in the Istio Grafana dashboard
2022-04-05 16:39:21 +03:00
Mikita Reshatko ccdbbdb0ec adapt Prometheus queries results for request duration metrics to Grafana dashboard
Signed-off-by: Mikita Reshatko <mikita.reshatko@gmail.com>
2022-04-05 14:14:09 +03:00
ying 13483321ac Update pkg/metrics/recorder.go
Co-authored-by: Stefan Prodan <stefan.prodan@gmail.com>
Signed-off-by: Ying Liu <ying.liu.lying@gmail.com>
2022-03-25 23:10:03 +08:00
Ying Liu 5547533197 add canary analysis result as prometheus metrics
Signed-off-by: Ying Liu <ying.liu.lying@gmail.com>
2022-03-25 23:10:03 +08:00
Stefan Prodan c68998d75e
Merge pull request #1156 from aryan9600/appmesh-log
AppMesh: Add annotation to enable Envoy access logs
2022-03-22 16:13:37 +02:00
Sanskar Jaiswal 20f2d3f2f9 add annotation to enable appmesh logs
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
Co-authored-by: wucg <wucg@trip.com>
2022-03-22 15:45:02 +05:30
Stefan Prodan cc7b35b44a
Merge pull request #1146 from canidam/fix-podinfo-service-when-canary-enabled
Fix the service toggle condition in the podinfo helm chart
2022-03-18 11:35:06 +02:00
Stefan Prodan 67a2cd6a48
Merge pull request #1139 from cdlliuy/ying_short_metric_analysis_waiting_promption
shorten the metric analysis cycle after the confirm promotion gate is open
2022-03-18 11:34:21 +02:00
Stefan Prodan 08deddc4fe
Merge pull request #1145 from anovateam/route_port_on_delegation
istio: Add destination port when port discovery and delegation are true
2022-03-18 10:10:54 +02:00
Ying Liu 77b2eb36a5 ignore FailedCheck result when skipAnalysis is defined and honor skipAnalysis when an internal error happens
Signed-off-by: Ying Liu <ying.liu.lying@gmail.com>
2022-03-17 10:49:30 +08:00
Ying Liu ab84ac207a shorten the metric analysis cycle after the confirm promotion gate is open and keep the analysis check working during the waiting promotion status
Signed-off-by: Ying Liu <ying.liu.lying@gmail.com>
2022-03-17 10:32:01 +08:00
Chen Anidam 8957d91e01 Fix podinfo service toggle condition
Signed-off-by: Chen Anidam <canidam@gmail.com>
2022-03-17 00:34:51 +02:00
Marco Amador c7cbb729b7
add destination port when port discovery is active and delegation is true
Signed-off-by: Marco Amador <amador.marco@gmail.com>
2022-03-16 18:57:02 +00:00
Stefan Prodan eca6fa7958
Merge pull request #1144 from aryan9600/aryan9600/gateway-api
Remove unnecessary log statement
2022-03-15 15:03:16 +02:00
Sanskar Jaiswal ee535afcb9 remove unnecessary log statement
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-15 18:07:23 +05:30
Stefan Prodan 18b64910d7
Merge pull request #1143 from aryan9600/aryan9600/gateway-api
Fix Gateway API docs
2022-03-15 14:34:57 +02:00
Sanskar Jaiswal 3ca75140d0 fix gateway api docs
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-15 17:38:55 +05:30
Stefan Prodan 960f924448
Merge pull request #1142 from aryan9600/aryan9600/gateway-api
Change debug level to info for gateway API
2022-03-15 14:07:26 +02:00
Sanskar Jaiswal eed128a8b4 change debug level to info
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-15 16:21:39 +05:30
Stefan Prodan 210e21176b
Merge pull request #1138 from fluxcd/release-1.19.0
Release v1.19.0
2022-03-14 14:47:33 +02:00
Stefan Prodan 0a0c3835d6
Release v1.19.0
This release comes with support for Kubernetes Gateway API v1alpha2.

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-03-14 12:09:01 +02:00
Stefan Prodan 531893b279
Merge pull request #1110 from Moscagus/canary-replicas
Use the primary replicas when scaling up the canary (no hpa)
2022-03-14 12:02:27 +02:00
Stefan Prodan e6bb47f920
Merge pull request #1108 from aryan9600/aryan9600/gateway-api
Add Gateway API as a provider
2022-03-14 11:14:01 +02:00
Stefan Prodan 307813a628
Merge pull request #1117 from johnzheng1975/patch-01
Update istio-progressive-delivery.md
2022-03-11 11:53:27 +02:00
Sanskar Jaiswal 38fc6b567f merge a/b and progressive tutorial
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-11 15:19:46 +05:30
Stefan Prodan 17015b23bf
Merge pull request #1131 from aryan9600/bump-podinfo
Bump podinfo to 6.0.x and loadtester to 0.22.0
2022-03-11 10:07:46 +02:00
Sanskar Jaiswal c9e53dd069 remove gateway types, fix rbac and add istio faq
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-10 18:16:44 +05:30
Sanskar Jaiswal e26a10b481 update README
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal 281d869f54 add a/b test docs and update progressive docs
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal 91126d102d fix a/b testing logic and update e2e tests
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal ba4646cddb fix docs and e2e install.sh
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal 438877674a add docs
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal da451a0cf4 add metric templates to tests
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal 5e1d00d4d2 add router_test and make test install script platform agnostic
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal 00d54d268c add gateway tests and change provider name
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
Sanskar Jaiswal 174e9fdc93 Add support for Gateway API as a provider.
Adds Gateway API as a provider for progressive traffic shifting, A/B
testing and Blue-Green testing. Adds a new field in the Canary
`spec.service.gatewayRefs` which specifies the Gateway that Flagger
should use.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-10 16:54:36 +05:30
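A minimal sketch of the new field described in the commit above, assuming a Gateway named gateway in the istio-system namespace and the gatewayapi provider string (illustrative values, not taken from the commit):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  provider: gatewayapi
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 80
    gatewayRefs:
      - name: gateway           # Gateway that Flagger attaches the generated HTTPRoute to
        namespace: istio-system
```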
Moscagus f7fd6cce8c If HPA isn't set and replicas are not specified, it uses the primary replicas when scaling up the canary
Signed-off-by: Moscagus <gustavo.varisco@gmail.com>
2022-03-09 22:16:47 -03:00
Sanskar Jaiswal 5dc336d609 bump podinfo to 6.0.x and loadtester to 0.22.0
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-09 20:19:39 +05:30
Stefan Prodan ae6a683f23
Merge pull request #1130 from aryan9600/remove-helm2
Remove support for helmv2 in loadtester
2022-03-09 10:18:58 +02:00
Sanskar Jaiswal 5acf189fbe remove support for helmv2 in loadtester
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-08 22:14:32 +05:30
Stefan Prodan 090329d0c9
Merge pull request #1119 from connesc/authorizer
Restrict source namespaces in flagger-loadtester
2022-03-08 14:41:11 +02:00
Cédric Connes 96fd359b99
Add cmd.namespaceRegexp to loadtester Helm chart
Signed-off-by: Cédric Connes <cedric.connes@gmail.com>
2022-03-08 12:38:19 +01:00
Stefan Prodan 519f343fcc
Merge pull request #1125 from aryan9600/fix-finalizer-dupl
Fix potential canary finalizer duplication
2022-03-08 11:10:39 +02:00
Stefan Prodan 5d2a7ba9e7
Merge pull request #1128 from aryan9600/ld-multiarch
Add arm64 support for loadtester
2022-03-07 16:33:26 +02:00
Sanskar Jaiswal 1664ca436e add arm64 support for loadtester
Signed-off-by: Sanskar Jaiswal <sanskar.jaiswal@weave.works>
2022-03-07 17:34:24 +05:30
Sanskar Jaiswal 84ae65c763 fix potential canary finalizer duplication
Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-03-04 21:11:31 +05:30
Cédric Connes 6085753d84
Only allow namespaces matching -namespace-regexp
This makes it possible to forbid access from canaries in non-whitelisted
namespaces.
In a multi-tenant context, this can be combined with network policies to
maintain isolation between namespaces.

Signed-off-by: Cédric Connes <cedric.connes@gmail.com>
2022-02-24 18:24:03 +01:00
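A hedged example of wiring this up through the loadtester Helm chart, using the cmd.namespaceRegexp value added in the commit above (the regular expression itself is illustrative):
```yaml
# values.yaml for the flagger-loadtester chart
cmd:
  namespaceRegexp: "^(team-a|team-b)$"   # only canaries in matching namespaces may call the loadtester
```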
John Zheng da706be4aa Update istio-progressive-delivery.md
Signed-off-by: Author Name <johnzhengaz@gmail.com>
It is easy to hit: Halt advancement no values found for istio metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found
if there is an inconsistency between the Prometheus version and the Istio version.
Signed-off-by: John Zheng <john.zheng@hp.com>
2022-02-18 20:04:57 +08:00
Stefan Prodan 65e3bcb1d8
Merge pull request #1116 from pjbgf/patch-180222
Update Kubernetes dependencies to v1.23.3
2022-02-18 12:24:44 +02:00
Paulo Gomes 582f6eec77
Update Kubernetes dependencies to v0.23.3
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
2022-02-18 09:56:43 +00:00
Paulo Gomes 4200c0159d
Update github.com/prometheus/client_golang to v1.11.1 (CVE fix)
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
2022-02-18 08:44:48 +00:00
Stefan Prodan cf8fe94fca
Merge pull request #1107 from fluxcd/release-1.18.0
Release v1.18.0
2022-02-14 15:27:57 +02:00
Stefan Prodan 30d553c6f3
Release v1.18.0
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-02-14 12:44:23 +02:00
Stefan Prodan f8f6a994dd
Merge pull request #1106 from SomtochiAma/set-replica
Set primary deployment replicas when autoscaler isn't used
2022-02-14 12:33:10 +02:00
Somtochi Onyekwere 085639bbde Set primary deployment replicas when autoscaler isn't used
Signed-off-by: Somtochi Onyekwere <somtochionyekwere@gmail.com>
2022-02-14 10:54:24 +01:00
Stefan Prodan 3bfa7c974d
Merge pull request #1102 from SomtochiAma/topology-spread
Add field `spec.analysis.canaryReadyThreshold` for configuring canary threshold
2022-02-10 11:11:37 +02:00
Stefan Prodan d29e475277
Merge pull request #1103 from chlunde/patch-2
docs: Fix typo ExternalDNS
2022-02-10 11:10:27 +02:00
Stefan Prodan b7ba3ab063
Merge pull request #1105 from SomtochiAma/error-msg
Send warning and error alerts correctly
2022-02-09 09:31:12 +02:00
Somtochi Onyekwere 9796903c78 Send warning and error alerts correctly
Signed-off-by: Somtochi Onyekwere <somtochionyekwere@gmail.com>
2022-02-08 21:48:46 +01:00
Carl Henrik Lunde 2f25fab560
docs: Fix typo ExternalDNS
Signed-off-by: Carl Henrik Lunde <chlunde@ifi.uio.no>
2022-02-08 18:26:17 +01:00
Somtochi Onyekwere 215c859619 add field for configuring canary threshold
Signed-off-by: Somtochi Onyekwere <somtochionyekwere@gmail.com>
2022-02-08 13:52:28 +01:00
Stefan Prodan 7071d42152
Merge pull request #1100 from SomtochiAma/topology-spread
Update matchLabels for TopologySpreadConstraints in Deployments
2022-02-07 15:13:52 +02:00
Somtochi Onyekwere 08b1e52278 Add extra check for name
Signed-off-by: Somtochi Onyekwere <somtochionyekwere@gmail.com>
2022-02-07 13:00:43 +01:00
Stefan Prodan 801f801e02
Merge pull request #1095 from ashokhein/main
Fix for when Prometheus returns NaN
2022-02-07 13:53:32 +02:00
Stefan Prodan af5634962f
Merge pull request #1092 from northwesternmutual/main
Update metadata during subsequent promote
2022-02-07 13:45:26 +02:00
Somtochi Onyekwere fe7615afb4 Update matchLabels in LabelSelectors
Signed-off-by: Somtochi Onyekwere <somtochionyekwere@gmail.com>
2022-02-07 11:30:21 +01:00
ASHOK KUMAR KS fc6bedda23
Merge branch 'fluxcd:main' into main
2022-01-25 10:41:13 +00:00
Stefan Prodan a7f997c092
Merge pull request #1091 from fluxcd/release-0.17.0
Release v1.17.0
2022-01-25 10:48:03 +02:00
Karl Heins 121eb767cb Update metadata during subsequent promote
Signed-off-by: Karl Heins <karlheins@northwesternmutual.com>

Support updating primary Deployment/DaemonSet/HPA/Service labels and annotations after first-time rollout
2022-01-24 14:41:24 -06:00
ashokhein cd3a1d8478 fixed bug when Prometheus returns NaN
Signed-off-by: ashokhein <ashokhein@gmail.com>
2022-01-24 12:58:57 +00:00
Stefan Prodan 6f6af25467
chart: Update Prometheus to v2.32.1
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-01-21 15:13:12 +02:00
Stefan Prodan a0f1638f6c
Remove Flux deprecated marker
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-01-21 15:11:42 +02:00
Stefan Prodan fc13276f0e
Release v1.17.0
Adds support for Kuma service mesh

Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-01-21 15:11:41 +02:00
Stefan Prodan 8a0b92db19
Merge pull request #1094 from fluxcd/sbom
Publish a Software Bill of Materials (SBOM)
2022-01-21 15:11:11 +02:00
Stefan Prodan 2f0d34adb2
Publish a Software Bill of Materials (SBOM)
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
2022-01-21 14:20:48 +02:00
Stefan Prodan 617f416291
Merge pull request #1093 from aryan9600/aryan9600/fix-kuma-e2e
Fix failing kuma e2e tests
2022-01-20 10:47:32 +02:00
Sanskar Jaiswal 7a438ad323 fix failing kuma e2e tests
Kuma e2e tests were failing in CI (https://github.com/fluxcd/flagger/runs/4826617915?check_suite_focus=true)
due to the Prometheus server installed in the kuma-metrics namespace not being able to
contact the Kubernetes API server. Fixed by switching to the Flagger
Prometheus instance and a custom kustomize build for the Kuma tests.

Signed-off-by: Sanskar Jaiswal <jaiswalsanskar078@gmail.com>
2022-01-20 08:20:16 +00:00
Stefan Prodan 5776f0b64b
Merge pull request #1041 from baldey-nz/baldey-nz/notification-change
Add cluster name to flagger cmd args for alerting
2022-01-11 11:14:47 +02:00
Stefan Prodan 96d190a789
Merge pull request #1085 from johnharris85/add-kuma-support
Add kuma support for progressive traffic shifting canaries
2022-01-11 11:01:14 +02:00
John Harris d2038699c0 Fix newlines
Signed-off-by: John Harris <john.harris@konghq.com>
2022-01-05 07:46:56 -08:00
John Harris cb3b5cba90 Remove Prometheus from default install
Signed-off-by: John Harris <john.harris@konghq.com>
2022-01-05 07:18:41 -08:00
baldey-nz 8c881ab758 as suggested changing cluster-name to flag
Signed-off-by: baldey-nz <baldey@gmail.com>
2021-12-21 14:11:49 +13:00
John Harris caefaf73aa Add additional docs references
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-20 09:21:42 -08:00
John Harris e8d7001f5e Add RO FS back to deployment
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:51:57 -08:00
John Harris ae0f20a445 Add Kuma docs
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:45:23 -08:00
John Harris 4ddc12185f Add prometheus support
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:18:56 -08:00
John Harris e81627a96d Add tests
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:09:39 -08:00
John Harris 47be2a25f2 Add Kuma routing and metrics
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:07:59 -08:00
John Harris 6832a4ffde Add/update Kustomize configurations
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:07:05 -08:00
John Harris bd58a47862 Add/update API types
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-18 14:05:54 -08:00
John Harris 204228bc8f Add API types.
Signed-off-by: John Harris <john.harris@konghq.com>
2021-12-15 15:13:47 -08:00
baldey-nz c638edd346 If applied, this commit will add an optional canary spec field named summary for notification purposes
Signed-off-by: baldey-nz <baldey@gmail.com>
2021-10-28 07:14:24 +13:00
502 changed files with 32892 additions and 7411 deletions

3
.clomonitor.yml Normal file

@ -0,0 +1,3 @@
exemptions:
- check: analytics
reason: "We don't track people"


@ -1,50 +0,0 @@
# Flagger signed releases
Flagger releases published to GitHub Container Registry as multi-arch container images
are signed using [cosign](https://github.com/sigstore/cosign).
## Verify Flagger images
Install the [cosign](https://github.com/sigstore/cosign) CLI:
```sh
brew install sigstore/tap/cosign
```
Verify a Flagger release with cosign CLI:
```sh
cosign verify -key https://raw.githubusercontent.com/fluxcd/flagger/main/cosign/cosign.pub \
ghcr.io/fluxcd/flagger:1.13.0
```
Verify Flagger images before they get pulled on your Kubernetes clusters with [Kyverno](https://github.com/kyverno/kyverno/):
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: verify-flagger-image
annotations:
policies.kyverno.io/title: Verify Flagger Image
policies.kyverno.io/category: Cosign
policies.kyverno.io/severity: medium
policies.kyverno.io/subject: Pod
policies.kyverno.io/minversion: 1.4.2
spec:
validationFailureAction: enforce
background: false
rules:
- name: verify-image
match:
resources:
kinds:
- Pod
verifyImages:
- image: "ghcr.io/fluxcd/flagger:*"
key: |-
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEST+BqQ1XZhhVYx0YWQjdUJYIG5Lt
iz2+UxRIqmKBqNmce2T+l45qyqOs99qfD7gLNGmkVZ4vtJ9bM7FxChFczg==
-----END PUBLIC KEY-----
```


@ -1,11 +0,0 @@
-----BEGIN ENCRYPTED COSIGN PRIVATE KEY-----
eyJrZGYiOnsibmFtZSI6InNjcnlwdCIsInBhcmFtcyI6eyJOIjozMjc2OCwiciI6
OCwicCI6MX0sInNhbHQiOiIvK1MwbTNrU3pGMFFXdVVYQkFoY2gvTDc3NVJBSy9O
cnkzUC9iMkxBZGF3PSJ9LCJjaXBoZXIiOnsibmFtZSI6Im5hY2wvc2VjcmV0Ym94
Iiwibm9uY2UiOiJBNEFYL2IyU1BsMDBuY3JUNk45QkNOb0VLZTZLZEluRCJ9LCJj
aXBoZXJ0ZXh0IjoiZ054UlJweXpraWtRMUVaRldsSnEvQXVUWTl0Vis2enBlWkIy
dUFHREMzOVhUQlAwaWY5YStaZTE1V0NTT2FQZ01XQmtSZWhrQVVjQ3dZOGF2WTZa
eFhZWWE3T1B4eFdidHJuSUVZM2hwZUk1M1dVQVZ6SXEzQjl0N0ZmV1JlVGsxdFlo
b3hwQmxUSHY4U0c2azdPYk1aQnJleitzSGRWclF6YUdMdG12V1FOMTNZazRNb25i
ZUpRSUJpUXFQTFg5NzFhSUlxU0dxYVhCanc9PSJ9
-----END ENCRYPTED COSIGN PRIVATE KEY-----


@ -1,4 +0,0 @@
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEST+BqQ1XZhhVYx0YWQjdUJYIG5Lt
iz2+UxRIqmKBqNmce2T+l45qyqOs99qfD7gLNGmkVZ4vtJ9bM7FxChFczg==
-----END PUBLIC KEY-----


@ -14,3 +14,7 @@ redirects:
usage/crossover-progressive-delivery: tutorials/crossover-progressive-delivery.md
usage/traefik-progressive-delivery: tutorials/traefik-progressive-delivery.md
usage/osm-progressive-delivery: tutorials/osm-progressive-delivery.md
usage/kuma-progressive-delivery: tutorials/kuma-progressive-delivery.md
usage/gatewayapi-progressive-delivery: tutorials/gatewayapi-progressive-delivery.md
usage/apisix-progressive-delivery: tutorials/apisix-progressive-delivery.md
usage/knative-progressive-delivery: tutorials/knative-progressive-delivery.md

2
.github/CODEOWNERS vendored

@ -1 +1 @@
* @stefanprodan
* @stefanprodan @aryan9600

17
.github/dependabot.yml vendored Normal file

@ -0,0 +1,17 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
labels: ["area/ci", "dependencies"]
groups:
# Group all updates together, so that they are all applied in a single PR.
# Grouped updates are currently in beta and is subject to change.
# xref: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#groups
ci:
patterns:
- "*"
schedule:
# By default, this will be on a monday.
interval: "weekly"


@ -9,29 +9,32 @@ on:
branches:
- main
permissions:
contents: read
jobs:
container:
runs-on: ubuntu-latest
build-flagger:
runs-on:
group: "Default Larger Runners"
labels: ubuntu-latest-16-cores
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Restore Go cache
uses: actions/cache@v1
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.17.x
go-version: 1.24.x
cache-dependency-path: |
**/go.sum
**/go.mod
- name: Download modules
run: |
go mod download
go install golang.org/x/tools/cmd/goimports
- name: Run linters
run: make test-fmt test-codegen
run: make fmt test-codegen
- name: Verify CRDs
run: make verify-crd
- name: Run tests
run: go test -race -coverprofile=coverage.txt -covermode=atomic $(go list ./pkg/...)
- name: Check if working tree is dirty
@ -42,7 +45,7 @@ jobs:
exit 1
fi
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v5
with:
file: ./coverage.txt
- name: Build container image


@ -9,29 +9,50 @@ on:
branches:
- main
permissions:
contents: read
jobs:
kind:
runs-on: ubuntu-latest
e2e-test:
runs-on:
group: "Default Larger Runners"
labels: ubuntu-latest-16-cores
strategy:
fail-fast: false
matrix:
provider:
# service mesh
- istio
- linkerd
- kuma
# ingress controllers
- contour
- nginx
- traefik
- gloo
- skipper
- osm
- kubernetes
- gatewayapi
- keda
- apisix
- knative
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Setup Kubernetes
uses: engineerd/setup-kind@v0.5.0
uses: helm/kind-action@v1.12.0
if: matrix.provider != 'skipper'
with:
version: "v0.11.0"
image: kindest/node:v1.21.1@sha256:fae9a58f17f18f06aeac9772ca8b5ac680ebbed985e266f711d936e91d113bad
version: v0.23.0
cluster_name: kind
node_image: kindest/node:v1.30.0@sha256:047357ac0cfea04663786a612ba1eaba9702bef25227a794b52890dd8bcd692e
- name: Setup Kubernetes for skipper
uses: helm/kind-action@v1.12.0
if: matrix.provider == 'skipper'
with:
version: v0.23.0
cluster_name: kind
node_image: kindest/node:v1.24.12@sha256:0bdca26bd7fe65c823640b14253ea7bac4baad9336b332c94850f84d8102f873
- name: Build container image
run: |
docker build -t test/flagger:latest .


@ -3,13 +3,18 @@ name: helm
on:
workflow_dispatch:
permissions:
contents: read
jobs:
build-push:
release-charts:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Publish Helm charts
uses: stefanprodan/helm-gh-pages@v1.3.0
uses: stefanprodan/helm-gh-pages@v1.7.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
charts_url: https://flagger.app


@ -2,42 +2,62 @@ name: push-ld
on:
workflow_dispatch:
env:
IMAGE: "ghcr.io/fluxcd/flagger-loadtester"
permissions:
contents: read
jobs:
build-push:
runs-on: ubuntu-latest
release-load-tester:
runs-on:
group: "Default Larger Runners"
permissions:
id-token: write
packages: write
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- uses: sigstore/cosign-installer@v3.8.1
- name: Prepare
id: prep
run: |
VERSION=$(grep 'VERSION' cmd/loadtester/main.go | head -1 | awk '{ print $4 }' | tr -d '"')
echo ::set-output name=BUILD_DATE::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo ::set-output name=VERSION::${VERSION}
echo "BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
echo "VERSION=${VERSION}" >> $GITHUB_OUTPUT
- name: Setup QEMU
uses: docker/setup-qemu-action@v1
with:
platforms: all
uses: docker/setup-qemu-action@v3
- name: Setup Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
buildkitd-flags: "--debug"
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
uses: docker/login-action@v3
with:
registry: ghcr.io
username: fluxcdbot
password: ${{ secrets.GHCR_TOKEN }}
- name: Generate image meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.IMAGE }}
tags: |
type=raw,value=${{ steps.prep.outputs.VERSION }}
- name: Publish image
uses: docker/build-push-action@v2
id: build-push
uses: docker/build-push-action@v6
with:
push: true
builder: ${{ steps.buildx.outputs.name }}
context: .
file: ./Dockerfile.loadtester
platforms: linux/amd64
tags: |
ghcr.io/fluxcd/flagger-loadtester:${{ steps.prep.outputs.VERSION }}
- name: Check images
platforms: linux/amd64,linux/arm64
build-args: |
REVISION=${{ github.sha }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Sign image
env:
COSIGN_EXPERIMENTAL: 1
run: |
docker buildx imagetools inspect ghcr.io/fluxcd/flagger-loadtester:${{ steps.prep.outputs.VERSION }}
cosign sign --yes ${{ env.IMAGE }}@${{ steps.build-push.outputs.digest }}


@ -3,39 +3,74 @@ on:
push:
tags:
- 'v*'
workflow_dispatch:
inputs:
tag:
description: 'image tag prefix'
default: 'rc'
required: true
permissions:
contents: read
env:
IMAGE: "ghcr.io/fluxcd/${{ github.event.repository.name }}"
jobs:
build-push:
runs-on: ubuntu-latest
release-flagger:
outputs:
hashes: ${{ steps.slsa.outputs.hashes }}
runs-on:
group: "Default Larger Runners"
permissions:
contents: write # needed to write releases
id-token: write # needed for keyless signing
packages: write # needed for ghcr access
steps:
- uses: actions/checkout@v2
- uses: sigstore/cosign-installer@main
- uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: 1.24.x
- uses: fluxcd/flux2/action@main
- uses: sigstore/cosign-installer@v3.8.1
- name: Prepare
id: prep
run: |
if [[ ${GITHUB_EVENT_NAME} = "workflow_dispatch" ]]; then
VERSION="${{ github.event.inputs.tag }}-${GITHUB_SHA::8}"
else
VERSION=$(grep 'VERSION' pkg/version/version.go | awk '{ print $4 }' | tr -d '"')
fi
CHANGELOG="https://github.com/fluxcd/flagger/blob/main/CHANGELOG.md#$(echo $VERSION | tr -d '.')"
echo ::set-output name=BUILD_DATE::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo ::set-output name=VERSION::${VERSION}
echo ::set-output name=CHANGELOG::${CHANGELOG}
echo "[CHANGELOG](${CHANGELOG})" > notes.md
echo "BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
echo "VERSION=${VERSION}" >> $GITHUB_OUTPUT
- name: Setup QEMU
uses: docker/setup-qemu-action@v1
with:
platforms: all
uses: docker/setup-qemu-action@v3
- name: Setup Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
buildkitd-flags: "--debug"
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
uses: docker/login-action@v3
with:
registry: ghcr.io
username: fluxcdbot
password: ${{ secrets.GHCR_TOKEN }}
- name: Publish image
uses: docker/build-push-action@v2
- name: Generate image meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.IMAGE }}
tags: |
type=raw,value=${{ steps.prep.outputs.VERSION }}
- name: Publish image
id: build-push
uses: docker/build-push-action@v6
with:
sbom: true
provenance: true
push: true
builder: ${{ steps.buildx.outputs.name }}
context: .
@ -43,42 +78,76 @@ jobs:
platforms: linux/amd64,linux/arm64,linux/arm/v7
build-args: |
REVISON=${{ github.sha }}
tags: |
ghcr.io/fluxcd/flagger:${{ steps.prep.outputs.VERSION }}
labels: |
org.opencontainers.image.title=${{ github.event.repository.name }}
org.opencontainers.image.description=${{ github.event.repository.description }}
org.opencontainers.image.url=${{ github.event.repository.html_url }}
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.version=${{ steps.prep.outputs.VERSION }}
org.opencontainers.image.created=${{ steps.prep.outputs.BUILD_DATE }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Sign image
env:
COSIGN_EXPERIMENTAL: 1
run: |
echo -n "${{secrets.COSIGN_PASSWORD}}" | \
cosign sign -key ./.cosign/cosign.key -a git_sha=$GITHUB_SHA \
ghcr.io/fluxcd/flagger:${{ steps.prep.outputs.VERSION }}
- name: Check images
cosign sign --yes ${{ env.IMAGE }}@${{ steps.build-push.outputs.digest }}
- name: Publish signed manifests to GHCR
if: startsWith(github.ref, 'refs/tags/v')
env:
COSIGN_EXPERIMENTAL: 1
run: |
docker buildx imagetools inspect ghcr.io/fluxcd/flagger:${{ steps.prep.outputs.VERSION }}
- name: Verifiy image signature
run: |
cosign verify -key ./.cosign/cosign.pub \
ghcr.io/fluxcd/flagger:${{ steps.prep.outputs.VERSION }}
OCI_URL=$(flux push artifact \
oci://ghcr.io/fluxcd/flagger-manifests:${{ steps.prep.outputs.VERSION }} \
--path="./kustomize" \
--source="$(git config --get remote.origin.url)" \
--revision="${{ steps.prep.outputs.VERSION }}/$(git rev-parse HEAD)" \
--output json | \
jq -r '. | .repository + "@" + .digest')
cosign sign --yes ${OCI_URL}
- name: Publish Helm charts
uses: stefanprodan/helm-gh-pages@v1.3.0
if: startsWith(github.ref, 'refs/tags/v')
uses: stefanprodan/helm-gh-pages@v1.7.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
charts_url: https://flagger.app
linting: off
- name: Create release
uses: actions/create-release@latest
- uses: fluxcd/pkg/actions/helm@main
with:
version: 3.12.3
- name: Publish signed Helm chart to GHCR
if: startsWith(github.ref, 'refs/tags/v')
env:
COSIGN_EXPERIMENTAL: 1
run: |
helm package charts/flagger
helm push flagger-${{ steps.prep.outputs.VERSION }}.tgz oci://ghcr.io/fluxcd/charts |& tee .digest
cosign sign --yes ghcr.io/fluxcd/charts/flagger@$(cat .digest | awk -F "[, ]+" '/Digest/{print $NF}')
rm flagger-${{ steps.prep.outputs.VERSION }}.tgz
rm .digest
- uses: anchore/sbom-action/download-syft@v0
- name: Create release and SBOM
id: run-goreleaser
uses: goreleaser/goreleaser-action@v6
if: startsWith(github.ref, 'refs/tags/v')
with:
version: latest
args: release --release-notes=notes.md --clean --skip=validate
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate SLSA metadata
id: slsa
if: startsWith(github.ref, 'refs/tags/v')
env:
ARTIFACTS: "${{ steps.run-goreleaser.outputs.artifacts }}"
run: |
set -euo pipefail
hashes=$(echo -E $ARTIFACTS | jq --raw-output '.[] | {name, "digest": (.extra.Digest // .extra.Checksum)} | select(.digest) | {digest} + {name} | join(" ") | sub("^sha256:";"")' | base64 -w0)
echo "hashes=$hashes" >> $GITHUB_OUTPUT
release-provenance:
needs: [release-flagger]
if: startsWith(github.ref, 'refs/tags/v')
permissions:
actions: read # for detecting the Github Actions environment.
id-token: write # for creating OIDC tokens for signing.
contents: write # for uploading attestations to GitHub releases.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
with:
tag_name: ${{ github.ref }}
release_name: ${{ github.ref }}
draft: false
prerelease: false
body: |
[CHANGELOG](${{ steps.prep.outputs.CHANGELOG }})
provenance-name: "provenance.intoto.jsonl"
base64-subjects: "${{ needs.release-flagger.outputs.hashes }}"
upload-assets: true


@ -8,30 +8,38 @@ on:
schedule:
- cron: '18 10 * * 3'
permissions:
contents: read
jobs:
fossa:
name: FOSSA
scan-fossa:
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Run FOSSA scan and upload build data
uses: fossa-contrib/fossa-action@v1
uses: fossa-contrib/fossa-action@v3
with:
# FOSSA Push-Only API Token
fossa-api-key: 5ee8bf422db1471e0bcf2bcb289185de
github-token: ${{ github.token }}
codeql:
name: CodeQL
scan-codeql:
runs-on: ubuntu-latest
permissions:
security-events: write
steps:
- name: Checkout repository
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: 1.24.x
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v1
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
uses: github/codeql-action/analyze@v3


@ -1,14 +1,31 @@
project_name: flagger
builds:
- main: ./cmd/flagger
binary: flagger
ldflags: -s -w -X github.com/fluxcd/flagger/pkg/version.REVISION={{.Commit}}
goos:
- linux
goarch:
- amd64
- skip: true
release:
prerelease: auto
source:
enabled: true
name_template: "{{ .ProjectName }}_{{ .Version }}_source_code"
sboms:
- id: source
artifacts: source
documents:
- "{{ .ProjectName }}_{{ .Version }}_sbom.spdx.json"
signs:
- cmd: cosign
env:
- CGO_ENABLED=0
archives:
- name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
files:
- none*
- COSIGN_EXPERIMENTAL=1
certificate: '${artifact}.pem'
args:
- sign-blob
- "--yes"
- '--output-certificate=${certificate}'
- '--output-signature=${signature}'
- '${artifact}'
artifacts: checksum
output: true


@ -2,6 +2,971 @@
All notable changes to this project are documented in this file.
## 1.41.0
**Release date:** 2025-04-02
This release comes with major features and minor bug fixes.
Flagger now supports Knative as a networking provider. This works a bit
differently compared to other service meshes/ingresses. Flagger does not
generate any Kubernetes objects. It instead modifies the Knative service itself
to configure weighted traffic routing. To learn more, please see the [tutorial](https://docs.flagger.app/tutorials/knative-progressive-delivery).
The session affinity canary release strategy has also been improved. Flagger can
now configure Gateway API HTTPRoutes to also set a cookie for the primary
deployment's response. For more info, see the [strategy docs](https://docs.flagger.app/usage/deployment-strategies#canary-release-with-session-affinity).
Furthermore, there's a new `.spec.service.headless` field which, when set to
true, tells Flagger to generate headless Kubernetes services. Also, support has
been added for adding headers to the request Flagger sends to Prometheus for
collecting metrics during an analysis via the `.spec.headers` field in the
`MetricTemplate` object.
Finally, both Flagger and the load tester have been updated to use Go 1.24 and
their dependencies have been updated as well.
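As a minimal sketch of the headless option (field path as described above; the port and the rest of the Canary spec are illustrative):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  service:
    port: 9898
    headless: true   # generated Kubernetes Services are created without a cluster IP
```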
#### Improvements
- Allow headers to be added to Prometheus requests
[#1757](https://github.com/fluxcd/flagger/pull/1757)
- feat: Add support for primary backend cookies in session affinity (Gateway API)
[#1783](https://github.com/fluxcd/flagger/pull/1783)
- Update Go dependencies
[#1787](https://github.com/fluxcd/flagger/pull/1787)
- Build with Go 1.24
[#1784](https://github.com/fluxcd/flagger/pull/1784)
- Add support for Knative
[#1682](https://github.com/fluxcd/flagger/pull/1682)
- chart: add support for deploymentLabels
[#1707](https://github.com/fluxcd/flagger/pull/1707)
- chart: add support for deploymentLabels
[#1707](https://github.com/fluxcd/flagger/pull/1707)
- feat: add option to generate headless services
[#1755](https://github.com/fluxcd/flagger/pull/1755)
#### Fixes
- Fix: Do not evaluate incomplete samples from Datadog
[#1763](https://github.com/fluxcd/flagger/pull/1763)
- Prevent primary HPA collision for KEDA scaled objects when migrating from an HPA
[#1677](https://github.com/fluxcd/flagger/pull/1677)
## 1.40.0
**Release date:** 2024-12-17
This release comes with support for Splunk Observability (formerly SignalFx) as a metrics provider.
For more information on how to write `MetricTemplates` for Splunk, please see the
[Splunk metrics tutorial](https://docs.flagger.app/usage/metrics#s#splunk).
Starting with this version, Flagger is compatible with the
[AWS Gateway API Controller](https://www.gateway-api-controller.eks.aws.dev/latest/).
Both Flagger and the load tester Go dependencies have been updated to fix various CVEs.
#### Improvements
- Add Splunk as a metrics provider
[#1733](https://github.com/fluxcd/flagger/pull/1733)
- Preserve HTTPRoute annotations injected by AWS Gateway API
[#1746](https://github.com/fluxcd/flagger/pull/1746)
- Automate `zz_generated.deepcopy.go` updates with make codegen
[#1735](https://github.com/fluxcd/flagger/pull/1735)
- Update dependencies
[#1744](https://github.com/fluxcd/flagger/pull/1744)
## 1.39.0
**Release date:** 2024-11-26
This release comes with fixes and improvements. There is a new
`.spec.analysis.webhooks[].disableTLS` field which disables TLS verification
for that webhook request.
A bug in the Gateway API provider was fixed which could lead to unnecessary restarts.
This release is built with Go 1.23. Lastly, all Go dependencies, Alpine and
Kubernetes libraries were updated.
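A minimal sketch of the new webhook option (the webhook name and URL are illustrative):
```yaml
spec:
  analysis:
    webhooks:
      - name: smoke-test
        url: https://flagger-loadtester.test/
        disableTLS: true   # skip TLS verification for this webhook request only
```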
#### Improvements
- Add validation for `primaryScalerReplicas` field in the CRD
[#1702](https://github.com/fluxcd/flagger/pull/1702)
- feat: add `disableTLS` option for webhooks request
[#1709](https://github.com/fluxcd/flagger/pull/1709)
- Update dependencies to Kubernetes v1.31.3
[#1723](https://github.com/fluxcd/flagger/pull/1723)
- Update generated client for Kubernetes 1.31
[#1725](https://github.com/fluxcd/flagger/pull/1725)
- Build with Go 1.23
[#1726](https://github.com/fluxcd/flagger/pull/1726)
#### Fixes
- Gateway API: Sort header filters to avoid canary restarts
[#1713](https://github.com/fluxcd/flagger/pull/1713)
- fix: fix codegen script and update generated code
[#1724](https://github.com/fluxcd/flagger/pull/1724)
- fix(helm): podinfo fails to create the hpa object
[#1721](https://github.com/fluxcd/flagger/pull/1721)
## 1.38.0
**Release date:** 2024-07-30
This release comes with several fixes and improvements. There is a new [Keptn
metrics provider](https://docs.flagger.app/usage/metrics#keptn) that can be used
for flexible grading logic and analysis.
The loadtester chart now supports ServiceAccount annotations and the Flagger
chart now supports specifying `honorLabels` for the PodMonitor.
Support for Kuma has been fixed and verified against Kuma 2.7.5. Also, the
Deployment scaling has been updated to use `Patch` instead of `Update` to avoid
intermittent conflict errors. Furthermore, a potential panic that could be
caused due to Prometheus returning a range vector has been fixed. Also, the
`request-duration` inbuilt query for Nginx has been updated to be more accurate.
Lastly, all Go dependencies, Alpine and Kubernetes libraries were updated.
#### Important
The update to Kubernetes libraries also brings an unwanted side-effect. Due to
a change in upstream Kubernetes, sidecar support is done through a new field,
which may be utilized by other services in your cluster. This would change the
hash calculated by Flagger between runs and trigger an unwanted Canary
analysis. Unfortunately, this is unavoidable. To get around this, users could
set the `.spec.suspend` field to be true before updating to this version and
switch it back when they update their application.
#### Improvements
- Bumps golang.org/x/net to v0.23.0
[#1628](https://github.com/fluxcd/flagger/pull/1628)
- feat: implement a Keptn metrics provider
[#1630](https://github.com/fluxcd/flagger/pull/1630)
- Update dependencies to Kubernetes 1.30
[#1638](https://github.com/fluxcd/flagger/pull/1638)
- loadtester: add support for annotation on service account
[#1649](https://github.com/fluxcd/flagger/pull/1649)
- Bump golang.org/x/net to v0.25.0 and other deps.
[#1653](https://github.com/fluxcd/flagger/pull/1653)
- Update Go dependencies and Alpine
[#1656](https://github.com/fluxcd/flagger/pull/1656)
- Helm - Add podMonitor.honor labels
[#1676](https://github.com/fluxcd/flagger/pull/1676)
- kuma: bump e2e version to 2.7.5
[#1683](https://github.com/fluxcd/flagger/pull/1683)
- Release loadtester 0.33.0
[#1690](https://github.com/fluxcd/flagger/pull/1690)
- Bump google.golang.org/grpc from 1.64.0 to 1.64.1
[#1675](https://github.com/fluxcd/flagger/pull/1675)
#### Fixes
- Use `Patch` instead of `Update` for Deployment scaling
[#1634](https://github.com/fluxcd/flagger/pull/1634)
- block panic when prom returns range vector
[#1637](https://github.com/fluxcd/flagger/pull/1637)
- Fix removal of empty keys from flagger chart
[#1657](https://github.com/fluxcd/flagger/pull/1657)
- doc: fix KEDA doc regarding namespaces
[#1666](https://github.com/fluxcd/flagger/pull/1666)
- Fix Nginx request-duration query
[#1686](https://github.com/fluxcd/flagger/pull/1686)
## 1.37.0
**Release date:** 2024-03-26
This release updates the Istio APIs to `v1beta1` and fixes several issues related
to Gloo routing and custom metrics.
Both Flagger and the load tester Go dependencies have been updated to fix various CVEs.
Flagger and the load tester are now built with Go 1.22.
#### Improvements
- Migrate Istio VirtualService/DestinationRule APIs to `v1beta1`
[#1602](https://github.com/fluxcd/flagger/pull/1602)
- Add `omitempty` to CRD statuses to allow better marshalling
[#1621](https://github.com/fluxcd/flagger/pull/1621)
- Update dependencies (Go 1.22)
[#1622](https://github.com/fluxcd/flagger/pull/1622)
- Update `google.golang.org/protobuf` to v1.33.0
[#1614](https://github.com/fluxcd/flagger/pull/1614)
#### Fixes
- Update reconciler to detect change in Gloo upstream spec
[#1617](https://github.com/fluxcd/flagger/pull/1617)
- Fix regression bug where query with no metric template returned an error
[#1611](https://github.com/fluxcd/flagger/pull/1611)
## 1.36.1
**Release date:** 2024-03-06
This release fixes a bug where `.spec.progressDeadlineSeconds` wasn't respected and the Canary
was stuck forever waiting for the Deployment to be ready.
Furthermore, the Go dependencies have been updated.
#### Improvements
- Update Go dependencies
[#1607](https://github.com/fluxcd/flagger/pull/1607)
#### Fixes
- Fix broken link in readme
[#1599](https://github.com/fluxcd/flagger/pull/1599)
- scheduler: fail canary according to progress deadline
[#1603](https://github.com/fluxcd/flagger/pull/1603)
- Actualize link to flux in-depth guide
[#1606](https://github.com/fluxcd/flagger/pull/1606)
## 1.36.0
**Release date:** 2024-02-07
This release comes with support for canary releases with traffic shifting using
Istio TCP routing. For more information on how to enable TCP routing please
see the [Istio tutorial](https://docs.flagger.app/tutorials/istio-progressive-delivery#canary-deployments-for-tcp-services).
Both Flagger and the load tester Go dependencies have been updated to fix various CVEs.
Flagger is now built with Go 1.21 and the container base image has been updated to Alpine 3.19.
#### Improvements
- Istio Canary TCP service support
[#1564](https://github.com/fluxcd/flagger/pull/1564)
- Update Go dependencies
[#1595](https://github.com/fluxcd/flagger/pull/1595)
- Build with Go 1.21 and Alpine 3.19
[#1594](https://github.com/fluxcd/flagger/pull/1594)
#### Fixes
- return an error for missing metric templates
[#1582](https://github.com/fluxcd/flagger/pull/1582)
- istio: make retry attempts a mandatory field
[#1571](https://github.com/fluxcd/flagger/pull/1571)
- fix(pdb): use the full capabilities comparison for PDBs
[#1511](https://github.com/fluxcd/flagger/pull/1511)
## 1.35.0
**Release date:** 2023-11-30
This release comes with support for Gateway API `v1`. Furthermore, following the
deprecation period, support for the `v1alpha2` API has been dropped.
A new field `.spec.webhooks[].retries` has been added to allow specifying the
number of retry attempts to make if the webhook server returns an unsuccessful
response.
Another new field `.spec.service.trafficPolicy.loadBalancer.warmupDurationSeconds`
has been added for the corresponding field in Istio's `DestinationRule` API.
Lastly, two bugs related to deleting a Canary object with
`.spec.revertOnDeletion: true` have been fixed.
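A combined sketch of the two new fields (all values are illustrative):
```yaml
spec:
  service:
    trafficPolicy:
      loadBalancer:
        warmupDurationSeconds: 60   # passed through to the Istio DestinationRule
  analysis:
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        retries: 3   # retry the webhook request up to 3 times before marking it failed
```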
#### Improvements
- Support Istio DestinationRule WarmupDurationSecs
[#1540](https://github.com/fluxcd/flagger/pull/1540)
- feat: Webhook retries
[#1541](https://github.com/fluxcd/flagger/pull/1541)
- gatewayapi: add support for `v1`
[#1557](https://github.com/fluxcd/flagger/pull/1557)
- Update Go dependencies
[#1558](https://github.com/fluxcd/flagger/pull/1558)
#### Fixes
- set original node selector value when finalizing service
[#1537](https://github.com/fluxcd/flagger/pull/1537)
- controller: wait for canary deployment to be ready before removing finalizers
[#1552](https://github.com/fluxcd/flagger/pull/1552)
## 1.34.0
**Release date:** 2023-10-04
This release comes with several new features. The Gateway API integration
has been significantly improved with support for
* [Canary releases with session affinity](https://docs.flagger.app/tutorials/gatewayapi-progressive-delivery#session-affinty)
* [B/G deployments with traffic mirroring](https://docs.flagger.app/tutorials/gatewayapi-progressive-delivery#traffic-mirroring)
* Filters in the generated `HTTPRoute` (`.spec.rules[].filters`)
Most of the Filters are derived from existing fields in the Canary spec like
`.spec.service.headers`. To support arbitrary request mirroring through the
`RequestMirror` filter, a new field `.spec.service.mirror` has been introduced.
A new field `checksum` has been added to the Canary webhook payload. This field
is computed by hashing the `.status.lastAppliedSpec` and
`.status.trackedConfigs`. It can be used to distinguish between Canary runs.
Furthermore, the Gloo integration now uses strings for specifying time durations
for better compatibility with protobuf duration parsing.
Lastly, Kubernetes packages were updated to be on 1.27.
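A hedged sketch of the new `.spec.service.mirror` field, following the `backendRef` schema (the Service name and namespace are hypothetical):
```yaml
spec:
  service:
    port: 9898
    mirror:
      - backendRef:
          kind: Service
          name: podinfo-mirror   # hypothetical Service that receives the mirrored requests
          namespace: test
          port: 9898
```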
#### Improvements
- Update Kubernetes to v1.27
[#1506](https://github.com/fluxcd/flagger/pull/1506)
- gatewayapi: add support for session affinity
[#1507](https://github.com/fluxcd/flagger/pull/1507)
- gatewayapi: add support for route rule filters
[#1512](https://github.com/fluxcd/flagger/pull/1512)
- Update Linkerd tutorial to use Kubernetes Gateway API
[#1516](https://github.com/fluxcd/flagger/pull/1516)
- Add Checksum field to the Webhook payload to distinguish canary runs
[#1521](https://github.com/fluxcd/flagger/pull/1521)
- gatewayapi: add support for b/g mirroring
[#1525](https://github.com/fluxcd/flagger/pull/1525)
- Update Go dependencies
[#1528](https://github.com/fluxcd/flagger/pull/1528)
#### Fixes
- chore: fix incorrect canary name on document
[#1502](https://github.com/fluxcd/flagger/pull/1502)
- fix: Support for queryParams in canary match condition #880
[#1505](https://github.com/fluxcd/flagger/pull/1505)
- docs: fix error example in deployment strategies
[#1518](https://github.com/fluxcd/flagger/pull/1518)
- Change Gloo Duration type to string
[#1524](https://github.com/fluxcd/flagger/pull/1524)
## 1.33.0
**Release date:** 2023-08-29
This release fixes bugs related to the Canary lifecycle. The
`confirm-traffic-increase` webhook is no longer called if the Canary is in the
`WaitingPromotion` phase. Furthermore, a bug which caused downtime when
initializing the Canary deployment has been fixed.
Also, a bug in the `request-duration` metric for Traefik which assumed the
result to be in milliseconds instead of seconds has been addressed.
The loadtester now also supports running `kubectl` commands.
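A sketch of a webhook using the kubectl command type, assuming the loadtester accepts `type: kubectl` with a `cmd` payload analogous to the existing `bash` type (names, URL and command are illustrative):
```yaml
  analysis:
    webhooks:
      - name: smoke-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          type: kubectl
          cmd: "get deployment/podinfo -n test"   # illustrative kubectl command run by the loadtester
```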
#### Improvements
- Helm: Add option to configure honorLabels for serviceMonitor
[#1442](https://github.com/fluxcd/flagger/pull/1442)
- Helm: Use PodDisruptionBudget API policy/v1 if available
[#1476](https://github.com/fluxcd/flagger/pull/1476)
- podinfo: Update hpa version from autoscaling/v2beta2 to autoscaling/v2
[#1477](https://github.com/fluxcd/flagger/pull/1477)
- Helm: Allow custom labels for servicemonitor
[#1483](https://github.com/fluxcd/flagger/pull/1483)
- feat: loadtester support kubectl type
[#1485](https://github.com/fluxcd/flagger/pull/1485)
- Update Istio Gateway reference format
[#1489](https://github.com/fluxcd/flagger/pull/1489)
- e2e: Update Istio to v1.18
[#1492](https://github.com/fluxcd/flagger/pull/1492)
- add docs for kubectl in loadtester
[#1494](https://github.com/fluxcd/flagger/pull/1494)
#### Fixes
- fix: typo on "Parase", should be "Parse".
[#1443](https://github.com/fluxcd/flagger/pull/1443)
- Fix Traefik request-duration metric
[#1446](https://github.com/fluxcd/flagger/pull/1446)
- Fix initial deployment downtime
[#1451](https://github.com/fluxcd/flagger/pull/1451)
- Fix FAQ templating format and change reference of $workload to $target.
[#1456](https://github.com/fluxcd/flagger/pull/1456)
- Update doc.go
[#1466](https://github.com/fluxcd/flagger/pull/1466)
- Avoid running traffic increase hooks when waiting for promotion or promoting
[#1470](https://github.com/fluxcd/flagger/pull/1470)
## 1.32.0
**Release date:** 2023-07-14
This release adds support for suspending a Canary using `.spec.suspend`.
It also fixes a bug where the target deployment gets stuck at 0 replicas
after the Canary has been deleted.
Furthermore, the Canary API has been modified to allow specifying the
HTTPRoute port using `.service.gatewayRefs[].port`.
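An illustrative snippet showing both additions (the Gateway reference and port values are placeholders):
```yaml
spec:
  suspend: true                    # pause all canary runs for this target
  service:
    port: 9898
    gatewayRefs:
      - name: gateway              # hypothetical Gateway
        namespace: istio-system
        port: 8080                 # port set on the HTTPRoute parent reference
```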
#### Improvements
- Helm: Add option to create service and serviceMonitor
[#1425](https://github.com/fluxcd/flagger/pull/1425)
- Update Alpine to 3.18
[#1426](https://github.com/fluxcd/flagger/pull/1426)
- Add `spec.suspend` to allow suspending canary
[#1431](https://github.com/fluxcd/flagger/pull/1431)
- Add support for istio LEAST_REQUEST destination rule load balancing
[#1439](https://github.com/fluxcd/flagger/pull/1439)
- Add gatewayRef port to Canary CRD
[#1453](https://github.com/fluxcd/flagger/pull/1453)
- feat: Copy slowStartConfig for Gloo upstreams
[#1455](https://github.com/fluxcd/flagger/pull/1455)
- Update Go dependencies
[#1459](https://github.com/fluxcd/flagger/pull/1459)
#### Fixes
- Resume target scaler during finalization
[#1429](https://github.com/fluxcd/flagger/pull/1429)
- Fix panic when annotation of ingress is empty
[#1437](https://github.com/fluxcd/flagger/pull/1437)
- Fixing namespace of HelmRepository in installation docs
[#1458](https://github.com/fluxcd/flagger/pull/1458)
## 1.31.0
**Release date:** 2023-05-10
⚠️ __Breaking Changes__
This release adds support for Linkerd 2.12 and later. Due to changes in Linkerd
the default namespace for Flagger's installation had to be changed from
`linkerd` to `flagger-system` and the `flagger` Deployment is now injected with
the Linkerd proxy. Furthermore, installing Flagger for Linkerd will result in
the creation of an `AuthorizationPolicy` that allows access to the Prometheus
instance in the `linkerd-viz` namespace. To upgrade your Flagger installation,
please see the below migration guide.
If you use Kustomize, then follow these steps:
* `kubectl delete -n linkerd deploy/flagger`
* `kubectl delete -n linkerd serviceaccount flagger`
* If you're on Linkerd >= 2.12, you'll need to install the SMI extension to enable
support for `TrafficSplit`s:
```bash
curl -sL https://linkerd.github.io/linkerd-smi/install | sh
linkerd smi install | kubectl apply -f -
```
* `kubectl apply -k github.com/fluxcd/flagger//kustomize/linkerd`
Note: If you're on Linkerd < 2.12, this will report an error about missing CRDs.
It is safe to ignore this error.
If you use Helm and are on Linkerd < 2.12, then you can use `helm upgrade` to do
a regular upgrade.
If you use Helm and are on Linkerd >= 2.12, then follow these steps:
* `helm uninstall flagger -n linkerd`
* Install the Linkerd SMI extension:
```bash
helm repo add l5d-smi https://linkerd.github.io/linkerd-smi
helm install linkerd-smi l5d-smi/linkerd-smi -n linkerd-smi --create-namespace
```
* Install Flagger in the `flagger-system` namespace
and create an `AuthorizationPolicy`:
```bash
helm repo update flagger
helm install flagger flagger/flagger \
--namespace flagger-system \
--set meshProvider=linkerd \
--set metricsServer=http://prometheus.linkerd-viz:9090 \
--set linkerdAuthPolicy.create=true
```
Furthermore, a bug has been fixed which caused the `confirm-rollout` webhook to be
executed at every step of the Canary instead of only before the canary Deployment
is scaled up.
#### Improvements
- Add support for Linkerd 2.13
[#1417](https://github.com/fluxcd/flagger/pull/1417)
#### Fixes
- Fix the loadtester install with flux documentation
[#1384](https://github.com/fluxcd/flagger/pull/1384)
- Run `confirm-rollout` checks only before scaling up deployment
[#1414](https://github.com/fluxcd/flagger/pull/1414)
- e2e: Remove OSM tests
[#1423](https://github.com/fluxcd/flagger/pull/1423)
## 1.30.0
**Release date:** 2023-04-12
This release fixes a bug where the metadata of the generated objects was not
updated according to the metadata specified in `spec.service.apex`.
Furthermore, a bug where labels were wrongfully copied over from the canary
deployment to primary deployment when no value was provided for
`--include-label-prefix` has been fixed.
This release also makes Flagger compatible with Flux's helm-controller drift
detection.
#### Improvements
- build(deps): bump actions/cache from 3.2.5 to 3.3.1
[#1385](https://github.com/fluxcd/flagger/pull/1385)
- helm: Added the option to supply additional volumes
[#1393](https://github.com/fluxcd/flagger/pull/1393)
- build(deps): bump actions/setup-go from 3 to 4
[#1394](https://github.com/fluxcd/flagger/pull/1394)
- update Kuma version and docs
[#1402](https://github.com/fluxcd/flagger/pull/1402)
- ci: bump k8s to 1.24 and kind to 1.18
[#1406](https://github.com/fluxcd/flagger/pull/1406)
- Helm: Allow configuring deployment `annotations`
[#1411](https://github.com/fluxcd/flagger/pull/1411)
- update dependencies
[#1412](https://github.com/fluxcd/flagger/pull/1412)
#### Fixes
- Enable updates for labels and annotations
[#1392](https://github.com/fluxcd/flagger/pull/1392)
- Update flagger-install-with-flux.md
[#1398](https://github.com/fluxcd/flagger/pull/1398)
- avoid copying canary labels to primary on promotion
[#1405](https://github.com/fluxcd/flagger/pull/1405)
- Disable Flux helm drift detection for managed resources
[#1408](https://github.com/fluxcd/flagger/pull/1408)
## 1.29.0
**Release date:** 2023-02-21
This release comes with support for template variables for analysis metrics.
A canary analysis metric can reference a set of custom variables with
`.spec.analysis.metrics[].templateVariables`. For more info see the [docs](https://fluxcd.io/flagger/usage/metrics/#custom-metrics).
Furthermore, a bug related to Canary releases with session affinity has been
fixed.
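A hedged example of a metric referencing custom variables (the MetricTemplate name and variable are hypothetical); per the linked docs, a variable can then be substituted into the template query:
```yaml
  analysis:
    metrics:
      - name: error-rate
        templateRef:
          name: error-rate          # hypothetical MetricTemplate
          namespace: flagger-system
        thresholdRange:
          max: 1
        interval: 1m
        templateVariables:
          direction: inbound        # value substituted into the MetricTemplate query
```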
#### Improvements
- update dependencies
[#1374](https://github.com/fluxcd/flagger/pull/1374)
- build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
[#1373](https://github.com/fluxcd/flagger/pull/1373)
- build(deps): bump fossa-contrib/fossa-action from 1 to 2
[#1372](https://github.com/fluxcd/flagger/pull/1372)
- Allow custom affinities for flagger deployment in helm chart
[#1371](https://github.com/fluxcd/flagger/pull/1371)
- Add namespace to namespaced resources in helm chart
[#1370](https://github.com/fluxcd/flagger/pull/1370)
- build(deps): bump actions/cache from 3.2.4 to 3.2.5
[#1366](https://github.com/fluxcd/flagger/pull/1366)
- build(deps): bump actions/cache from 3.2.3 to 3.2.4
[#1362](https://github.com/fluxcd/flagger/pull/1362)
- build(deps): bump docker/build-push-action from 3 to 4
[#1361](https://github.com/fluxcd/flagger/pull/1361)
- modify release workflow to publish rc images
[#1359](https://github.com/fluxcd/flagger/pull/1359)
- build: Enable SBOM and SLSA Provenance
[#1356](https://github.com/fluxcd/flagger/pull/1356)
- Add support for custom variables in metric templates
[#1355](https://github.com/fluxcd/flagger/pull/1355)
- docs(readme.md): add additional tutorial
[#1346](https://github.com/fluxcd/flagger/pull/1346)
#### Fixes
- use regex to match against headers in istio
[#1364](https://github.com/fluxcd/flagger/pull/1364)
## 1.28.0
**Release date:** 2023-01-26
This release comes with support for setting a different autoscaling
configuration for the primary workload.
The new `.spec.autoscalerRef.primaryScalerReplicas` field is useful in the
situation where the user does not want to scale the canary workload
to the exact same size as the primary, especially when opting for a
canary deployment pattern where only a small portion of traffic is
routed to the canary workload pods.
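A minimal sketch, with the replica counts chosen for illustration:
```yaml
spec:
  autoscalerRef:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: podinfo
    primaryScalerReplicas:
      minReplicas: 2    # primary scaler floor, independent of the canary scaler
      maxReplicas: 5    # primary scaler ceiling
```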
#### Improvements
- Support for overriding primary scaler replicas
[#1343](https://github.com/fluxcd/flagger/pull/1343)
- Allow access to Prometheus in OpenShift via SA token
[#1338](https://github.com/fluxcd/flagger/pull/1338)
- Update Kubernetes packages to v1.26.1
[#1352](https://github.com/fluxcd/flagger/pull/1352)
## 1.27.0
**Release date:** 2022-12-15
This release comes with support for Apache APISIX. For more details see the
[tutorial](https://fluxcd.io/flagger/tutorials/apisix-progressive-delivery).
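A sketch of a Canary targeting an `ApisixRoute`, assuming the `apisix` provider name and the `apisix.apache.org/v2` API version used in the tutorial:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
  provider: apisix            # assumption: provider name as used in the tutorial
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  routeRef:
    apiVersion: apisix.apache.org/v2
    kind: ApisixRoute
    name: podinfo             # hypothetical ApisixRoute managed by Flagger
```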
#### Improvements
- [apisix] Implement router interface and observer interface
[#1281](https://github.com/fluxcd/flagger/pull/1281)
- Bump stefanprodan/helm-gh-pages from 1.6.0 to 1.7.0
[#1326](https://github.com/fluxcd/flagger/pull/1326)
- Release loadtester v0.28.0
[#1328](https://github.com/fluxcd/flagger/pull/1328)
#### Fixes
- Update release docs
[#1324](https://github.com/fluxcd/flagger/pull/1324)
## 1.26.0
**Release date:** 2022-11-23
This release comes with support for Kubernetes [Gateway API](https://gateway-api.sigs.k8s.io/) v1beta1.
For more details see the [Gateway API Progressive Delivery tutorial](https://docs.flagger.app/tutorials/gatewayapi-progressive-delivery).
Please note that starting with this version, the Gateway API v1alpha2 is considered deprecated
and will be removed from Flagger after 6 months.
#### Improvements:
- Updated Gateway API from v1alpha2 to v1beta1
[#1319](https://github.com/fluxcd/flagger/pull/1319)
- Updated Gateway API docs to v1beta1
[#1321](https://github.com/fluxcd/flagger/pull/1321)
- Update dependencies
[#1322](https://github.com/fluxcd/flagger/pull/1322)
#### Fixes:
- docs: Add `linkerd install --crds` to Linkerd tutorial
[#1316](https://github.com/fluxcd/flagger/pull/1316)
## 1.25.0
**Release date:** 2022-11-16
This release introduces a new deployment strategy combining Canary releases with session affinity
for Istio.
Furthermore, it contains a regression fix regarding metadata in alerts introduced in
[#1275](https://github.com/fluxcd/flagger/pull/1275)
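An illustrative analysis snippet, assuming the documented `sessionAffinity` fields (`cookieName` and `maxAge`); the values are placeholders:
```yaml
  analysis:
    interval: 1m
    maxWeight: 50
    stepWeight: 10
    sessionAffinity:
      cookieName: flagger-cookie   # cookie used to keep users pinned to the canary
      maxAge: 21600                # cookie lifetime in seconds
```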
#### Improvements:
- Add support for session affinity during weighted routing with Istio
[#1280](https://github.com/fluxcd/flagger/pull/1280)
#### Fixes:
- Fix cluster name inclusion in alerts metadata
[#1306](https://github.com/fluxcd/flagger/pull/1306)
- fix(faq): Update FAQ about zero downtime with correct values
[#1302](https://github.com/fluxcd/flagger/pull/1302)
## 1.24.1
**Release date:** 2022-10-26
This release comes with a fix to Gloo routing when a custom service name is used.
In addition, the Gloo ingress end-to-end testing was updated to Gloo Helm chart v1.12.31.
#### Fixes:
- fix(gloo): Use correct route table name in case service name was overwritten
[#1300](https://github.com/fluxcd/flagger/pull/1300)
## 1.24.0
**Release date:** 2022-10-23
Starting with this version, the Flagger release artifacts are published to
GitHub Container Registry, and they are signed with Cosign and GitHub OIDC.
OCI artifacts:
- `ghcr.io/fluxcd/flagger:<version>` multi-arch container images
- `ghcr.io/fluxcd/flagger-manifests:<version>` Kubernetes manifests
- `ghcr.io/fluxcd/charts/flagger:<version>` Helm charts
To verify an OCI artifact with Cosign:
```shell
export COSIGN_EXPERIMENTAL=1
cosign verify ghcr.io/fluxcd/flagger:1.24.0
cosign verify ghcr.io/fluxcd/flagger-manifests:1.24.0
cosign verify ghcr.io/fluxcd/charts/flagger:1.24.0
```
To deploy Flagger from its OCI artifacts the GitOps way,
please see the [Flux installation guide](docs/gitbook/install/flagger-install-with-flux.md).
#### Improvements:
- docs: Add guide on how to install Flagger with Flux OCI
[#1294](https://github.com/fluxcd/flagger/pull/1294)
- ci: Publish signed Helm charts and manifests to GHCR
[#1293](https://github.com/fluxcd/flagger/pull/1293)
- ci: Sign release and containers with Cosign and GitHub OIDC
[#1292](https://github.com/fluxcd/flagger/pull/1292)
- ci: Adjust GitHub workflow permissions
[#1286](https://github.com/fluxcd/flagger/pull/1286)
- docs: Add link to Flux governance document
[#1286](https://github.com/fluxcd/flagger/pull/1286)
## 1.23.0
**Release date:** 2022-10-20
This release comes with support for Slack bot token authentication.
#### Improvements:
- alerts: Add support for Slack bot token authentication
[#1270](https://github.com/fluxcd/flagger/pull/1270)
- loadtester: logCmdOutput to logger instead of stdout
[#1267](https://github.com/fluxcd/flagger/pull/1267)
- helm: Add app.kubernetes.io/version label to chart
[#1264](https://github.com/fluxcd/flagger/pull/1264)
- Update Go to 1.19
[#1264](https://github.com/fluxcd/flagger/pull/1264)
- Update Kubernetes packages to v1.25.3
[#1283](https://github.com/fluxcd/flagger/pull/1283)
- Bump Contour to v1.22 in e2e tests
[#1282](https://github.com/fluxcd/flagger/pull/1282)
#### Fixes:
- gatewayapi: Fix reconciliation of nil hostnames
[#1276](https://github.com/fluxcd/flagger/pull/1276)
- alerts: Include cluster name in all alerts
[#1275](https://github.com/fluxcd/flagger/pull/1275)
## 1.22.2
**Release date:** 2022-08-29
This release fixes a bug related to scaling up the canary deployment when a
reference to an autoscaler is specified.
Furthermore, it contains updates to packages used by the project, including
updates to Helm and grpc-health-probe used in the loadtester.
CVEs fixed (originating from dependencies):
* CVE-2022-37434
* CVE-2022-27191
* CVE-2021-33194
* CVE-2021-44716
* CVE-2022-29526
* CVE-2022-1996
#### Fixes:
- If HPA is set, it uses HPA minReplicas when scaling up the canary
[#1253](https://github.com/fluxcd/flagger/pull/1253)
#### Improvements:
- Release loadtester v0.23.0
[#1246](https://github.com/fluxcd/flagger/pull/1246)
- Add target and script to keep crds in sync
[#1254](https://github.com/fluxcd/flagger/pull/1254)
- docs: add knative support to roadmap
[#1258](https://github.com/fluxcd/flagger/pull/1258)
- Update dependencies
[#1259](https://github.com/fluxcd/flagger/pull/1259)
- Release loadtester v0.24.0
[#1261](https://github.com/fluxcd/flagger/pull/1261)
## 1.22.1
**Release date:** 2022-08-01
This minor release fixes a bug related to the use of HPA v2beta2 and updates
the KEDA ScaledObject API to include `MetricType` for `ScaleTriggers`.
Furthermore, the project has been updated to use Go 1.18 and Alpine 3.16.
#### Fixes:
- Update KEDA ScaledObject API to include MetricType for Triggers
[#1241](https://github.com/fluxcd/flagger/pull/1241)
- Fix fallback logic for HPAv2 to v2beta2
[#1242](https://github.com/fluxcd/flagger/pull/1242)
#### Improvements:
- Update Go to 1.18 and Alpine to 3.16
[#1243](https://github.com/fluxcd/flagger/pull/1243)
- Clarify HPA API requirement
[#1239](https://github.com/fluxcd/flagger/pull/1239)
- Update README
[#1233](https://github.com/fluxcd/flagger/pull/1233)
## 1.22.0
**Release date:** 2022-07-11
This release comes with support for KEDA ScaledObjects as an alternative to HPAs. Check the
[tutorial](https://docs.flagger.app/tutorials/keda-scaledobject) to understand its usage
with Flagger.
The `.spec.service.appProtocol` field can now be used to specify the [`appProtocol`](https://kubernetes.io/docs/concepts/services-networking/service/#application-protocol)
of the services that Flagger generates.
In addition, a bug related to the Contour Prometheus query when the service name is overwritten,
along with a bug related to Contour `HTTPProxy` annotations, has been fixed.
Furthermore, the installation guide for Alibaba ServiceMesh has been updated.
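A sketch combining both additions (the ScaledObject name is hypothetical):
```yaml
spec:
  autoscalerRef:
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    name: podinfo-so          # hypothetical KEDA ScaledObject used instead of an HPA
  service:
    port: 9898
    appProtocol: http         # propagated to the generated Kubernetes Services
```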
#### Improvements:
- feat: Add an optional `appProtocol` field to `spec.service`
[#1185](https://github.com/fluxcd/flagger/pull/1185)
- Update Kubernetes packages to v1.24.1
[#1208](https://github.com/fluxcd/flagger/pull/1208)
- charts: Add namespace parameter to parameters table
[#1210](https://github.com/fluxcd/flagger/pull/1210)
- Introduce `ScalerReconciler` and refactor HPA reconciliation
[#1211](https://github.com/fluxcd/flagger/pull/1211)
- e2e: Update providers and Kubernetes to v1.23
[#1212](https://github.com/fluxcd/flagger/pull/1212)
- Add support for KEDA ScaledObjects as an auto scaler
[#1216](https://github.com/fluxcd/flagger/pull/1216)
- include Contour retryOn in the sample canary
[#1223](https://github.com/fluxcd/flagger/pull/1223)
#### Fixes:
- fix contour prom query for when service name is overwritten
[#1204](https://github.com/fluxcd/flagger/pull/1204)
- fix contour httproxy annotations overwrite
[#1205](https://github.com/fluxcd/flagger/pull/1205)
- Fix primary HPA label reconciliation
[#1215](https://github.com/fluxcd/flagger/pull/1215)
- fix: add finalizers to canaries
[#1219](https://github.com/fluxcd/flagger/pull/1219)
- typo: boostrap -> bootstrap
[#1220](https://github.com/fluxcd/flagger/pull/1220)
- typo: controller
[#1221](https://github.com/fluxcd/flagger/pull/1221)
- update guide for flagger on aliyun ASM
[#1222](https://github.com/fluxcd/flagger/pull/1222)
- Reintroducing empty check for metric template references.
[#1224](https://github.com/fluxcd/flagger/pull/1224)
## 1.21.0
**Release date:** 2022-05-06
This release comes with an option to disable cross-namespace references to Kubernetes
custom resources such as `AlertProviders` and `MetricTemplates`. When running Flagger
in multi-tenant environments it is advised to set the `-no-cross-namespace-refs=true` flag.
In addition, this version enables Flagger to target Istio and Kuma multi-cluster setups.
When installing Flagger with Helm, the service mesh control plane kubeconfig secret
can be specified using `--set controlplane.kubeconfig.secretName`.
#### Improvements
- Add flag to disable cross namespace refs to custom resources
[#1181](https://github.com/fluxcd/flagger/pull/1181)
- Rename kubeconfig section in helm values
[#1188](https://github.com/fluxcd/flagger/pull/1188)
- Update Flagger overview diagram
[#1187](https://github.com/fluxcd/flagger/pull/1187)
#### Fixes
- Avoid setting owner refs if the service mesh/ingress is on a different cluster
[#1183](https://github.com/fluxcd/flagger/pull/1183)
## 1.20.0
**Release date:** 2022-04-15
This release comes with improvements to the AppMesh, Contour and Istio integrations.
#### Improvements
- AppMesh: Add annotation to enable Envoy access logs
[#1156](https://github.com/fluxcd/flagger/pull/1156)
- Contour: Update the httproxy API and enable RetryOn
[#1164](https://github.com/fluxcd/flagger/pull/1164)
- Istio: Add destination port when port discovery and delegation are true
[#1145](https://github.com/fluxcd/flagger/pull/1145)
- Metrics: Add canary analysis result as Prometheus metrics
[#1148](https://github.com/fluxcd/flagger/pull/1148)
#### Fixes
- Fix canary rollback behaviour
[#1171](https://github.com/fluxcd/flagger/pull/1171)
- Shorten the metric analysis cycle after confirm promotion gate is open
[#1139](https://github.com/fluxcd/flagger/pull/1139)
- Fix unit of time in the Istio Grafana dashboard
[#1162](https://github.com/fluxcd/flagger/pull/1162)
- Fix the service toggle condition in the podinfo helm chart
[#1146](https://github.com/fluxcd/flagger/pull/1146)
## 1.19.0
**Release date:** 2022-03-14
This release comes with support for Kubernetes [Gateway API](https://gateway-api.sigs.k8s.io/) v1alpha2.
For more details see the [Gateway API Progressive Delivery tutorial](https://docs.flagger.app/tutorials/gatewayapi-progressive-delivery).
#### Features
- Add Gateway API as a provider
[#1108](https://github.com/fluxcd/flagger/pull/1108)
#### Improvements
- Add arm64 support for loadtester
[#1128](https://github.com/fluxcd/flagger/pull/1128)
- Restrict source namespaces in flagger-loadtester
[#1119](https://github.com/fluxcd/flagger/pull/1119)
- Remove support for Helm v2 in loadtester
[#1130](https://github.com/fluxcd/flagger/pull/1130)
#### Fixes
- Fix potential canary finalizer duplication
[#1125](https://github.com/fluxcd/flagger/pull/1125)
- Use the primary replicas when scaling up the canary (no hpa)
[#1110](https://github.com/fluxcd/flagger/pull/1110)
## 1.18.0
**Release date:** 2022-02-14
This release comes with a new API field called `canaryReadyThreshold`
that allows setting the percentage of pods that need to be available
to consider the canary deployment as ready.
Starting with this version, the canary deployment labels, annotations and
replicas fields are copied to the primary deployment at promotion time.
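An illustrative analysis snippet (the threshold values are placeholders):
```yaml
  analysis:
    primaryReadyThreshold: 100   # existing field: percentage of primary pods that must be available
    canaryReadyThreshold: 50     # new field: percentage of canary pods that must be available
```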
#### Features
- Add field `spec.analysis.canaryReadyThreshold` for configuring canary threshold
[#1102](https://github.com/fluxcd/flagger/pull/1102)
#### Improvements
- Update metadata during subsequent promote
[#1092](https://github.com/fluxcd/flagger/pull/1092)
- Set primary deployment `replicas` when autoscaler isn't used
[#1106](https://github.com/fluxcd/flagger/pull/1106)
- Update `matchLabels` for `TopologySpreadConstraints` in Deployments
[#1041](https://github.com/fluxcd/flagger/pull/1041)
#### Fixes
- Send warning and error alerts correctly
[#1105](https://github.com/fluxcd/flagger/pull/1105)
- Fix for when Prometheus returns NaN
[#1095](https://github.com/fluxcd/flagger/pull/1095)
- docs: Fix typo ExternalDNS
[#1103](https://github.com/fluxcd/flagger/pull/1103)
## 1.17.0
**Release date:** 2022-01-11
This release comes with support for [Kuma Service Mesh](https://kuma.io/).
For more details see the [Kuma Progressive Delivery tutorial](https://docs.flagger.app/tutorials/kuma-progressive-delivery).
To differentiate alerts based on the cluster name, you can configure Flagger with the `-cluster-name=my-cluster`
command flag, or with Helm `--set clusterName=my-cluster`.
#### Features
- Add kuma support for progressive traffic shifting canaries
[#1085](https://github.com/fluxcd/flagger/pull/1085)
[#1093](https://github.com/fluxcd/flagger/pull/1093)
#### Improvements
- Publish a Software Bill of Materials (SBOM)
[#1094](https://github.com/fluxcd/flagger/pull/1094)
- Add cluster name to flagger cmd args for alerting
[#1041](https://github.com/fluxcd/flagger/pull/1041)
## 1.16.1
**Release date:** 2021-12-17
@ -555,7 +1520,7 @@ The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.a
Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
The analysis can reference [metric templates](https://docs.flagger.app//usage/metrics#custom-metrics)
to query Prometheus, Datadog and AWS CloudWatch.
[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
[Alerting](https://docs.flagger.app/v/main/usage/alerting#canary-configuration) can be configured on a per
canary basis for Slack, MS Teams, Discord and Rocket.
#### Features
@ -719,7 +1684,7 @@ The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.a
Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
The analysis can reference [metric templates](https://docs.flagger.app//usage/metrics#custom-metrics)
to query Prometheus, Datadog and AWS CloudWatch.
[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
[Alerting](https://docs.flagger.app/v/main/usage/alerting#canary-configuration) can be configured on a per
canary basis for Slack, MS Teams, Discord and Rocket.
#### Features

View File

@ -1,4 +1,11 @@
FROM golang:1.17-alpine as builder
ARG GO_VERSION=1.24
ARG XX_VERSION=1.6.1
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine AS builder
# copy build utilities
COPY --from=xx / /
ARG TARGETPLATFORM
ARG REVISON
@ -17,11 +24,12 @@ COPY cmd/ cmd/
COPY pkg/ pkg/
# build
RUN CGO_ENABLED=0 go build \
ENV CGO_ENABLED=0
RUN xx-go build \
-ldflags "-s -w -X github.com/fluxcd/flagger/pkg/version.REVISION=${REVISON}" \
-a -o flagger ./cmd/flagger
FROM alpine:3.15
FROM alpine:3.21
RUN apk --no-cache add ca-certificates

View File

@ -1,35 +1,27 @@
FROM alpine:3.15.0 as build
RUN apk --no-cache add alpine-sdk perl curl
RUN curl -sSLo hey "https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64" && \
chmod +x hey && mv hey /usr/local/bin/hey
RUN HELM2_VERSION=2.17.0 && \
curl -sSL "https://get.helm.sh/helm-v${HELM2_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helm && \
chmod +x linux-amd64/tiller && mv linux-amd64/tiller /usr/local/bin/tiller
RUN HELM3_VERSION=3.7.2 && \
curl -sSL "https://get.helm.sh/helm-v${HELM3_VERSION}-linux-amd64.tar.gz" | tar xvz && \
chmod +x linux-amd64/helm && mv linux-amd64/helm /usr/local/bin/helmv3
RUN GRPC_HEALTH_PROBE_VERSION=v0.4.6 && \
wget -qO /usr/local/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
chmod +x /usr/local/bin/grpc_health_probe
RUN GHZ_VERSION=0.105.0 && \
curl -sSL "https://github.com/bojand/ghz/releases/download/v${GHZ_VERSION}/ghz-linux-x86_64.tar.gz" | tar xz -C /tmp && \
mv /tmp/ghz /usr/local/bin && chmod +x /usr/local/bin/ghz
RUN HELM_TILLER_VERSION=0.9.3 && \
curl -sSL "https://github.com/rimusz/helm-tiller/archive/v${HELM_TILLER_VERSION}.tar.gz" | tar xz -C /tmp && \
mv /tmp/helm-tiller-${HELM_TILLER_VERSION} /tmp/helm-tiller
FROM golang:1.17-alpine as go
FROM golang:1.24-alpine AS builder
ARG TARGETPLATFORM
ARG REVISON
ARG TARGETARCH
ARG REVISION
RUN apk --no-cache add alpine-sdk perl curl bash tar
RUN HELM3_VERSION=3.17.2 && \
curl -sSL "https://get.helm.sh/helm-v${HELM3_VERSION}-linux-${TARGETARCH}.tar.gz" | tar xvz && \
chmod +x linux-${TARGETARCH}/helm && mv linux-${TARGETARCH}/helm /usr/local/bin/helm
RUN KUBECTL_VERSION=v1.31.3 && \
curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl" && \
chmod +x kubectl && mv kubectl /usr/local/bin/kubectl
RUN GRPC_HEALTH_PROBE_VERSION=v0.4.35 && \
wget -qO /usr/local/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-${TARGETARCH} && \
chmod +x /usr/local/bin/grpc_health_probe
RUN GHZ_VERSION=0.120.0 && \
curl -sSL "https://github.com/bojand/ghz/archive/refs/tags/v${GHZ_VERSION}.tar.gz" | tar xz -C /tmp && \
cd /tmp/ghz-${GHZ_VERSION}/cmd/ghz && GOARCH=$TARGETARCH go build . && mv ghz /usr/local/bin && \
chmod +x /usr/local/bin/ghz
WORKDIR /workspace
@ -47,24 +39,23 @@ COPY pkg/ pkg/
# build
RUN CGO_ENABLED=0 go build -o loadtester ./cmd/loadtester/*
FROM bash:5.0
FROM bash:5.2
ARG TARGETPLATFORM
RUN addgroup -S app && \
adduser -S -g app app && \
apk --no-cache add ca-certificates curl jq libgcc wrk
apk --no-cache add ca-certificates curl jq libgcc wrk hey git
WORKDIR /home/app
COPY --from=bats/bats:v1.1.0 /opt/bats/ /opt/bats/
COPY --from=bats/bats:1.11.1 /opt/bats/ /opt/bats/
RUN ln -s /opt/bats/bin/bats /usr/local/bin/
COPY --from=build /usr/local/bin/hey /usr/local/bin/
COPY --from=build /usr/local/bin/helm /usr/local/bin/
COPY --from=build /usr/local/bin/tiller /usr/local/bin/
COPY --from=build /usr/local/bin/ghz /usr/local/bin/
COPY --from=build /usr/local/bin/helmv3 /usr/local/bin/
COPY --from=build /usr/local/bin/grpc_health_probe /usr/local/bin/
COPY --from=build /tmp/helm-tiller /tmp/helm-tiller
COPY --from=builder /usr/local/bin/helm /usr/local/bin/
COPY --from=builder /usr/local/bin/ghz /usr/local/bin/
COPY --from=builder /usr/local/bin/grpc_health_probe /usr/local/bin/
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/
ADD https://raw.githubusercontent.com/grpc/grpc-proto/master/grpc/health/v1/health.proto /tmp/ghz/health.proto
@ -77,9 +68,6 @@ USER app
RUN hey -n 1 -c 1 https://flagger.app > /dev/null && echo $? | grep 0
RUN wrk -d 1s -c 1 -t 1 https://flagger.app > /dev/null && echo $? | grep 0
# install Helm v2 plugins
RUN helm init --stable-repo-url=https://charts.helm.sh/stable --client-only && helm plugin install /tmp/helm-tiller
COPY --from=go --chown=app:app /workspace/loadtester .
COPY --from=builder --chown=app:app /workspace/loadtester .
ENTRYPOINT ["./loadtester"]

5
GOVERNANCE.md Normal file
View File

@ -0,0 +1,5 @@
# Flagger Governance
The Flagger project is governed by the [Flux governance document](https://github.com/fluxcd/community/blob/main/GOVERNANCE.md),
involvement is defined in the [Flux community roles document](https://github.com/fluxcd/community/blob/main/community-roles.md),
and processes can be found in the [Flux process document](https://github.com/fluxcd/community/blob/main/PROCESS.md).

View File

@ -4,5 +4,6 @@ at https://slack.cncf.io/).
In alphabetical order:
Stefan Prodan, Weaveworks <stefan@weave.works> (github: @stefanprodan, slack: stefanprodan)
Sanskar Jaiswal, Independent <jaiswalsanskar078@gmail.com> (github: @aryan9600, slack: aryan9600)
Stefan Prodan, ControlPlane <stefan.prodan@gmail.com> (github: @stefanprodan, slack: stefanprodan)
Takeshi Yoneda, Tetrate <takeshi@tetrate.io> (github: @mathetake, slack: mathetake)

View File

@ -5,14 +5,14 @@ LT_VERSION?=$(shell grep 'VERSION' cmd/loadtester/main.go | awk '{ print $$4 }'
build:
CGO_ENABLED=0 go build -a -o ./bin/flagger ./cmd/flagger
fmt:
go mod tidy
gofmt -l -s -w ./
goimports -l -w ./
tidy:
rm -f go.sum; go mod tidy -compat=1.24
test-fmt:
gofmt -l -s ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
goimports -l ./ | grep ".*\.go"; if [ "$$?" = "0" ]; then exit 1; fi
vet:
go vet ./...
fmt:
go fmt ./...
codegen:
./hack/update-codegen.sh
@ -20,13 +20,21 @@ codegen:
test-codegen:
./hack/verify-codegen.sh
test: test-fmt test-codegen
test: fmt test-codegen
go test ./...
test-coverage: fmt test-codegen
go test -coverprofile cover.out ./...
go tool cover -html=cover.out
rm cover.out
crd:
cat artifacts/flagger/crd.yaml > charts/flagger/crds/crd.yaml
cat artifacts/flagger/crd.yaml > kustomize/base/flagger/crd.yaml
verify-crd:
./hack/verify-crd.sh
version-set:
@next="$(TAG)" && \
current="$(VERSION)" && \

187
README.md
View File

@ -1,10 +1,11 @@
# flagger
# flaggerreadme
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4783/badge)](https://bestpractices.coreinfrastructure.org/projects/4783)
[![build](https://github.com/fluxcd/flagger/workflows/build/badge.svg)](https://github.com/fluxcd/flagger/actions)
[![report](https://goreportcard.com/badge/github.com/fluxcd/flagger)](https://goreportcard.com/report/github.com/fluxcd/flagger)
[![license](https://img.shields.io/github/license/fluxcd/flagger.svg)](https://github.com/fluxcd/flagger/blob/main/LICENSE)
[![release](https://img.shields.io/github/release/fluxcd/flagger/all.svg)](https://github.com/fluxcd/flagger/releases)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4783/badge)](https://bestpractices.coreinfrastructure.org/projects/4783)
[![report](https://goreportcard.com/badge/github.com/fluxcd/flagger)](https://goreportcard.com/report/github.com/fluxcd/flagger)
[![FOSSA Status](https://app.fossa.com/api/projects/custom%2B162%2Fgithub.com%2Ffluxcd%2Fflagger.svg?type=shield)](https://app.fossa.com/projects/custom%2B162%2Fgithub.com%2Ffluxcd%2Fflagger?ref=badge_shield)
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/flagger)](https://artifacthub.io/packages/search?repo=flagger)
[![CLOMonitor](https://img.shields.io/endpoint?url=https://clomonitor.io/api/projects/cncf/flagger/badge)](https://clomonitor.io/projects/cncf/flagger)
Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes.
It reduces the risk of introducing a new software version in production
@ -13,53 +14,53 @@ by gradually shifting traffic to the new version while measuring metrics and run
![flagger-overview](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-overview.png)
Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring)
using a service mesh (App Mesh, Istio, Linkerd, Open Service Mesh)
or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik) for traffic routing.
For release analysis, Flagger can query Prometheus, Datadog, New Relic, CloudWatch, Dynatrace,
InfluxDB and Stackdriver and for alerting it uses Slack, MS Teams, Discord, Rocket and Google Chat.
and integrates with various Kubernetes ingress controllers, service mesh, and monitoring solutions.
Flagger is a [Cloud Native Computing Foundation](https://cncf.io/) project
and part of [Flux](https://fluxcd.io) family of GitOps tools.
and part of the [Flux](https://fluxcd.io) family of GitOps tools.
### Documentation
Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app).
Flagger documentation can be found at [fluxcd.io/flagger](https://fluxcd.io/flagger/).
* Install
* [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
* [Flagger install on Kubernetes](https://fluxcd.io/flagger/install/flagger-install-on-kubernetes)
* Usage
* [How it works](https://docs.flagger.app/usage/how-it-works)
* [Deployment strategies](https://docs.flagger.app/usage/deployment-strategies)
* [Metrics analysis](https://docs.flagger.app/usage/metrics)
* [Webhooks](https://docs.flagger.app/usage/webhooks)
* [Alerting](https://docs.flagger.app/usage/alerting)
* [Monitoring](https://docs.flagger.app/usage/monitoring)
* [How it works](https://fluxcd.io/flagger/usage/how-it-works)
* [Deployment strategies](https://fluxcd.io/flagger/usage/deployment-strategies)
* [Metrics analysis](https://fluxcd.io/flagger/usage/metrics)
* [Webhooks](https://fluxcd.io/flagger/usage/webhooks)
* [Alerting](https://fluxcd.io/flagger/usage/alerting)
* [Monitoring](https://fluxcd.io/flagger/usage/monitoring)
* Tutorials
* [App Mesh](https://docs.flagger.app/tutorials/appmesh-progressive-delivery)
* [Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery)
* [Linkerd](https://docs.flagger.app/tutorials/linkerd-progressive-delivery)
* [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery)
* [Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
* [NGINX Ingress](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
* [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery)
* [Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery)
* [Open Service Mesh (OSM)](https://docs.flagger.app/tutorials/osm-progressive-delivery)
* [Kubernetes Blue/Green](https://docs.flagger.app/tutorials/kubernetes-blue-green)
* [App Mesh](https://fluxcd.io/flagger/tutorials/appmesh-progressive-delivery)
* [Istio](https://fluxcd.io/flagger/tutorials/istio-progressive-delivery)
* [Linkerd](https://fluxcd.io/flagger/tutorials/linkerd-progressive-delivery)
* [Open Service Mesh (OSM)](https://fluxcd.io/flagger/tutorials/osm-progressive-delivery)
* [Kuma Service Mesh](https://fluxcd.io/flagger/tutorials/kuma-progressive-delivery)
* [Contour](https://fluxcd.io/flagger/tutorials/contour-progressive-delivery)
* [Gloo](https://fluxcd.io/flagger/tutorials/gloo-progressive-delivery)
* [NGINX Ingress](https://fluxcd.io/flagger/tutorials/nginx-progressive-delivery)
* [Skipper](https://fluxcd.io/flagger/tutorials/skipper-progressive-delivery)
* [Traefik](https://fluxcd.io/flagger/tutorials/traefik-progressive-delivery)
* [Gateway API](https://fluxcd.io/flagger/tutorials/gatewayapi-progressive-delivery/)
* [Kubernetes Blue/Green](https://fluxcd.io/flagger/tutorials/kubernetes-blue-green)
### Who is using Flagger
### Adopters
**Our list of production users has moved to <https://fluxcd.io/adopters/#flagger>**.
If you are using Flagger, please [submit a PR to add your organization](https://github.com/fluxcd/website/tree/main/adopters#readme) to the list!
If you are using Flagger, please
[submit a PR to add your organization](https://github.com/fluxcd/website/blob/main/data/adopters/2-flagger.yaml) to the list!
### Canary CRD
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services, service mesh or ingress routes).
then creates a series of objects (Kubernetes deployments, ClusterIP services, service mesh, or ingress routes).
These objects expose the application on the mesh and drive the canary analysis and promotion.
Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
When promoting a workload in production, both code (container images) and configuration (config maps and secrets) are being synchronised.
When promoting a workload in production, both code (container images) and configuration (config maps and secrets) are being synchronized.
For a deployment named _podinfo_, a canary promotion can be defined using Flagger's custom resource:
@ -84,7 +85,7 @@ spec:
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -177,78 +178,102 @@ spec:
name: on-call-msteams
```
For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/usage/how-it-works).
For more details on how the canary analysis and promotion works please [read the docs](https://fluxcd.io/flagger/usage/how-it-works).
### Features
**Service Mesh**
| Feature | App Mesh | Istio | Linkerd | Open Service Mesh | SMI | Kubernetes CNI |
| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Blue/Green deployments (traffic mirroring) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
For other SMI compatible service mesh solutions like Consul Connect or Nginx Service Mesh,
[Prometheus MetricTemplates](https://docs.flagger.app/usage/metrics#prometheus) can be used to implement
the request success rate and request duration checks.
| Feature | App Mesh | Istio | Linkerd | Kuma | OSM | Knative | Kubernetes CNI |
|--------------------------------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
| Blue/Green deployments (traffic mirroring) | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
**Ingress**
| Feature | Contour | Gloo | NGINX | Skipper | Traefik |
| ------------------------------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Feature | Contour | Gloo | NGINX | Skipper | Traefik | Apache APISIX |
|-------------------------------------------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
**Networking Interface**
| Feature | Gateway API | SMI |
|-----------------------------------------------|--------------------|--------------------|
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: |
| Blue/Green deployments (traffic mirroring)    | :heavy_minus_sign: | :heavy_minus_sign: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_minus_sign: | :heavy_minus_sign: |
| Request duration check (L7 metric) | :heavy_minus_sign: | :heavy_minus_sign: |
| Custom metric checks | :heavy_check_mark: | :heavy_check_mark: |
For all [Gateway API](https://gateway-api.sigs.k8s.io/) implementations like
[Contour](https://projectcontour.io/guides/gateway-api/) or
[Istio](https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/)
and [SMI](https://smi-spec.io) compatible service mesh solutions like
[Nginx Service Mesh](https://docs.nginx.com/nginx-service-mesh/),
[Prometheus MetricTemplates](https://docs.flagger.app/usage/metrics#prometheus)
can be used to implement the request success rate and request duration checks.
### Roadmap
#### [GitOps Toolkit](https://github.com/fluxcd/flux2) compatibility
* Migrate Flagger to Kubernetes controller-runtime and [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
* Make the Canary status compatible with [kstatus](https://github.com/kubernetes-sigs/cli-utils)
* Make Flagger emit Kubernetes events compatible with Flux v2 notification API
* Integrate Flagger into Flux v2 as the progressive delivery component
- Migrate Flagger to Kubernetes controller-runtime and [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- Make the Canary status compatible with [kstatus](https://github.com/kubernetes-sigs/cli-utils)
- Make Flagger emit Kubernetes events compatible with Flux v2 notification API
- Integrate Flagger into Flux v2 as the progressive delivery component
#### Integrations
* Add support for Kubernetes [Ingress v2](https://github.com/kubernetes-sigs/service-apis)
* Add support for ingress controllers like HAProxy and ALB
* Add support for metrics providers like InfluxDB, Stackdriver, SignalFX
- Add support for ingress controllers like HAProxy, ALB, and Apache APISIX
### Contributing
Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.
To start contributing please read the [development guide](https://docs.flagger.app/dev/dev-guide).
When submitting bug reports please include as much details as possible:
When submitting bug reports please include as many details as possible:
* which Flagger version
* which Flagger CRD version
* which Kubernetes version
* what configuration (canary, ingress and workloads definitions)
* what happened (Flagger and Proxy logs)
- which Flagger version
- which Kubernetes version
- what configuration (canary, ingress and workloads definitions)
- what happened (Flagger and Proxy logs)
### Getting Help
### Communication
If you have any questions about Flagger and progressive delivery:
Here is a list of good entry points into our community, how we stay in touch and how you can meet us as a team.
* Read the Flagger [docs](https://docs.flagger.app).
* Invite yourself to the [CNCF community slack](https://slack.cncf.io/)
and join the [#flagger](https://cloud-native.slack.com/messages/flagger/) channel.
* Check out the **[Flux events calendar](https://fluxcd.io/#calendar)**, both with upcoming talks, events and meetings you can attend.
* Or view the **[Flux resources section](https://fluxcd.io/resources)** with past events videos you can watch.
* File an [issue](https://github.com/fluxcd/flagger/issues/new).
- Slack: Join in and talk to us in the `#flagger` channel on [CNCF Slack](https://slack.cncf.io/).
- Public meetings: We run weekly meetings - join one of the upcoming dev meetings from the [Flux calendar](https://fluxcd.io/#calendar).
- Blog: Stay up to date with the latest news on [the Flux blog](https://fluxcd.io/blog/).
- Mailing list: To be updated on Flux and Flagger progress regularly, please [join the flux-dev mailing list](https://lists.cncf.io/g/cncf-flux-dev).
Your feedback is always welcome!
#### Subscribing to the flux-dev calendar
To add the meetings to your e.g. Google calendar
1. visit the [Flux calendar](https://lists.cncf.io/g/cncf-flux-dev/calendar)
2. click on "Subscribe to Calendar" at the very bottom of the page
3. copy the iCalendar URL
4. open e.g. your Google calendar
5. find the "add calendar" option
6. choose "add by URL"
7. paste iCalendar URL (ends with `.ics`)
8. done

View File

@ -11,7 +11,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:

View File

@ -11,7 +11,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:

View File

@ -10,7 +10,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -20,7 +20,7 @@ spec:
portName: http
portDiscovery: true
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
- mesh
hosts:
- app.example.com

View File

@ -11,7 +11,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -21,7 +21,7 @@ spec:
portName: http
portDiscovery: true
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
- mesh
hosts:
- app.example.com

View File

@ -0,0 +1,50 @@
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
annotations:
kuma.io/mesh: default
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
progressDeadlineSeconds: 60
service:
port: 9898
targetPort: 9898
apex:
annotations:
9898.service.kuma.io/protocol: "http"
canary:
annotations:
9898.service.kuma.io/protocol: "http"
primary:
annotations:
9898.service.kuma.io/protocol: "http"
analysis:
interval: 15s
threshold: 15
maxWeight: 50
stepWeight: 10
metrics:
- name: request-success-rate
threshold: 99
interval: 1m
- name: request-duration
threshold: 500
interval: 30s
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
metadata:
cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"

View File

@ -11,7 +11,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:

View File

@ -11,7 +11,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:

View File

@ -31,6 +31,18 @@ rules:
- update
- patch
- delete
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
@ -78,6 +90,7 @@ rules:
resources:
- canaries
- canaries/status
- canaries/finalizers
- metrictemplates
- metrictemplates/status
- alertproviders
@ -187,6 +200,57 @@ rules:
- update
- patch
- delete
- apiGroups:
- kuma.io
resources:
- trafficroutes
- trafficroutes/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes
- httproutes/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- keda.sh
resources:
- scaledobjects
- scaledobjects/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apisix.apache.org
resources:
- apisixroutes
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- nonResourceURLs:
- /version
verbs:

View File

@ -27,6 +27,10 @@ spec:
- name: Weight
type: string
jsonPath: .status.canaryWeight
- name: Suspended
type: boolean
jsonPath: .spec.suspend
priority: 1
- name: FailedChecks
type: string
jsonPath: .status.failedChecks
@ -76,7 +80,6 @@ spec:
type: object
required:
- targetRef
- service
- analysis
properties:
provider:
@ -104,7 +107,7 @@ spec:
name:
type: string
autoscalerRef:
description: HPA selector
description: Scaler selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
@ -114,8 +117,22 @@ spec:
type: string
enum:
- HorizontalPodAutoscaler
- ScaledObject
name:
type: string
primaryScalerQueries:
type: object
additionalProperties:
type: string
primaryScalerReplicas:
type: object
properties:
minReplicas:
type: integer
minimum: 1
maxReplicas:
type: integer
minimum: 1
ingressRef:
description: Ingress selector
type: object
@ -129,6 +146,19 @@ spec:
- Ingress
name:
type: string
routeRef:
description: APISIX route selector
type: object
required: [ "apiVersion", "kind", "name" ]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- ApisixRoute
name:
type: string
upstreamRef:
description: Gloo Upstream selector
type: object
@ -158,12 +188,18 @@ spec:
portName:
description: Container port name
type: string
appProtocol:
description: Application protocol of the port
type: string
targetPort:
description: Container target port name
x-kubernetes-int-or-string: true
portDiscovery:
description: Enable port dicovery
type: boolean
headless:
description: Headless if set to true, generates headless Kubernetes services.
type: boolean
timeout:
description: HTTP or gRPC request timeout
type: string
@ -450,6 +486,54 @@ spec:
uri:
format: string
type: string
authority:
format: string
type: string
type:
format: string
type: string
mirror:
description: Mirror defines a schema for a filter that mirrors requests.
type: array
items:
type: object
properties:
backendRef:
properties:
group:
default: ""
maxLength: 253
pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
type: string
kind:
default: Service
maxLength: 63
minLength: 1
pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$
type: string
name:
maxLength: 253
minLength: 1
type: string
namespace:
maxLength: 63
minLength: 1
pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
type: string
port:
format: int32
maximum: 65535
minimum: 1
type: integer
required:
- name
type: object
x-kubernetes-validations:
- message: Must have port for Service reference
rule: '(size(self.group) == 0 && self.kind == ''Service'')
? has(self.port) : true'
required:
- backendRef
headers:
description: Headers operations
type: object
@ -495,6 +579,45 @@ spec:
type: array
items:
type: string
gatewayRefs:
description: The list of parent Gateways for a HTTPRoute
maxItems: 32
type: array
items:
required:
- name
type: object
properties:
group:
default: gateway.networking.k8s.io
maxLength: 253
pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
type: string
kind:
default: Gateway
maxLength: 63
minLength: 1
pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$
type: string
name:
maxLength: 253
minLength: 1
type: string
namespace:
maxLength: 63
minLength: 1
pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
type: string
sectionName:
maxLength: 253
minLength: 1
pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
type: string
port:
format: int32
maximum: 65535
minimum: 1
type: integer
corsPolicy:
description: Istio Cross-Origin Resource Sharing policy (CORS)
type: object
@ -685,6 +808,10 @@ spec:
- LEAST_CONN
- RANDOM
- PASSTHROUGH
- LEAST_REQUEST
type: string
warmupDurationSecs:
description: Represents the warmup duration of Service.
type: string
outlierDetection:
description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
@ -788,6 +915,9 @@ spec:
revertOnDeletion:
description: Revert mutated resources to original spec on deletion
type: boolean
suspend:
description: Suspend, if set to true, disables/pauses all canary runs
type: boolean
analysis:
description: Canary analysis for this canary
type: object
@ -829,6 +959,9 @@ spec:
primaryReadyThreshold:
description: Percentage of pods that need to be available to consider primary as ready
type: number
canaryReadyThreshold:
description: Percentage of pods that need to be available to consider canary as ready
type: number
match:
description: A/B testing match conditions
type: array
@ -858,6 +991,34 @@ spec:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
format: string
type: string
queryParams:
description: Query parameters for matching.
type: object
additionalProperties:
oneOf:
- not:
anyOf:
- required:
- exact
- required:
- prefix
- required:
- regex
- required:
- exact
- required:
- prefix
- required:
- regex
properties:
exact:
type: string
prefix:
type: string
regex:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax).
type: string
type: object
sourceLabels:
description: Applicable only when the 'mesh' gateway is included in the service.gateways list
type: object
@ -905,6 +1066,11 @@ spec:
namespace:
description: Namespace of this metric template
type: string
templateVariables:
description: Additional variables to be used in the metrics query (key-value pairs)
type: object
additionalProperties:
type: string
alerts:
description: Alert list for this canary analysis
type: array
@ -970,11 +1136,32 @@ spec:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
retries:
description: Number of retries for this webhook
type: number
disableTLS:
description: Disable TLS verification for this webhook
type: boolean
metadata:
description: Metadata (key-value pairs) for this webhook
type: object
additionalProperties:
type: string
sessionAffinity:
description: SessionAffinity represents the session affinity settings for a canary run.
type: object
required: [ "cookieName" ]
properties:
cookieName:
description: CookieName is the key that will be used for the session affinity cookie.
type: string
primaryCookieName:
description: PrimaryCookieName is the key that will be used for the session affinity cookie of the primary backend.
type: string
maxAge:
description: MaxAge indicates the number of seconds until the session affinity cookie will expire.
default: 86400
type: number
status:
description: CanaryStatus defines the observed state of a canary.
type: object
@ -995,27 +1182,36 @@ spec:
- Failed
- Terminating
- Terminated
failedChecks:
description: Failed check count of the current canary analysis
type: number
canaryWeight:
description: Traffic weight routed to canary
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
trackedConfigs:
description: TrackedConfig of this canary
additionalProperties:
type: string
type: object
canaryWeight:
description: Traffic weight routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastPromotedSpec:
description: LastPromotedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
sessionAffinityCookie:
description: Session affinity cookie of the current canary run
type: string
previousSessionAffinityCookie:
description: Session affinity cookie of the previous canary run
type: string
conditions:
description: Status conditions of this canary
type: array
@ -1112,9 +1308,18 @@ spec:
- newrelic
- graphite
- dynatrace
- keptn
- splunk
address:
description: API address of this provider
type: string
headers:
description: Headers to add to HTTP(S) requests
type: object
additionalProperties:
type: array
items:
type: string
secretRef:
description: Kubernetes secret reference containing the provider credentials
type: object
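
The hunks above extend the Canary schema with suspend, KEDA ScaledObject support, a canary readiness threshold and session affinity. As an illustrative sketch only (the podinfo names and values below are assumptions, not part of this changeset), a Canary using several of the new fields could look like:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # pause or resume all canary runs without deleting the object
  suspend: false
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  autoscalerRef:
    # a KEDA ScaledObject is now accepted in addition to HorizontalPodAutoscaler
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    name: podinfo
    primaryScalerReplicas:
      minReplicas: 2
      maxReplicas: 5
  service:
    port: 9898
    appProtocol: http
  analysis:
    interval: 30s
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    # consider the canary ready only when all of its pods are available
    canaryReadyThreshold: 100
    sessionAffinity:
      cookieName: flagger-cookie
      primaryCookieName: primary-flagger-cookie
      maxAge: 86400
```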

View File

@ -22,7 +22,7 @@ spec:
serviceAccountName: flagger
containers:
- name: flagger
image: ghcr.io/fluxcd/flagger:1.16.1
image: ghcr.io/fluxcd/flagger:1.41.0
imagePullPolicy: IfNotPresent
ports:
- name: http

View File

@ -1,8 +1,8 @@
apiVersion: v1
name: flagger
version: 1.16.1
appVersion: 1.16.1
kubeVersion: ">=1.16.0-0"
version: 1.41.0
appVersion: 1.41.0
kubeVersion: ">=1.19.0-0"
engine: gotpl
description: Flagger is a progressive delivery operator for Kubernetes
home: https://flagger.app
@ -18,11 +18,12 @@ keywords:
- istio
- appmesh
- linkerd
- kuma
- osm
- smi
- gloo
- contour
- nginx
- traefik
- osm
- smi
- gitops
- canary

View File

@ -1,22 +1,18 @@
# Flagger
[Flagger](https://github.com/fluxcd/flagger) is an operator that automates the release process of applications on Kubernetes.
[Flagger](https://github.com/fluxcd/flagger) is a progressive delivery tool that automates the release process
for applications running on Kubernetes. It reduces the risk of introducing a new software version in production
by gradually shifting traffic to the new version while measuring metrics and running conformance tests.
Flagger can run automated application analysis, testing, promotion and rollback for the following deployment strategies:
* Canary Release (progressive traffic shifting)
* A/B Testing (HTTP headers and cookies traffic routing)
* Blue/Green (traffic switching and mirroring)
Flagger works with service mesh solutions (Istio, Linkerd, AWS App Mesh, Open Service Mesh) and with Kubernetes ingress controllers
(NGINX, Skipper, Gloo, Contour, Traefik).
Flagger can be configured to send alerts to various chat platforms such as Slack, Microsoft Teams, Discord and Rocket.
Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring)
and integrates with various Kubernetes ingress controllers, service mesh and monitoring solutions.
Flagger is a [Cloud Native Computing Foundation](https://cncf.io/) project
and part of [Flux](https://fluxcd.io) family of GitOps tools.
## Prerequisites
* Kubernetes >= 1.16
* Kubernetes >= 1.19
## Installing the Chart
@ -44,10 +40,13 @@ $ helm upgrade -i flagger flagger/flagger \
To install Flagger for **Linkerd** (requires Linkerd Viz extension):
```console
# Note that linkerdAuthPolicy.create=true is only required for Linkerd 2.12 and
# later
$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--namespace=flagger-system \
--set meshProvider=linkerd \
--set metricsServer=http://prometheus.linkerd-viz:9090
--set metricsServer=http://prometheus.linkerd-viz:9090 \
--set linkerdAuthPolicy.create=true
```
To install Flagger for **AWS App Mesh**:
@ -59,6 +58,25 @@ $ helm upgrade -i flagger flagger/flagger \
--set metricsServer=http://appmesh-prometheus:9090
```
To install Flagger for **Open Service Mesh** (requires OSM to have been installed with Prometheus):
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=osm-system \
--set meshProvider=osm \
--set metricsServer=http://osm-prometheus.osm-system.svc:7070
```
To install Flagger for **Kuma Service Mesh** (requires Kuma to have been installed with Prometheus):
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=kuma-system \
--set meshProvider=kuma \
--set metricsServer=http://prometheus-server.kuma-metrics:80
```
To install Flagger and Prometheus for **NGINX** Ingress (requires controller metrics enabled):
```console
@ -96,13 +114,13 @@ $ helm upgrade -i flagger flagger/flagger \
--set meshProvider=traefik
```
To install Flagger for **Open Service Mesh (OSM)** (requires OSM to have been installed with Prometheus):
If you need to add labels to the flagger deployment or pods, you can pass the labels as parameters as shown below.
```console
$ helm upgrade -i flagger flagger/flagger \
--namespace=osm-system \
--set meshProvider=osm \
--set metricsServer=http://osm-prometheus.osm-system.svc:7070
helm upgrade -i flagger flagger/flagger \
<other parameters> \
--set podLabels.<labelName>=<labelValue> \
--set deploymentLabels.<labelName>=<labelValue>
```
The [configuration](#configuration) section lists the parameters that can be configured during installation.
@ -121,53 +139,64 @@ The command removes all the Kubernetes components associated with the chart and
The following tables lists the configurable parameters of the Flagger chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `ghcr.io/fluxcd/flagger`
`image.tag` | Image tag | `<VERSION>`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`logLevel` | Log level | `info`
`metricsServer` | Prometheus URL, used when `prometheus.install` is `false` | `http://prometheus.istio-system:9090`
`prometheus.install` | If `true`, installs Prometheus configured to scrape all pods in the cluster | `false`
`prometheus.retention` | Prometheus data retention | `2h`
`selectorLabels` | List of labels that Flagger uses to create pod selectors | `app,name,app.kubernetes.io/name`
`configTracking.enabled` | If `true`, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment | `true`
`eventWebhook` | If set, Flagger will publish events to the given webhook | None
`slack.url` | Slack incoming webhook | None
`slack.proxyUrl` | Slack proxy url | None
`slack.channel` | Slack channel | None
`slack.user` | Slack username | `flagger`
`msteams.url` | Microsoft Teams incoming webhook | None
`msteams.proxyUrl` | Microsoft Teams proxy url | None
`podMonitor.enabled` | If `true`, create a PodMonitor for [monitoring the metrics](https://docs.flagger.app/usage/monitoring#metrics) | `false`
`podMonitor.namespace` | Namespace where the PodMonitor is created | the same namespace
`podMonitor.interval` | Interval at which metrics should be scraped | `15s`
`podMonitor.podMonitor` | Additional labels to add to the PodMonitor | `{}`
`leaderElection.enabled` | If `true`, Flagger will run in HA mode | `false`
`leaderElection.replicaCount` | Number of replicas | `1`
`serviceAccount.create` | If `true`, Flagger will create service account | `true`
`serviceAccount.name` | The name of the service account to create or use. If not set and `serviceAccount.create` is `true`, a name is generated using the Flagger fullname | `""`
`serviceAccount.annotations` | Annotations for service account | `{}`
`ingressAnnotationsPrefix` | Annotations prefix for ingresses | `custom.ingress.kubernetes.io`
`includeLabelPrefix` | List of prefixes of labels that are copied when creating primary deployments or daemonsets. Use * to include all | `""`
`rbac.create` | If `true`, create and use RBAC resources | `true`
`rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false`
`crd.create` | If `true`, create Flagger's CRDs (should be enabled for Helm v2 only) | `false`
`resources.requests/cpu` | Pod CPU request | `10m`
`resources.requests/memory` | Pod memory request | `32Mi`
`resources.limits/cpu` | Pod CPU limit | `1000m`
`resources.limits/memory` | Pod memory limit | `512Mi`
`affinity` | Node/pod affinities | None
`nodeSelector` | Node labels for pod assignment | `{}`
`threadiness` | Number of controller workers | `2`
`tolerations` | List of node taints to tolerate | `[]`
`istio.kubeconfig.secretName` | The name of the Kubernetes secret containing the Istio shared control plane kubeconfig | None
`istio.kubeconfig.key` | The name of Kubernetes secret data key that contains the Istio control plane kubeconfig | `kubeconfig`
`ingressAnnotationsPrefix` | Annotations prefix for NGINX ingresses | None
`ingressClass` | Ingress class used for annotating HTTPProxy objects, e.g. `contour` | None
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
`podDisruptionBudget.enabled` | A PodDisruptionBudget will be created if `true` | `false`
`podDisruptionBudget.minAvailable` | The minimal number of available replicas that will be set in the PodDisruptionBudget | `1`
| Parameter | Description | Default |
|--------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|
| `image.repository` | Image repository | `ghcr.io/fluxcd/flagger` |
| `image.tag` | Image tag | `<VERSION>` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `logLevel` | Log level | `info` |
| `metricsServer` | Prometheus URL, used when `prometheus.install` is `false` | `http://prometheus.istio-system:9090` |
| `prometheus.install`                 | If `true`, installs Prometheus configured to scrape all pods in the cluster                                                                         | `false`                               |
| `prometheus.retention` | Prometheus data retention | `2h` |
| `selectorLabels` | List of labels that Flagger uses to create pod selectors | `app,name,app.kubernetes.io/name` |
| `serviceMonitor.enabled` | If `true`, creates service and serviceMonitor for monitoring Flagger metrics | `false` |
| `serviceMonitor.honorLabels` | If `true`, label conflicts are resolved by keeping label values from the scraped data and ignoring the conflicting server-side labels | `false` |
| `serviceMonitor.namespace` | Namespace Servicemonitor is installed in | the same namespace |
| `serviceMonitor.labels` | labels for the ServiceMonitor passed to Prometheus Operator | `{}` |
| `configTracking.enabled` | If `true`, flagger will track changes in Secrets and ConfigMaps referenced in the target deployment | `true` |
| `eventWebhook` | If set, Flagger will publish events to the given webhook | None |
| `slack.url` | Slack incoming webhook | None |
| `slack.proxyUrl` | Slack proxy url | None |
| `slack.channel` | Slack channel | None |
| `slack.user` | Slack username | `flagger` |
| `msteams.url` | Microsoft Teams incoming webhook | None |
| `msteams.proxyUrl` | Microsoft Teams proxy url | None |
| `clusterName` | When specified, Flagger will add the cluster name to alerts | `""` |
| `podMonitor.enabled` | If `true`, create a PodMonitor for [monitoring the metrics](https://docs.flagger.app/usage/monitoring#metrics) | `false` |
| `podMonitor.namespace` | Namespace where the PodMonitor is created | the same namespace |
| `podMonitor.interval` | Interval at which metrics should be scraped | `15s` |
| `podMonitor.additionalLabels`        | Additional labels to add to the PodMonitor                                                                                                          | `{}`                                  |
| `podMonitor.honorLabels` | If `true`, label conflicts are resolved by keeping label values from the scraped data and ignoring the conflicting server-side labels | `false` |
| `leaderElection.enabled` | If `true`, Flagger will run in HA mode | `false` |
| `leaderElection.replicaCount` | Number of replicas | `1` |
| `serviceAccount.create` | If `true`, Flagger will create service account | `true` |
| `serviceAccount.name` | The name of the service account to create or use. If not set and `serviceAccount.create` is `true`, a name is generated using the Flagger fullname | `""` |
| `serviceAccount.annotations` | Annotations for service account | `{}` |
| `ingressAnnotationsPrefix` | Annotations prefix for ingresses | `custom.ingress.kubernetes.io` |
| `includeLabelPrefix` | List of prefixes of labels that are copied when creating primary deployments or daemonsets. Use * to include all | `""` |
| `rbac.create` | If `true`, create and use RBAC resources | `true` |
| `rbac.pspEnabled` | If `true`, create and use a restricted pod security policy | `false` |
| `crd.create` | If `true`, create Flagger's CRDs (should be enabled for Helm v2 only) | `false` |
| `resources.requests/cpu` | Pod CPU request | `10m` |
| `resources.requests/memory` | Pod memory request | `32Mi` |
| `resources.limits/cpu` | Pod CPU limit | `1000m` |
| `resources.limits/memory` | Pod memory limit | `512Mi` |
| `affinity` | Node/pod affinities | prefer spread across hosts |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `threadiness` | Number of controller workers | `2` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `controlplane.kubeconfig.secretName` | The name of the Kubernetes secret containing the service mesh control plane kubeconfig | None |
| `controlplane.kubeconfig.key` | The name of Kubernetes secret data key that contains the service mesh control plane kubeconfig | `kubeconfig` |
| `ingressAnnotationsPrefix` | Annotations prefix for NGINX ingresses | None |
| `ingressClass` | Ingress class used for annotating HTTPProxy objects, e.g. `contour` | None |
| `podPriorityClassName` | PriorityClass name for pod priority configuration | "" |
| `podDisruptionBudget.enabled` | A PodDisruptionBudget will be created if `true` | `false` |
| `podDisruptionBudget.minAvailable` | The minimal number of available replicas that will be set in the PodDisruptionBudget | `1` |
| `noCrossNamespaceRefs` | If `true`, cross namespace references to custom resources will be disabled | `false` |
| `namespace` | When specified, Flagger will restrict itself to watching Canary objects from that namespace | `""` |
| `deploymentLabels` | Labels to add to Flagger deployment | `{}` |
| `podLabels` | Labels to add to pods of Flagger deployment | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
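one might set the Slack notification parameters at install time (the webhook URL and channel below are illustrative placeholders, not values from this changeset):

```console
$ helm upgrade -i flagger flagger/flagger \
  --namespace=istio-system \
  --set slack.url=https://hooks.slack.com/services/YOUR/HOOK/URL \
  --set slack.channel=general \
  --set slack.user=flagger
```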

View File

@ -27,6 +27,10 @@ spec:
- name: Weight
type: string
jsonPath: .status.canaryWeight
- name: Suspended
type: boolean
jsonPath: .spec.suspend
priority: 1
- name: FailedChecks
type: string
jsonPath: .status.failedChecks
@ -76,7 +80,6 @@ spec:
type: object
required:
- targetRef
- service
- analysis
properties:
provider:
@ -104,7 +107,7 @@ spec:
name:
type: string
autoscalerRef:
description: HPA selector
description: Scaler selector
type: object
required: ["apiVersion", "kind", "name"]
properties:
@ -114,8 +117,22 @@ spec:
type: string
enum:
- HorizontalPodAutoscaler
- ScaledObject
name:
type: string
primaryScalerQueries:
type: object
additionalProperties:
type: string
primaryScalerReplicas:
type: object
properties:
minReplicas:
type: integer
minimum: 1
maxReplicas:
type: integer
minimum: 1
ingressRef:
description: Ingress selector
type: object
@ -129,6 +146,19 @@ spec:
- Ingress
name:
type: string
routeRef:
description: APISIX route selector
type: object
required: [ "apiVersion", "kind", "name" ]
properties:
apiVersion:
type: string
kind:
type: string
enum:
- ApisixRoute
name:
type: string
upstreamRef:
description: Gloo Upstream selector
type: object
@ -158,12 +188,18 @@ spec:
portName:
description: Container port name
type: string
appProtocol:
description: Application protocol of the port
type: string
targetPort:
description: Container target port name
x-kubernetes-int-or-string: true
portDiscovery:
description: Enable port discovery
type: boolean
headless:
description: Headless, if set to true, generates headless Kubernetes services.
type: boolean
timeout:
description: HTTP or gRPC request timeout
type: string
@ -450,6 +486,54 @@ spec:
uri:
format: string
type: string
authority:
format: string
type: string
type:
format: string
type: string
mirror:
description: Mirror defines a schema for a filter that mirrors requests.
type: array
items:
type: object
properties:
backendRef:
properties:
group:
default: ""
maxLength: 253
pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
type: string
kind:
default: Service
maxLength: 63
minLength: 1
pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$
type: string
name:
maxLength: 253
minLength: 1
type: string
namespace:
maxLength: 63
minLength: 1
pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
type: string
port:
format: int32
maximum: 65535
minimum: 1
type: integer
required:
- name
type: object
x-kubernetes-validations:
- message: Must have port for Service reference
rule: '(size(self.group) == 0 && self.kind == ''Service'')
? has(self.port) : true'
required:
- backendRef
headers:
description: Headers operations
type: object
@ -495,6 +579,45 @@ spec:
type: array
items:
type: string
gatewayRefs:
description: The list of parent Gateways for a HTTPRoute
maxItems: 32
type: array
items:
required:
- name
type: object
properties:
group:
default: gateway.networking.k8s.io
maxLength: 253
pattern: ^$|^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
type: string
kind:
default: Gateway
maxLength: 63
minLength: 1
pattern: ^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$
type: string
name:
maxLength: 253
minLength: 1
type: string
namespace:
maxLength: 63
minLength: 1
pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$
type: string
sectionName:
maxLength: 253
minLength: 1
pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
type: string
port:
format: int32
maximum: 65535
minimum: 1
type: integer
corsPolicy:
description: Istio Cross-Origin Resource Sharing policy (CORS)
type: object
@ -685,6 +808,10 @@ spec:
- LEAST_CONN
- RANDOM
- PASSTHROUGH
- LEAST_REQUEST
type: string
warmupDurationSecs:
description: Represents the warmup duration of Service.
type: string
outlierDetection:
description: Settings controlling eviction of unhealthy hosts from the load balancing pool.
@ -788,6 +915,9 @@ spec:
revertOnDeletion:
description: Revert mutated resources to original spec on deletion
type: boolean
suspend:
description: Suspend, if set to true, disables/pauses all canary runs
type: boolean
analysis:
description: Canary analysis for this canary
type: object
@ -829,6 +959,9 @@ spec:
primaryReadyThreshold:
description: Percentage of pods that need to be available to consider primary as ready
type: number
canaryReadyThreshold:
description: Percentage of pods that need to be available to consider canary as ready
type: number
match:
description: A/B testing match conditions
type: array
@ -858,6 +991,34 @@ spec:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax)
format: string
type: string
queryParams:
description: Query parameters for matching.
type: object
additionalProperties:
oneOf:
- not:
anyOf:
- required:
- exact
- required:
- prefix
- required:
- regex
- required:
- exact
- required:
- prefix
- required:
- regex
properties:
exact:
type: string
prefix:
type: string
regex:
description: RE2 style regex-based match (https://github.com/google/re2/wiki/Syntax).
type: string
type: object
sourceLabels:
description: Applicable only when the 'mesh' gateway is included in the service.gateways list
type: object
@ -905,6 +1066,11 @@ spec:
namespace:
description: Namespace of this metric template
type: string
templateVariables:
description: Additional variables to be used in the metrics query (key-value pairs)
type: object
additionalProperties:
type: string
alerts:
description: Alert list for this canary analysis
type: array
@ -970,11 +1136,32 @@ spec:
description: Request timeout for this webhook
type: string
pattern: "^[0-9]+(m|s)"
retries:
description: Number of retries for this webhook
type: number
disableTLS:
description: Disable TLS verification for this webhook
type: boolean
metadata:
description: Metadata (key-value pairs) for this webhook
type: object
additionalProperties:
type: string
sessionAffinity:
description: SessionAffinity represents the session affinity settings for a canary run.
type: object
required: [ "cookieName" ]
properties:
cookieName:
description: CookieName is the key that will be used for the session affinity cookie.
type: string
primaryCookieName:
description: PrimaryCookieName is the key that will be used for the session affinity cookie of the primary backend.
type: string
maxAge:
description: MaxAge indicates the number of seconds until the session affinity cookie will expire.
default: 86400
type: number
status:
description: CanaryStatus defines the observed state of a canary.
type: object
@ -995,27 +1182,36 @@ spec:
- Failed
- Terminating
- Terminated
failedChecks:
description: Failed check count of the current canary analysis
type: number
canaryWeight:
description: Traffic weight routed to canary
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
trackedConfigs:
description: TrackedConfig of this canary
additionalProperties:
type: string
type: object
canaryWeight:
description: Traffic weight routed to canary
type: number
failedChecks:
description: Failed check count of the current canary analysis
type: number
iterations:
description: Iteration count of the current canary analysis
type: number
lastAppliedSpec:
description: LastAppliedSpec of this canary
type: string
lastPromotedSpec:
description: LastPromotedSpec of this canary
type: string
lastTransitionTime:
description: LastTransitionTime of this canary
format: date-time
type: string
sessionAffinityCookie:
description: Session affinity cookie of the current canary run
type: string
previousSessionAffinityCookie:
description: Session affinity cookie of the previous canary run
type: string
conditions:
description: Status conditions of this canary
type: array
@ -1112,9 +1308,18 @@ spec:
- newrelic
- graphite
- dynatrace
- keptn
- splunk
address:
description: API address of this provider
type: string
headers:
description: Headers to add to HTTP(S) requests
type: object
additionalProperties:
type: array
items:
type: string
secretRef:
description: Kubernetes secret reference containing the provider credentials
type: object
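
The provider schema above now accepts per-request HTTP headers and additional provider types. As a sketch only (the address, header and query below are assumptions for illustration, not part of this diff), a MetricTemplate targeting a multi-tenant Prometheus could pass a tenant header like this:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: error-rate
  namespace: test
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090
    # extra headers added to every query sent to the metrics provider
    headers:
      X-Scope-OrgID:
        - "tenant-1"
  query: |
    100 - sum(rate(http_requests_total{namespace="{{ namespace }}",status!~"5.*"}[{{ interval }}]))
    /
    sum(rate(http_requests_total{namespace="{{ namespace }}"}[{{ interval }}])) * 100
```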

View File

@ -3,8 +3,9 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "flagger.serviceAccountName" . }}
annotations:
namespace: {{ .Release.Namespace }}
{{- if .Values.serviceAccount.annotations }}
annotations:
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
{{- end }}
labels:

View File

@ -0,0 +1,16 @@
{{- if .Values.linkerdAuthPolicy.create }}
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
namespace: {{ .Values.linkerdAuthPolicy.namespace }}
name: prometheus-admin-flagger
spec:
targetRef:
group: policy.linkerd.io
kind: Server
name: prometheus-admin
requiredAuthenticationRefs:
- kind: ServiceAccount
name: {{ template "flagger.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -2,11 +2,22 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "flagger.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
{{- if .Values.deploymentLabels }}
{{- range $key, $value := .Values.deploymentLabels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.leaderElection.replicaCount }}
{{- if eq .Values.leaderElection.enabled false }}
@ -22,6 +33,7 @@ spec:
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
{{- if .Values.podLabels }}
{{- range $key, $value := .Values.podLabels }}
{{ $key }}: {{ $value | quote }}
@ -33,25 +45,22 @@ spec:
{{- end }}
spec:
serviceAccountName: {{ template "flagger.serviceAccountName" . }}
{{- if .Values.affinity }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
topologyKey: kubernetes.io/hostname
{{- tpl (toYaml .Values.affinity) . | nindent 8 }}
{{- end }}
{{- if .Values.image.pullSecret }}
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
{{- end }}
{{- if .Values.controlplane.kubeconfig.secretName }}
volumes:
{{- if .Values.istio.kubeconfig.secretName }}
- name: kubeconfig
secret:
secretName: "{{ .Values.istio.kubeconfig.secretName }}"
secretName: "{{ .Values.controlplane.kubeconfig.secretName }}"
{{- end }}
{{- if .Values.additionalVolumes }}
{{- toYaml .Values.additionalVolumes | nindent 8 -}}
{{- end }}
{{- if .Values.podPriorityClassName }}
priorityClassName: {{ .Values.podPriorityClassName }}
@ -62,10 +71,10 @@ spec:
securityContext:
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- end }}
{{- if .Values.controlplane.kubeconfig.secretName }}
volumeMounts:
{{- if .Values.istio.kubeconfig.secretName }}
- name: kubeconfig
mountPath: "/tmp/istio-host"
mountPath: "/tmp/controlplane"
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
@ -132,12 +141,18 @@ spec:
{{- if .Values.kubeconfigBurst }}
- -kubeconfig-burst={{ .Values.kubeconfigBurst }}
{{- end }}
{{- if .Values.istio.kubeconfig.secretName }}
- -kubeconfig-service-mesh=/tmp/istio-host/{{ .Values.istio.kubeconfig.key }}
{{- if .Values.controlplane.kubeconfig.secretName }}
- -kubeconfig-service-mesh=/tmp/controlplane/{{ .Values.controlplane.kubeconfig.key }}
{{- end }}
{{- if .Values.threadiness }}
- -threadiness={{ .Values.threadiness }}
{{- end }}
{{- if .Values.clusterName }}
- -cluster-name={{ .Values.clusterName }}
{{- end }}
{{- if .Values.noCrossNamespaceRefs }}
- -no-cross-namespace-refs={{ .Values.noCrossNamespaceRefs }}
{{- end }}
livenessProbe:
exec:
command:

View File

@ -1,8 +1,13 @@
{{- if .Values.podDisruptionBudget.enabled }}
{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" -}}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
metadata:
name: {{ template "flagger.name" . }}
namespace: {{ .Release.Namespace }}
spec:
minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
selector:

View File

@ -17,6 +17,7 @@ spec:
- interval: {{ .Values.podMonitor.interval }}
path: /metrics
port: http
honorLabels: {{ .Values.podMonitor.honorLabels }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}

View File

@ -50,6 +50,7 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "flagger.fullname" . }}-psp
namespace: {{ .Release.Namespace }}
labels:
helm.sh/chart: {{ template "flagger.chart" . }}
app.kubernetes.io/name: {{ template "flagger.name" . }}

View File

@ -27,6 +27,18 @@ rules:
- update
- patch
- delete
- apiGroups:
- "coordination.k8s.io"
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
@ -74,6 +86,7 @@ rules:
resources:
- canaries
- canaries/status
- canaries/finalizers
- metrictemplates
- metrictemplates/status
- alertproviders
@ -195,10 +208,87 @@ rules:
- update
- patch
- delete
- apiGroups:
- kuma.io
resources:
- trafficroutes
- trafficroutes/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes
- httproutes/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- keda.sh
resources:
- scaledobjects
- scaledobjects/finalizers
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apisix.apache.org
resources:
- apisixroutes
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- metrics.keptn.sh
resources:
- keptnmetrics
- analyses
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- nonResourceURLs:
- /version
verbs:
- get
- apiGroups:
- serving.knative.dev
resources:
- services
verbs:
- get
- update
- apiGroups:
- serving.knative.dev
resources:
- revisions
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding

View File

@ -0,0 +1,19 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "flagger.name" . }}
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
ports:
- name: http
port: 8080
targetPort: http
protocol: TCP
selector:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@ -0,0 +1,29 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "flagger.name" . }}
{{- if .Values.serviceMonitor.namespace }}
namespace: {{ .Values.serviceMonitor.namespace }}
{{- end }}
labels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- with .Values.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
endpoints:
- path: /metrics
port: http
interval: 30s
scrapeTimeout: 30s
honorLabels: {{ .Values.serviceMonitor.honorLabels }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ template "flagger.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

View File

@ -1,8 +1,11 @@
# Default values for flagger.
## Deployment annotations
# annotations: {}
image:
repository: ghcr.io/fluxcd/flagger
tag: 1.16.1
tag: 1.41.0
pullPolicy: IfNotPresent
pullSecret:
@ -13,13 +16,23 @@ podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
appmesh.k8s.aws/sidecarInjectorWebhook: disabled
linkerd.io/inject: enabled
# priority class name for pod priority configuration
podPriorityClassName: ""
metricsServer: "http://prometheus:9090"
# accepted values are kubernetes, istio, linkerd, appmesh, contour, nginx, gloo, skipper, traefik, osm
# creates serviceMonitor for monitoring Flagger metrics
serviceMonitor:
enabled: false
honorLabels: false
# Set the namespace the ServiceMonitor should be deployed
# namespace: monitoring
# Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
# labels:
# accepted values are kubernetes, istio, linkerd, appmesh, contour, nginx, gloo, skipper, traefik, apisix, osm
meshProvider: ""
# single namespace restriction
@ -50,6 +63,9 @@ securityContext:
# when specified, flagger will publish events to the provided webhook
eventWebhook: ""
# when specified, flagger will add the cluster name to alerts
clusterName: ""
slack:
user: flagger
channel:
@ -66,6 +82,7 @@ podMonitor:
namespace:
interval: 15s
additionalLabels: {}
honorLabels: false
#env:
#- name: SLACK_URL
@ -117,6 +134,13 @@ crd:
# crd.create: `true` if custom resource definitions should be created
create: false
linkerdAuthPolicy:
# linkerdAuthPolicy.create: Whether to create an AuthorizationPolicy in
# linkerd viz' namespace to allow flagger to reach viz' prometheus service
create: false
# linkerdAuthPolicy.namespace: linkerd-viz' namespace
namespace: linkerd-viz
nameOverride: ""
fullnameOverride: ""
@ -132,10 +156,21 @@ nodeSelector: {}
tolerations: []
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: '{{ template "flagger.name" . }}'
app.kubernetes.io/instance: '{{ .Release.Name }}'
topologyKey: kubernetes.io/hostname
prometheus:
# to be used with ingress controllers
install: false
image: docker.io/prom/prometheus:v2.23.0
image: docker.io/prom/prometheus:v2.41.0
pullSecret:
retention: 2h
# when enabled, it will add a security context for the prometheus pod
@ -148,17 +183,27 @@ prometheus:
kubeconfigQPS: ""
kubeconfigBurst: ""
# Istio multi-cluster service mesh (shared control plane single-network)
# https://istio.io/docs/setup/install/multicluster/shared-vpn/
istio:
# Multi-cluster service mesh (shared control plane single-network)
controlplane:
kubeconfig:
# istio.kubeconfig.secretName: The name of the secret containing the Istio control plane kubeconfig
# controlplane.kubeconfig.secretName: The name of the secret containing the mesh control plane kubeconfig
secretName: ""
# istio.kubeconfig.key: The name of secret data key that contains the Istio control plane kubeconfig
# controlplane.kubeconfig.key: The name of secret data key that contains the mesh control plane kubeconfig
key: "kubeconfig"
podDisruptionBudget:
enabled: false
minAvailable: 1
# Additional labels to be added to pods
podLabels: {}
# Additional labels to be added to deployments
deploymentLabels: { }
noCrossNamespaceRefs: false
#Placeholder to supply additional volumes to the flagger pod
additionalVolumes: {}
# - name: tmpfs
# emptyDir: {}
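
Taken together, the new chart values can be combined in a single override file. A sketch (the label values and cluster name are illustrative assumptions, not chart defaults):

```yaml
# my-values.yaml (hypothetical override file)
serviceMonitor:
  enabled: true
  honorLabels: true
  labels:
    release: kube-prometheus-stack
clusterName: "prod-eu-1"
noCrossNamespaceRefs: true
deploymentLabels:
  team: platform
podLabels:
  team: platform
additionalVolumes:
  - name: tmpfs
    emptyDir: {}
```

Such a file would be applied with `helm upgrade -i flagger flagger/flagger -f my-values.yaml`.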

View File

@ -1,6 +1,6 @@
apiVersion: v1
name: grafana
version: 1.6.0
version: 1.7.0
appVersion: 7.2.0
description: Grafana dashboards for monitoring Flagger canary deployments
icon: https://raw.githubusercontent.com/fluxcd/flagger/main/docs/logo/flagger-icon.png

View File

@ -403,7 +403,7 @@
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le)) / 1000",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@ -411,7 +411,7 @@
"refId": "A"
},
{
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le)) / 1000",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@ -419,7 +419,7 @@
"refId": "B"
},
{
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$primary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le)) / 1000",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@ -509,7 +509,7 @@
"steppedLine": false,
"targets": [
{
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.50, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le)) / 1000",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@ -517,7 +517,7 @@
"refId": "A"
},
{
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le)) / 1000",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@ -525,7 +525,7 @@
"refId": "B"
},
{
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le))",
"expr": "histogram_quantile(0.99, sum(irate(istio_request_duration_milliseconds_bucket{reporter=\"destination\",destination_workload=~\"$canary\", destination_workload_namespace=~\"$namespace\"}[1m])) by (le)) / 1000",
"format": "time_series",
"hide": false,
"intervalFactor": 1,

View File

@ -1,8 +1,8 @@
apiVersion: v1
name: loadtester
version: 0.21.0
appVersion: 0.21.0
kubeVersion: ">=1.11.0-0"
version: 0.35.0
appVersion: 0.35.0
kubeVersion: ">=1.19.0-0"
engine: gotpl
description: Flagger's load testing service, based on rakyll/hey and bojand/ghz, which generates traffic during canary analysis when configured as a webhook.
home: https://docs.flagger.app

View File

@ -7,7 +7,7 @@ It can be used to generate HTTP and gRPC traffic during canary analysis when con
## Prerequisites
* Kubernetes >= 1.11
* Kubernetes >= 1.19
## Installing the Chart
@ -44,34 +44,37 @@ The command removes all the Kubernetes components associated with the chart and
The following tables lists the configurable parameters of the load tester chart and their default values.
Parameter | Description | Default
--- | --- | ---
`image.repository` | Image repository | `quay.io/stefanprodan/flagger-loadtester`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`image.tag` | Image tag | `<VERSION>`
`replicaCount` | Desired number of pods | `1`
`serviceAccountName` | Kubernetes service account name | `none`
`resources.requests.cpu` | CPU requests | `10m`
`resources.requests.memory` | Memory requests | `64Mi`
`tolerations` | List of node taints to tolerate | `[]`
`affinity` | node/pod affinities | `node`
`nodeSelector` | Node labels for pod assignment | `{}`
`service.type` | Type of service | `ClusterIP`
`service.port` | ClusterIP port | `80`
`cmd.timeout` | Command execution timeout | `1h`
`logLevel` | Log level can be debug, info, warning, error or panic | `info`
`appmesh.enabled` | Create AWS App Mesh v1beta2 virtual node | `false`
`appmesh.backends` | AWS App Mesh virtual services | `none`
`istio.enabled` | Create Istio virtual service | `false`
`istio.host` | Loadtester hostname | `flagger-loadtester.flagger`
`istio.gateway.enabled` | Create Istio gateway in namespace | `false`
`istio.tls.enabled` | Enable TLS in gateway ( TLS secrets should be in namespace ) | `false`
`istio.tls.httpsRedirect` | Redirect traffic to TLS port | `false`
`podPriorityClassName` | PriorityClass name for pod priority configuration | ""
`securityContext.enabled` | Add securityContext to container | ""
`securityContext.context` | securityContext to add | ""
`podDisruptionBudget.enabled` | A PodDisruptionBudget will be created if `true` | `false`
`podDisruptionBudget.minAvailable` | The minimal number of available replicas that will be set in the PodDisruptionBudget | `1`
| Parameter | Description | Default |
|------------------------------------|--------------------------------------------------------------------------------------|-------------------------------------|
| `image.repository` | Image repository | `ghcr.io/fluxcd/flagger-loadtester` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.tag` | Image tag | `<VERSION>` |
| `replicaCount` | Desired number of pods | `1` |
| `serviceAccountName` | Kubernetes service account name | `none` |
| `resources.requests.cpu` | CPU requests | `10m` |
| `resources.requests.memory` | Memory requests | `64Mi` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `affinity` | node/pod affinities | `node` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `service.type` | Type of service | `ClusterIP` |
| `service.port` | ClusterIP port | `80` |
| `cmd.timeout` | Command execution timeout | `1h` |
| `cmd.namespaceRegexp` | Restrict access to canaries in matching namespaces | "" |
| `logLevel` | Log level can be debug, info, warning, error or panic | `info` |
| `appmesh.enabled` | Create AWS App Mesh v1beta2 virtual node | `false` |
| `appmesh.backends` | AWS App Mesh virtual services | `none` |
| `istio.enabled` | Create Istio virtual service | `false` |
| `istio.host` | Loadtester hostname | `flagger-loadtester.flagger` |
| `istio.gateway.enabled` | Create Istio gateway in namespace | `false` |
| `istio.tls.enabled` | Enable TLS in gateway ( TLS secrets should be in namespace ) | `false` |
| `istio.tls.httpsRedirect` | Redirect traffic to TLS port | `false` |
| `podPriorityClassName` | PriorityClass name for pod priority configuration | "" |
| `securityContext.enabled` | Add securityContext to container | `false` |
| `securityContext.context`          | securityContext to add                                                                | ""                                  |
| `podSecurityContext.enabled` | Add securityContext to pod | `false` |
| `podSecurityContext.context` | securityContext to add | "" |
| `podDisruptionBudget.enabled` | A PodDisruptionBudget will be created if `true` | `false` |
| `podDisruptionBudget.minAvailable` | The minimal number of available replicas that will be set in the PodDisruptionBudget | `1` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm upgrade`. For example,
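the command timeout and the new namespace restriction can be set at install time (the namespace and regexp below are illustrative placeholders):

```console
$ helm upgrade -i flagger-loadtester flagger/loadtester \
  --namespace=test \
  --set cmd.timeout=1h \
  --set 'cmd.namespaceRegexp=^test$'
```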

View File

@ -24,7 +24,7 @@ spec:
appmesh.k8s.aws/ports: "444"
openservicemesh.io/inbound-port-exclusion-list: "80, 8080"
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
spec:
{{- if .Values.serviceAccountName }}
@ -39,7 +39,7 @@ spec:
- name: {{ .Chart.Name }}
{{- if .Values.securityContext.enabled }}
securityContext:
{{ toYaml .Values.securityContext.context | indent 12 }}
{{- toYaml .Values.securityContext.context | nindent 12 }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
@ -51,6 +51,7 @@ spec:
- -port=8080
- -log-level={{ .Values.logLevel }}
- -timeout={{ .Values.cmd.timeout }}
- -namespace-regexp={{ .Values.cmd.namespaceRegexp }}
livenessProbe:
exec:
command:
@ -101,3 +102,7 @@ spec:
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.podSecurityContext.enabled }}
securityContext:
{{- toYaml .Values.podSecurityContext.context | nindent 12 }}
{{- end }}

View File

@ -1,5 +1,5 @@
{{- if and (.Values.istio.enabled) (.Values.istio.gateway.enabled) }}
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: {{ include "loadtester.fullname" . }}

View File

@ -1,5 +1,5 @@
{{- if .Values.istio.enabled }}
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: {{ include "loadtester.fullname" . }}

View File

@ -1,5 +1,9 @@
{{- if .Values.podDisruptionBudget.enabled }}
{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" -}}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
metadata:
name: {{ include "loadtester.fullname" . }}

View File

@ -51,4 +51,7 @@ metadata:
app.kubernetes.io/name: {{ template "loadtester.name" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.rbac.serviceAccountAnnotations }}
annotations: {{ tpl (toYaml .Values.rbac.serviceAccountAnnotations) . | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -2,7 +2,7 @@ replicaCount: 1
image:
repository: ghcr.io/fluxcd/flagger-loadtester
tag: 0.21.0
tag: 0.35.0
pullPolicy: IfNotPresent
pullSecret:
@ -17,6 +17,7 @@ podPriorityClassName: ""
logLevel: info
cmd:
timeout: 1h
namespaceRegexp: ""
nameOverride: ""
fullnameOverride: ""
@ -53,6 +54,8 @@ rbac:
# resources: ["pods"]
# verbs: ["list", "get"]
rules: []
# annotations to add to the service account
serviceAccountAnnotations: {}
# name of an existing service account to use - if not creating rbac resources
serviceAccountName: ""
@ -88,6 +91,12 @@ securityContext:
runAsUser: 100
runAsGroup: 101
podSecurityContext:
enabled: false
context:
fsGroup: 101
fsGroupChangePolicy: "OnRootMismatch"
podDisruptionBudget:
enabled: false
minAvailable: 1
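
For completeness, a sketch of an override file enabling the new pod-level security context alongside the container one (values mirror the defaults shown above and are illustrative only):

```yaml
# loadtester-values.yaml (hypothetical override file)
podSecurityContext:
  enabled: true
  context:
    fsGroup: 101
    fsGroupChangePolicy: "OnRootMismatch"
securityContext:
  enabled: true
  context:
    runAsUser: 100
    runAsGroup: 101
```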

View File

@ -1,6 +1,6 @@
apiVersion: v1
version: 5.0.0
appVersion: 5.0.0
version: 6.1.4
appVersion: 6.1.3
name: podinfo
engine: gotpl
description: Flagger canary deployment demo application

View File

@ -20,7 +20,7 @@ helm upgrade -i frontend flagger/podinfo \
--set backend=http://backend.test:9898/echo \
--set canary.enabled=true \
--set canary.istioIngress.enabled=true \
--set canary.istioIngress.gateway=public-gateway.istio-system.svc.cluster.local \
--set canary.istioIngress.gateway=istio-system/public-gateway \
--set canary.istioIngress.host=frontend.istio.example.com
```

View File

@ -14,7 +14,7 @@ spec:
kind: Deployment
name: {{ template "podinfo.fullname" . }}
autoscalerRef:
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: {{ template "podinfo.fullname" . }}
service:

View File

@ -1,5 +1,5 @@
{{- if .Values.hpa.enabled -}}
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "podinfo.fullname" . }}
@ -20,12 +20,16 @@ spec:
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.hpa.cpu }}
target:
type: Utilization
averageUtilization: {{ .Values.hpa.cpu }}
{{- end }}
{{- if .Values.hpa.memory }}
- type: Resource
resource:
name: memory
targetAverageValue: {{ .Values.hpa.memory }}
target:
type: AverageValue
averageValue: {{ .Values.hpa.memory }}
{{- end }}
{{- end }}

View File

@ -1,4 +1,4 @@
{{- if not .Values.canary.enabled }}
{{- if and .Values.service.enabled (not .Values.canary.enabled) }}
apiVersion: v1
kind: Service
metadata:

View File

@ -1,7 +1,7 @@
# Default values for podinfo.
image:
repository: ghcr.io/stefanprodan/podinfo
tag: 5.0.0
tag: 6.1.3
pullPolicy: IfNotPresent
podAnnotations: {}
@ -25,7 +25,7 @@ canary:
istioIngress:
enabled: false
# Istio ingress gateway name
gateway: public-gateway.istio-system.svc.cluster.local
gateway: istio-system/public-gateway
# external host name eg. podinfo.example.com
host:
analysis:

View File

@ -51,6 +51,8 @@ import (
"github.com/fluxcd/flagger/pkg/server"
"github.com/fluxcd/flagger/pkg/signals"
"github.com/fluxcd/flagger/pkg/version"
knative "knative.dev/serving/pkg/client/clientset/versioned"
)
var (
@ -66,6 +68,7 @@ var (
msteamsProxyURL string
includeLabelPrefix string
slackURL string
slackToken string
slackProxyURL string
slackUser string
slackChannel string
@ -83,6 +86,8 @@ var (
enableConfigTracking bool
ver bool
kubeconfigServiceMesh string
clusterName string
noCrossNamespaceRefs bool
)
func init() {
@ -95,6 +100,7 @@ func init() {
flag.StringVar(&logLevel, "log-level", "debug", "Log level can be: debug, info, warning, error.")
flag.StringVar(&port, "port", "8080", "Port to listen on.")
flag.StringVar(&slackURL, "slack-url", "", "Slack hook URL.")
flag.StringVar(&slackToken, "slack-token", "", "Slack bot token.")
flag.StringVar(&slackProxyURL, "slack-proxy-url", "", "Slack proxy URL.")
flag.StringVar(&slackUser, "slack-user", "flagger", "Slack user name.")
flag.StringVar(&slackChannel, "slack-channel", "", "Slack channel.")
@ -106,7 +112,7 @@ func init() {
flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
flag.StringVar(&namespace, "namespace", "", "Namespace that flagger would watch canary object.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, contour, gloo, nginx, skipper, traefik or osm.")
flag.StringVar(&meshProvider, "mesh-provider", "istio", "Service mesh provider, can be istio, linkerd, appmesh, contour, knative, gloo, nginx, skipper, traefik, apisix, osm or kuma.")
flag.StringVar(&selectorLabels, "selector-labels", "app,name,app.kubernetes.io/name", "List of pod labels that Flagger uses to create pod selectors.")
flag.StringVar(&ingressAnnotationsPrefix, "ingress-annotations-prefix", "nginx.ingress.kubernetes.io", "Annotations prefix for NGINX ingresses.")
flag.StringVar(&ingressClass, "ingress-class", "", "Ingress class used for annotating HTTPProxy objects.")
@ -115,6 +121,8 @@ func init() {
flag.BoolVar(&enableConfigTracking, "enable-config-tracking", true, "Enable secrets and configmaps tracking.")
flag.BoolVar(&ver, "version", false, "Print version")
flag.StringVar(&kubeconfigServiceMesh, "kubeconfig-service-mesh", "", "Path to a kubeconfig for the service mesh control plane cluster.")
flag.StringVar(&clusterName, "cluster-name", "", "Cluster name to be included in alert msgs.")
flag.BoolVar(&noCrossNamespaceRefs, "no-cross-namespace-refs", false, "When set to true, Flagger can only refer to resources in the same namespace.")
}
func main() {
@ -160,19 +168,24 @@ func main() {
logger.Fatalf("Error building flagger clientset: %s", err.Error())
}
knativeClient, err := knative.NewForConfig(cfg)
if err != nil {
logger.Fatalf("Error building knative clientset: %s", err.Error())
}
// use a remote cluster for routing if a service mesh kubeconfig is specified
if kubeconfigServiceMesh == "" {
kubeconfigServiceMesh = kubeconfig
}
cfgHost, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfigServiceMesh)
serviceMeshCfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfigServiceMesh)
if err != nil {
logger.Fatalf("Error building host kubeconfig: %v", err)
}
cfgHost.QPS = float32(kubeconfigQPS)
cfgHost.Burst = kubeconfigBurst
serviceMeshCfg.QPS = float32(kubeconfigQPS)
serviceMeshCfg.Burst = kubeconfigBurst
meshClient, err := clientset.NewForConfig(cfgHost)
meshClient, err := clientset.NewForConfig(serviceMeshCfg)
if err != nil {
logger.Fatalf("Error building mesh clientset: %v", err)
}
@ -208,7 +221,14 @@ func main() {
// start HTTP server
go server.ListenAndServe(port, 3*time.Second, logger, stopCh)
routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, ingressAnnotationsPrefix, ingressClass, logger, meshClient)
setOwnerRefs := true
// Routers shouldn't set OwnerRefs on resources that they create, since the
// service mesh/ingress controller is in a different cluster.
if cfg.Host != serviceMeshCfg.Host {
setOwnerRefs = false
}
routerFactory := router.NewFactory(cfg, kubeClient, flaggerClient, knativeClient, ingressAnnotationsPrefix, ingressClass, logger, meshClient, setOwnerRefs)
var configTracker canary.Tracker
if enableConfigTracking {
@ -223,10 +243,11 @@ func main() {
includeLabelPrefixArray := strings.Split(includeLabelPrefix, ",")
canaryFactory := canary.NewFactory(kubeClient, flaggerClient, configTracker, labels, includeLabelPrefixArray, logger)
canaryFactory := canary.NewFactory(kubeClient, flaggerClient, knativeClient, configTracker, labels, includeLabelPrefixArray, logger)
c := controller.NewController(
kubeClient,
knativeClient,
flaggerClient,
infos,
controlLoopInterval,
@ -238,6 +259,9 @@ func main() {
meshProvider,
version.VERSION,
fromEnv("EVENT_WEBHOOK_URL", eventWebhook),
clusterName,
noCrossNamespaceRefs,
cfg,
)
// leader election context
@ -312,7 +336,7 @@ func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient
id = id + "_" + string(uuid.NewUUID())
lock, err := resourcelock.New(
resourcelock.ConfigMapsResourceLock,
resourcelock.LeasesResourceLock,
ns,
configMapName,
kubeClient.CoreV1(),
@ -352,6 +376,7 @@ func startLeaderElection(ctx context.Context, run func(), ns string, kubeClient
func initNotifier(logger *zap.SugaredLogger) (client notifier.Interface) {
provider := "slack"
token := fromEnv("SLACK_TOKEN", slackToken)
notifierURL := fromEnv("SLACK_URL", slackURL)
notifierProxyURL := fromEnv("SLACK_PROXY_URL", slackProxyURL)
if msteamsURL != "" || os.Getenv("MSTEAMS_URL") != "" {
@ -359,7 +384,7 @@ func initNotifier(logger *zap.SugaredLogger) (client notifier.Interface) {
notifierURL = fromEnv("MSTEAMS_URL", msteamsURL)
notifierProxyURL = fromEnv("MSTEAMS_PROXY_URL", msteamsProxyURL)
}
notifierFactory := notifier.NewFactory(notifierURL, notifierProxyURL, slackUser, slackChannel)
notifierFactory := notifier.NewFactory(notifierURL, token, notifierProxyURL, slackUser, slackChannel)
var err error
client, err = notifierFactory.Notifier(provider)

View File

@ -1,5 +1,5 @@
/*
Copyright 2020 The Flux authors
Copyright 2020, 2022 The Flux authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@ -19,19 +19,22 @@ package main
import (
"flag"
"log"
"regexp"
"time"
"go.uber.org/zap"
"github.com/fluxcd/flagger/pkg/loadtester"
"github.com/fluxcd/flagger/pkg/logger"
"github.com/fluxcd/flagger/pkg/signals"
"go.uber.org/zap"
)
var VERSION = "0.21.0"
var VERSION = "0.35.0"
var (
logLevel string
port string
timeout time.Duration
namespaceRegexp string
zapReplaceGlobals bool
zapEncoding string
)
@ -40,6 +43,7 @@ func init() {
flag.StringVar(&logLevel, "log-level", "debug", "Log level can be: debug, info, warning, error.")
flag.StringVar(&port, "port", "9090", "Port to listen on.")
flag.DurationVar(&timeout, "timeout", time.Hour, "Load test exec timeout.")
flag.StringVar(&namespaceRegexp, "namespace-regexp", "", "Restrict access to canaries in matching namespaces.")
flag.BoolVar(&zapReplaceGlobals, "zap-replace-globals", false, "Whether to change the logging level of the global zap logger.")
flag.StringVar(&zapEncoding, "zap-encoding", "json", "Zap logger encoding.")
}
@ -66,5 +70,12 @@ func main() {
logger.Infof("Starting load tester v%s API on port %s", VERSION, port)
gateStorage := loadtester.NewGateStorage("in-memory")
loadtester.ListenAndServe(port, time.Minute, logger, taskRunner, gateStorage, stopCh)
var namespaceRegexpCompiled *regexp.Regexp
if namespaceRegexp != "" {
namespaceRegexpCompiled = regexp.MustCompile(namespaceRegexp)
}
authorizer := loadtester.NewAuthorizer(namespaceRegexpCompiled)
loadtester.ListenAndServe(port, time.Minute, logger, taskRunner, gateStorage, authorizer, stopCh)
}

Binary image files changed: three images added (7.4 MiB, 39 KiB, 121 KiB), one image removed (41 KiB), and one image updated (51 KiB to 56 KiB).

View File

@ -10,16 +10,17 @@ version in production by gradually shifting traffic to the new version while mea
and running conformance tests.
Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring)
using a service mesh (App Mesh, Istio, Linkerd, Open Service Mesh)
or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik) for traffic routing.
For release analysis, Flagger can query Prometheus, Datadog, New Relic, CloudWatch or Graphite
and for alerting it uses Slack, MS Teams, Discord and Rocket.
using a service mesh (App Mesh, Istio, Linkerd, Kuma, Open Service Mesh)
or an ingress controller (Contour, Gloo, NGINX, Skipper, Traefik, APISIX) for traffic routing.
For release analysis, Flagger can query Prometheus, InfluxDB, Datadog, New Relic, CloudWatch, Stackdriver
or Graphite and for alerting it uses Slack, MS Teams, Discord and Rocket.
![Flagger overview diagram](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-overview.png)
Flagger can be configured with Kubernetes custom resources and is compatible with
any CI/CD solutions made for Kubernetes. Since Flagger is declarative and reacts to Kubernetes events,
it can be used in **GitOps** pipelines together with tools like Flux, JenkinsX, Carvel, Argo, etc.
it can be used in **GitOps** pipelines together with tools like [Flux](install/flagger-install-with-flux.md),
JenkinsX, Carvel, Argo, etc.
Flagger is a [Cloud Native Computing Foundation](https://cncf.io/) project
and part of the [Flux](https://fluxcd.io) family of GitOps tools.
@ -36,7 +37,9 @@ After installing Flagger, you can follow one of these tutorials to get started:
* [Istio](tutorials/istio-progressive-delivery.md)
* [Linkerd](tutorials/linkerd-progressive-delivery.md)
* [AWS App Mesh](tutorials/appmesh-progressive-delivery.md)
* [AWS App Mesh: Canary Deployment Using Flagger](https://www.eksworkshop.com/advanced/340_appmesh_flagger/)
* [Open Service Mesh](tutorials/osm-progressive-delivery.md)
* [Kuma](tutorials/kuma-progressive-delivery.md)
**Ingress controller tutorials**
@ -45,9 +48,13 @@ After installing Flagger, you can follow one of these tutorials to get started:
* [NGINX Ingress](tutorials/nginx-progressive-delivery.md)
* [Skipper Ingress](tutorials/skipper-progressive-delivery.md)
* [Traefik](tutorials/traefik-progressive-delivery.md)
* [Apache APISIX](tutorials/apisix-progressive-delivery.md)
**Hands-on GitOps workshops**
* [Istio](https://github.com/stefanprodan/gitops-istio)
* [Linkerd](https://helm.workshop.flagger.dev)
* [AWS App Mesh](https://eks.handson.flagger.dev)
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation,
please see our [Trademark Usage page](https://www.linuxfoundation.org/legal/trademark-usage).

View File

@ -6,6 +6,7 @@
## Install
* [Flagger Install on Kubernetes](install/flagger-install-on-kubernetes.md)
* [Flagger Install with Flux](install/flagger-install-with-flux.md)
* [Flagger Install on GKE Istio](install/flagger-install-on-google-cloud.md)
* [Flagger Install on EKS App Mesh](install/flagger-install-on-eks-appmesh.md)
* [Flagger Install on Alibaba ServiceMesh](install/flagger-install-on-alibaba-servicemesh.md)
@ -30,9 +31,14 @@
* [NGINX Canary Deployments](tutorials/nginx-progressive-delivery.md)
* [Skipper Canary Deployments](tutorials/skipper-progressive-delivery.md)
* [Traefik Canary Deployments](tutorials/traefik-progressive-delivery.md)
* [Apache APISIX Canary Deployments](tutorials/apisix-progressive-delivery.md)
* [Open Service Mesh Deployments](tutorials/osm-progressive-delivery.md)
* [Kuma Canary Deployments](tutorials/kuma-progressive-delivery.md)
* [Gateway API Canary Deployments](tutorials/gatewayapi-progressive-delivery.md)
* [Knative Canary Deployments](tutorials/knative-progressive-delivery.md)
* [Blue/Green Deployments](tutorials/kubernetes-blue-green.md)
* [Canary analysis with Prometheus Operator](tutorials/prometheus-operator.md)
* [Canary analysis with KEDA ScaledObjects](tutorials/keda-scaledobject.md)
* [Zero downtime deployments](tutorials/zero-downtime-deployments.md)
## Dev

View File

@ -8,7 +8,7 @@ Flagger is written in Go and uses Go modules for dependency management.
On your dev machine install the following tools:
* go >= 1.17
* go >= 1.19
* git >= 2.20
* bash >= 5.0
* make >= 3.81

View File

@ -4,13 +4,28 @@ This document describes how to release Flagger.
## Release
### Flagger
To release a new Flagger version (e.g. `2.0.0`) follow these steps:
* create a branch `git checkout -b prep-2.0.0`
* create a branch `git checkout -b release-2.0.0`
* set the version in code and manifests `TAG=2.0.0 make version-set`
* commit changes and merge PR
* checkout master `git checkout main && git pull`
* tag master `make release`
* checkout main `git checkout main && git pull`
* tag main `make release`
### Flagger load tester
To release a new Flagger load tester version (e.g. `2.0.0`) follow these steps:
* create a branch `git checkout -b release-ld-2.0.0`
* set the version in code (`cmd/loadtester/main.go#VERSION`)
* set the version in the Helm chart (`charts/loadtester/Chart.yaml` and `values.yaml`)
* set the version in manifests (`kustomize/tester/deployment.yaml`)
* commit changes and push the branch upstream
* in GitHub UI, navigate to Actions and run the `push-ld` workflow selecting the release branch
* after the workflow finishes, open the PR which will run the e2e tests using the new tester version
* merge the PR if the tests pass
## CI
@ -18,7 +33,9 @@ After the tag has been pushed to GitHub, the CI release pipeline does the follow
* creates a GitHub release
* pushes the Flagger binary and change log to GitHub release
* pushes the Flagger container image to Docker Hub
* pushes the Flagger container image to GitHub Container Registry
* pushes the Flagger install manifests to GitHub Container Registry
* signs all OCI artifacts and release assets with Cosign and GitHub OIDC
* pushes the Helm chart to github-pages branch
* GitHub pages publishes the new chart version on the Helm repository
@ -32,3 +49,6 @@ After a Flagger release, publish the docs with:
* `git checkout docs`
* `git rebase main`
* `git push origin docs`
Lastly, open a PR with all the docs changes on [fluxcd/website](https://github.com/fluxcd/website) to
update [fluxcd.io/flagger](https://fluxcd.io/flagger/).

View File

@ -49,11 +49,39 @@ spec:
timestamp: "2020-03-10T14:24:48+0000"
```
#### How to change replicas for a deployment when not using HPA?
To change replicas for a deployment when not using HPA, you have to update the canary deployment with the desired replica count
and trigger an analysis by annotating the template. After the analysis finishes, Flagger will promote the `spec.replicas` changes to the primary deployment.
Example:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
replicas: 4 #update replicas
template:
metadata:
annotations:
timestamp: "2022-02-10T14:24:48+0000" #add annotation to trigger analysis
```
#### Why is there a window of downtime during the canary initializing process when analysis is disabled?
A window of downtime is the intended behavior when the analysis is disabled. This allows instant rollback and also mimics the way
a Kubernetes deployment initialization works. To avoid this, enable the analysis (`skipAnalysis: true`), wait for the initialization
to finish, and disable it afterward (`skipAnalysis: false`).
a Kubernetes deployment initialization works. To avoid this, enable the analysis (`skipAnalysis: false`), wait for the initialization
to finish, and disable it afterward (`skipAnalysis: true`).
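For reference, `skipAnalysis` is a top-level field of the Canary spec; a minimal sketch showing only the relevant field (required fields such as `targetRef`, `service` and `analysis` are omitted here):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # keep analysis enabled (false) while the canary initializes,
  # then set it to true once initialization has finished
  skipAnalysis: false
```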
#### How to disable cross namespace references?
Flagger by default can access resources across namespaces (`AlertProvider`, `MetricProvider` and Gloo `Upstream`).
If you're in a multi-tenant environment and wish to disable this, you can do so through the `no-cross-namespace-refs` flag.
```
flagger \
-no-cross-namespace-refs=true \
...
```
## Kubernetes services
@ -363,10 +391,10 @@ sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload",
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}",
response_code!~"5.*"
}[$interval]
}[{{ interval }}]
)
)
/
@ -374,9 +402,9 @@ sum(
rate(
istio_requests_total{
reporter="destination",
destination_workload_namespace=~"$namespace",
destination_workload=~"$workload"
}[$interval]
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}"
}[{{ interval }}]
)
)
```
@ -387,19 +415,19 @@ Envoy query (App Mesh):
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload",
kubernetes_namespace="{{ namespace }}",
kubernetes_pod_name=~"{{ target }}",
envoy_response_code!~"5.*"
}[$interval]
}[{{ interval }}]
)
)
/
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload"
}[$interval]
kubernetes_namespace="{{ namespace }}",
kubernetes_pod_name=~"{{ target }}"
}[{{ interval }}]
)
)
```
@ -410,17 +438,17 @@ Envoy query (Contour and Gloo):
sum(
rate(
envoy_cluster_upstream_rq{
envoy_cluster_name=~"$namespace-$workload",
envoy_cluster_name=~"{{ namespace }}-{{ target }}",
envoy_response_code!~"5.*"
}[$interval]
}[{{ interval }}]
)
)
/
sum(
rate(
envoy_cluster_upstream_rq{
envoy_cluster_name=~"$namespace-$workload",
}[$interval]
envoy_cluster_name=~"{{ namespace }}-{{ target }}",
}[{{ interval }}]
)
)
```
@ -448,9 +476,9 @@ histogram_quantile(0.99,
irate(
istio_request_duration_milliseconds_bucket{
reporter="destination",
destination_workload=~"$workload",
destination_workload_namespace=~"$namespace"
}[$interval]
destination_workload=~"{{ target }}",
destination_workload_namespace=~"{{ namespace }}"
}[{{ interval }}]
)
) by (le)
)
@ -463,9 +491,9 @@ histogram_quantile(0.99,
sum(
irate(
envoy_cluster_upstream_rq_time_bucket{
kubernetes_pod_name=~"$workload",
kubernetes_namespace=~"$namespace"
}[$interval]
kubernetes_pod_name=~"{{ target }}",
kubernetes_namespace=~"{{ namespace }}"
}[{{ interval }}]
)
) by (le)
)
@ -478,6 +506,33 @@ histogram_quantile(0.99,
The analysis can be extended with metrics provided by Prometheus, Datadog, AWS CloudWatch, New Relic and Graphite.
For more details on how custom metrics can be used, please read the [metrics docs](usage/metrics.md).
#### Istio Gateway API
If you're using Istio with Gateway API, the Prometheus query needs to include `reporter="source"`. For example, to calculate HTTP requests error percentage, the query would be:
```javascript
100 - sum(
rate(
istio_requests_total{
reporter="source",
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}",
response_code!~"5.*"
}[{{ interval }}]
)
)
/
sum(
rate(
istio_requests_total{
reporter="source",
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}"
}[{{ interval }}]
)
) * 100
```
## Istio routing
#### How does Flagger interact with Istio?
@ -503,7 +558,7 @@ spec:
portName: http-frontend
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
- mesh
# Istio virtual service host names (optional)
hosts:
@ -545,7 +600,7 @@ spec:
For the above spec Flagger will generate the following virtual service:
```yaml
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: frontend
@ -559,7 +614,7 @@ metadata:
uid: 3a4a40dd-3875-11e9-8e1d-42010a9c0fd1
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
- mesh
hosts:
- frontend.example.com
@ -598,7 +653,7 @@ spec:
For each destination in the virtual service a rule is generated:
```yaml
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: frontend-primary
@ -609,7 +664,7 @@ spec:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: frontend-canary
@ -696,7 +751,7 @@ spec:
Based on the above spec, Flagger will create the following virtual service:
```yaml
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: backend
@ -722,14 +777,14 @@ spec:
Therefore, the following virtual service forwards the traffic for `/podinfo` through the delegate VirtualService defined above.
```yaml
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: frontend
namespace: test
spec:
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
- mesh
hosts:
- frontend.example.com
@ -766,7 +821,7 @@ spec:
service:
port: 8080
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
hosts:
- my-site.com
match:
@ -783,7 +838,7 @@ spec:
service:
port: 8080
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
hosts:
- my-site.com
match:
@ -896,4 +951,4 @@ this will create conflicts!
- --source=istio-virtualservice # of these two
```
[Checkout ExeternalDNS documentation](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/istio.md)
[Check out the ExternalDNS documentation](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/istio.md)

View File

@ -4,106 +4,19 @@ This guide walks you through setting up Flagger on Alibaba ServiceMesh.
## Prerequisites
- Created an ACK ([Alibaba Cloud Container Service for Kubernetes](https://cs.console.aliyun.com)) cluster instance.
- Created an ASM([Alibaba ServiceMesh](https://servicemesh.console.aliyun.com)) instance, and added ACK cluster.
- Create an ASM ([Alibaba ServiceMesh](https://servicemesh.console.aliyun.com)) enterprise instance and add the ACK cluster to it.
### Variables declaration
- `$ACK_CONFIG`: the kubeconfig file path of ACK, which will be treated as `$HOME/.kube/config` in the rest of this guide.
- `$MESH_CONFIG`: the kubeconfig file path of ASM.
- `$ISTIO_RELEASE`: see https://github.com/istio/istio/releases
- `$FLAGGER_SRC`: see https://github.com/fluxcd/flagger
## Install Prometheus
Install Prometheus:
### Enable Data-plane KubeAPI access in ASM
```bash
kubectl apply -f $ISTIO_RELEASE/samples/addons/prometheus.yaml
```
In the Alibaba Cloud Service Mesh (ASM) console, on the basic information page, make sure Data-plane KubeAPI access is enabled. When enabled, the Istio resources of the control plane can be managed through the Kubeconfig of the data plane cluster.
It is the same as the command below:
## Enable Prometheus
```bash
kubectl --kubeconfig "$ACK_CONFIG" apply -f $ISTIO_RELEASE/samples/addons/prometheus.yaml
```
Append the below configs to `scrape_configs` in prometheus configmap, to support telemetry:
```yaml
scrape_configs:
# Mixer scrapping. Defaults to Prometheus and mixer on same namespace.
- job_name: 'istio-mesh'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;prometheus
# Scrape config for envoy stats
- job_name: 'envoy-stats'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:15090
target_label: __address__
- action: labeldrop
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- job_name: 'istio-policy'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-policy;http-policy-monitoring
- job_name: 'istio-telemetry'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;http-monitoring
- job_name: 'pilot'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istiod;http-monitoring
- source_labels: [__meta_kubernetes_service_label_app]
target_label: app
- job_name: 'sidecar-injector'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-sidecar-injector;http-monitoring
```
In the Alibaba Cloud Service Mesh (ASM) console, click Settings to enable the collection of Prometheus monitoring metrics. You can either use a self-managed Prometheus instance, or use the Alibaba Cloud ARMS Prometheus plug-in installed in the ACK cluster and let ARMS Prometheus collect the monitoring metrics.
## Install Flagger
@ -111,26 +24,34 @@ Add Flagger Helm repository:
```bash
helm repo add flagger https://flagger.app
helm repo update
```
Install Flagger's Canary CRD:
```yaml
kubectl apply -f $FLAGGER_SRC/artifacts/flagger/crd.yaml
```bash
kubectl apply -f https://raw.githubusercontent.com/fluxcd/flagger/v1.21.0/artifacts/flagger/crd.yaml
```
## Deploy Flagger for Istio
Deploy Flagger for Alibaba ServiceMesh:
### Add data plane cluster to Alibaba Cloud Service Mesh (ASM)
In the Alibaba Cloud Service Mesh (ASM) console, click Cluster & Workload Management, open the Kubernetes clusters list, select the target ACK cluster, and add it to ASM.
### Prometheus address
If you are using ARMS Prometheus monitoring with Alibaba Cloud Container Service for Kubernetes (ACK), replace {Region-ID} in the link below with your region ID, for example cn-hangzhou, and {ACK-ID} with the ID of the data plane cluster that you added to Alibaba Cloud Service Mesh (ASM). Visit the following link to query the public and intranet addresses exposed by the cluster's ARMS Prometheus:
[https://arms.console.aliyun.com/#/promDetail/{Region-ID}/{ACK-ID}/setting](https://arms.console.aliyun.com/)
An example of an intranet address is as follows:
[http://{Region-ID}-intranet.arms.aliyuncs.com:9090/api/v1/prometheus/{Prometheus-ID}/{u-id}/{ACK-ID}/{Region-ID}](https://arms.console.aliyun.com/)
## Deploy Flagger
Replace the value of metricsServer with your Prometheus address.
```bash
cp $MESH_CONFIG kubeconfig
kubectl -n istio-system create secret generic istio-kubeconfig --from-file kubeconfig
kubectl -n istio-system label secret istio-kubeconfig istio/multiCluster=true
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090 \
--set istio.kubeconfig.secretName=istio-kubeconfig \
--set istio.kubeconfig.key=kubeconfig
--set metricsServer=http://prometheus:9090
```

View File

@ -374,7 +374,7 @@ helm upgrade -i flagger-grafana flagger/grafana \
Expose Grafana through the public gateway by creating a virtual service \(replace `example.com` with your domain\):
```yaml
apiVersion: networking.istio.io/v1alpha3
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: grafana
@ -383,7 +383,7 @@ spec:
hosts:
- "grafana.example.com"
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
http:
- route:
- destination:

View File

@ -43,8 +43,8 @@ helm upgrade -i flagger flagger/flagger \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://istio-cluster-prometheus:9090 \
--set istio.kubeconfig.secretName=istio-kubeconfig \
--set istio.kubeconfig.key=kubeconfig
--set controlplane.kubeconfig.secretName=istio-kubeconfig \
--set controlplane.kubeconfig.key=kubeconfig
```
Note that the Istio kubeconfig must be stored in a Kubernetes secret with a data key named `kubeconfig`.
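For example, assuming the service mesh control plane kubeconfig has been saved locally as `./kubeconfig`, the secret can be created with:
```bash
# the data key is set explicitly to "kubeconfig"
kubectl -n istio-system create secret generic istio-kubeconfig \
  --from-file=kubeconfig=./kubeconfig
```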
@ -81,6 +81,15 @@ $ helm upgrade -i flagger flagger/flagger \
--set metricsServer=http://osm-prometheus.osm-system.svc:7070
```
If you need to add labels to the flagger deployment or pods, you can pass the labels as parameters as shown below.
```console
helm upgrade -i flagger flagger/flagger \
<other parameters> \
--set podLabels.<labelName>=<labelValue> \
--set deploymentLabels.<labelName>=<labelValue>
```
You can install Flagger in any namespace as long as it can talk to the Prometheus service on port 9090.
For ingress controllers, the install instructions are:
@ -90,6 +99,7 @@ For ingress controllers, the install instructions are:
* [NGINX](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
* [Skipper](https://docs.flagger.app/tutorials/skipper-progressive-delivery)
* [Traefik](https://docs.flagger.app/tutorials/traefik-progressive-delivery)
* [APISIX](https://docs.flagger.app/tutorials/apisix-progressive-delivery)
You can use the helm template command and apply the generated yaml with kubectl:
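A sketch of that workflow, reusing the chart values from the examples above (adjust the namespace and `metricsServer` for your mesh or ingress):
```bash
# render the chart locally, then apply the generated manifests
helm template flagger flagger/flagger \
  --namespace istio-system \
  --set metricsServer=http://prometheus:9090 \
  > flagger.yaml

kubectl apply -f flagger.yaml
```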
@ -199,7 +209,7 @@ kustomize build https://github.com/fluxcd/flagger/kustomize/linkerd?ref=v1.0.0 |
**Generic installer**
Install Flagger and Prometheus for Contour, Gloo, NGINX, Skipper, or Traefik ingress:
Install Flagger and Prometheus for Contour, Gloo, NGINX, Skipper, APISIX or Traefik ingress:
```bash
kustomize build https://github.com/fluxcd/flagger/kustomize/kubernetes?ref=main | kubectl apply -f -
@ -220,7 +230,7 @@ metadata:
name: app
namespace: test
spec:
# can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo, traefik, osm
# can be: kubernetes, istio, linkerd, appmesh, nginx, skipper, gloo, traefik, osm, apisix
# use the kubernetes provider for Blue/Green style deployments
provider: nginx
```

View File

@ -0,0 +1,158 @@
# Flagger Install on Kubernetes with Flux
This guide walks you through setting up Flagger on a Kubernetes cluster the GitOps way.
You'll configure Flux to scan the Flagger OCI artifacts and deploy the
latest stable version on Kubernetes.
## Flagger OCI artifacts
Flagger OCI artifacts (container images, Helm charts, Kustomize overlays) are published to
GitHub Container Registry, and they are signed with Cosign at every release.
OCI artifacts
- `ghcr.io/fluxcd/flagger:<version>` multi-arch container images
- `ghcr.io/fluxcd/flagger-manifest:<version>` Kubernetes manifests
- `ghcr.io/fluxcd/charts/flagger:<version>` Helm charts
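To check a signature before deploying, you can verify an artifact with Cosign keyless verification; a sketch (the exact identity flags depend on your Cosign version, and `<version>` is a placeholder for a real tag):
```bash
cosign verify ghcr.io/fluxcd/flagger:<version> \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp="^https://github.com/fluxcd/flagger.*$"
```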
## Prerequisites
To follow this guide you'll need a Kubernetes cluster with Flux installed on it.
Please see the Flux [get started guide](https://fluxcd.io/flux/get-started/)
or the Flux [installation guide](https://fluxcd.io/flux/installation/).
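If Flux is already bootstrapped on the cluster, you can run a quick sanity check before proceeding:
```bash
# verifies the Flux prerequisites and controller health
flux check
```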
## Deploy Flagger with Flux
First define the namespace where Flagger will be installed:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: flagger-system
labels:
toolkit.fluxcd.io/tenant: sre-team
```
Define a Flux `HelmRepository` that points to where the Flagger Helm charts are stored:
```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: flagger
namespace: flagger-system
spec:
interval: 1h
url: oci://ghcr.io/fluxcd/charts
type: oci
```
Define a Flux `HelmRelease` that verifies and installs Flagger's latest version on the cluster:
```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: flagger
namespace: flagger-system
spec:
interval: 1h
releaseName: flagger
install: # override existing Flagger CRDs
crds: CreateReplace
upgrade: # update Flagger CRDs
crds: CreateReplace
chart:
spec:
chart: flagger
version: 1.x # update Flagger to the latest minor version
interval: 6h # scan for new versions every six hours
sourceRef:
kind: HelmRepository
name: flagger
verify: # verify the chart signature with Cosign keyless
provider: cosign
values:
nodeSelector:
kubernetes.io/os: linux
```
Copy the above manifests into a file called `flagger.yaml`, place the YAML file
in the Git repository bootstrapped with Flux, then commit and push it to upstream.
After Flux reconciles the changes on your cluster, you can check if Flagger got deployed with:
```console
$ helm list -n flagger-system
NAME NAMESPACE REVISION STATUS CHART APP VERSION
flagger flagger-system 1 deployed flagger-1.23.0 1.23.0
```
To uninstall Flagger, delete the `flagger.yaml` from your repository, then Flux will uninstall
the Helm release and will remove the namespace from your cluster.
## Deploy Flagger load tester with Flux
Flagger comes with a load testing service that generates traffic during analysis when configured as a webhook.
The load tester container images and deployment manifests are published to GitHub Container Registry.
The container images and the manifests are signed with Cosign and GitHub Actions OIDC.
Assuming the applications managed by Flagger are in the `apps` namespace, you can configure Flux to
deploy the load tester there.
Define a Flux `OCIRepository` that points to where the Flagger Kustomize overlays are stored:
```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
name: flagger-loadtester
namespace: apps
spec:
interval: 6h # scan for new versions every six hours
url: oci://ghcr.io/fluxcd/flagger-manifests
ref:
semver: 1.x # update to the latest version
verify: # verify the artifact signature with Cosign keyless
provider: cosign
```
Define a Flux `Kustomization` that deploys the Flagger load tester to the `apps` namespace:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
name: flagger-loadtester
namespace: apps
spec:
interval: 6h
wait: true
timeout: 5m
prune: true
sourceRef:
kind: OCIRepository
name: flagger-loadtester
path: ./tester
targetNamespace: apps
```
Copy the above manifests into a file called `flagger-loadtester.yaml`, place the YAML file
in the Git repository bootstrapped with Flux, then commit and push it to upstream.
After Flux reconciles the changes on your cluster, you can check if the load tester got deployed with:
```console
$ flux -n apps get kustomization flagger-loadtester
NAME READY MESSAGE
flagger-loadtester True Applied revision: v1.23.0/a80af71e001
```
To uninstall the load tester, delete the `flagger-loadtester.yaml` from your repository,
and Flux will delete the load tester deployment from the cluster.

View File

@ -0,0 +1,351 @@
# Apache APISIX Canary Deployments
This guide shows you how to use the [Apache APISIX](https://apisix.apache.org/) and Flagger to automate canary deployments.
![Flagger Apache APISIX Overview](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-apisix-overview.png)
## Prerequisites
Flagger requires a Kubernetes cluster **v1.19** or newer, Apache APISIX **v2.15** or newer, and Apache APISIX Ingress Controller **v1.5.0** or newer.
Install Apache APISIX and Apache APISIX Ingress Controller with Helm v3:
```bash
helm repo add apisix https://charts.apiseven.com
kubectl create ns apisix
helm upgrade -i apisix apisix/apisix --version=0.11.3 \
--namespace apisix \
--set apisix.podAnnotations."prometheus\.io/scrape"=true \
--set apisix.podAnnotations."prometheus\.io/port"=9091 \
--set apisix.podAnnotations."prometheus\.io/path"=/apisix/prometheus/metrics \
--set pluginAttrs.prometheus.export_addr.ip=0.0.0.0 \
--set pluginAttrs.prometheus.export_addr.port=9091 \
--set pluginAttrs.prometheus.export_uri=/apisix/prometheus/metrics \
--set pluginAttrs.prometheus.metric_prefix=apisix_ \
--set ingress-controller.enabled=true \
--set ingress-controller.config.apisix.serviceNamespace=apisix
```
Install Flagger and the Prometheus add-on in the same namespace as Apache APISIX:
```bash
helm repo add flagger https://flagger.app
helm upgrade -i flagger flagger/flagger \
--namespace apisix \
--set prometheus.install=true \
--set meshProvider=apisix
```
## Bootstrap
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler \(HPA\), then creates a series of objects \(Kubernetes deployments, ClusterIP services and an ApisixRoute\). These objects expose the application outside the cluster and drive the canary analysis and promotion.
Create a test namespace:
```bash
kubectl create ns test
```
Create a deployment and a horizontal pod autoscaler:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
```
Deploy the load testing service to generate traffic during the canary analysis:
```bash
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
```
Create an Apache APISIX `ApisixRoute`; Flagger will reference it and generate the canary Apache APISIX `ApisixRoute` \(replace `app.example.com` with your own domain\):
```yaml
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
name: podinfo
namespace: test
spec:
http:
- backends:
- serviceName: podinfo
servicePort: 80
match:
hosts:
- app.example.com
methods:
- GET
paths:
- /*
name: method
plugins:
- name: prometheus
enable: true
config:
disable: false
prefer_name: true
```
Save the above resource as podinfo-apisixroute.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-apisixroute.yaml
```
Create a canary custom resource \(replace `app.example.com` with your own domain\):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: apisix
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# apisix route reference
routeRef:
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
service:
# ClusterIP port number
port: 80
# container port number or name
targetPort: 9898
analysis:
# schedule interval (default 60s)
interval: 10s
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 10
# APISIX Prometheus checks
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
thresholdRange:
min: 99
interval: 1m
- name: request-duration
# builtin Prometheus check
# maximum req duration P99
# milliseconds
thresholdRange:
max: 500
interval: 30s
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
type: rollout
metadata:
cmd: |-
hey -z 1m -q 10 -c 2 -h2 -host app.example.com http://apisix-gateway.apisix/api/info
```
Save the above resource as podinfo-canary.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
After a couple of seconds Flagger will create the canary objects:
```bash
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
apisixroute/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
apisixroute/podinfo-podinfo-canary
```
## Automated canary promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
![Flagger Canary Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-steps.png)
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Conditions:
Message: Canary analysis completed successfully, promotion finished.
Reason: Succeeded
Status: True
Type: Promoted
Failed Checks: 1
Iterations: 0
Phase: Succeeded
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Synced 2m59s flagger podinfo-primary.test not ready: waiting for rollout to finish: observed deployment generation less than desired generation
Warning Synced 2m50s flagger podinfo-primary.test not ready: waiting for rollout to finish: 0 of 1 (readyThreshold 100%) updated replicas are available
Normal Synced 2m40s (x3 over 2m59s) flagger all the metrics providers are available!
Normal Synced 2m39s flagger Initialization done! podinfo.test
Normal Synced 2m20s flagger New revision detected! Scaling up podinfo.test
Warning Synced 2m (x2 over 2m10s) flagger canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 1 (readyThreshold 100%) updated replicas are available
Normal Synced 110s flagger Starting canary analysis for podinfo.test
Normal Synced 109s flagger Advance podinfo.test canary weight 10
Warning Synced 100s flagger Halt advancement no values found for apisix metric request-success-rate probably podinfo.test is not receiving traffic: running query failed: no values found
Normal Synced 90s flagger Advance podinfo.test canary weight 20
Normal Synced 80s flagger Advance podinfo.test canary weight 30
Normal Synced 69s flagger Advance podinfo.test canary weight 40
Normal Synced 59s flagger Advance podinfo.test canary weight 50
Warning Synced 30s (x2 over 40s) flagger podinfo-primary.test not ready: waiting for rollout to finish: 1 old replicas are pending termination
Normal Synced 9s (x3 over 50s) flagger (combined from similar events): Promotion completed! Scaling down podinfo.test
```
**Note** that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis.
You can monitor all canaries with:
```bash
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo-2 Progressing 10 2022-11-23T05:00:54Z
test podinfo Succeeded 0 2022-11-23T06:00:54Z
```
## Automated rollback
During the canary analysis you can generate HTTP 500 errors to test if Flagger pauses and rolls back the faulted version.
Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it deploy/flagger-loadtester bash
```
Generate HTTP 500 errors:
```bash
hey -z 1m -c 5 -q 5 -host app.example.com http://apisix-gateway.apisix/status/500
```
Generate latency:
```bash
watch -n 1 curl -H \"host: app.example.com\" http://apisix-gateway.apisix/delay/1
```
When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.
```text
kubectl -n apisix logs deploy/flagger -f | jq .msg
"New revision detected! Scaling up podinfo.test"
"canary deployment podinfo.test not ready: waiting for rollout to finish: 0 of 1 (readyThreshold 100%) updated replicas are available"
"Starting canary analysis for podinfo.test"
"Advance podinfo.test canary weight 10"
"Halt podinfo.test advancement success rate 0.00% < 99%"
"Halt podinfo.test advancement success rate 26.76% < 99%"
"Halt podinfo.test advancement success rate 34.19% < 99%"
"Halt podinfo.test advancement success rate 37.32% < 99%"
"Halt podinfo.test advancement success rate 39.04% < 99%"
"Halt podinfo.test advancement success rate 40.13% < 99%"
"Halt podinfo.test advancement success rate 48.28% < 99%"
"Halt podinfo.test advancement success rate 50.35% < 99%"
"Halt podinfo.test advancement success rate 56.92% < 99%"
"Halt podinfo.test advancement success rate 67.70% < 99%"
"Rolling back podinfo.test failed checks threshold reached 10"
"Canary failed! Scaling down podinfo.test"
```
## Custom metrics
The canary analysis can be extended with Prometheus queries.
Create a metric template and apply it on the cluster:
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: not-found-percentage
namespace: test
spec:
provider:
type: prometheus
address: http://flagger-prometheus.apisix:9090
query: |
sum(
rate(
apisix_http_status{
route=~"{{ namespace }}_{{ route }}-{{ target }}-canary_.+",
code!~"4.."
}[{{ interval }}]
)
)
/
sum(
rate(
apisix_http_status{
route=~"{{ namespace }}_{{ route }}-{{ target }}-canary_.+"
}[{{ interval }}]
)
) * 100
```
Edit the canary analysis and add the not found error rate check:
```yaml
analysis:
metrics:
- name: "404s percentage"
templateRef:
name: not-found-percentage
thresholdRange:
max: 5
interval: 1m
```
The above configuration validates the canary by checking if the HTTP 404 req/sec percentage is below 5 percent of the total traffic. If the 404s rate reaches the 5% threshold, then the canary fails.
The above procedures can be extended with more [custom metrics](../usage/metrics.md) checks, [webhooks](../usage/webhooks.md), [manual promotion](../usage/webhooks.md#manual-gating) approval and [Slack or MS Teams](../usage/alerting.md) notifications.

View File

@ -62,6 +62,9 @@ Create a canary definition:
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
annotations:
# Enable Envoy access logging to stdout.
appmesh.flagger.app/accesslog: enabled
name: podinfo
namespace: test
spec:
@ -77,7 +80,7 @@ spec:
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -168,7 +171,7 @@ virtualservice.appmesh.k8s.aws/podinfo
virtualservice.appmesh.k8s.aws/podinfo-canary
```
After the boostrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test`
After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test`
will be routed to the primary pods.
During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
@ -242,7 +245,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -307,7 +310,7 @@ Trigger a canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
@ -399,7 +402,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Flagger detects that the deployment revision changed and starts the A/B test:

View File

@ -50,7 +50,7 @@ helm upgrade -i frontend flagger/podinfo \
--set backend=http://backend.test:9898/echo \
--set canary.enabled=true \
--set canary.istioIngress.enabled=true \
--set canary.istioIngress.gateway=public-gateway.istio-system.svc.cluster.local \
--set canary.istioIngress.gateway=istio-system/public-gateway \
--set canary.istioIngress.host=frontend.istio.example.com
```
@ -278,7 +278,7 @@ spec:
enabled: true
istioIngress:
enabled: true
gateway: public-gateway.istio-system.svc.cluster.local
gateway: istio-system/public-gateway
host: frontend.istio.example.com
loadtest:
enabled: true
@ -320,7 +320,7 @@ After a couple of seconds Flux will apply the Kubernetes resources from Git and
A CI/CD pipeline for the `frontend` release could look like this:
* cut a release from the master branch of the podinfo code repo with the git tag `3.1.1`
* CI builds the image and pushes the `podinfo:3.1.1` image to the container registry
* CI builds the image and pushes the `podinfo:6.0.1` image to the container registry
* Flux scans the registry and updates the Helm release `image.tag` to `3.1.1`
* Flux commits and push the change to the cluster repo
* Flux applies the updated Helm release on the cluster
@ -343,5 +343,5 @@ A canary deployment can fail due to any of the following reasons:
* the Istio telemetry service is unable to collect traffic metrics
* the metrics server \(Prometheus\) can't be reached
If you want to find out more about managing Helm releases with Flux here are two in-depth guides: [gitops-helm](https://github.com/stefanprodan/gitops-helm) and [gitops-istio](https://github.com/stefanprodan/gitops-istio).
If you want to find out more about managing Helm releases with Flux here are two in-depth guides: [flux2-kustomize-helm-example](https://github.com/fluxcd/flux2-kustomize-helm-example) and [gitops-istio](https://github.com/stefanprodan/gitops-istio).

View File

@ -76,7 +76,7 @@ spec:
name: podinfo
# HPA reference
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -90,6 +90,8 @@ spec:
retries:
attempts: 3
perTryTimeout: 5s
# supported values for retryOn - https://projectcontour.io/docs/main/config/api/#projectcontour.io/v1.RetryOn
retryOn: "5xx"
# define the canary analysis timing and KPIs
analysis:
# schedule interval (default 60s)
@ -157,7 +159,7 @@ service/podinfo-primary
httpproxy.projectcontour.io/podinfo
```
After the boostrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods. During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods. During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
## Expose the app outside the cluster
@ -224,7 +226,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -281,7 +283,7 @@ Trigger a canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
@ -369,7 +371,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Flagger detects that the deployment revision changed and starts the A/B test:

View File

@ -0,0 +1,730 @@
# Gateway API Canary Deployments
This guide shows you how to use [Gateway API](https://gateway-api.sigs.k8s.io/) and Flagger to automate canary deployments and A/B testing.
![Flagger Canary Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-gatewayapi-canary.png)
## Prerequisites
Flagger requires a Kubernetes cluster **v1.19** or newer and any mesh/ingress that implements the `v1beta1` or the `v1` version of Gateway API.
We'll be using Istio for the sake of this tutorial, but you can use any other implementation.
Install the Gateway API CRDs
```bash
kubectl apply -k "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0"
```
Install Istio:
```bash
istioctl install --set profile=minimal -y
# Suggestion: replace release-1.20 in the command below with your actual Istio version.
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/prometheus.yaml
```
Install Flagger in the `flagger-system` namespace:
```bash
kubectl create ns flagger-system
helm repo add flagger https://flagger.app
helm upgrade -i flagger flagger/flagger \
--namespace flagger-system \
--set prometheus.install=false \
--set meshProvider=gatewayapi:v1 \
--set metricsServer=http://prometheus.istio-system:9090
```
> Note: The above installation sets the mesh provider to be `gatewayapi:v1`. If your Gateway API implementation uses the `v1beta1` CRDs, then
set the `meshProvider` value to `gatewayapi:v1beta1`.
Create a namespace for the `Gateway`:
```bash
kubectl create ns istio-ingress
```
Create a `Gateway` that configures load balancing, traffic ACL, etc:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
name: gateway
namespace: istio-ingress
spec:
gatewayClassName: istio
listeners:
- name: default
hostname: "*.example.com"
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
```
## Bootstrap
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler \(HPA\), then creates a series of objects \(Kubernetes deployments, ClusterIP services, HTTPRoutes for the Gateway\). These objects expose the application inside the mesh and drive the canary analysis and promotion.
Create a test namespace:
```bash
kubectl create ns test
```
Create a deployment and a horizontal pod autoscaler:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
```
Deploy the load testing service to generate traffic during the canary analysis:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
Create metric templates targeting the Prometheus server in the `flagger-system` namespace. The PromQL queries below are meant for `Envoy`, but you can [change it to your ingress/mesh provider](https://docs.flagger.app/faq#metrics) accordingly.
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: latency
namespace: flagger-system
spec:
provider:
type: prometheus
address: http://prometheus.istio-system:9090
query: |
histogram_quantile(0.99,
sum(
rate(
istio_request_duration_milliseconds_bucket{
reporter="source",
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}",
}[{{ interval }}]
)
) by (le)
)/1000
---
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: error-rate
namespace: flagger-system
spec:
provider:
type: prometheus
address: http://prometheus.istio-system:9090
query: |
100 - sum(
rate(
istio_requests_total{
reporter="source",
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}",
response_code!~"5.*"
}[{{ interval }}]
)
)
/
sum(
rate(
istio_requests_total{
reporter="source",
destination_workload_namespace=~"{{ namespace }}",
destination_workload=~"{{ target }}",
}[{{ interval }}]
)
)
* 100
```
Save the above resource as metric-templates.yaml and then apply it:
```bash
kubectl apply -f metric-templates.yaml
```
Create a canary custom resource \(replace "www.example.com" with your own domain\):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
# service port number
port: 9898
# container port number or name (optional)
targetPort: 9898
# Gateway API HTTPRoute host names
hosts:
- www.example.com
# Reference to the Gateway that the generated HTTPRoute would attach to.
gatewayRefs:
- name: gateway
namespace: istio-ingress
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 5
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 10
metrics:
- name: error-rate
# max error rate (5xx responses)
# percentage (0-100)
templateRef:
name: error-rate
namespace: flagger-system
thresholdRange:
max: 1
interval: 1m
- name: latency
templateRef:
name: latency
namespace: flagger-system
# seconds
thresholdRange:
max: 0.5
interval: 30s
# testing (optional)
webhooks:
- name: smoke-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: bash
cmd: "curl -sd 'anon' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 2m -q 10 -c 2 -host www.example.com http://gateway-istio.istio-ingress/"
```
Save the above resource as podinfo-canary.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every minute.
After a couple of seconds Flagger will create the canary objects:
```bash
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
httproutes.gateway.networking.k8s.io/podinfo
```
## Expose the app outside the cluster
Find the external address of Istio's load balancer:
```bash
export ADDRESS="$(kubectl -n istio-ingress get svc/gateway-istio -ojson \
| jq -r ".status.loadBalancer.ingress[].hostname")"
echo $ADDRESS
```
Configure your DNS server with a CNAME record \(AWS\) or A record \(GKE/AKS/DOKS\) and point a domain e.g. `www.example.com` to the LB address.
Now you can access the podinfo UI using your domain address.
Note that you should be using HTTPS when exposing production workloads on the internet. You can obtain free TLS certs from Let's Encrypt; read this
[guide](https://github.com/stefanprodan/istio-gke) on how to configure cert-manager to secure Istio with TLS certificates.
If you're using a local cluster via kind/k3s you can port forward the Envoy LoadBalancer service:
```bash
kubectl port-forward -n istio-ingress svc/gateway-istio 8080:80
```
Now you can access podinfo via `curl -H "Host: www.example.com" localhost:8080`
## Automated canary promotion
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 0
Phase: Succeeded
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger New revision detected podinfo.test
Normal Synced 3m flagger Scaling up podinfo.test
Warning Synced 3m flagger Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Normal Synced 2m flagger Advance podinfo.test canary weight 20
Normal Synced 2m flagger Advance podinfo.test canary weight 25
Normal Synced 1m flagger Advance podinfo.test canary weight 30
Normal Synced 1m flagger Advance podinfo.test canary weight 35
Normal Synced 55s flagger Advance podinfo.test canary weight 40
Normal Synced 45s flagger Advance podinfo.test canary weight 45
Normal Synced 35s flagger Advance podinfo.test canary weight 50
Normal Synced 25s flagger Copying podinfo.test template spec to podinfo-primary.test
Warning Synced 15s flagger Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Normal Synced 5s flagger Promotion completed! Scaling down podinfo.test
```
**Note** that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis.
A canary deployment is triggered by changes in any of the following objects:
* Deployment PodSpec \(container image, command, ports, env, resources, etc\)
* ConfigMaps mounted as volumes or mapped to environment variables
* Secrets mounted as volumes or mapped to environment variables
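For example (an illustrative sketch), updating an environment variable in the pod template also counts as a new revision and will trigger the analysis:
```bash
# podinfo reads its UI color from this environment variable
kubectl -n test set env deployment/podinfo PODINFO_UI_COLOR="#34577c"
```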
You can monitor how Flagger progressively changes the weights of the HTTPRoute object that is attached to the Gateway with:
```bash
watch kubectl get httproute -n test podinfo -o=jsonpath='{.spec.rules}'
```
You can monitor all canaries with:
```bash
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2022-01-16T14:05:07Z
prod frontend Succeeded 0 2022-01-15T16:15:07Z
prod backend Failed 0 2022-01-14T17:05:07Z
```
## Automated rollback
During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses the rollout.
Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it flagger-loadtester-xx-xx sh
```
Generate HTTP 500 errors:
```bash
watch curl http://podinfo-canary:9898/status/500
```
Generate latency:
```bash
watch curl http://podinfo-canary:9898/delay/1
```
When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 10
Phase: Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger Starting canary deployment for podinfo.test
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Normal Synced 3m flagger Halt podinfo.test advancement error rate 69.17% > 1%
Normal Synced 2m flagger Halt podinfo.test advancement error rate 61.39% > 1%
Normal Synced 2m flagger Halt podinfo.test advancement error rate 55.06% > 1%
Normal Synced 2m flagger Halt podinfo.test advancement error rate 47.00% > 1%
Normal Synced 2m flagger (combined from similar events): Halt podinfo.test advancement error rate 38.08% > 1%
Warning Synced 1m flagger Rolling back podinfo.test failed checks threshold reached 10
Warning Synced 1m flagger Canary failed! Scaling down podinfo.test
```
## Session Affinity
While Flagger can perform weighted routing and A/B testing individually, with Gateway API it can combine the two, resulting in a Canary
release with session affinity.
For more information you can read the [deployment strategies docs](../usage/deployment-strategies.md#canary-release-with-session-affinity).
> **Note:** The implementation must have support for the [`ResponseHeaderModifier`](https://github.com/kubernetes-sigs/gateway-api/blob/3d22aa5a08413222cb79e6b2e245870360434614/apis/v1beta1/httproute_types.go#L651) API.
Create a canary custom resource \(replace www.example.com with your own domain\):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
# service port number
port: 9898
# container port number or name (optional)
targetPort: 9898
# Gateway API HTTPRoute host names
hosts:
- www.example.com
# Reference to the Gateway that the generated HTTPRoute would attach to.
gatewayRefs:
- name: gateway
namespace: istio-ingress
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 5
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 10
# session affinity config
sessionAffinity:
# name of the cookie used
cookieName: flagger-cookie
# max age of the cookie (in seconds)
# optional; defaults to 86400
maxAge: 21600
metrics:
- name: error-rate
# max error rate (5xx responses)
# percentage (0-100)
templateRef:
name: error-rate
namespace: flagger-system
thresholdRange:
max: 1
interval: 1m
- name: latency
templateRef:
name: latency
namespace: flagger-system
# seconds
thresholdRange:
max: 0.5
interval: 30s
# testing (optional)
webhooks:
- name: smoke-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: bash
cmd: "curl -sd 'anon' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 2m -q 10 -c 2 -host www.example.com http://gateway-istio.istio-ingress/"
```
Save the above resource as podinfo-canary-session-affinity.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary-session-affinity.yaml
```
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
You can load `www.example.com` in your browser and refresh it until you see the requests being served by `podinfo:6.0.1`.
All subsequent requests after that will be served by `podinfo:6.0.1` and not `podinfo:6.0.0` because of the session affinity
configured by Flagger in the HTTPRoute object.
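Under the hood, the session affinity is expected to be implemented with a Gateway API `ResponseHeaderModifier` filter that sets the cookie on responses coming from the canary backend. A simplified, illustrative sketch of such an HTTPRoute rule (cookie value and weights are assumptions, not the literal object Flagger creates):
```yaml
# hypothetical excerpt of a generated HTTPRoute rule
backendRefs:
  - name: podinfo-canary
    port: 9898
    weight: 10
    filters:
      - type: ResponseHeaderModifier
        responseHeaderModifier:
          add:
            - name: Set-Cookie
              value: "flagger-cookie=<random-value>; Max-Age=21600"
  - name: podinfo-primary
    port: 9898
    weight: 90
```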
To configure stickiness for the Primary deployment to ensure fair weighted traffic routing, please
checkout the [deployment strategies docs](../usage/deployment-strategies.md#canary-release-with-session-affinity).
## A/B Testing
Besides weighted routing, Flagger can be configured to route traffic to the canary based on HTTP match conditions. In an A/B testing scenario, you'll be using HTTP headers or cookies to target a certain segment of your users. This is particularly useful for frontend applications that require session affinity.
![Flagger A/B Testing Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-abtest-steps.png)
Create a canary custom resource \(replace "www.example.com" with your own domain\):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
# service port number
port: 9898
# container port number or name (optional)
targetPort: 9898
# Gateway API HTTPRoute host names
hosts:
- www.example.com
# Reference to the Gateway that the generated HTTPRoute would attach to.
gatewayRefs:
- name: gateway
namespace: istio-ingress
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 5
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 10
metrics:
- name: error-rate
# max error rate (5xx responses)
# percentage (0-100)
templateRef:
name: error-rate
namespace: flagger-system
thresholdRange:
max: 1
interval: 1m
- name: latency
templateRef:
name: latency
namespace: flagger-system
# seconds
thresholdRange:
max: 0.5
interval: 30s
# testing (optional)
webhooks:
- name: smoke-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 15s
metadata:
type: bash
cmd: "curl -sd 'anon' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 2m -q 10 -c 2 -host www.example.com -H 'X-Canary: insider' http://gateway-istio.istio-ingress/"
```
The above configuration will run an analysis for ten minutes, targeting users that are flagged as insiders (for example via the `X-Canary: insider` header used by the load-test webhook).
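Note that the manifest above does not show the match conditions explicitly. In Flagger, A/B testing is typically configured by replacing `maxWeight`/`stepWeight` with `iterations` plus `match` conditions in the analysis; below is a hedged sketch of header-based targeting that mirrors the load-test command (exact syntax may vary by provider, so check the Flagger docs for yours):
```yaml
analysis:
  interval: 1m
  threshold: 5
  iterations: 10
  # route only requests carrying this header to the canary (assumed match syntax)
  match:
    - headers:
        x-canary:
          exact: "insider"
```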
Save the above resource as podinfo-ab-canary.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-ab-canary.yaml
```
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.3
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Status:
Failed Checks: 0
Phase: Succeeded
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger New revision detected podinfo.test
Normal Synced 3m flagger Scaling up podinfo.test
Warning Synced 3m flagger Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Normal Synced 3m flagger Advance podinfo.test canary iteration 1/10
Normal Synced 3m flagger Advance podinfo.test canary iteration 2/10
Normal Synced 3m flagger Advance podinfo.test canary iteration 3/10
Normal Synced 2m flagger Advance podinfo.test canary iteration 4/10
Normal Synced 2m flagger Advance podinfo.test canary iteration 5/10
Normal Synced 1m flagger Advance podinfo.test canary iteration 6/10
Normal Synced 1m flagger Advance podinfo.test canary iteration 7/10
Normal Synced 55s flagger Advance podinfo.test canary iteration 8/10
Normal Synced 45s flagger Advance podinfo.test canary iteration 9/10
Normal Synced 35s flagger Advance podinfo.test canary iteration 10/10
Normal Synced 25s flagger Copying podinfo.test template spec to podinfo-primary.test
Warning Synced 15s flagger Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Normal Synced 5s flagger Promotion completed! Scaling down podinfo.test
```
## Traffic mirroring
![Flagger Canary Traffic Shadowing](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-traffic-mirroring.png)
For applications that perform read operations, Flagger can be configured to do B/G tests with traffic mirroring.
Gateway API traffic mirroring will copy each incoming request, sending one request to the primary and one to the canary service.
The response from the primary is sent back to the user and the response from the canary is discarded.
Metrics are collected on both requests so that the deployment will only proceed if the canary metrics are within the threshold values.
Note that mirroring should be used for requests that are **idempotent** or capable of being processed twice \(once by the primary and once by the canary\).
You can enable mirroring by replacing `stepWeight` with `iterations` and by setting `analysis.mirror` to `true`:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
service:
# service port number
port: 9898
# container port number or name (optional)
targetPort: 9898
# Gateway API HTTPRoute host names
hosts:
- www.example.com
# Reference to the Gateway that the generated HTTPRoute would attach to.
gatewayRefs:
- name: gateway
namespace: istio-ingress
analysis:
# schedule interval
interval: 1m
# max number of failed metric checks before rollback
threshold: 5
# total number of iterations
iterations: 10
# enable traffic shadowing
mirror: true
metrics:
- name: request-success-rate
thresholdRange:
min: 99
interval: 1m
- name: request-duration
thresholdRange:
max: 500
interval: 1m
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 2m -q 10 -c 2 -host www.example.com http://gateway-istio.istio-ingress/"
```
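With mirroring enabled, the generated HTTPRoute is expected to carry a request-mirroring filter during the analysis iterations, copying each request to the canary service while the primary keeps serving the response. An illustrative sketch using the Gateway API `RequestMirror` filter (names assumed, not the literal object Flagger creates):
```yaml
# hypothetical excerpt of a mirroring rule
rules:
  - backendRefs:
      - name: podinfo-primary
        port: 9898
    filters:
      - type: RequestMirror
        requestMirror:
          backendRef:
            name: podinfo-canary
            port: 9898
```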
With the above configuration, Flagger will run a canary release with the following steps:
* detect new revision \(deployment spec, secrets or configmaps changes\)
* scale from zero the canary deployment
* wait for the HPA to set the canary minimum replicas
* check canary pods health
* run the acceptance tests
* abort the canary release if tests fail
* start the load tests
* mirror 100% of the traffic from primary to canary
* check request success rate and request duration every minute
* abort the canary release if the metrics check failure threshold is reached
* stop traffic mirroring after the number of iterations is reached
* route live traffic to the canary pods
* promote the canary \(update the primary secrets, configmaps and deployment spec\)
* wait for the primary deployment rollout to finish
* wait for the HPA to set the primary minimum replicas
* check primary pods health
* switch live traffic back to primary
* scale to zero the canary
* send notification with the canary analysis result
The above procedure can be extended with [custom metrics](../usage/metrics.md) checks, [webhooks](../usage/webhooks.md), [manual promotion](../usage/webhooks.md#manual-gating) approval and [Slack or MS Teams](../usage/alerting.md) notifications.

View File

@ -110,7 +110,7 @@ spec:
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -209,7 +209,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -264,7 +264,7 @@ Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Generate HTTP 500 errors:
@ -365,7 +365,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Generate 404s:
@ -427,7 +427,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.4
podinfod=ghcr.io/stefanprodan/podinfo:6.0.4
```
Flagger detects that the deployment revision changed and starts the A/B test:

View File

@ -15,7 +15,7 @@ Install Istio with telemetry support and Prometheus:
```bash
istioctl manifest install --set profile=default
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/addons/prometheus.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/prometheus.yaml
```
Install Flagger in the `istio-system` namespace:
@ -84,7 +84,7 @@ spec:
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -92,7 +92,7 @@ spec:
port: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
# Istio virtual service host names (optional)
hosts:
- app.example.com
@ -173,13 +173,13 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/abtest
kubectl -n test describe canary/podinfo
Status:
Failed Checks: 0

View File

@ -13,7 +13,8 @@ Install Istio with telemetry support and Prometheus:
```bash
istioctl manifest install --set profile=default
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/addons/prometheus.yaml
# Suggestion: Please change release-1.8 in below command, to your real istio version.
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/addons/prometheus.yaml
```
Install Flagger in the `istio-system` namespace:
@ -84,7 +85,7 @@ spec:
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -94,7 +95,7 @@ spec:
targetPort: 9898
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
# Istio virtual service host names (optional)
hosts:
- app.example.com
@ -185,7 +186,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -245,7 +246,7 @@ Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
@ -291,6 +292,118 @@ Events:
Warning Synced 1m flagger Canary failed! Scaling down podinfo.test
```
## Session Affinity
While Flagger can perform weighted routing and A/B testing individually, with Istio it can combine the two, resulting in a Canary
release with session affinity. For more information you can read the [deployment strategies docs](../usage/deployment-strategies.md#canary-release-with-session-affinity).
Create a canary custom resource \(replace app.example.com with your own domain\):
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
# service port number
port: 9898
# container port number or name (optional)
targetPort: 9898
# Istio gateways (optional)
gateways:
- istio-system/public-gateway
# Istio virtual service host names (optional)
hosts:
- app.example.com
# Istio traffic policy (optional)
trafficPolicy:
tls:
# use ISTIO_MUTUAL when mTLS is enabled
mode: DISABLE
# Istio retry policy (optional)
retries:
attempts: 3
perTryTimeout: 1s
retryOn: "gateway-error,connect-failure,refused-stream"
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 5
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 10
# session affinity config
sessionAffinity:
# name of the cookie used
cookieName: flagger-cookie
# max age of the cookie (in seconds)
# optional; defaults to 86400
maxAge: 21600
metrics:
- name: request-success-rate
# minimum req success rate (non 5xx responses)
# percentage (0-100)
thresholdRange:
min: 99
interval: 1m
- name: request-duration
# maximum req duration P99
# milliseconds
thresholdRange:
max: 500
interval: 30s
# testing (optional)
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token"
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```
Save the above resource as podinfo-canary-session-affinity.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary-session-affinity.yaml
```
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
You can load `app.example.com` in your browser and refresh it until you see the requests being served by `podinfo:6.0.1`.
All subsequent requests after that will be served by `podinfo:6.0.1` and not `podinfo:6.0.0` because of the session affinity
configured by Flagger with Istio.
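With Istio, the cookie is expected to be set through the response headers of the canary destination in the generated VirtualService. A simplified, illustrative sketch (cookie value and weights are assumptions, not the literal object Flagger creates):
```yaml
# hypothetical excerpt of the generated VirtualService HTTP route
http:
  - route:
      - destination:
          host: podinfo-canary
        weight: 10
        headers:
          response:
            add:
              Set-Cookie: "flagger-cookie=<random-value>; Max-Age=21600"
      - destination:
          host: podinfo-primary
        weight: 90
```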
## Traffic mirroring
![Flagger Canary Traffic Shadowing](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-traffic-mirroring.png)
@ -367,3 +480,61 @@ With the above configuration, Flagger will run a canary release with the followi
The above procedure can be extended with [custom metrics](../usage/metrics.md) checks, [webhooks](../usage/webhooks.md), [manual promotion](../usage/webhooks.md#manual-gating) approval and [Slack or MS Teams](../usage/alerting.md) notifications.
## Canary Deployments for TCP Services
Performing a Canary deployment on a TCP (non-HTTP) service is nearly identical to an HTTP Canary. Besides updating your `Gateway` document to support `TCP` routing, the only difference is that you have to set the `appProtocol` field to `TCP` inside the `service` section of your `Canary` document.
#### Example:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 7070
name: tcp-service
protocol: TCP # <== set the protocol to tcp here
hosts:
- "*"
```
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
# omitted for brevity
spec:
service:
port: 7070
appProtocol: TCP # <== set the appProtocol here
targetPort: 7070
portName: "tcp-service-port"
```
If the `appProtocol` equals `TCP` then Flagger will treat this as a Canary deployment for a `TCP` service. When it creates the `VirtualService` document it will add a `TCP` section to route requests between the `primary` and `canary` services. See Istio documentation for more information on this [spec](https://istio.io/latest/docs/reference/config/networking/virtual-service/#TCPRoute).
The resulting `VirtualService` will include a `tcp` section similar to what is shown below:
```yaml
tcp:
- route:
- destination:
host: tcp-service-primary
port:
number: 7070
weight: 100
- destination:
host: tcp-service-canary
port:
number: 7070
weight: 0
```
Once the Canary analysis begins, Flagger will be able to adjust the weights inside of this `tcp` section to advance the Canary deployment until it either runs into an error (and is halted) or it successfully reaches the end of the analysis and is Promoted.
It is also important to note that if you set `appProtocol` to anything other than `TCP` (for example `HTTP`), or if you do not set it at all, Flagger will perform the Canary and treat it as an `HTTP` service. It will __ONLY__ treat a Canary as a `TCP` service if `appProtocol` equals `TCP`.

View File

@ -0,0 +1,243 @@
# Canary analysis with KEDA ScaledObjects
This guide shows you how to use Flagger with KEDA ScaledObjects to autoscale workloads during a Canary analysis run.
We will be using a Blue/Green deployment strategy with the Kubernetes provider for the sake of this tutorial, but
you can use any deployment strategy combined with any supported provider.
## Prerequisites
Flagger requires a Kubernetes cluster **v1.16** or newer. For this tutorial, we'll need KEDA **2.7.1** or newer.
Install KEDA:
```bash
helm repo add kedacore https://kedacore.github.io/charts
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
```
Install Flagger:
```bash
helm repo add flagger https://flagger.app
kubectl create namespace flagger
helm upgrade -i flagger flagger/flagger \
--namespace flagger \
--set prometheus.install=true \
--set meshProvider=kubernetes
```
## Bootstrap
Flagger takes a Kubernetes deployment and a KEDA ScaledObject targeting the deployment. It then creates a series of objects
(Kubernetes deployments, ClusterIP services and another KEDA ScaledObject targeting the created Deployment).
These objects expose the application inside the mesh and drive the Canary analysis and Blue/Green promotion.
Create a test namespace:
```bash
kubectl create ns test
```
Create a deployment named `podinfo`:
```bash
kubectl apply -n test -f https://raw.githubusercontent.com/fluxcd/flagger/main/kustomize/podinfo/deployment.yaml
```
Deploy the load testing service to generate traffic during the analysis:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
Create a ScaledObject which targets the `podinfo` deployment and uses Prometheus as a trigger:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: podinfo-so
namespace: test
spec:
scaleTargetRef:
name: podinfo
pollingInterval: 10
cooldownPeriod: 20
minReplicaCount: 1
maxReplicaCount: 3
triggers:
- type: prometheus
metadata:
name: prom-trigger
serverAddress: http://flagger-prometheus.flagger:9090
metricName: http_requests_total
query: sum(rate(http_requests_total{ app="podinfo" }[30s]))
threshold: '5'
```
Create a canary custom resource for the `podinfo` deployment:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: kubernetes
# deployment reference
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
# Scaler reference
autoscalerRef:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
# ScaledObject targeting the canary deployment
name: podinfo-so
# Mapping between trigger names and the related query to use for the generated
# ScaledObject targeting the primary deployment. (Optional)
primaryScalerQueries:
prom-trigger: sum(rate(http_requests_total{ app="podinfo-primary" }[30s]))
# Overriding replica scaling configuration for the generated ScaledObject
# targeting the primary deployment. (Optional)
primaryScalerReplicas:
minReplicas: 2
maxReplicas: 5
# the maximum time in seconds for the canary deployment
# to make progress before rollback (default 600s)
progressDeadlineSeconds: 60
service:
port: 80
targetPort: 9898
name: podinfo-svc
portDiscovery: true
analysis:
# schedule interval (default 60s)
interval: 15s
# max number of failed checks before rollback
threshold: 5
# number of checks to run before promotion
iterations: 5
# Prometheus checks based on
# http_request_duration_seconds histogram
metrics:
- name: request-success-rate
interval: 1m
thresholdRange:
min: 99
- name: request-duration
interval: 30s
thresholdRange:
max: 500
# load testing hooks
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 2m -q 20 -c 2 http://podinfo-svc-canary.test/"
```
Save the above resource as `podinfo-canary.yaml` and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
After a couple of seconds Flagger will create the canary objects:
```bash
# applied
deployment.apps/podinfo
scaledobject.keda.sh/podinfo-so
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
scaledobject.keda.sh/podinfo-so-primary
```
We refer to our ScaledObject for the canary deployment using `.spec.autoscalerRef`. Flagger will use this to generate a ScaledObject which will scale the primary deployment.
By default, Flagger will try to guess the query to use for the primary ScaledObject, by replacing all mentions of `.spec.targetRef.Name` and `{.spec.targetRef.Name}-canary`
with `{.spec.targetRef.Name}-primary`, for all triggers.
For example, if your ScaledObject has a trigger query defined as `sum(rate(http_requests_total{ app="podinfo" }[30s]))` or `sum(rate(http_requests_total{ app="podinfo-canary" }[30s]))`, then the primary ScaledObject will have the same trigger with a query defined as `sum(rate(http_requests_total{ app="podinfo-primary" }[30s]))`.
If the generated query does not meet your requirements, you can specify the query for autoscaling the primary deployment explicitly using
`.spec.autoscalerRef.primaryScalerQueries`, which lets you define a query for each trigger. Please note that your ScaledObject's `.spec.triggers[@].name` must
not be blank, as Flagger needs it to identify each trigger uniquely.
If you want a different replica scaling configuration for the canary and primary deployment ScaledObjects, you can use
`.spec.autoscalerRef.primaryScalerReplicas` to override these values for the generated primary ScaledObject.
After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods. To keep the podinfo deployment
at 0 replicas and pause autoscaling, Flagger will add an annotation to your ScaledObject: `autoscaling.keda.sh/paused-replicas: 0`.
During the canary analysis, the annotation is removed to enable autoscaling for the podinfo deployment.
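For illustration, this is roughly how the canary ScaledObject looks while paused; only the annotation name is taken from the paragraph above, the rest is abridged from the manifest earlier in this tutorial:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: podinfo-so
  namespace: test
  annotations:
    # keeps the canary deployment at zero replicas outside of an analysis
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  scaleTargetRef:
    name: podinfo
```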
The `podinfo-canary.test` address can be used to target directly the canary pods.
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The Blue/Green deployment will run for five iterations while validating the HTTP metrics and rollout hooks every 15 seconds.
## Automated Blue/Green promotion
Trigger a deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Events:
New revision detected podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Pre-rollout check acceptance-test passed
Advance podinfo.test canary iteration 1/10
Advance podinfo.test canary iteration 2/10
Advance podinfo.test canary iteration 3/10
Advance podinfo.test canary iteration 4/10
Advance podinfo.test canary iteration 5/10
Advance podinfo.test canary iteration 6/10
Advance podinfo.test canary iteration 7/10
Advance podinfo.test canary iteration 8/10
Advance podinfo.test canary iteration 9/10
Advance podinfo.test canary iteration 10/10
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
```
**Note** that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis.
You can monitor all canaries with:
```bash
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 100 2019-06-16T14:05:07Z
```
You can monitor the scaling of the deployments with:
```bash
watch kubectl -n test get deploy podinfo
NAME READY UP-TO-DATE AVAILABLE AGE
flagger-loadtester 1/1 1 1 4m21s
podinfo 3/3 3 3 4m28s
podinfo-primary 3/3 3 3 3m14s
```
You can monitor how Flagger edits the annotations of your ScaledObject with:
```bash
watch "kubectl get -n test scaledobjects podinfo-so -o=jsonpath='{.metadata.annotations}'"
```

View File

@ -0,0 +1,249 @@
# Knative Canary Deployments
This guide shows you how to use [Knative](https://knative.dev/) and Flagger to automate canary deployments.
![Flagger Canary Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-gatewayapi-canary.png)
## Prerequisites
Flagger requires a Kubernetes cluster **v1.19** or newer and a Knative Serving installation that supports
the resources with `serving.knative.dev/v1` as their API version.
Install Knative v1.17.0:
```bash
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.17.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.17.0/serving-core.yaml
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.17.0/kourier.yaml
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```
Install Flagger in the `flagger-system` namespace:
```bash
kubectl apply -k github.com/fluxcd/flagger//kustomize/knative
```
Create a namespace for your Knative Service:
```bash
kubectl create namespace test
```
Create a Knative Service that deploys podinfo:
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: podinfo
namespace: test
spec:
template:
spec:
containers:
- image: ghcr.io/stefanprodan/podinfo:6.0.0
ports:
- containerPort: 9898
protocol: TCP
command:
- ./podinfo
- --port=9898
- --port-metrics=9797
- --grpc-port=9999
- --grpc-service-name=podinfo
- --level=info
- --random-delay=false
- --random-error=false
```
Deploy the load testing service to generate traffic during the canary analysis:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
Create a Canary custom resource:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
spec:
provider: knative
# knative service ref
targetRef:
apiVersion: serving.knative.dev/v1
kind: Service
name: podinfo
# the maximum time in seconds for the canary deployment
# to make progress before it is rolled back (default 600s)
progressDeadlineSeconds: 60
analysis:
# schedule interval (default 60s)
interval: 15s
# max number of failed metric checks before rollback
threshold: 15
# max traffic percentage routed to canary
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 10
metrics:
- name: request-success-rate
# min success rate (non-5xx responses)
# percentage (0-100)
thresholdRange:
min: 99
interval: 1m
- name: request-duration
# milliseconds
thresholdRange:
max: 500
interval: 1m
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
type: cmd
cmd: "hey -z 1m -q 5 -c 2 http://podinfo.test"
logCmdOutput: "true"
```
> **Note:** A Canary resource with `.spec.provider` set to `knative` is only valid if
> `.spec.targetRef.kind` is `Service` and `.spec.targetRef.apiVersion` is `serving.knative.dev/v1`.
Save the above resource as podinfo-canary.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary.
The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every minute.
After a couple of seconds Flagger will make the following changes to the Knative Service `podinfo`:
* Add an annotation to the object with the name `flagger.app/primary-revision`.
* Modify the `.spec.traffic` section of the object so that it can control the traffic spread between
the primary and canary Knative Revisions, as sketched below.
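A hedged sketch of the kind of `.spec.traffic` split Flagger manages on the Knative Service (revision names and percentages are assumptions):
```yaml
spec:
  traffic:
    # primary revision keeps most of the traffic (revision name assumed)
    - revisionName: podinfo-00001
      percent: 90
    # latest (canary) revision receives the shifted weight
    - latestRevision: true
      percent: 10
```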
## Automated canary promotion
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 0
Phase: Succeeded
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger New revision detected podinfo.test
Normal Synced 3m flagger Scaling up podinfo.test
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Normal Synced 2m flagger Advance podinfo.test canary weight 20
Normal Synced 2m flagger Advance podinfo.test canary weight 25
Normal Synced 1m flagger Advance podinfo.test canary weight 30
Normal Synced 1m flagger Advance podinfo.test canary weight 35
Normal Synced 55s flagger Advance podinfo.test canary weight 40
Normal Synced 45s flagger Advance podinfo.test canary weight 45
Normal Synced 35s flagger Advance podinfo.test canary weight 50
Normal Synced 25s flagger Copying podinfo.test template spec to podinfo-primary.test
Normal Synced 5s flagger Promotion completed! Scaling down podinfo.test
```
A canary deployment is triggered every time a new Knative Revision is created.
**Note** that if you apply new changes to the Knative Service during the canary analysis, Flagger will restart the analysis.
You can monitor how Flagger progressively changes the Knative Service object to spread traffic between Knative Revisions:
```bash
watch kubectl -n test get ksvc podinfo -o=jsonpath='{.spec.traffic}'
```
You can monitor all canaries with:
```bash
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2025-03-16T14:05:07Z
prod frontend Succeeded 0 2025-03-16T16:15:07Z
prod backend Failed 0 2025-03-16T17:05:07Z
```
## Automated rollback
During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses the rollout.
Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it flagger-loadtester-xx-xx sh
```
Generate HTTP 500 errors:
```bash
watch curl http://podinfo-canary:9898/status/500
```
Generate latency:
```bash
watch curl http://podinfo-canary:9898/delay/1
```
When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary
Knative Revision and the rollout is marked as failed.
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 10
Phase: Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Synced 3m flagger Starting canary deployment for podinfo.test
Normal Synced 3m flagger Advance podinfo.test canary weight 5
Normal Synced 3m flagger Advance podinfo.test canary weight 10
Normal Synced 3m flagger Advance podinfo.test canary weight 15
Normal Synced 3m flagger Halt podinfo.test advancement error rate 69.17% > 1%
Normal Synced 2m flagger Halt podinfo.test advancement error rate 61.39% > 1%
Normal Synced 2m flagger Halt podinfo.test advancement error rate 55.06% > 1%
Normal Synced 2m flagger Halt podinfo.test advancement error rate 47.00% > 1%
Normal Synced 2m flagger (combined from similar events): Halt podinfo.test advancement error rate 38.08% > 1%
Warning Synced 1m flagger Rolling back podinfo.test failed checks threshold reached 10
Warning Synced 1m flagger Canary failed! Scaling down podinfo.test
```

View File

@ -59,7 +59,8 @@ kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
Deploy the load testing service to generate traffic during the analysis:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
```
Create a canary custom resource:
@ -83,7 +84,7 @@ spec:
progressDeadlineSeconds: 60
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -171,7 +172,7 @@ Trigger a deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -311,7 +312,7 @@ Trigger a deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Generate 404s:

View File

@ -0,0 +1,252 @@
# Kuma Canary Deployments
This guide shows you how to use Kuma and Flagger to automate canary deployments.
![Flagger Kuma Canary](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-kuma-canary.png)
## Prerequisites
Flagger requires a Kubernetes cluster **v1.19** or newer and Kuma **1.7** or newer.
Install Kuma and Prometheus (part of Kuma Metrics):
```bash
kumactl install control-plane | kubectl apply -f -
kumactl install observability --components "grafana,prometheus" | kubectl apply -f -
```
Install Flagger in the `kong-mesh-system` namespace:
```bash
kubectl apply -k github.com/fluxcd/flagger//kustomize/kuma
```
## Bootstrap
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services and Kuma `TrafficRoute`).
These objects expose the application inside the mesh and drive the canary analysis and promotion.
Create a test namespace and enable Kuma sidecar injection:
```bash
kubectl create ns test
kubectl annotate namespace test kuma.io/sidecar-injection=enabled
```
Install the load testing service to generate traffic during the canary analysis:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
Create a deployment and a horizontal pod autoscaler:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
```
Create a canary custom resource for the `podinfo` deployment:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
name: podinfo
namespace: test
annotations:
kuma.io/mesh: default
spec:
targetRef:
apiVersion: apps/v1
kind: Deployment
name: podinfo
progressDeadlineSeconds: 60
service:
port: 9898
targetPort: 9898
apex:
annotations:
9898.service.kuma.io/protocol: "http"
canary:
annotations:
9898.service.kuma.io/protocol: "http"
primary:
annotations:
9898.service.kuma.io/protocol: "http"
analysis:
# schedule interval (default 60s)
interval: 30s
# max number of failed metric checks before rollback
threshold: 5
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 5
metrics:
- name: request-success-rate
threshold: 99
interval: 1m
- name: request-duration
threshold: 500
interval: 30s
webhooks:
- name: acceptance-test
type: pre-rollout
url: http://flagger-loadtester.test/
timeout: 30s
metadata:
type: bash
cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
- name: load-test
type: rollout
url: http://flagger-loadtester.test/
metadata:
cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"
```
Save the above resource as `podinfo-canary.yaml` and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every half a minute.
After a couple of seconds Flagger will create the canary objects:
```bash
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
ingresses.extensions/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
trafficroutes.kuma.io/podinfo
```
After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods. During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
## Automated canary promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP requests success rate, requests average duration and pod health. Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack.
![Flagger Canary Stages](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-canary-steps.png)
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 0
Phase: Succeeded
Events:
New revision detected! Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Pre-rollout check acceptance-test passed
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Waiting for podinfo.test rollout to finish: 1 of 2 updated replicas are available
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
```
**Note** that if you apply new changes to the deployment during the canary analysis, Flagger will restart the analysis.
A canary deployment is triggered by changes in any of the following objects:
* Deployment PodSpec \(container image, command, ports, env, resources, etc\)
* ConfigMaps mounted as volumes or mapped to environment variables
* Secrets mounted as volumes or mapped to environment variables
You can monitor all canaries with:
```bash
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-06-30T14:05:07Z
prod frontend Succeeded 0 2019-06-30T16:15:07Z
prod backend Failed 0 2019-06-30T17:05:07Z
```
## Automated rollback
During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses and rolls back the faulty version.
Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it flagger-loadtester-xx-xx sh
```
Generate HTTP 500 errors:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/status/500
```
Generate latency:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/delay/1
```
When the number of failed checks reaches the canary analysis threshold, the traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.
```text
kubectl -n test describe canary/podinfo
Status:
Canary Weight: 0
Failed Checks: 10
Phase: Failed
Events:
Starting canary analysis for podinfo.test
Pre-rollout check acceptance-test passed
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement request duration 1.20s > 0.5s
Halt podinfo.test advancement request duration 1.45s > 0.5s
Rolling back podinfo.test failed checks threshold reached 5
Canary failed! Scaling down podinfo.test
```
The above procedures can be extended with [custom metrics](../usage/metrics.md) checks, [webhooks](../usage/webhooks.md), [manual promotion](../usage/webhooks.md#manual-gating) approval and [Slack or MS Teams](../usage/alerting.md) notifications.

View File

@ -2,25 +2,54 @@
This guide shows you how to use Linkerd and Flagger to automate canary deployments.
![Flagger Linkerd Traffic Split](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/diagrams/flagger-linkerd-traffic-split.png)
## Prerequisites
Flagger requires a Kubernetes cluster **v1.16** or newer and Linkerd **2.10** or newer.
Flagger requires a Kubernetes cluster **v1.21** or newer and Linkerd **2.14** or newer.
Install Linkerd the Promethues (part of Linkerd Viz):
Install Linkerd and Prometheus (part of Linkerd Viz):
```bash
# The CRDs need to be installed beforehand
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd viz install | kubectl apply -f -
# For linkerd versions 2.12 and later, the SMI extension needs to be installed in
# order to enable TrafficSplits
curl -sL https://linkerd.github.io/linkerd-smi/install | sh
linkerd smi install | kubectl apply -f -
```
Install Flagger in the linkerd namespace:
Install Flagger in the flagger-system namespace:
```bash
kubectl apply -k github.com/fluxcd/flagger//kustomize/linkerd
```
If you prefer Helm, these are the commands to install Linkerd, Linkerd Viz,
Linkerd-SMI and Flagger:
```bash
helm repo add linkerd https://helm.linkerd.io/stable
helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
# See https://linkerd.io/2/tasks/generate-certificates/ for how to generate the
# certs referred below
helm install linkerd-control-plane linkerd/linkerd-control-plane \
-n linkerd \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key
helm install linkerd-viz linkerd/linkerd-viz -n linkerd-viz --create-namespace
helm install flagger flagger/flagger \
-n flagger-system \
--set meshProvider=gatewayapi:v1beta1 \
--set metricsServer=http://prometheus.linkerd-viz:9090 \
--set linkerdAuthPolicy.create=true
```
## Bootstrap
Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
@ -46,9 +75,65 @@ Create a deployment and a horizontal pod autoscaler:
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
```
Create a canary custom resource for the podinfo deployment:
Create a metrics template and canary custom resources for the podinfo deployment:
```yaml
---
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: success-rate
namespace: test
spec:
provider:
type: prometheus
address: http://prometheus.linkerd-viz:9090
query: |
sum(
rate(
response_total{
namespace="{{ namespace }}",
deployment=~"{{ target }}",
classification!="failure",
direction="{{ variables.direction }}"
}[{{ interval }}]
)
)
/
sum(
rate(
response_total{
namespace="{{ namespace }}",
deployment=~"{{ target }}",
direction="{{ variables.direction }}"
}[{{ interval }}]
)
)
* 100
---
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: latency
namespace: test
spec:
provider:
type: prometheus
address: http://prometheus.linkerd-viz:9090
query: |
histogram_quantile(
0.99,
sum(
rate(
response_latency_ms_bucket{
namespace="{{ namespace }}",
deployment=~"{{ target }}",
direction="{{ variables.direction }}"
}[{{ interval }}]
)
) by (le)
)
---
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
@ -62,7 +147,7 @@ spec:
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment
@ -73,6 +158,13 @@ spec:
port: 9898
# container port number or name (optional)
targetPort: 9898
# Reference to the Service that the generated HTTPRoute would attach to.
gatewayRefs:
- name: podinfo
namespace: test
group: core
kind: Service
port: 9898
analysis:
# schedule interval (default 60s)
interval: 30s
@ -86,18 +178,28 @@ spec:
stepWeight: 5
# Linkerd Prometheus checks
metrics:
- name: request-success-rate
- name: success-rate
templateRef:
name: success-rate
namespace: test
# minimum req success rate (non 5xx responses)
# percentage (0-100)
thresholdRange:
min: 99
interval: 1m
- name: request-duration
templateVariables:
direction: inbound
- name: latency
templateRef:
name: latency
namespace: test
# maximum req duration P99
# milliseconds
thresholdRange:
max: 500
interval: 30s
templateVariables:
direction: inbound
# testing (optional)
webhooks:
- name: acceptance-test
@ -140,7 +242,7 @@ service/podinfo-primary
trafficsplits.split.smi-spec.io/podinfo
```
After the boostrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods. During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
After the bootstrap, the podinfo deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods. During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
## Automated canary promotion
@ -152,7 +254,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -211,7 +313,7 @@ Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
@ -297,7 +399,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Generate 404s:
@ -309,7 +411,7 @@ watch -n 1 curl http://podinfo-canary:9898/status/404
Watch Flagger logs:
```text
kubectl -n linkerd logs deployment/flagger -f | jq .msg
kubectl -n flagger-system logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Pre-rollout check acceptance-test passed
@ -390,7 +492,7 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
service:
@ -442,7 +544,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.4
podinfod=ghcr.io/stefanprodan/podinfo:6.0.4
```
Flagger detects that the deployment revision changed and starts the A/B testing:

View File

@ -110,7 +110,7 @@ spec:
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment
@ -192,7 +192,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout:
@ -246,7 +246,7 @@ Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Generate HTTP 500 errors:
@ -334,7 +334,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Generate high response latency:
@ -407,7 +407,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.4
podinfod=ghcr.io/stefanprodan/podinfo:6.0.4
```
Flagger detects that the deployment revision changed and starts the A/B testing:

View File

@ -86,7 +86,7 @@ spec:
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment
@ -166,7 +166,7 @@ service/podinfo-primary
trafficsplits.split.smi-spec.io/podinfo
```
After the boostrap, the `podinfo` deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods.
After the bootstrap, the `podinfo` deployment will be scaled to zero and the traffic to `podinfo.test` will be routed to the primary pods.
During the canary analysis, the `podinfo-canary.test` address can be used to target directly the canary pods.
## Automated Canary Promotion
@ -180,7 +180,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.1
podinfod=ghcr.io/stefanprodan/podinfo:6.0.1
```
Flagger detects that the deployment revision changed and starts a new rollout.
@ -240,7 +240,7 @@ Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.2
podinfod=ghcr.io/stefanprodan/podinfo:6.0.2
```
Exec into the load tester pod with:
@ -330,7 +330,7 @@ Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
podinfod=stefanprodan/podinfo:3.1.3
podinfod=ghcr.io/stefanprodan/podinfo:6.0.3
```
Exec into the load tester pod with:

View File

@ -113,7 +113,7 @@ spec:
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment

View File

@ -103,7 +103,7 @@ spec:
name: podinfo
# HPA reference (optional)
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
# the maximum time in seconds for the canary deployment

View File

@ -139,7 +139,7 @@ Note that without resource requests the horizontal pod autoscaler can't determin
A production environment should be able to handle traffic bursts without impacting the quality of service. This can be achieved with Kubernetes autoscaling capabilities. Autoscaling in Kubernetes has two dimensions: the Cluster Autoscaler that deals with node scaling operations and the Horizontal Pod Autoscaler that automatically scales the number of pods in a deployment.
```yaml
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
spec:
scaleTargetRef:
@ -172,7 +172,7 @@ spec:
service:
port: 9898
gateways:
- public-gateway.istio-system.svc.cluster.local
- istio-system/public-gateway
hosts:
- app.example.com
retries:

View File

@ -20,9 +20,10 @@ Once the webhook has been generated. Flagger can be configured to send Slack not
```bash
helm upgrade -i flagger flagger/flagger \
--set slack.url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK \
--set slack.proxy-url=my-http-proxy.com \ # optional http/s proxy
--set slack.proxy=my-http-proxy.com \ # optional http/s proxy
--set slack.channel=general \
--set slack.user=flagger
--set slack.user=flagger \
--set clusterName=my-cluster
```
Once configured with a Slack incoming **webhook**,
@ -36,6 +37,8 @@ or if the analysis reached the maximum number of failed checks:
![Slack Notifications](https://raw.githubusercontent.com/fluxcd/flagger/main/docs/screens/slack-canary-failed.png)
For using a Slack bot token, you should add `token` to a secret and use **secretRef**.
### Microsoft Teams
Flagger can be configured to send notifications to Microsoft Teams:
@ -72,6 +75,7 @@ spec:
channel: on-call-alerts
username: flagger
# webhook address (ignored if secretRef is specified)
# or https://slack.com/api/chat.postMessage if you use token in the secret
address: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
# optional http/s proxy
proxy: http://my-http-proxy.com
@ -86,6 +90,7 @@ metadata:
namespace: flagger
data:
address: <encoded-url>
token: <encoded-token>
```
The alert provider **type** can be: `slack`, `msteams`, `rocket` or `discord`. When set to `discord`,
@ -127,6 +132,9 @@ Alert fields:
When the severity is set to `warn`, Flagger will alert when waiting on manual confirmation or if the analysis fails.
When the severity is set to `error`, Flagger will alert only if the canary analysis fails.
To differentiate alerts based on the cluster name, you can configure Flagger with the `-cluster-name=my-cluster`
command flag, or with Helm `--set clusterName=my-cluster`.
## Prometheus Alert Manager
You can use Alertmanager to trigger alerts when a canary deployment failed:
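A minimal sketch of such a rule, assuming Flagger's metrics endpoint is scraped by Prometheus; the expression uses the `flagger_canary_status` gauge, where values above 1 are taken to indicate a failed run:
```yaml
# Prometheus alerting rule sketch (metric semantics assumed as described above)
groups:
  - name: flagger
    rules:
      - alert: canary_rollback
        expr: flagger_canary_status > 1
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Canary failed"
          description: "Workload {{ $labels.name }} namespace {{ $labels.namespace }}"
```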

View File

@ -3,13 +3,15 @@
Flagger can run automated application analysis, promotion and rollback for the following deployment strategies:
* **Canary Release** \(progressive traffic shifting\)
* Istio, Linkerd, App Mesh, NGINX, Skipper, Contour, Gloo Edge, Traefik, Open Service Mesh
* Istio, Linkerd, App Mesh, NGINX, Skipper, Contour, Gloo Edge, Traefik, Open Service Mesh, Kuma, Gateway API, Apache APISIX, Knative
* **A/B Testing** \(HTTP headers and cookies traffic routing\)
* Istio, App Mesh, NGINX, Contour, Gloo Edge
* Istio, App Mesh, NGINX, Contour, Gloo Edge, Gateway API
* **Blue/Green** \(traffic switching\)
* Kubernetes CNI, Istio, Linkerd, App Mesh, NGINX, Contour, Gloo Edge, Open Service Mesh
* Kubernetes CNI, Istio, Linkerd, App Mesh, NGINX, Contour, Gloo Edge, Open Service Mesh, Gateway API
* **Blue/Green Mirroring** \(traffic shadowing\)
* Istio
* Istio, Gateway API
* **Canary Release with Session Affinity** \(progressive traffic shifting combined with cookie based routing\)
* Istio, Gateway API
For Canary releases and A/B testing you'll need a Layer 7 traffic management solution like
a service mesh or an ingress controller. For Blue/Green deployments no service mesh or ingress controller is required.
@ -124,9 +126,9 @@ the step and the maximum weight value in 0 to 100 range.
Example:
```yaml
canary:
# canary.yaml
spec:
analysis:
promotion:
maxWeight: 50
stepWeight: 20
```
@ -146,9 +148,9 @@ In order to enable non-linear promotion a new parameter was introduced:
Example:
```yaml
canary:
# canary.yaml
spec:
analysis:
promotion:
stepWeights: [1, 2, 10, 80]
```
@ -351,8 +353,6 @@ you should consider what will happen if a write is duplicated and handled by the
To use mirroring, set `spec.analysis.mirror` to `true`.
Istio example:
```yaml
analysis:
# schedule interval (default 60s)
@ -361,9 +361,10 @@ Istio example:
iterations: 10
# max number of failed iterations before rollback
threshold: 2
# Traffic shadowing (compatible with Istio only)
# Traffic shadowing
mirror: true
# Weight of the traffic mirrored to your canary (defaults to 100%)
# Only applicable for Istio.
mirrorWeight: 100
```
@ -393,3 +394,103 @@ After the analysis finishes, the traffic is routed to the canary (green) before
triggering the primary (blue) rolling update; this ensures a smooth transition
to the new version, avoiding dropped in-flight requests during the Kubernetes deployment rollout.
## Canary Release with Session Affinity
This deployment strategy mixes a Canary Release with A/B testing. A Canary Release is helpful when
we're trying to expose new features to users progressively, but because of the very nature of its
routing (weight-based), users can land on the application's old version even after they have previously been
routed to the new version. This can be annoying, or worse, break how other services interact
with our application. To address this issue, we borrow some things from A/B testing.
Since A/B testing is particularly helpful for applications that require session affinity, we integrate
cookie-based routing with regular weight-based routing. This means that once a user is exposed to the new
version of our application (based on the traffic weights), they're always routed to that version, i.e.
they're never routed back to the old version of our application.
You can enable this by specifying `.spec.analysis.sessionAffinity` in the Canary:
```yaml
analysis:
# schedule interval (default 60s)
interval: 1m
# max number of failed metric checks before rollback
threshold: 10
# max traffic percentage routed to canary
# percentage (0-100)
maxWeight: 50
# canary increment step
# percentage (0-100)
stepWeight: 2
# session affinity config
sessionAffinity:
# name of the cookie used
cookieName: flagger-cookie
# max age of the cookie (in seconds)
# optional; defaults to 86400
maxAge: 21600
```
`.spec.analysis.sessionAffinity.cookieName` is the name of the Cookie that is stored. The value of the
cookie is a randomly generated string of characters that acts as a unique identifier. For the above
config, the response header of a request routed to the canary deployment during a Canary run will look like:
```
Set-Cookie: flagger-cookie=LpsIaLdoNZ; Max-Age=21600
```
After a Canary run is over and all traffic is shifted back to the primary deployment, all responses will
have the following header:
```
Set-Cookie: flagger-cookie=LpsIaLdoNZ; Max-Age=-1
```
This tells the client to delete the cookie, making sure there are no junk cookies lying around in the user's
system.
If a new Canary run is triggered, the response header will set a new cookie for all requests routed to
the Canary deployment:
```
Set-Cookie: flagger-cookie=McxKdLQoIN; Max-Age=21600
```
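On the client side, staying pinned to the canary simply means replaying the cookie on subsequent requests; a minimal sketch, assuming the app is exposed at the hypothetical host `app.example.com`:
```bash
# first request: inspect the Set-Cookie header issued during the canary run
curl -si http://app.example.com/ | grep -i 'set-cookie'

# subsequent requests: send the cookie back to keep hitting the canary
curl -s -H 'Cookie: flagger-cookie=LpsIaLdoNZ' http://app.example.com/
```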
### Configuring stickiness for Primary deployment
The above strategy is helpful because it makes sure that any user that's routed to the Canary deployment
once is always routed to that deployment. But this can result in an imbalance in the traffic shifting,
as over time most of the traffic flows to the Canary deployment. To ensure fair traffic distribution, we
can also configure stickiness for the Primary deployment. You can configure this by specifying a
`primaryCookieName` field:
```yaml
analysis:
# schedule interval (default 60s)
interval: 1m
sessionAffinity:
# name of the cookie used
cookieName: flagger-cookie
# max age of the cookie (in seconds)
# optional; defaults to 86400
maxAge: 21600
# name of the cookie to use for the primary backend
# optional; unset means no primary stickiness
primaryCookieName: primary-flagger-cookie
```
> Note: This is only supported for the Gateway API provider for now.
Let's understand what the above configuration does. All the session affinity behavior described in the previous section
still applies, but now the response headers for requests routed to the primary deployment also include a
`Set-Cookie` header:
```
Set-Cookie: primary-flagger-cookie=ApvLdqCoMF; Max-Age=60
```
Note that the age of the cookie is the same as the Canary analysis's interval. This means that the cookie
expires when a new step of the analysis begins, and a new cookie is generated like so:
```
Set-Cookie: primary-flagger-cookie=BRtlVaQoPC; Max-Age=60
```
This ensures that, if the first request of a user during a particular step is routed to the primary deployment,
then all subsequent requests will be routed to the same deployment until the next step starts. During a new step, a new cookie
value is generated which is then included in the headers of responses from the primary workload. This allows for
weighted traffic routing to happen while ensuring that users don't ever switch back to the primary deployment from
the canary deployment during a Canary analysis.

View File

@ -65,9 +65,12 @@ spec:
kind: Deployment
name: podinfo
autoscalerRef:
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
name: podinfo
primaryScalerReplicas:
minReplicas: 2
maxReplicas: 5
```
Based on the above configuration, Flagger generates the following Kubernetes objects:
@ -80,6 +83,11 @@ by default all traffic is routed to this version and the target deployment is sc
Flagger will detect changes to the target deployment (including secrets and configmaps)
and will perform a canary analysis before promoting the new version as primary.
Use `.spec.autoscalerRef.primaryScalerReplicas` to override the replica scaling
configuration for the generated primary HorizontalPodAutoscaler. This is useful
when you want the primary workload to scale differently than what is configured
in the original workload HorizontalPodAutoscaler.
**Note** that the target deployment must have a single label selector in the format `app: <DEPLOYMENT-NAME>`:
```yaml
@ -120,6 +128,8 @@ in the primary autoscaler when a rollout for the deployment starts and completes
Optionally, you can create two HPAs, one for canary and one for the primary to update the HPA without
doing a new rollout. As the canary deployment will be scaled to 0, the HPA on the canary will be inactive.
**Note** Flagger requires `autoscaling/v2` or `autoscaling/v2beta2` API version for HPAs.
The progress deadline represents the maximum time in seconds for the canary deployment to
make progress before it is rolled back; it defaults to ten minutes.
@ -134,14 +144,18 @@ spec:
name: podinfo
port: 9898
portName: http
appProtocol: http
targetPort: 9898
portDiscovery: true
headless: false
```
The container port from the target workload should match the `service.port` or `service.targetPort`.
The `service.name` is optional, defaults to `spec.targetRef.name`.
The `service.targetPort` can be a container port number or name.
The `service.portName` is optional (defaults to `http`), if your workload uses gRPC then set the port name to `grpc`.
The `service.appProtocol` is optional, more details can be found [here](https://kubernetes.io/docs/concepts/services-networking/service/#application-protocol).
If port discovery is enabled, Flagger scans the target workload and extracts the container ports,
excluding the port specified in the canary service and the service mesh sidecar ports.
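As an illustration, consider a hypothetical target deployment that exposes an extra metrics port; with `portDiscovery: true`, the additional container port would be copied into the generated ClusterIP services alongside the canary service port:
```yaml
# hypothetical workload snippet: 9898 matches the canary service port,
# 9797 would be discovered and added to the podinfo, podinfo-primary
# and podinfo-canary services (sidecar ports are skipped)
containers:
  - name: podinfod
    image: ghcr.io/stefanprodan/podinfo:6.0.3
    ports:
      - name: http
        containerPort: 9898
      - name: http-metrics
        containerPort: 9797
```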
@ -192,6 +206,9 @@ Note that the `apex` annotations are added to both the generated Kubernetes Serv
generated service mesh/ingress object. This allows using external-dns with Istio `VirtualServices`
and `TraefikServices`. Beware of configuration conflicts [here](../faq.md#ExternalDNS).
If you want the generated Kubernetes ClusterIP services to be [headless](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services),
set `service.headless` to true.
Besides port mapping and metadata, the service specification can
contain URI match and rewrite rules, timeout and retry policies:
@ -343,6 +360,10 @@ Spec:
# before starting rollout. this is optional and the default is 100
# percentage (0-100)
primaryReadyThreshold: 100
# threshold of canary pods that need to be available to consider it ready
# before starting rollout. this is optional and the default is 100
# percentage (0-100)
canaryReadyThreshold: 100
# canary match conditions
# used for A/B Testing
match:
@ -363,3 +384,10 @@ On each run, Flagger calls the webhooks, checks the metrics and if the failed ch
stops the analysis and rolls back the canary.
If alerting is configured, Flagger will post the analysis result using the alert providers.
## Canary suspend
The `suspend` field can be set to true to suspend the Canary. If a Canary is suspended,
its reconciliation is completely paused. This means that changes to target workloads,
tracked ConfigMaps and Secrets don't trigger a Canary run and changes to resources generated
by Flagger are not corrected. If the Canary was suspended during an active Canary run,
then the run is paused without disturbing the workloads or the traffic weights.
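A minimal sketch of suspending an existing Canary in place:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # pause reconciliation; remove or set to false to resume
  suspend: true
  # ... rest of the spec left unchanged
```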

View File

@ -29,7 +29,7 @@ Flagger comes with two builtin metric checks: HTTP request success rate and dura
For each metric you can specify a range of accepted values with `thresholdRange` and
the window size or the time series with `interval`.
The builtin checks are available for every service mesh / ingress controlle
The builtin checks are available for every service mesh / ingress controller
and are implemented with [Prometheus queries](../faq.md#metrics).
## Custom metrics
@ -62,6 +62,7 @@ The following variables are available in query templates:
* `service` (canary.spec.service.name)
* `ingress` (canary.spec.ingressRef.name)
* `interval` (canary.spec.analysis.metrics[].interval)
* `variables` (canary.spec.analysis.metrics[].templateVariables)
A canary analysis metric can reference a template with `templateRef`:
@ -82,6 +83,50 @@ A canary analysis metric can reference a template with `templateRef`:
interval: 1m
```
A canary analysis metric can reference a set of custom variables with `templateVariables`. These variables will then be injected into the query defined in the referenced `MetricTemplate` object during canary analysis:
```yaml
analysis:
metrics:
- name: "my metric"
templateRef:
name: my-metric
namespace: flagger
# accepted values
thresholdRange:
min: 10
max: 1000
# metric query time window
interval: 1m
# custom variables used within the referenced metric template
templateVariables:
direction: inbound
```
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: my-metric
spec:
provider:
type: prometheus
address: http://prometheus.linkerd-viz:9090
query: |
histogram_quantile(
0.99,
sum(
rate(
response_latency_ms_bucket{
namespace="{{ namespace }}",
deployment=~"{{ target }}",
direction="{{ variables.direction }}"
}[{{ interval }}]
)
) by (le)
)
```
## Prometheus
You can create custom metric checks targeting a Prometheus server by
@ -184,13 +229,25 @@ as the `MetricTemplate` with the basic-auth credentials:
apiVersion: v1
kind: Secret
metadata:
name: prom-basic-auth
name: prom-auth
namespace: flagger
data:
username: your-user
password: your-password
```
or if you require bearer token authentication (via a ServiceAccount token):
```yaml
apiVersion: v1
kind: Secret
metadata:
name: prom-auth
namespace: flagger
data:
token: ey1234...
```
Then reference the secret in the `MetricTemplate`:
```yaml
@ -204,7 +261,7 @@ spec:
type: prometheus
address: http://prometheus.monitoring:9090
secretRef:
name: prom-basic-auth
name: prom-auth
```
## Datadog
@ -478,7 +535,7 @@ spec:
name: graphite-basic-auth
```
## Google CLoud Monitoring (Stackdriver)
## Google Cloud Monitoring (Stackdriver)
Enable Workload Identity on your cluster, create a service account key that has read access to the
Cloud Monitoring API and then create an IAM policy binding between the GCP service account and the Flagger
@ -528,18 +585,20 @@ spec:
The reference for the query language can be found [here](https://cloud.google.com/monitoring/mql/reference)
## Influxdb
## InfluxDB
The influxdb provider uses the [flux](https://docs.influxdata.com/influxdb/v2.0/query-data/get-started/) scripting language.
The InfluxDB provider uses the [flux](https://docs.influxdata.com/influxdb/v2.0/query-data/get-started/) query language.
Create a secret that contains your authentication token that can be gotthen from the InfluxDB UI.
Create a secret that contains your authentication token that can be found in the InfluxDB UI.
```
kubectl create secret generic gcloud-sa --from-literal=token=<token>
kubectl create secret generic influx-token --from-literal=token=<token>
```
Then reference the secret in the metric template.qq
Then reference the secret in the metric template.
Note: The particular MQL query used here works if [Istio is installed on GKE](https://cloud.google.com/istio/docs/istio-on-gke/installing).
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
@ -609,3 +668,116 @@ Reference the template in the canary analysis:
max: 1000
interval: 1m
```
## Keptn
You can create custom metric checks using the Keptn provider.
This provider can verify either the value of a single [KeptnMetric](https://keptn.sh/stable/docs/reference/crd-reference/metric/),
representing the value of a single metric,
or the result of a [Keptn Analysis](https://keptn.sh/stable/docs/reference/crd-reference/analysis/),
which provides flexible grading logic for analysing and prioritising a number of different
metric values coming from different data sources.
This provider requires [Keptn](https://keptn.sh/stable/docs/installation/) to be installed in the cluster.
Example for a Keptn metric template:
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: response-time
namespace: istio-system
spec:
provider:
type: keptn
query: keptnmetric/my-namespace/response-time/2m/reporter=destination
```
This will reference the `KeptnMetric` with the name `response-time` in
the namespace `my-namespace`, which could look like the following:
```yaml
apiVersion: metrics.keptn.sh/v1beta1
kind: KeptnMetric
metadata:
name: response-time
namespace: my-namespace
spec:
fetchIntervalSeconds: 10
provider:
name: my-prometheus-keptn-provider
query: histogram_quantile(0.8, sum by(le) (rate(http_server_request_latency_seconds_bucket{status_code='200',
job='simple-go-backend'}[5m])))
```
The `query` contains the following components, which are divided by `/` characters:
```
<type>/<namespace>/<resource-name>/<timeframe>/<arguments>
```
* **type (required)**: Must be either `keptnmetric` or `analysis`.
* **namespace (required)**: The namespace of the referenced `KeptnMetric`/`AnalysisDefinition`.
* **resource-name (required):** The name of the referenced `KeptnMetric`/`AnalysisDefinition`.
* **timeframe (optional)**: The timeframe used for the Analysis.
This will usually be set to the same value as the analysis interval of a `Canary`.
Only relevant if the `type` is set to `analysis`.
* **arguments (optional)**: Arguments to be passed to an `Analysis`.
Arguments are passed as a list of key value pairs, separated by `;` characters,
e.g. `foo=bar;bar=foo`.
Only relevant if the `type` is set to `analysis`.
For the type `analysis`, the value returned by the provider is either `0`
(if the analysis failed), or `1` (analysis passed).
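A canary analysis entry that consumes such an analysis result could then look like the sketch below, where `my-analysis` is a hypothetical `MetricTemplate` whose Keptn query uses the `analysis` type:
```yaml
analysis:
  metrics:
    - name: "keptn analysis"
      templateRef:
        name: my-analysis      # hypothetical MetricTemplate, e.g. query: analysis/my-namespace/my-definition/2m
        namespace: istio-system
      # the provider returns 0 (failed) or 1 (passed), so only 1 is accepted
      thresholdRange:
        min: 1
      interval: 2m
```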
## Splunk
You can create custom metric checks using the Splunk provider.
Create a secret that contains your authentication token, which can be found in the Splunk Observability Cloud UI.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: splunk
namespace: istio-system
data:
sf_token_key: your-access-token
```
Splunk template example:
```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
name: success-rate
namespace: istio-system
spec:
provider:
type: splunk
address: https://api.<REALM>.signalfx.com
secretRef:
name: splunk
query: |
total = data('traces.count', filter=filter('sf_service', '{{target}}')).sum().publish(enable=False)
success = data('traces.count', filter=filter('sf_service', '{{target}}') and filter('sf_error', 'false')).sum().publish(enable=False)
((success/total) * 100).publish()
```
The query format documentation can be found [here](https://dev.splunk.com/observability/docs/signalflow).
Reference the template in the canary analysis:
```yaml
analysis:
metrics:
- name: "success rate"
templateRef:
name: success-rate
namespace: istio-system
thresholdRange:
max: 99
interval: 1m
```

View File

@ -117,4 +117,8 @@ flagger_canary_duration_seconds_bucket{name="podinfo",namespace="test",le="10"}
flagger_canary_duration_seconds_bucket{name="podinfo",namespace="test",le="+Inf"} 6
flagger_canary_duration_seconds_sum{name="podinfo",namespace="test"} 17.3561329
flagger_canary_duration_seconds_count{name="podinfo",namespace="test"} 6
# Last canary metric analysis result per different metrics
flagger_canary_metric_analysis{metric="podinfo-http-successful-rate",name="podinfo",namespace="test"} 1
flagger_canary_metric_analysis{metric="podinfo-custom-metric",name="podinfo",namespace="test"} 0.918223108974359
```

View File

@ -41,6 +41,7 @@ Spec:
- name: "start gate"
type: confirm-rollout
url: http://flagger-loadtester.test/gate/approve
retries: 5
- name: "helm test"
type: pre-rollout
url: http://flagger-helmtester.flagger/
@ -72,6 +73,7 @@ Spec:
- name: "send to Slack"
type: event
url: http://event-recevier.notifications/slack
retries: 3
metadata:
environment: "test"
cluster: "flagger-test"
@ -86,6 +88,7 @@ Webhook payload (HTTP POST):
"name": "podinfo",
"namespace": "test",
"phase": "Progressing",
"checksum": "85d557f47b",
"metadata": {
"test": "all",
"token": "16688eb5e9f289f1991c"
@ -93,6 +96,8 @@ Webhook payload (HTTP POST):
}
```
The checksum field is hashed from the TrackedConfigs and LastAppliedSpec of the Canary; it can be used to identify a Canary for a specific configuration of the deployed resources.
Response status codes:
* 200-202 - advance canary by increasing the traffic weight
@ -107,6 +112,7 @@ Event payload (HTTP POST):
"name": "string (canary name)",
"namespace": "string (canary namespace)",
"phase": "string (canary phase)",
"checksum": "string (canary checksum"),
"metadata": {
"eventMessage": "string (canary event message)",
"eventType": "string (canary event type)",
@ -118,6 +124,11 @@ Event payload (HTTP POST):
The event receiver can create alerts based on the received phase
(possible values: `Initialized`, `Waiting`, `Progressing`, `Promoting`, `Finalising`, `Succeeded` or `Failed`).
Options:
* retries: The webhook request can be retried by specifying a positive integer in the `retries` field. This helps ensure reliability if the webhook fails due to transient network issues.
* disable TLS: Set `disableTLS` to `true` in the webhook spec to bypass TLS verification. This is useful in cases where the target service uses self-signed certificates, or you need to connect to an insecure service for testing purposes.
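Both options are set directly on the webhook entry; a minimal sketch combining them (the URL is illustrative):
```yaml
analysis:
  webhooks:
    - name: "smoke test"
      type: pre-rollout
      url: https://self-signed-tester.test/
      # retry transient failures up to 3 times
      retries: 3
      # skip TLS verification for the self-signed endpoint
      disableTLS: true
```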
## Load Testing
For workloads that are not receiving constant traffic Flagger can be configured with a webhook,
@ -143,7 +154,8 @@ helm repo add flagger https://flagger.app
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test \
--set cmd.timeout=1h
--set cmd.timeout=1h \
--set cmd.namespaceRegexp=''
```
When deployed, the load tester API will be available at `http://flagger-loadtester.test/`.
@ -386,6 +398,22 @@ This can be done via mounting a Kubernetes secret in the tester's Deployment.
to see if the process has finished (default is 5s). `pollTimeout` represents the time in seconds
the webhook will try to call Concord before timing out (default is 30s).
If you need to start a Pod/Job to run tests, you can do so using `kubectl`.
```yaml
analysis:
webhooks:
- name: "smoke test"
type: pre-rollout
url: http://flagger-kubectltester.kube-system/
timeout: 3m
metadata:
type: "kubectl"
cmd: "run test --image=alpine --overrides='{ "spec": { "serviceAccount": "default:default" } }'"
```
Note that you need to set up RBAC for the load tester service account in order to run `kubectl` and `helm` commands.
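A minimal RBAC sketch for that, assuming the load tester runs with the `flagger-loadtester` ServiceAccount in the `test` namespace; adjust the resources and verbs to whatever your tests actually create:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: loadtester-kubectl
  namespace: test
rules:
  # allow the tester to start and clean up test pods
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: loadtester-kubectl
  namespace: test
subjects:
  - kind: ServiceAccount
    name: flagger-loadtester
    namespace: test
roleRef:
  kind: Role
  name: loadtester-kubectl
  apiGroup: rbac.authorization.k8s.io
```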
## Manual Gating
For manual approval of a canary deployment you can use the `confirm-rollout` and `confirm-promotion` webhooks.

Some files were not shown because too many files have changed in this diff.