Compare commits

...

103 Commits

Author SHA1 Message Date
Ai Ranthem 6554cfcf08
Changelog: v0.6.1 (#270)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-05-08 19:12:37 +08:00
Ai Ranthem 402cb9cd90
Chore: upgrade e2e ubuntu version from 20.04 to 22.04 (#268)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-05-08 12:41:00 +08:00
handagou 2aa692dc2f
Fix order of object in batchrelease event handler (#265)
Signed-off-by: z760087139 <z760087139@gmail.com>
2025-04-10 16:24:51 +08:00
Ai Ranthem ca0a71ff52
Chore: add e2e workflows for k8s 1.26 (#263)
* Chore: add e2e workflows for k8s 1.26

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* Chore: add e2e workflows for k8s 1.26

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* Chore: add e2e workflows for k8s 1.26

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

---------

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-04-08 18:27:46 +08:00
Ai Ranthem 744430356d
Fix: blue-green batch-id e2e fails sometimes (#261)
* Fix: blue-green batch-id e2e fails sometimes

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-04-01 13:20:48 +08:00
PersistentJZH 6094e966ac
support patch batch id in canary release (#251)
Signed-off-by: zhihao jian <zhihao.jian@shopee.com>
Co-authored-by: zhihao jian <zhihao.jian@shopee.com>
2025-04-01 10:09:45 +08:00
Ai Ranthem 3e66fa1ad8
Feature: support batch-id labeling for bluegreen strategy (#250)
* Feature: support batch-id labeling for blue-green strategy

---------

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-03-20 19:26:46 +08:00
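Context for the two commits above (#250, #251): batch-id labeling means the controller stamps every pod upgraded in a batch with the batch number. A minimal sketch of the resulting pod metadata, assuming the conventional Kruise Rollouts label keys (they are not shown in this compare view):

apiVersion: v1
kind: Pod
metadata:
  labels:
    # assumed label keys stamped by the rollout controller per release and batch
    rollouts.kruise.io/rollout-id: "rollout-demo-2025"
    rollouts.kruise.io/rollout-batch-id: "1"
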
liheng.zms 7baf47d70e v0.5.1 changelog
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2025-03-20 18:03:41 +08:00
Ai Ranthem 334fa1cbf3
add docker-image workflow (#253)
* add docker-image workflow

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* add docker-image workflow

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* fix typo

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

---------

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-02-07 11:16:27 +08:00
Ai Ranthem 3562934ae5
Release note 0.6.0 (#252)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-01-21 15:51:12 +08:00
zhihao jian faa2d03338 fix patch rollout batch id
Signed-off-by: zhihao jian <zhihao.jian@shopee.com>

add rollback prefix to identify in rollback pods

fix patch rollout id

fix test

fix

add prefix when rollout id is empty

fix test
2025-01-03 09:55:23 +08:00
yunbo 5bbbc046b0 fix the pod-recreate issue in partition style
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
2024-12-27 10:00:08 +08:00
myname4423 efbd8ba8f9
support bluegreen release: support workload of deployment and cloneSet (#238)
* support bluegreen release: Deployment and CloneSet

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* support bluegreen release: webhook update

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* add unit test & split workload mutating webhook

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* fix a bug caused by previous merged PR

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* improve some log information

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* fix kruise version problem

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-12-24 19:38:52 +08:00
Jiajing LU 056c77dbd2
Patch canary service selector from PodTemplateMetadata (#243)
* patch canary service selector

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* check null

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix nil check

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* remove len check

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

---------

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
2024-12-09 19:25:49 +08:00
yunbo 3f66aae0ae support bluegreen release: release logic
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
2024-11-13 11:26:44 +08:00
Ai Ranthem 09e01cb95b
upgrade: gateway-api(0.5.1=>0.7.1), along with controller-runtime(0.12.1=>0.14.6) (#237)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2024-11-13 09:38:49 +08:00
Jiajing LU 6854752435
Add composite provider to support multiple network providers (#224)
* add composite provider

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix nginx lua script and add E2E

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix test case

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* revert image

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix indent

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* follow latest interface change

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* move e2e to v1beta1 file and add workflow

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

---------

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
2024-11-01 11:28:58 +08:00
ls-2018 f0363f28c0
fix: lua encode structural error (#209)
Signed-off-by: acejilam <acejilam@gmail.com>
2024-09-06 19:38:09 +08:00
myname4423 d2613132aa
improve finalising logic for canary release (#229)
improve finalising logic for canary release-2

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-09-04 17:29:58 +08:00
myname4423 5378dc2cf7
refactor the grace system (#226)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-08-13 15:17:38 +08:00
myname4423 78273c2998
add restriction for traffic configuration of partition-style step (#225)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-08-02 13:42:28 +08:00
myname4423 16a3f0acc1
update runCanary traffic step for special cases (#219)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-07-30 10:30:26 +08:00
Jiajing LU 6fae7085e5
* inject headerModifier to the luaData (#223)
* fix lua script and test case
* use ptr instead of struct
* add requestHeaderModifier to testcase debugging toolkit

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
2024-07-22 10:47:18 +08:00
myname4423 e7652cbc7c
traffic: Refactor continuous logic (#222)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-07-15 11:18:11 +08:00
myname4423 db761a979c
add 2 fields to status to support showing result of kubectl get (#220)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-25 14:58:53 +08:00
myname4423 aa28f4e12e
Allow to jump between steps (#218)
* allow jump between steps

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

allow jump between steps

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

allow jump among steps

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

safe index check

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

add e2e test for step jump

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

amend: style-agnostic reference for webhook

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

improve existing e2e logic to avoid unexpected behaviour

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

jump: nextStep Index default value from 0 to -1

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

after rebase

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* jump: fix out of range

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-25 13:21:54 +08:00
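The commit above (#218) lets a paused release jump to an arbitrary step by changing the desired step index in status. A hedged sketch of the status stanza involved; only nextStepIndex appears in the commit messages, the canaryStatus path and semantics are assumed:

status:
  canaryStatus:
    currentStepIndex: 1
    nextStepIndex: 3    # assumed: set to the target step; -1 now means "no jump requested"
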
myname4423 1e8af4a4c1
update api for future bluegreen (#214)
add status conversion



nextStepIndex default value from 0 to -1



restore the enableExtra field in BR

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-12 13:27:41 +08:00
Jiajing LU 62794dc883
Support Path and QueryParams in http route matches (#204)
* support queryparams for gateway api

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* support mse

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* support path and queryParams

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix lint

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix testcase

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix manifests

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* do not provide default value for path

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* Allow Not generating Canary Service && Fixed a bug caused by NOT considering case-insensitivity. (#200)

* Fixed a bug caused by NOT considering case-insensitivity.

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* add DisableGenerateCanaryService for CanaryStrategy

amend1: update crd yaml

amend2: add DisableGenerateCanaryService for v1alpha1

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* revert test images

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* polish comments

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* add gateway api tests

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix MSE cases

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* update golang lint ci

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* regenerate manifests

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* remove generic usage

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* update istio lua script

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* update v1alpha1 in e2e to v1beta1

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix cases

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* refactor istio case to include queryParams and path

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix cloneset issue

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix typo

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* revert images

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

---------

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: myname4423 <57184070+myname4423@users.noreply.github.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-12 13:21:42 +08:00
myname4423 3eeb7b4ddc
modify the helm version from latest to v3.14.0 (#215)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-05-25 09:45:24 +08:00
pnr 07c1731e8a
fix: don't check for strategy when finalize (#198)
Co-authored-by: nwaiyatharee <nattadej.waiyatharee@agoda.com>
2024-04-02 13:45:36 +08:00
myname4423 25b053b8be
update unit test for PR #200 (#206)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-04-02 10:34:36 +08:00
myname4423 0ff23f6636
Allow Not generating Canary Service && Fixed a bug caused by NOT considering case-insensitivity. (#200)
* Fixed a bug caused by NOT considering case-insensitivity.

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* add DisableGenerateCanaryService for CanaryStrategy

amend1: update crd yaml

amend2: add DisableGenerateCanaryService for v1alpha1

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-03-12 16:37:29 +08:00
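PR #200 above adds an opt-out for the auto-generated canary Service. A hedged sketch of where the switch might sit in the canary strategy; the field name comes from the commit body ("add DisableGenerateCanaryService for CanaryStrategy"), the surrounding structure is assumed:

spec:
  strategy:
    canary:
      disableGenerateCanaryService: true   # assumed placement: skip creating the auxiliary canary Service
      steps:
        - weight: 20
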
Wei-Xiang Sun 678d4d2b34
refresh observed rollout-id for BatchRelease (#193)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-12-27 11:16:07 +08:00
zhengjr9 1e84129ff1
bugfix: Filter rs that are not part of the current Deployment (#191)
Signed-off-by: zhengjr <zhengjiarui_pro@163.com>
2023-12-26 17:54:07 +08:00
berg 83eedb354e
rollout v0.5.0 changelog (#190)
* rollout v0.5.0 changelog

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify rollout types description

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* limit secret & configmaps namespace rbac

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify rollout v0.5.0 changelog

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

---------

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-12-21 15:23:02 +08:00
Wei-Xiang Sun 862040870d
set default advanced deployment strategy (#176)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-12-19 13:23:00 +08:00
Wei-Xiang Sun e19a89c16e
wait grace period seconds after pod creation/upgrade (#185)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-12-19 11:22:00 +08:00
berg bc580a3ae7
fix gateway print log panic (#167)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-12-19 11:20:00 +08:00
berg 897b42292c
dump to v1beta1 gatewayapis (#189)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-12-18 15:30:00 +08:00
berg 75b1b90dc9
webhook validate v1beta1 rollout (#188)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-12-18 13:14:00 +08:00
berg 9dcf3659d2
new v1beta1 apis (#184)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-12-04 14:56:47 +08:00
Kuromesi 07b7f20f6a
add lua scripts for istio (#178)
* add lua scripts for istio

Signed-off-by: Kuromesi <blackfacepan@163.com>

* make some improvements for istio lua script

Signed-off-by: Kuromesi <blackfacepan@163.com>

---------

Signed-off-by: Kuromesi <blackfacepan@163.com>
2023-11-06 17:34:57 +08:00
berg d41b1fa7d7
rollout v1beta1 apis (#182)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-11-03 13:38:55 +08:00
Kuromesi a9a9430a9a
add e2e tests for custom network provider (#177)
Signed-off-by: Kuromesi <blackfacepan@163.com>
2023-10-27 14:16:48 +08:00
berg 23f1e97f4e
add changelog v0.4.0 (#160)
* add changelog v0.4.0

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* optimize webhook patchResponse function

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify makefile install helm

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify github workflow golang version 1.19

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify makefile kustomize version v4.5.5

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* go format

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

---------

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-10-27 11:56:48 +08:00
Kuromesi 57f9853f23
add support for custom network provider (part A) (#172)
* add support for custom network providers

Signed-off-by: Kuromesi <blackfacepan@163.com>

* make some improvements

Signed-off-by: Kuromesi <blackfacepan@163.com>

* log format updates

Signed-off-by: Kuromesi <blackfacepan@163.com>

* make some logic changes

Signed-off-by: Kuromesi <blackfacepan@163.com>

* remove roll back

Signed-off-by: Kuromesi <blackfacepan@163.com>

* add annotation for lua.go

Signed-off-by: Kuromesi <blackfacepan@163.com>

* store configuration when ensure routes

Signed-off-by: Kuromesi <blackfacepan@163.com>

* store configuration when ensure routes

Signed-off-by: Kuromesi <blackfacepan@163.com>

* make some improvements

Signed-off-by: Kuromesi <blackfacepan@163.com>

* move TestLuaScript to custom_network_provider_test

Signed-off-by: Kuromesi <blackfacepan@163.com>

---------

Signed-off-by: Kuromesi <blackfacepan@163.com>
2023-09-25 13:39:20 +08:00
Kuromesi 76d33b830a
set nodePort of canary service to be 0 (#170)
Signed-off-by: Kuromesi <blackfacepan@163.com>
2023-08-29 13:29:56 +08:00
likakuli 657c6d8079
clean: update rollout status log info (#164)
Signed-off-by: likakuli <1154584512@qq.com>
2023-08-15 20:23:43 +08:00
berg 29862589aa
optimize webhook patchResponse function (#165)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-07-18 12:47:18 +08:00
Wei-Xiang Sun 72e1c0b936
Advanced deployment scale down old unhealthy pods firstly (#150)
* advanced deployment scale down old unhealthy pods firstly

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

* add e2e for advanced deployment scale down old unhealthy pod first

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

---------

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-07-12 17:52:13 +08:00
berg e99c529795
auto patch webhook objectSelector label on workload (#158)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-07-11 14:16:12 +08:00
berg bc014ea80d
rollout support patchPodTemplateMetadata (#157)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-07-10 10:46:10 +08:00
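The commit above (#157) lets a Rollout patch extra metadata onto the canary pod template. A hedged sketch of the field as it might appear in the canary strategy; the labels/annotations shape is assumed and the values are placeholders:

spec:
  strategy:
    canary:
      patchPodTemplateMetadata:
        # assumed shape: metadata merged into the canary Pod template only
        labels:
          env: canary
        annotations:
          example.com/stage: canary
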
Kuromesi 88e4bb7679
disabled rollout (#155)
Signed-off-by: Kuromesi <blackfacepan@163.com>
2023-07-05 18:46:06 +08:00
berg 1d343d5a26
rollout trafficrouting support requestHeaderModifier (#156)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-07-05 17:03:06 +08:00
张启航 8737f336f0
perf: optimize the modification of rollout to httproute header (#137)
Signed-off-by: zhangsetsail <zqh15131121078@126.com>
2023-06-28 17:56:01 +08:00
berg 7139171497
Add TrafficRouting CRD for end-to-end canary deployment (#153)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-06-26 15:55:58 +08:00
wyike 3578b399a6
Exclude workload deleted matching labels in webhook. (#146)
* finish logic

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

go mod tidy

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

add test

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

* fix goimports lint

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>

---------

Signed-off-by: 楚岳 <wangyike.wyk@alibaba-inc.com>
2023-05-30 11:33:34 +08:00
Siyuan Chen 7f06e27ad3
Add DaemonSet e2e test (#143)
* Add DaemonSet e2e test

Signed-off-by: janice1457 <chen.siyuan4@northeastern.edu>

* Add e2e daemonset test 1.23

Signed-off-by: janice1457 <chen.siyuan4@northeastern.edu>

---------

Signed-off-by: janice1457 <chen.siyuan4@northeastern.edu>
Co-authored-by: janice1457 <chen.siyuan4@northeastern.edu>
2023-05-22 10:26:26 +08:00
MrZhousong 5626a7fbb8
[feature] When the data type of spec.replicas is int, cancel the upper limit (#142)
* feature<rollout> When the data type of spec.replicas is int, cancel the upper limit of 100. #141

Signed-off-by: zhousong <zhousong@onething.net>

* style<rollout> update error field `CanaryReplicas` ==> `Replicas` #141

Signed-off-by: zhousong <zhousong@onething.net>

---------

Signed-off-by: zhousong <zhousong@onething.net>
Co-authored-by: zhousong <zhousong@onething.net>
2023-05-10 14:49:16 +08:00
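PR #142 above removes the upper limit of 100 when a canary step's replicas is given as a plain integer rather than a percentage. An illustrative step list under that assumption (the replicas field accepting either form is assumed from the upstream API, not shown here):

spec:
  strategy:
    canary:
      steps:
        - replicas: "20%"   # percentage form, still bounded by 100%
        - replicas: 200     # integer form, no longer capped at 100 after #142
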
Yadan-Wei 15109e4cea
Add Kruise Advanced DaemonSet to rollouts framework. (#134)
Signed-off-by: Yadan-Wei <yadanwei0712@gmail.com>
2023-05-08 09:49:14 +08:00
Zhen Zhang c8ecfda823
Upgrade GitHub CI runner from ubuntu-18.04 to ubuntu-20.04 (#136)
Signed-off-by: 守辰 <shouchen.zz@alibaba-inc.com>
2023-04-28 13:24:06 +08:00
Wei-Xiang Sun 5807b5b299
Add contributing and debug docs (#131)
* add contributing and debug docs

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

* add contributing and debug docs

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

---------

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-03-27 13:04:37 +08:00
xin gu 77e4b8dc2e
Update image_utils.go (#126)
Signed-off-by: xin gu <418294249@qq.com>
2023-03-21 13:39:31 +08:00
Wei-Xiang Sun d30966d4ca
update README.md (#127)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-03-13 11:49:23 +08:00
berg 2b48ebaa00
modify aliyun alb lua script (#123)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-02-27 11:00:11 +08:00
Wei-Xiang Sun 73fefef79d
add v0.3.0 change log (#118)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-02-23 10:14:07 +08:00
berg 5fd8464a1e
fix docker build arm64 failed (#117)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-02-13 09:45:58 +08:00
Wei-Xiang Sun c56e2f3394
rolling deployment in partition-style (#115)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-02-10 10:57:55 +08:00
Yang e6ee14b40a
Add higress lua canary script (#116)
Signed-off-by: SpecialYang <940129520@qq.com>
2023-02-03 14:42:50 +08:00
Wei-Xiang Sun 843e8b8bc4
add advanced deployment controller (#110)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-01-11 17:47:29 +08:00
yangs 08dd7878ff
Feat: clean up the canary-related resources when the canary step's weight and matches are nil (#108)
Signed-off-by: songyang.song <songyang.song@alibaba-inc.com>

Signed-off-by: songyang.song <songyang.song@alibaba-inc.com>
Co-authored-by: songyang.song <songyang.song@alibaba-inc.com>
2023-01-03 10:31:21 +08:00
Wei-Xiang Sun 3165f4e8c6
add advanced deployment api (#106)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-12-21 12:20:09 +08:00
Wei-Xiang Sun 7bfc93cd73
trim off native deployment controller codes (#105)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-12-20 11:30:09 +08:00
Wei-Xiang Sun b0c7b3b92a
init advanced deployment controller as native deployment controller (#104)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-12-19 17:54:08 +08:00
berg 973e39b0c8
Rewrite rollout controller code (#102)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2022-12-16 22:56:07 +08:00
Wei-Xiang Sun c0b1fea7f8
rewrite batchRelease controller (#90)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-12-01 17:36:51 +08:00
berg 0c54037c60
Implementing a generic Ingress based on Lua And A/B Testing Release (#86)
* rollout support A/B Testing API

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* Implementing a generic Ingress based on Lua

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2022-11-22 11:22:43 +08:00
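PR #86 above introduces an A/B-testing style API in which a canary step matches specific requests instead of only shifting a traffic weight. A hedged sketch of such a step, assuming the header-match shape that the later Gateway API and Lua work in this range builds on:

spec:
  strategy:
    canary:
      steps:
        - matches:
            # only requests carrying this header are routed to the canary
            - headers:
                - type: Exact
                  name: user-group
                  value: beta
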
Wei-Xiang Sun 113527e6f3
add failure threshold (#101)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-11-16 13:40:37 +08:00
Wei-Xiang Sun 5924c727a7
allow rollout even if the revision does not change (#98)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-11-01 14:48:24 +08:00
yike21 b7315e1658
[Issue10] Add rollout history api and controller (#61)
* add RolloutHistory api
Signed-off-by: yike21 <yike21@qq.com>

* add RolloutHistory controller

Signed-off-by: yike21 <yike21@qq.com>

Signed-off-by: yike21 <yike21@qq.com>
2022-10-31 11:50:23 +08:00
yike21 7bb311afca
[Proposal] Add RolloutHistory proposal doc (#70)
docs: docs/proposals/20220803-rollouthistory.md
Signed-off-by: yike21 <yike21@qq.com>

Signed-off-by: yike21 <yike21@qq.com>
2022-10-30 19:16:22 +08:00
berg f21c3fb763
dockerfile support multiarch(amd64,arm64) (#83)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2022-09-13 10:35:45 +08:00
Wei-Xiang Sun 5525846f6c
change rollout id from workload labels to annotations (#75)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-09-05 11:26:38 +08:00
Wei-Xiang Sun e1ba1b0ea6
update canary status after the first deployment of rollout (#72)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-09-01 14:39:35 +08:00
Wei-Xiang Sun 7a39b8103d
add rollout id label to workload (#73)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-09-01 13:07:34 +08:00
Wei-Xiang Sun 3d1df9c315
allow users to define controller workers (#67)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-08-23 14:46:26 +08:00
Wei-Xiang Sun 65b75a6615
support cloneset & statefulset rollback in batches (#54)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-08-23 11:11:43 +08:00
Wei-Xiang Sun c322b09f96
add UserAgent = kruise-rollout (#64)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-08-09 19:35:04 +08:00
Wei-Xiang Sun 53a746ace6
fix goroutine race bug (#62)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-08-08 19:15:53 +08:00
berg 794003c150
fix some little rollout bug (#59)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2022-07-22 18:47:37 +08:00
berg 279a4e8fab
rollout v0.2.0 changelog (#57)
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2022-07-21 11:19:36 +08:00
berg 56d17bcee8
Feat: support the Gateway API for the canary (#52)
Signed-off-by: barnettZQG <barnett.zqg@gmail.com>

Co-authored-by: barnettZQG <barnett.zqg@gmail.com>
2022-07-12 20:52:22 +08:00
Wei-Xiang Sun 84d9702e3d
fix revision name for statefulset (#53)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-07-08 11:05:23 +08:00
Wei-Xiang Sun 68b1c9eea9
consider the indirect owner-relationship between pod and workload (#48)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-28 10:32:18 +08:00
Wei-Xiang Sun 149e5a48da
add dynamic watcher for various workload types (#47)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-22 13:46:09 +08:00
Wei-Xiang Sun cf29580566
ignore some cases that do not need to progress (#46)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-14 15:04:01 +08:00
Wei-Xiang Sun 8efe94ff58
webhook allow step.weight=0 (#45)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-13 14:02:00 +08:00
Wei-Xiang Sun 4bd51e0c16
patch batch index to pods during rollout (#43)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-10 17:56:57 +08:00
Wei-Xiang Sun 53d32dccb2
add rolloutID and observedRolloutID fields (#44)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-08 16:51:56 +08:00
Wei-Xiang Sun dc5b0cb954
support statefulset & advanced statefulset (#34)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-06-02 10:30:05 +08:00
Chenxi Jiang 6bd43dbbf0
update rollout types in basic_usage (#38)
Signed-off-by: chenxi.jiang <chenxi.jiang.seu@gmail.com>
2022-06-01 10:34:48 +08:00
晓杰 35e09e1819
update tutorial to support minikube on Mac M1 (#36)
* update tutorial to support minikube on Mac M1

Signed-off-by: 晓杰 <2561589453@qq.com>

* add delete title

Signed-off-by: 晓杰 <2561589453@qq.com>
2022-05-26 11:21:44 +08:00
Wei-Xiang Sun e557a759b5
improve code implementation (#35)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>

Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2022-05-25 17:06:43 +08:00
288 changed files with 62936 additions and 8401 deletions


@@ -10,13 +10,13 @@ on:
env:
# Common versions
GO_VERSION: '1.17'
GOLANGCI_VERSION: 'v1.42'
GO_VERSION: '1.19'
GOLANGCI_VERSION: 'v1.52'
jobs:
golangci-lint:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v2
@@ -27,7 +27,7 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@@ -36,13 +36,13 @@ jobs:
run: |
make generate
- name: Lint golang code
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@v6
with:
version: ${{ env.GOLANGCI_VERSION }}
args: --verbose
unit-tests:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -54,7 +54,7 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}

.github/workflows/docker-image.yaml (new file, 28 lines)

@@ -0,0 +1,28 @@
name: Docker Image CI
on:
workflow_dispatch:
# Declare default permissions as read only.
permissions: read-all
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.HUB_KRUISE }}
- name: Build the Docker image
run: |
docker buildx create --use --platform=linux/amd64,linux/arm64,linux/ppc64le --name multi-platform-builder
docker buildx ls
IMG=openkruise/kruise-rollout:${{ github.ref_name }} make docker-multiarch


@@ -0,0 +1,128 @@
name: E2E-Advanced-Deployment-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
kubectl get pod -n kruise-rollout --no-headers | awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
kubectl get pod -n kruise-rollout --no-headers | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
exit 1
fi
- name: Run E2E Tests For Deployment Controller
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Run E2E Tests For Control Plane
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@@ -1,4 +1,4 @@
name: E2E-1.23
name: E2E-Advanced-Deployment-1.23
on:
push:
@@ -10,14 +10,14 @@ on:
env:
# Common versions
GO_VERSION: '1.17'
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -44,7 +44,7 @@ jobs:
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
@@ -90,12 +90,12 @@ jobs:
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
- name: Run E2E Tests For Deployment Controller
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='\[rollouts\] (Rollout)' test/e2e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
@@ -108,87 +108,12 @@ jobs:
exit 1
fi
exit $retVal
batchRelease:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
- name: Run E2E Tests For Control Plane
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='\[rollouts\] (BatchRelease)' test/e2e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')


@@ -1,4 +1,4 @@
name: E2E-1.19
name: E2E-Advanced-Deployment-1.26
on:
push:
@@ -10,14 +10,14 @@ on:
env:
# Common versions
GO_VERSION: '1.17'
KIND_IMAGE: 'kindest/node:v1.19.16'
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -27,7 +27,7 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
@@ -44,7 +44,7 @@ jobs:
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
@@ -90,12 +90,12 @@ jobs:
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
- name: Run E2E Tests For Deployment Controller
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='\[rollouts\] (Rollout)' test/e2e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
@@ -108,87 +108,12 @@ jobs:
exit 1
fi
exit $retVal
batchRelease:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
- name: Run E2E Tests For Control Plane
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='\[rollouts\] (BatchRelease)' test/e2e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')

.github/workflows/e2e-cloneset-1.19.yaml (new file, 110 lines)

@@ -0,0 +1,110 @@
name: E2E-CloneSet-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='CloneSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-cloneset-1.23.yaml (new file, 110 lines)

@@ -0,0 +1,110 @@
name: E2E-CloneSet-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='CloneSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-cloneset-1.26.yaml (new file, 110 lines)

@@ -0,0 +1,110 @@
name: E2E-CloneSet-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='CloneSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-custom.yaml (new file, 87 lines)

@@ -0,0 +1,87 @@
name: E2E-Custom
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_VERSION: 'v0.18.0'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
version: ${{ env.KIND_VERSION }}
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
kubectl apply -f ./test/e2e/test_data/customNetworkProvider/istio_crd.yaml
kubectl apply -f ./test/e2e/test_data/customNetworkProvider/lua_script_configmap.yaml
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Canary rollout with custom network provider' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@@ -0,0 +1,110 @@
name: E2E-DaemonSet-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='DaemonSet canary rollout' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@@ -0,0 +1,110 @@
name: E2E-DaemonSet-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='DaemonSet canary rollout' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-DaemonSet-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='DaemonSet canary rollout' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-Deployment-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-Deployment-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-Deployment-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-gateway.yaml vendored Normal file (+83 lines)

@ -0,0 +1,83 @@
name: E2E-Gateway
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Canary rollout with Gateway API' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,87 @@
name: E2E-Multiple-NetworkProvider
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_VERSION: 'v0.18.0'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
version: ${{ env.KIND_VERSION }}
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
kubectl apply -f ./test/e2e/test_data/customNetworkProvider/istio_crd.yaml
kubectl apply -f ./test/e2e/test_data/customNetworkProvider/lua_script_configmap.yaml
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Canary rollout with multiple network providers' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-others-1.19.yaml vendored Normal file (+110 lines)

@ -0,0 +1,110 @@
name: E2E-Others-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Others' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-others-1.23.yaml vendored Normal file (+110 lines)

@ -0,0 +1,110 @@
name: E2E-Others-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Others' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.github/workflows/e2e-others-1.26.yaml vendored Normal file (+110 lines)

@ -0,0 +1,110 @@
name: E2E-Others-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Others' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-StatefulSet-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='StatefulSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-StatefulSet-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='StatefulSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-StatefulSet-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='StatefulSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,146 @@
name: E2E-V1Beta1-BlueGreen-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Bluegreen Release Disable HPA
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='bluegreen disable hpa test case - autoscaling/v1 for v1.19' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Deployment Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Deployment - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: CloneSet Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Cloneset - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,146 @@
name: E2E-V1Beta1-BlueGreen-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Bluegreen Release Disable HPA
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='bluegreen disable hpa test case - autoscaling/v2 for v1.23' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Deployment Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Deployment - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: CloneSet Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Cloneset - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,146 @@
name: E2E-V1Beta1-BlueGreen-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Bluegreen Release Disable HPA
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='bluegreen disable hpa test case - autoscaling/v2 for v1.26' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Deployment Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Deployment - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: CloneSet Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Cloneset - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-V1Beta1-JUMP-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Step Jump' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-V1Beta1-JUMP-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Step Jump' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-V1Beta1-JUMP-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Step Jump' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.gitignore vendored (+2 lines)

@ -29,3 +29,5 @@ test/e2e/generated/bindata.go
.vscode
.DS_Store
lua_configuration/networking.istio.io/**/testdata/*.lua

.lift.toml Normal file (+5 lines)

@ -0,0 +1,5 @@
# Ignore results from vendor directories
ignoreFiles = """
vendor/
lua_configuration/
"""


@ -1,10 +1,119 @@
# Change Log
## v0.1.0
## v0.6.1
### Key Features:
- Introduced `rollout-batch-id` labeling for blue-green and canary style releases ([#250](https://github.com/openkruise/rollouts/pull/250),[#251](https://github.com/openkruise/rollouts/pull/251),[#261](https://github.com/openkruise/rollouts/pull/261),[@PersistentJZH](https://github.com/PersistentJZH),[@AiRanthem](https://github.com/AiRanthem))
### Bugfix:
- Fixed the ordering of objects in the BatchRelease event handler ([#265](https://github.com/openkruise/rollouts/pull/265),[@z760087139](https://github.com/z760087139))
## v0.5.1
### Bugfix:
- Fixed Rollout v1alpha1/v1beta1 version conversion, where partition fields were converted incorrectly. ([#200](https://github.com/openkruise/rollouts/pull/200),[@myname4423](https://github.com/myname4423))
## v0.6.0
### Key Features:
- 🎊 Support for blue-green style releases has been added ([#214](https://github.com/openkruise/rollouts/pull/214),[#229](https://github.com/openkruise/rollouts/pull/229),[#238](https://github.com/openkruise/rollouts/pull/238),[#220](https://github.com/openkruise/rollouts/pull/220),[@myname4423](https://github.com/myname4423))
- ⭐ Traffic loss during rollouts in certain scenarios is avoided ([#226](https://github.com/openkruise/rollouts/pull/226),[#219](https://github.com/openkruise/rollouts/pull/219),[#222](https://github.com/openkruise/rollouts/pull/222),[@myname4423](https://github.com/myname4423))
### Other Features:
- Enhanced flexibility by allowing free navigation between different steps within a rollout ([#218](https://github.com/openkruise/rollouts/pull/218),[@myname4423](https://github.com/myname4423))
- Added support for HTTPQueryParamMatch and HTTPPathMatch of the Gateway API in traffic management ([#204](https://github.com/openkruise/rollouts/pull/204),[@lujiajing1126](https://github.com/lujiajing1126))
- Integrated RequestHeaderModifier into LuaData, facilitating its use with custom network references like Istio ([#223](https://github.com/openkruise/rollouts/pull/223),[@lujiajing1126](https://github.com/lujiajing1126))
- Introduced a composite provider to support multiple network providers ([#224](https://github.com/openkruise/rollouts/pull/224),[@lujiajing1126](https://github.com/lujiajing1126))
- The canary service selector is now patched from the pod template metadata ([#243](https://github.com/openkruise/rollouts/pull/243),[@lujiajing1126](https://github.com/lujiajing1126))
- Patched label `rollout-batch-id` to pods even without `rollout-id` in the workload ([#248](https://github.com/openkruise/rollouts/pull/248),[@PersistentJZH](https://github.com/PersistentJZH))
- Upgraded Gateway API version from 0.5.1 to 0.7.1 and Kubernetes version from 1.24 to 1.26 ([#237](https://github.com/openkruise/rollouts/pull/237),[@AiRanthem](https://github.com/AiRanthem))
- Enabled the option to skip canary service generation when using `trafficRoutings.customNetworkRefs` ([#200](https://github.com/openkruise/rollouts/pull/200),[@myname4423](https://github.com/myname4423))
### Bugfix:
- Filtered out ReplicaSets not part of the current Deployment to prevent frequent scaling issues when multiple deployments share the same selectors ([#191](https://github.com/openkruise/rollouts/pull/191),[@zhengjr9](https://github.com/zhengjr9))
- Synced the observed `rollout-id` to the BatchRelease CR status ([#193](https://github.com/openkruise/rollouts/pull/193),[@veophi](https://github.com/veophi))
- Checked deployment strategy during finalization to prevent random stuck states when used with KubeVela ([#198](https://github.com/openkruise/rollouts/pull/198),[@phantomnat](https://github.com/phantomnat))
- Resolved a Lua encoding structural error ([#209](https://github.com/openkruise/rollouts/pull/209),[@ls-2018](https://github.com/ls-2018))
- Corrected batch ID labeling in partition-style releases when pod recreation happens ([#246](https://github.com/openkruise/rollouts/pull/246),[@myname4423](https://github.com/myname4423))
### Breaking Changes:
- Restricted setting a traffic percentage or match selectors in a partition-style release step when the step exceeds 30% of the replicas. Use the `rollouts.kruise.io/partition-replicas-limit` annotation to override this default threshold; setting the threshold to 100% restores the previous behavior ([#225](https://github.com/openkruise/rollouts/pull/225),[@myname4423](https://github.com/myname4423))
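A small sketch of overriding the threshold, assuming the annotation is placed on the Rollout object's metadata and accepts a percentage string (both are assumptions; check the release documentation for the authoritative usage):

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/openkruise/rollouts/api/v1beta1"
)

func main() {
	// Raise the partition-step limit from the default 30% to 50%;
	// "100%" would restore the previous, unrestricted behavior.
	rollout := v1beta1.Rollout{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "demo-rollout",
			Namespace: "default",
			Annotations: map[string]string{
				"rollouts.kruise.io/partition-replicas-limit": "50%",
			},
		},
	}
	_ = rollout
}
```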
## v0.5.0
### Resources Graduating to BETA
After more than a year of development, we have decided to upgrade the following resources to v1beta1:
- Rollout
- BatchRelease
Please refer to the [community documentation](https://openkruise.io/rollouts/user-manuals/api-specifications) for detailed api definitions.
**Note:** The v1alpha1 api is still available, and you can still use the v1alpha1 api in v0.5.0.
But we still recommend that you migrate to v1beta1 gradually, as some of the new features will only be available in v1beta1,
e.g., [Extensible Traffic Routing Based on Lua Script](https://openkruise.io/rollouts/developer-manuals/custom-network-provider/).
### Bump To V1beta1 Gateway API
Gateway API support has been bumped from v1alpha2 to v1beta1, so you can now use the v1beta1 Gateway API.
### Extensible Traffic Routing Based on Lua Script
The Gateway API is the standard gateway resource defined by the Kubernetes community, but a large number of community users still rely on customized gateway resources, such as VirtualService, Apisix, and so on.
To adapt to this and meet the community's diverse demands for gateway resources, we support a traffic routing scheme based on Lua scripts.
Kruise Rollout uses a Lua-script-based customization approach for API Gateway resources (Istio VirtualService, Apisix ApisixRoute, Kuma TrafficRoute, etc.).
Kruise Rollout invokes Lua scripts to retrieve and update the desired configuration of a resource based on the release strategy and the original configuration of the API Gateway resource (including spec, labels, and annotations).
This enables users to easily adapt and integrate various types of API Gateway resources without modifying existing code and configurations.
By using Kruise Rollout, users can:
- Customize Lua scripts for handling API Gateway resources, allowing for flexible implementation of resource processing and providing support for a wider range of resources.
- Utilize a common Rollout configuration template to configure different resources, reducing configuration complexity and facilitating user configuration.
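As a rough illustration of the mechanism (not the project's actual implementation), the sketch below shows how a Lua function could be invoked from Go to compute a desired gateway change. It relies on the third-party gopher-lua interpreter, and the script and its `split_traffic` function are invented for this example; the project's real scripts live under `lua_configuration/` and operate on the full resource spec rather than a single string:

```go
package main

import (
	"fmt"

	lua "github.com/yuin/gopher-lua"
)

// A hypothetical Lua script: it receives the canary weight and returns the
// annotation value a gateway resource would need.
const trafficScript = `
function split_traffic(weight)
    return "canary-weight: " .. tostring(weight) .. "%"
end
`

func main() {
	L := lua.NewState()
	defer L.Close()

	// Load the script so that split_traffic becomes a global function.
	if err := L.DoString(trafficScript); err != nil {
		panic(err)
	}

	// Call split_traffic(20) and read back one return value.
	if err := L.CallByParam(lua.P{
		Fn:      L.GetGlobal("split_traffic"),
		NRet:    1,
		Protect: true,
	}, lua.LNumber(20)); err != nil {
		panic(err)
	}
	desired := L.Get(-1) // the returned string
	L.Pop(1)

	fmt.Println("desired gateway patch:", desired.String())
}
```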
### Traffic Routing with Istio
Based on the Lua script approach, we now provide built-in support for the Istio VirtualService resource, so you can use Kruise Rollout directly to achieve canary and A/B-testing releases with Istio.
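A minimal sketch of a header-based A/B-testing rule, using only the traffic-routing types that appear later in this diff; the header name and values are invented placeholders:

```go
package main

import (
	"fmt"

	gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"

	"github.com/openkruise/rollouts/api/v1beta1"
)

func main() {
	exact := gatewayv1beta1.HeaderMatchExact
	// Send requests carrying the (hypothetical) header "user-group: beta" to the canary.
	strategy := v1beta1.TrafficRoutingStrategy{
		Matches: []v1beta1.HttpRouteMatch{{
			Headers: []gatewayv1beta1.HTTPHeaderMatch{{
				Type:  &exact,
				Name:  "user-group",
				Value: "beta",
			}},
		}},
	}
	fmt.Printf("%+v\n", strategy)
}
```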
### Others
- Bug fix: wait grace period seconds after pod creation/upgrade. ([#185](https://github.com/openkruise/rollouts/pull/185), [@veophi](https://github.com/veophi))
## v0.4.0
### Kruise-Rollout-Controller
- Rollout Support Kruise Advanced DaemonSet. ([#134](https://github.com/openkruise/rollouts/pull/134), [@Yadan-Wei](https://github.com/Yadan-Wei))
- Rollout support end-to-end canary deployment. ([#153](https://github.com/openkruise/rollouts/pull/153), [@zmberg](https://github.com/zmberg))
- Rollout trafficRouting support requestHeaderModifier. ([#156](https://github.com/openkruise/rollouts/pull/156), [@zmberg](https://github.com/zmberg))
- Rollout support disabled for a rollout. ([#155](https://github.com/openkruise/rollouts/pull/155), [@Kuromesi](https://github.com/Kuromesi))
- Rollout support patch PodTemplateMetadata. ([#157](https://github.com/openkruise/rollouts/pull/157), [@zmberg](https://github.com/zmberg))
- Rollout only webhooks workloads that have a Rollout CR. ([#158](https://github.com/openkruise/rollouts/pull/158), [@zmberg](https://github.com/zmberg))
- Advanced Deployment scales down old unhealthy pods first. ([#150](https://github.com/openkruise/rollouts/pull/150), [@veophi](https://github.com/veophi))
- Update k8s registry references to registry.k8s.io. ([#126](https://github.com/openkruise/rollouts/pull/126), [@asa3311](https://github.com/asa3311))
- When the data type of spec.replicas is int, cancel the upper 100 limit. ([#142](https://github.com/openkruise/rollouts/pull/142), [@MrSumeng](https://github.com/MrSumeng))
- Add e2e test for advanced daemonSet. ([#143](https://github.com/openkruise/rollouts/pull/143), [@Janice1457](https://github.com/Janice1457))
- Exclude workload deleted matching labels in webhook. ([#146](https://github.com/openkruise/rollouts/pull/146), [@wangyikewxgm](https://github.com/wangyikewxgm))
- Optimize the modification of rollout to GatewayAPI httpRoute header. ([#137](https://github.com/openkruise/rollouts/pull/137), [@ZhangSetSail](https://github.com/ZhangSetSail))
## v0.3.0
### Kruise-Rollout-Controller
- Support Canary Publishing + Nginx Ingress + Workload (CloneSet, Deployment)
- Support for Batch Release (e.g. 20%, 40%, 60%, 80%, 100%) for workload (CloneSet)
#### New Features:
- Support rolling update deployment in batches without extra canary deployment.
- Support A/B Testing traffic routing.
- Support various types of traffic routing via adding Lua scripts in a pluggable way.
- Support [Higress](https://higress.io/en-us/) traffic routing.
- Support failure toleration threshold for rollout.
- Support multi-architectures, such as x86 and arm.
#### Optimization:
- Optimize rollout/batchRelease controller implementation.
- Allow users to define the number of goroutines of the controller.
- Add `UserAgent = kruise-rollout` for kruise-rollout operator.
- Define `rollout-id` in workload instead of rollout to avoid race bug.
## v0.2.0
### Kruise-Rollout-Controller
- Rollout Support StatefulSet & Advanced StatefulSet.
- Support patch batch-id label to pods during Rollout.
- Support the Gateway API for the canary release.
## v0.1.0
### Kruise-Rollout-Controller
- Support Canary Publishing + Nginx Ingress + Workload (CloneSet, Deployment).
- Support for Batch Release (e.g. 20%, 40%, 60%, 80%, 100%) for workload (CloneSet).
### Documents
- Introduction, Installation, Basic Usage

CONTRIBUTING.md Normal file

@@ -0,0 +1,133 @@
# Contributing to Openkruise
Welcome to Openkruise! Openkruise consists of several repositories under the organization.
We encourage you to help out by reporting issues, improving documentation, fixing bugs, or adding new features.
Please also take a look at our code of conduct, which details how contributors are expected to conduct themselves as part of the Openkruise community.
## Reporting issues
To be honest, we regard every user of Openkruise as a very kind contributor.
After experiencing Openkruise, you may have some feedback for the project.
Then feel free to open an issue.
There are a lot of cases where you could open an issue:
- bug report
- feature request
- performance issues
- feature proposal
- feature design
- help wanted
- doc incomplete
- test improvement
- any questions about the project
- and so on
Also, we must remind you that when filing a new issue, please remember to remove any sensitive data from your post.
Sensitive data could be passwords, secret keys, network locations, private business data, and so on.
## Code and doc contribution
Every action to make Openkruise better is encouraged.
On GitHub, every improvement to Openkruise can be made via a PR (short for pull request).
- If you find a typo, try to fix it!
- If you find a bug, try to fix it!
- If you find some redundant code, try to remove it!
- If you find some test cases missing, try to add them!
- If you could enhance a feature, please DO NOT hesitate!
- If you find the code hard to follow, try to add comments to make it clear!
- If you find the code ugly, try to refactor it!
- If you can help improve the documents, nothing could be better!
- If you find a document incorrect, just fix it!
- ...
### Workspace Preparation
To put forward a PR, we assume you have registered a GitHub ID.
Then you could finish the preparation in the following steps:
1. **Fork** Fork the repository you wish to work on. You just need to click the Fork button at the top-right of the project repository's main page. Then you will end up with a copy of the repository under your GitHub username.
2. **Clone** your own repository to develop locally. Use `git clone https://github.com/<your-username>/<project>.git` to clone the repository to your local machine. Then you can create new branches to finish the change you wish to make.
3. **Set remote** upstream to be `https://github.com/openkruise/<project>.git` using the following two commands:
```bash
git remote add upstream https://github.com/openkruise/<project>.git
git remote set-url --push upstream no-pushing
```
Note: `<project>` above is `rollouts` if you want to contribute code to Kruise Rollout.
With this remote added, we can easily synchronize local branches with the upstream branches.
4. **Create a branch** to add a new feature or fix issues
Update local working directory:
```bash
cd <project>
git fetch upstream
git checkout master
git rebase upstream/master
```
Create a new branch:
```bash
git checkout -b <new-branch>
```
Make your changes on the new branch, then build and test your code.
### PR Description
A PR is the only way to make changes to the project files.
To help reviewers better understand your purpose, your PR description can never be too detailed.
We encourage contributors to follow the [PR template](./.github/PULL_REQUEST_TEMPLATE.md) to finish the pull request.
### Developing Environment
As a contributor, if you want to make any contribution to the Kruise project, we should reach an agreement on the versions of the tools used in the development environment.
Here are some dependencies with specific versions:
- Golang : v1.18+
- Kubernetes: v1.19+
### Developing guide
There's a `Makefile` in the root folder which describes the options to build and install. Here are some common ones:
```bash
# Generate code and manifests e.g. CRD, RBAC YAML files etc, and build the controller manager binary
make build
# Run the unit tests
make test
```
**There are some guide documents for contributors in [./docs/contributing/](./docs/contributing), such as debug guide to help you test your own branch in a Kubernetes cluster.**
### Proposals
If you are going to contribute a feature with a new API or one that needs significant effort, please submit a proposal in [./docs/proposals/](./docs/proposals) first.
## Engage to help anything
We choose GitHub as the primary place for Openkruise collaboration, so the latest updates of Openkruise are always here.
Although contributing via PRs is an explicit way to help, we still call for any other kind of help:
- reply to others' issues if you can;
- help solve other users' problems;
- help review others' PR designs;
- help review others' code in PRs;
- discuss Openkruise to make things clearer;
- advocate Openkruise technology beyond GitHub;
- write blogs on Openkruise and so on.
In a word, **ANY HELP IS CONTRIBUTION**.
## Join Openkruise as a member
You are also welcome to join the Openkruise team if you are willing to participate in the Openkruise community continuously and stay active.
Please read and follow the [Community Membership](https://github.com/openkruise/community/blob/master/community-membership.md).


@@ -1,5 +1,5 @@
# Build the manager binary
FROM golang:1.16 as builder
FROM golang:1.19-alpine3.17 AS builder
WORKDIR /workspace
@@ -20,6 +20,7 @@ RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
COPY lua_configuration /lua_configuration
USER 65532:65532
ENTRYPOINT ["/manager"]

Dockerfile_multiarch Normal file

@@ -0,0 +1,47 @@
# Build the manager binary
ARG BASE_IMAGE=alpine
ARG BASE_IMAGE_VERSION=3.17
FROM --platform=$BUILDPLATFORM golang:1.19-alpine3.17 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
COPY go.mod go.mod
COPY go.sum go.sum
# Copy the go source
COPY main.go main.go
COPY api/ api/
COPY pkg/ pkg/
# Build
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} CGO_ENABLED=0 GO111MODULE=on go build -a -o manager main.go
ARG BASE_IMAGE
ARG BASE_IMAGE_VERSION
FROM ${BASE_IMAGE}:${BASE_IMAGE_VERSION}
RUN set -eux; \
apk --no-cache --update upgrade && \
apk --no-cache add ca-certificates && \
apk --no-cache add tzdata && \
rm -rf /var/cache/apk/* && \
update-ca-certificates && \
echo "only include root and nobody user" && \
echo -e "root:x:0:0:root:/root:/bin/ash\nnobody:x:65534:65534:nobody:/:/sbin/nologin" | tee /etc/passwd && \
echo -e "root:x:0:root\nnobody:x:65534:" | tee /etc/group && \
rm -rf /usr/local/sbin/* && \
rm -rf /usr/local/bin/* && \
rm -rf /usr/sbin/* && \
rm -rf /usr/bin/* && \
rm -rf /sbin/* && \
rm -rf /bin/*
WORKDIR /
COPY --from=builder /workspace/manager .
COPY lua_configuration /lua_configuration
USER 65534
ENTRYPOINT ["/manager"]


@@ -1,6 +1,8 @@
# Image URL to use all building/pushing image targets
IMG ?= controller:latest
# Platforms to build the image for
PLATFORMS ?= linux/amd64,linux/arm64
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@@ -68,6 +70,9 @@ docker-build: ## Build docker image with the manager.
docker-push: ## Push docker image with the manager.
docker push ${IMG}
# Build and push the multiarchitecture docker images and manifest.
docker-multiarch:
docker buildx build -f ./Dockerfile_multiarch --pull --no-cache --platform=$(PLATFORMS) --push . -t $(IMG)
##@ Deployment
install: manifests kustomize ## Install CRDs into the K8s cluster specified in ~/.kube/config.
@@ -86,16 +91,17 @@ undeploy: ## Undeploy controller from the K8s cluster specified in ~/.kube/confi
CONTROLLER_GEN = $(shell pwd)/bin/controller-gen
CONTROLLER_GEN_VERSION = v0.11.0
controller-gen: ## Download controller-gen locally if necessary.
ifeq ("$(shell $(CONTROLLER_GEN) --version)", "Version: v0.7.0")
ifeq ("$(shell $(CONTROLLER_GEN) --version)", "Version: ${CONTROLLER_GEN_VERSION}")
else
rm -rf $(CONTROLLER_GEN)
$(call go-get-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0)
$(call go-get-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@${CONTROLLER_GEN_VERSION})
endif
KUSTOMIZE = $(shell pwd)/bin/kustomize
kustomize: ## Download kustomize locally if necessary.
$(call go-get-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v3@v3.8.7)
$(call go-get-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v4@v4.5.5)
GINKGO = $(shell pwd)/bin/ginkgo
ginkgo: ## Download ginkgo locally if necessary.
@@ -103,7 +109,7 @@ ginkgo: ## Download ginkgo locally if necessary.
HELM = $(shell pwd)/bin/helm
helm: ## Download helm locally if necessary.
$(call go-get-tool,$(HELM),helm.sh/helm/v3@v3.8.1)
$(call go-get-tool,$(HELM),helm.sh/helm/v3/cmd/helm@v3.14.0)
# go-get-tool will 'go get' any package $2 and install it to $1.
PROJECT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
@@ -114,7 +120,7 @@ TMP_DIR=$$(mktemp -d) ;\
cd $$TMP_DIR ;\
go mod init tmp ;\
echo "Downloading $(2)" ;\
GOBIN=$(PROJECT_DIR)/bin go get $(2) ;\
GOBIN=$(PROJECT_DIR)/bin go install $(2) ;\
rm -rf $$TMP_DIR ;\
}
endef


@@ -2,31 +2,28 @@
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
## Introduction
Kruise Rollouts is **a Bypass component which provides advanced deployment capabilities such as canary, traffic routing, and progressive delivery features, for a series of Kubernetes workloads, such as Deployment and CloneSet.**
Kruise Rollouts is a **Bypass** component that offers **Advanced Progressive Delivery Features**. Its support for canary, multi-batch, and A/B testing delivery modes can be helpful in achieving smooth and controlled rollouts of changes to your application, while its compatibility with Gateway API and various Ingress implementations makes it easier to integrate with your existing infrastructure. Overall, Kruise Rollouts is a valuable tool for Kubernetes users looking to optimize their deployment processes!
<div style="text-align:center"><img src="docs/images/rollout_intro.png" /></div>
## Why Kruise Rollouts?
- **Functionality**
- Support multi-batch delivery for Deployment/CloneSet.
- Support Nginx/ALB/Istio traffic routing control during rollout.
- Supports canary and multi-batch delivery for various workloads, such as Deployment, CloneSet, and StatefulSet.
- Supports Fine-grained traffic orchestration of application with Kubernetes Ingress and [Gateway API](https://gateway-api.sigs.k8s.io/).
- **Flexibility**:
- Support scaling up/down to workloads during rollout.
- Can be applied to newly-created or existing workload objects directly;
- Can be ridden out of at any time when you needn't it without consideration of unavailable workloads and traffic problems.
- Can cooperate with other native/third-part Kubernetes controllers/operators, such as HPA and WorkloadSpread.
- **Non-Invasion**:
- Does not invade native workload controllers.
- Does not replace user-defined workload and traffic configurations.
- Handles both incremental and existing workloads with ease.
- Be compatible with workload-referencing components like HPA, allowing for easy deployment and management of workloads.
- Supports plug-and-play and hot-swapping, with immediate effect upon application, and the flexibility to be easily deleted at any stage, including during the rollout process.
- **Extensibility**:
- Easily extend to other traffic routing types, or workload types via plugin codes.
- **Easy-integration**:
- Easily integrate with classic or GitOps-style Kubernetes-based PaaS.
- Extend to other workloads and traffic types easily with pluggable lua scripts.
## Quick Start
- [Getting Started](docs/getting_started/introduction.md)
- See [Getting Started](https://openkruise.io/rollouts/introduction/) documents in OpenKruise official website.
## Contributing
You are warmly welcome to hack on Kruise Rollout. We have prepared a detailed guide [CONTRIBUTING.md](CONTRIBUTING.md).
## Community
Active communication channels:


@@ -0,0 +1,26 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apis
import (
"github.com/openkruise/rollouts/api/v1alpha1"
)
func init() {
// Register the types with the Scheme so the components can map objects to GroupVersionKinds and back
AddToSchemes = append(AddToSchemes, v1alpha1.SchemeBuilder.AddToScheme)
}


@@ -0,0 +1,26 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apis
import (
"github.com/openkruise/rollouts/api/v1beta1"
)
func init() {
// Register the types with the Scheme so the components can map objects to GroupVersionKinds and back
AddToSchemes = append(AddToSchemes, v1beta1.SchemeBuilder.AddToScheme)
}

api/apis.go Normal file

@@ -0,0 +1,29 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package apis
import (
"k8s.io/apimachinery/pkg/runtime"
)
// AddToSchemes may be used to add all resources defined in the project to a Scheme
var AddToSchemes runtime.SchemeBuilder
// AddToScheme adds all Resources to the Scheme
func AddToScheme(s *runtime.Scheme) error {
return AddToSchemes.AddToScheme(s)
}


@@ -21,9 +21,6 @@ import (
"k8s.io/apimachinery/pkg/util/intstr"
)
// ReleaseStrategyType defines strategies for pods rollout
type ReleaseStrategyType string
// ReleasePlan defines the details of the release plan
type ReleasePlan struct {
// Batches is the details on each batch of the ReleasePlan.
@@ -37,26 +34,55 @@ type ReleasePlan struct {
Batches []ReleaseBatch `json:"batches"`
// All pods in the batches up to the batchPartition (included) will have
// the target resource specification while the rest still is the stable revision.
// This is designed for the operators to manually rollout
// This is designed for the operators to manually rollout.
// Default is nil, which means no partition and will release all batches.
// BatchPartition start from 0.
// +optional
BatchPartition *int32 `json:"batchPartition,omitempty"`
// Paused the rollout, the release progress will be paused util paused is false.
// default is false
// RolloutID indicates an id for each rollout progress
RolloutID string `json:"rolloutID,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated in all upgraded pods.
// Only when FailureThreshold are satisfied, Rollout can enter ready state.
// If FailureThreshold is nil, Rollout will use the MaxUnavailable of workload as its
// FailureThreshold.
// Defaults to nil.
FailureThreshold *intstr.IntOrString `json:"failureThreshold,omitempty"`
// FinalizingPolicy defines the behavior of the controller when the phase enters Finalizing
// Defaults to "Immediate"
FinalizingPolicy FinalizingPolicyType `json:"finalizingPolicy,omitempty"`
// PatchPodTemplateMetadata indicates patch configuration(e.g. labels, annotations) to the canary deployment podTemplateSpec.metadata
// only support for canary deployment
// +optional
Paused bool `json:"paused,omitempty"`
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// RollingStyle can be "Canary", "Partition" or "BlueGreen"
RollingStyle RollingStyleType `json:"rollingStyle,omitempty"`
// EnableExtraWorkloadForCanary indicates whether to create extra workload for canary
// True corresponds to RollingStyle "Canary".
// False corresponds to RollingStyle "Partition".
// Ignored in BlueGreen-style.
// This field is about to be deprecated, use RollingStyle instead.
// If both of them are set, the controller will only consider this
// field when RollingStyle is empty.
EnableExtraWorkloadForCanary bool `json:"enableExtraWorkloadForCanary"`
}
type FinalizingPolicyType string
const (
// WaitResumeFinalizingPolicyType will wait for the workload to be resumed, which means the
// controller will be held at the Finalizing phase until all pods of the workload are upgraded.
// WaitResumeFinalizingPolicyType only works in canary-style BatchRelease controller.
WaitResumeFinalizingPolicyType FinalizingPolicyType = "WaitResume"
// ImmediateFinalizingPolicyType will not wait for the workload to be resumed.
ImmediateFinalizingPolicyType FinalizingPolicyType = "Immediate"
)
// ReleaseBatch is used to describe how each batch release should be
type ReleaseBatch struct {
// CanaryReplicas is the number of upgraded pods that this batch should have.
// it can be an absolute number (ex: 5) or a percentage of workload replicas.
// batches[i].canaryReplicas should be less than or equal to batches[j].canaryReplicas if i < j.
CanaryReplicas intstr.IntOrString `json:"canaryReplicas"`
// The wait time, in seconds, between instances batches, default = 0
// +optional
PauseSeconds int64 `json:"pauseSeconds,omitempty"`
}
// BatchReleaseStatus defines the observed state of a release plan
@@ -73,6 +99,10 @@ type BatchReleaseStatus struct {
// It corresponds to this BatchRelease's generation, which is updated on mutation
// by the API Server, and only if BatchRelease Spec was changed, its generation will increase 1.
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// ObservedRolloutID is the most recent rollout-id observed for this BatchRelease.
// If RolloutID was changed, we will restart to roll out from batch 0,
// to ensure the batch-id and rollout-id labels of Pods are correct.
ObservedRolloutID string `json:"observedRolloutID,omitempty"`
// ObservedWorkloadReplicas is observed replicas of target referenced workload.
// This field is designed to deal with scaling event during rollout, if this field changed,
// it means that the workload is scaling during rollout.
@@ -82,7 +112,7 @@ type BatchReleaseStatus struct {
// newest canary Deployment.
// +optional
CollisionCount *int32 `json:"collisionCount,omitempty"`
// ObservedReleasePlanHash is a hash code of observed itself releasePlan.Batches.
// ObservedReleasePlanHash is a hash code of observed itself spec.releasePlan.
ObservedReleasePlanHash string `json:"observedReleasePlanHash,omitempty"`
// Phase is the release plan phase, which indicates the current state of release
// plan state machine in BatchRelease controller.
@@ -102,6 +132,8 @@ type BatchReleaseCanaryStatus struct {
UpdatedReplicas int32 `json:"updatedReplicas,omitempty"`
// UpdatedReadyReplicas is the number upgraded Pods that have a Ready Condition.
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas,omitempty"`
// NoNeedUpdateReplicas is the number of pods that do not need to be rolled back in a rollback scenario.
NoNeedUpdateReplicas *int32 `json:"noNeedUpdateReplicas,omitempty"`
}
type BatchReleaseBatchStateType string
@@ -116,27 +148,10 @@ const (
)
const (
// VerifyingBatchReleaseCondition indicates the controller is verifying whether workload
// is ready to do rollout.
VerifyingBatchReleaseCondition RolloutConditionType = "Verifying"
// PreparingBatchReleaseCondition indicates the controller is preparing something before executing
// release plan, such as create canary deployment and record stable & canary revisions.
PreparingBatchReleaseCondition RolloutConditionType = "Preparing"
// ProgressingBatchReleaseCondition indicates the controller is executing release plan.
ProgressingBatchReleaseCondition RolloutConditionType = "Progressing"
// FinalizingBatchReleaseCondition indicates the canary state is completed,
// and the controller is doing something, such as cleaning up canary deployment.
FinalizingBatchReleaseCondition RolloutConditionType = "Finalizing"
// TerminatingBatchReleaseCondition indicates the rollout is terminating when the
// BatchRelease cr is being deleted or cancelled.
TerminatingBatchReleaseCondition RolloutConditionType = "Terminating"
// TerminatedBatchReleaseCondition indicates the BatchRelease cr can be deleted.
TerminatedBatchReleaseCondition RolloutConditionType = "Terminated"
// CancelledBatchReleaseCondition indicates the release plan is cancelled during rollout.
CancelledBatchReleaseCondition RolloutConditionType = "Cancelled"
// CompletedBatchReleaseCondition indicates the release plan is completed successfully.
CompletedBatchReleaseCondition RolloutConditionType = "Completed"
SucceededBatchReleaseConditionReason = "Succeeded"
FailedBatchReleaseConditionReason = "Failed"
// RolloutPhasePreparing indicates a rollout is preparing for next progress.
RolloutPhasePreparing RolloutPhase = "Preparing"
// RolloutPhaseFinalizing indicates a rollout is finalizing
RolloutPhaseFinalizing RolloutPhase = "Finalizing"
// RolloutPhaseCompleted indicates a rollout is completed/cancelled/terminated
RolloutPhaseCompleted RolloutPhase = "Completed"
)


@@ -46,8 +46,6 @@ type BatchReleaseSpec struct {
ReleasePlan ReleasePlan `json:"releasePlan"`
}
type DeploymentReleaseStrategyType string
// BatchReleaseList contains a list of BatchRelease
// +kubebuilder:object:root=true
type BatchReleaseList struct {

api/v1alpha1/conversion.go Normal file

@@ -0,0 +1,477 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
"fmt"
"strings"
"github.com/openkruise/rollouts/api/v1beta1"
"k8s.io/apimachinery/pkg/util/intstr"
utilpointer "k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/conversion"
)
func (src *Rollout) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta1.Rollout:
obj := dst.(*v1beta1.Rollout)
obj.ObjectMeta = src.ObjectMeta
obj.Spec = v1beta1.RolloutSpec{}
srcSpec := src.Spec
obj.Spec.WorkloadRef = v1beta1.ObjectRef{
APIVersion: srcSpec.ObjectRef.WorkloadRef.APIVersion,
Kind: srcSpec.ObjectRef.WorkloadRef.Kind,
Name: srcSpec.ObjectRef.WorkloadRef.Name,
}
obj.Spec.Disabled = srcSpec.Disabled
obj.Spec.Strategy = v1beta1.RolloutStrategy{
Paused: srcSpec.Strategy.Paused,
Canary: &v1beta1.CanaryStrategy{
FailureThreshold: srcSpec.Strategy.Canary.FailureThreshold,
},
}
for _, step := range srcSpec.Strategy.Canary.Steps {
o := v1beta1.CanaryStep{
TrafficRoutingStrategy: ConversionToV1beta1TrafficRoutingStrategy(step.TrafficRoutingStrategy),
Replicas: step.Replicas,
Pause: v1beta1.RolloutPause{Duration: step.Pause.Duration},
}
if step.Replicas == nil && step.Weight != nil {
o.Replicas = &intstr.IntOrString{
Type: intstr.String,
StrVal: fmt.Sprintf("%d", *step.Weight) + "%",
}
}
obj.Spec.Strategy.Canary.Steps = append(obj.Spec.Strategy.Canary.Steps, o)
}
for _, ref := range srcSpec.Strategy.Canary.TrafficRoutings {
o := ConversionToV1beta1TrafficRoutingRef(ref)
obj.Spec.Strategy.Canary.TrafficRoutings = append(obj.Spec.Strategy.Canary.TrafficRoutings, o)
}
if srcSpec.Strategy.Canary.PatchPodTemplateMetadata != nil {
obj.Spec.Strategy.Canary.PatchPodTemplateMetadata = &v1beta1.PatchPodTemplateMetadata{
Annotations: map[string]string{},
Labels: map[string]string{},
}
for k, v := range srcSpec.Strategy.Canary.PatchPodTemplateMetadata.Annotations {
obj.Spec.Strategy.Canary.PatchPodTemplateMetadata.Annotations[k] = v
}
for k, v := range srcSpec.Strategy.Canary.PatchPodTemplateMetadata.Labels {
obj.Spec.Strategy.Canary.PatchPodTemplateMetadata.Labels[k] = v
}
}
if !strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(PartitionRollingStyle)) {
obj.Spec.Strategy.Canary.EnableExtraWorkloadForCanary = true
}
if src.Annotations[TrafficRoutingAnnotation] != "" {
obj.Spec.Strategy.Canary.TrafficRoutingRef = src.Annotations[TrafficRoutingAnnotation]
}
// status
obj.Status = v1beta1.RolloutStatus{
ObservedGeneration: src.Status.ObservedGeneration,
Phase: v1beta1.RolloutPhase(src.Status.Phase),
Message: src.Status.Message,
}
for _, cond := range src.Status.Conditions {
o := v1beta1.RolloutCondition{
Type: v1beta1.RolloutConditionType(cond.Type),
Status: cond.Status,
LastUpdateTime: cond.LastUpdateTime,
LastTransitionTime: cond.LastTransitionTime,
Reason: cond.Reason,
Message: cond.Message,
}
obj.Status.Conditions = append(obj.Status.Conditions, o)
}
if src.Status.CanaryStatus == nil {
return nil
}
obj.Status.CanaryStatus = &v1beta1.CanaryStatus{
CommonStatus: v1beta1.CommonStatus{
ObservedWorkloadGeneration: src.Status.CanaryStatus.ObservedWorkloadGeneration,
ObservedRolloutID: src.Status.CanaryStatus.ObservedRolloutID,
RolloutHash: src.Status.CanaryStatus.RolloutHash,
StableRevision: src.Status.CanaryStatus.StableRevision,
PodTemplateHash: src.Status.CanaryStatus.PodTemplateHash,
CurrentStepIndex: src.Status.CanaryStatus.CurrentStepIndex,
CurrentStepState: v1beta1.CanaryStepState(src.Status.CanaryStatus.CurrentStepState),
Message: src.Status.CanaryStatus.Message,
LastUpdateTime: src.Status.CanaryStatus.LastUpdateTime,
FinalisingStep: v1beta1.FinalisingStepType(src.Status.CanaryStatus.FinalisingStep),
NextStepIndex: src.Status.CanaryStatus.NextStepIndex,
},
CanaryRevision: src.Status.CanaryStatus.CanaryRevision,
CanaryReplicas: src.Status.CanaryStatus.CanaryReplicas,
CanaryReadyReplicas: src.Status.CanaryStatus.CanaryReadyReplicas,
}
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
func ConversionToV1beta1TrafficRoutingRef(src TrafficRoutingRef) (dst v1beta1.TrafficRoutingRef) {
dst.Service = src.Service
dst.GracePeriodSeconds = src.GracePeriodSeconds
if src.Ingress != nil {
dst.Ingress = &v1beta1.IngressTrafficRouting{
ClassType: src.Ingress.ClassType,
Name: src.Ingress.Name,
}
}
if src.Gateway != nil {
dst.Gateway = &v1beta1.GatewayTrafficRouting{
HTTPRouteName: src.Gateway.HTTPRouteName,
}
}
for _, ref := range src.CustomNetworkRefs {
obj := v1beta1.ObjectRef{
APIVersion: ref.APIVersion,
Kind: ref.Kind,
Name: ref.Name,
}
dst.CustomNetworkRefs = append(dst.CustomNetworkRefs, obj)
}
return dst
}
func ConversionToV1beta1TrafficRoutingStrategy(src TrafficRoutingStrategy) (dst v1beta1.TrafficRoutingStrategy) {
if src.Weight != nil {
dst.Traffic = utilpointer.String(fmt.Sprintf("%d", *src.Weight) + "%")
}
dst.RequestHeaderModifier = src.RequestHeaderModifier
for _, match := range src.Matches {
obj := v1beta1.HttpRouteMatch{
Headers: match.Headers,
}
dst.Matches = append(dst.Matches, obj)
}
return dst
}
func (dst *Rollout) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta1.Rollout:
srcV1beta1 := src.(*v1beta1.Rollout)
dst.ObjectMeta = srcV1beta1.ObjectMeta
if !srcV1beta1.Spec.Strategy.IsCanaryStragegy() {
// only v1beta1 supports bluegreen strategy
// Don't log the message because it will print too often
return nil
}
// spec
dst.Spec = RolloutSpec{
ObjectRef: ObjectRef{
WorkloadRef: &WorkloadRef{
APIVersion: srcV1beta1.Spec.WorkloadRef.APIVersion,
Kind: srcV1beta1.Spec.WorkloadRef.Kind,
Name: srcV1beta1.Spec.WorkloadRef.Name,
},
},
Strategy: RolloutStrategy{
Paused: srcV1beta1.Spec.Strategy.Paused,
Canary: &CanaryStrategy{
FailureThreshold: srcV1beta1.Spec.Strategy.Canary.FailureThreshold,
},
},
Disabled: srcV1beta1.Spec.Disabled,
}
for _, step := range srcV1beta1.Spec.Strategy.Canary.Steps {
obj := CanaryStep{
TrafficRoutingStrategy: ConversionToV1alpha1TrafficRoutingStrategy(step.TrafficRoutingStrategy),
Replicas: step.Replicas,
Pause: RolloutPause{Duration: step.Pause.Duration},
}
dst.Spec.Strategy.Canary.Steps = append(dst.Spec.Strategy.Canary.Steps, obj)
}
for _, ref := range srcV1beta1.Spec.Strategy.Canary.TrafficRoutings {
obj := ConversionToV1alpha1TrafficRoutingRef(ref)
dst.Spec.Strategy.Canary.TrafficRoutings = append(dst.Spec.Strategy.Canary.TrafficRoutings, obj)
}
if srcV1beta1.Spec.Strategy.Canary.PatchPodTemplateMetadata != nil {
dst.Spec.Strategy.Canary.PatchPodTemplateMetadata = &PatchPodTemplateMetadata{
Annotations: map[string]string{},
Labels: map[string]string{},
}
for k, v := range srcV1beta1.Spec.Strategy.Canary.PatchPodTemplateMetadata.Annotations {
dst.Spec.Strategy.Canary.PatchPodTemplateMetadata.Annotations[k] = v
}
for k, v := range srcV1beta1.Spec.Strategy.Canary.PatchPodTemplateMetadata.Labels {
dst.Spec.Strategy.Canary.PatchPodTemplateMetadata.Labels[k] = v
}
}
if dst.Annotations == nil {
dst.Annotations = map[string]string{}
}
if srcV1beta1.Spec.Strategy.Canary.EnableExtraWorkloadForCanary {
dst.Annotations[RolloutStyleAnnotation] = strings.ToLower(string(CanaryRollingStyle))
} else {
dst.Annotations[RolloutStyleAnnotation] = strings.ToLower(string(PartitionRollingStyle))
}
if srcV1beta1.Spec.Strategy.Canary.TrafficRoutingRef != "" {
dst.Annotations[TrafficRoutingAnnotation] = srcV1beta1.Spec.Strategy.Canary.TrafficRoutingRef
}
// status
dst.Status = RolloutStatus{
ObservedGeneration: srcV1beta1.Status.ObservedGeneration,
Phase: RolloutPhase(srcV1beta1.Status.Phase),
Message: srcV1beta1.Status.Message,
}
for _, cond := range srcV1beta1.Status.Conditions {
obj := RolloutCondition{
Type: RolloutConditionType(cond.Type),
Status: cond.Status,
LastUpdateTime: cond.LastUpdateTime,
LastTransitionTime: cond.LastTransitionTime,
Reason: cond.Reason,
Message: cond.Message,
}
dst.Status.Conditions = append(dst.Status.Conditions, obj)
}
if srcV1beta1.Status.CanaryStatus == nil {
return nil
}
dst.Status.CanaryStatus = &CanaryStatus{
ObservedWorkloadGeneration: srcV1beta1.Status.CanaryStatus.ObservedWorkloadGeneration,
ObservedRolloutID: srcV1beta1.Status.CanaryStatus.ObservedRolloutID,
RolloutHash: srcV1beta1.Status.CanaryStatus.RolloutHash,
StableRevision: srcV1beta1.Status.CanaryStatus.StableRevision,
CanaryRevision: srcV1beta1.Status.CanaryStatus.CanaryRevision,
PodTemplateHash: srcV1beta1.Status.CanaryStatus.PodTemplateHash,
CanaryReplicas: srcV1beta1.Status.CanaryStatus.CanaryReplicas,
CanaryReadyReplicas: srcV1beta1.Status.CanaryStatus.CanaryReadyReplicas,
CurrentStepIndex: srcV1beta1.Status.CanaryStatus.CurrentStepIndex,
CurrentStepState: CanaryStepState(srcV1beta1.Status.CanaryStatus.CurrentStepState),
Message: srcV1beta1.Status.CanaryStatus.Message,
LastUpdateTime: srcV1beta1.Status.CanaryStatus.LastUpdateTime,
FinalisingStep: FinalizeStateType(srcV1beta1.Status.CanaryStatus.FinalisingStep),
NextStepIndex: srcV1beta1.Status.CanaryStatus.NextStepIndex,
}
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
func ConversionToV1alpha1TrafficRoutingStrategy(src v1beta1.TrafficRoutingStrategy) (dst TrafficRoutingStrategy) {
if src.Traffic != nil {
is := intstr.FromString(*src.Traffic)
weight, _ := intstr.GetScaledValueFromIntOrPercent(&is, 100, true)
dst.Weight = utilpointer.Int32(int32(weight))
}
dst.RequestHeaderModifier = src.RequestHeaderModifier
for _, match := range src.Matches {
obj := HttpRouteMatch{
Headers: match.Headers,
}
dst.Matches = append(dst.Matches, obj)
}
return dst
}
func ConversionToV1alpha1TrafficRoutingRef(src v1beta1.TrafficRoutingRef) (dst TrafficRoutingRef) {
dst.Service = src.Service
dst.GracePeriodSeconds = src.GracePeriodSeconds
if src.Ingress != nil {
dst.Ingress = &IngressTrafficRouting{
ClassType: src.Ingress.ClassType,
Name: src.Ingress.Name,
}
}
if src.Gateway != nil {
dst.Gateway = &GatewayTrafficRouting{
HTTPRouteName: src.Gateway.HTTPRouteName,
}
}
for _, ref := range src.CustomNetworkRefs {
obj := CustomNetworkRef{
APIVersion: ref.APIVersion,
Kind: ref.Kind,
Name: ref.Name,
}
dst.CustomNetworkRefs = append(dst.CustomNetworkRefs, obj)
}
return dst
}
func (src *BatchRelease) ConvertTo(dst conversion.Hub) error {
switch t := dst.(type) {
case *v1beta1.BatchRelease:
obj := dst.(*v1beta1.BatchRelease)
obj.ObjectMeta = src.ObjectMeta
obj.Spec = v1beta1.BatchReleaseSpec{}
srcSpec := src.Spec
obj.Spec.WorkloadRef = v1beta1.ObjectRef{
APIVersion: srcSpec.TargetRef.WorkloadRef.APIVersion,
Kind: srcSpec.TargetRef.WorkloadRef.Kind,
Name: srcSpec.TargetRef.WorkloadRef.Name,
}
obj.Spec.ReleasePlan = v1beta1.ReleasePlan{
BatchPartition: srcSpec.ReleasePlan.BatchPartition,
RolloutID: srcSpec.ReleasePlan.RolloutID,
FailureThreshold: srcSpec.ReleasePlan.FailureThreshold,
FinalizingPolicy: v1beta1.FinalizingPolicyType(srcSpec.ReleasePlan.FinalizingPolicy),
}
for _, batch := range srcSpec.ReleasePlan.Batches {
o := v1beta1.ReleaseBatch{
CanaryReplicas: batch.CanaryReplicas,
}
obj.Spec.ReleasePlan.Batches = append(obj.Spec.ReleasePlan.Batches, o)
}
if srcSpec.ReleasePlan.PatchPodTemplateMetadata != nil {
obj.Spec.ReleasePlan.PatchPodTemplateMetadata = &v1beta1.PatchPodTemplateMetadata{
Annotations: map[string]string{},
Labels: map[string]string{},
}
for k, v := range srcSpec.ReleasePlan.PatchPodTemplateMetadata.Annotations {
obj.Spec.ReleasePlan.PatchPodTemplateMetadata.Annotations[k] = v
}
for k, v := range srcSpec.ReleasePlan.PatchPodTemplateMetadata.Labels {
obj.Spec.ReleasePlan.PatchPodTemplateMetadata.Labels[k] = v
}
}
if strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(PartitionRollingStyle)) {
obj.Spec.ReleasePlan.RollingStyle = v1beta1.PartitionRollingStyle
}
if strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(CanaryRollingStyle)) {
obj.Spec.ReleasePlan.RollingStyle = v1beta1.CanaryRollingStyle
}
if strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(BlueGreenRollingStyle)) {
obj.Spec.ReleasePlan.RollingStyle = v1beta1.BlueGreenRollingStyle
}
obj.Spec.ReleasePlan.EnableExtraWorkloadForCanary = srcSpec.ReleasePlan.EnableExtraWorkloadForCanary
// status
obj.Status = v1beta1.BatchReleaseStatus{
StableRevision: src.Status.StableRevision,
UpdateRevision: src.Status.UpdateRevision,
ObservedGeneration: src.Status.ObservedGeneration,
ObservedRolloutID: src.Status.ObservedRolloutID,
ObservedWorkloadReplicas: src.Status.ObservedWorkloadReplicas,
ObservedReleasePlanHash: src.Status.ObservedReleasePlanHash,
CollisionCount: src.Status.CollisionCount,
Phase: v1beta1.RolloutPhase(src.Status.Phase),
}
for _, cond := range src.Status.Conditions {
o := v1beta1.RolloutCondition{
Type: v1beta1.RolloutConditionType(cond.Type),
Status: cond.Status,
LastUpdateTime: cond.LastUpdateTime,
LastTransitionTime: cond.LastTransitionTime,
Reason: cond.Reason,
Message: cond.Message,
}
obj.Status.Conditions = append(obj.Status.Conditions, o)
}
obj.Status.CanaryStatus = v1beta1.BatchReleaseCanaryStatus{
CurrentBatchState: v1beta1.BatchReleaseBatchStateType(src.Status.CanaryStatus.CurrentBatchState),
CurrentBatch: src.Status.CanaryStatus.CurrentBatch,
BatchReadyTime: src.Status.CanaryStatus.BatchReadyTime,
UpdatedReplicas: src.Status.CanaryStatus.UpdatedReplicas,
UpdatedReadyReplicas: src.Status.CanaryStatus.UpdatedReadyReplicas,
NoNeedUpdateReplicas: src.Status.CanaryStatus.NoNeedUpdateReplicas,
}
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}
func (dst *BatchRelease) ConvertFrom(src conversion.Hub) error {
switch t := src.(type) {
case *v1beta1.BatchRelease:
srcV1beta1 := src.(*v1beta1.BatchRelease)
dst.ObjectMeta = srcV1beta1.ObjectMeta
dst.Spec = BatchReleaseSpec{}
srcSpec := srcV1beta1.Spec
dst.Spec.TargetRef.WorkloadRef = &WorkloadRef{
APIVersion: srcSpec.WorkloadRef.APIVersion,
Kind: srcSpec.WorkloadRef.Kind,
Name: srcSpec.WorkloadRef.Name,
}
dst.Spec.ReleasePlan = ReleasePlan{
BatchPartition: srcSpec.ReleasePlan.BatchPartition,
RolloutID: srcSpec.ReleasePlan.RolloutID,
FailureThreshold: srcSpec.ReleasePlan.FailureThreshold,
FinalizingPolicy: FinalizingPolicyType(srcSpec.ReleasePlan.FinalizingPolicy),
}
for _, batch := range srcSpec.ReleasePlan.Batches {
obj := ReleaseBatch{
CanaryReplicas: batch.CanaryReplicas,
}
dst.Spec.ReleasePlan.Batches = append(dst.Spec.ReleasePlan.Batches, obj)
}
if srcSpec.ReleasePlan.PatchPodTemplateMetadata != nil {
dst.Spec.ReleasePlan.PatchPodTemplateMetadata = &PatchPodTemplateMetadata{
Annotations: map[string]string{},
Labels: map[string]string{},
}
for k, v := range srcSpec.ReleasePlan.PatchPodTemplateMetadata.Annotations {
dst.Spec.ReleasePlan.PatchPodTemplateMetadata.Annotations[k] = v
}
for k, v := range srcSpec.ReleasePlan.PatchPodTemplateMetadata.Labels {
dst.Spec.ReleasePlan.PatchPodTemplateMetadata.Labels[k] = v
}
}
if dst.Annotations == nil {
dst.Annotations = map[string]string{}
}
dst.Annotations[RolloutStyleAnnotation] = strings.ToLower(string(srcV1beta1.Spec.ReleasePlan.RollingStyle))
dst.Spec.ReleasePlan.RollingStyle = RollingStyleType(srcV1beta1.Spec.ReleasePlan.RollingStyle)
dst.Spec.ReleasePlan.EnableExtraWorkloadForCanary = srcV1beta1.Spec.ReleasePlan.EnableExtraWorkloadForCanary
// status
dst.Status = BatchReleaseStatus{
StableRevision: srcV1beta1.Status.StableRevision,
UpdateRevision: srcV1beta1.Status.UpdateRevision,
ObservedGeneration: srcV1beta1.Status.ObservedGeneration,
ObservedRolloutID: srcV1beta1.Status.ObservedRolloutID,
ObservedWorkloadReplicas: srcV1beta1.Status.ObservedWorkloadReplicas,
ObservedReleasePlanHash: srcV1beta1.Status.ObservedReleasePlanHash,
CollisionCount: srcV1beta1.Status.CollisionCount,
Phase: RolloutPhase(srcV1beta1.Status.Phase),
}
for _, cond := range srcV1beta1.Status.Conditions {
obj := RolloutCondition{
Type: RolloutConditionType(cond.Type),
Status: cond.Status,
LastUpdateTime: cond.LastUpdateTime,
LastTransitionTime: cond.LastTransitionTime,
Reason: cond.Reason,
Message: cond.Message,
}
dst.Status.Conditions = append(dst.Status.Conditions, obj)
}
dst.Status.CanaryStatus = BatchReleaseCanaryStatus{
CurrentBatchState: BatchReleaseBatchStateType(srcV1beta1.Status.CanaryStatus.CurrentBatchState),
CurrentBatch: srcV1beta1.Status.CanaryStatus.CurrentBatch,
BatchReadyTime: srcV1beta1.Status.CanaryStatus.BatchReadyTime,
UpdatedReplicas: srcV1beta1.Status.CanaryStatus.UpdatedReplicas,
UpdatedReadyReplicas: srcV1beta1.Status.CanaryStatus.UpdatedReadyReplicas,
NoNeedUpdateReplicas: srcV1beta1.Status.CanaryStatus.NoNeedUpdateReplicas,
}
return nil
default:
return fmt.Errorf("unsupported type %v", t)
}
}


@@ -0,0 +1,105 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
apps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
const (
// DeploymentStrategyAnnotation is annotation for deployment,
// which is strategy fields of Advanced Deployment.
DeploymentStrategyAnnotation = "rollouts.kruise.io/deployment-strategy"
// DeploymentExtraStatusAnnotation is annotation for deployment,
// which is extra status field of Advanced Deployment.
DeploymentExtraStatusAnnotation = "rollouts.kruise.io/deployment-extra-status"
// DeploymentStableRevisionLabel is label for deployment,
// which record the stable revision during the current rolling process.
DeploymentStableRevisionLabel = "rollouts.kruise.io/stable-revision"
// AdvancedDeploymentControlLabel is label for deployment,
// which labels whether the deployment is controlled by advanced-deployment-controller.
AdvancedDeploymentControlLabel = "rollouts.kruise.io/controlled-by-advanced-deployment-controller"
)
// DeploymentStrategy is strategy field for Advanced Deployment
type DeploymentStrategy struct {
// RollingStyle define the behavior of rolling for deployment.
RollingStyle RollingStyleType `json:"rollingStyle,omitempty"`
// original deployment strategy rolling update fields
RollingUpdate *apps.RollingUpdateDeployment `json:"rollingUpdate,omitempty"`
// Paused = true will block the upgrade of Pods
Paused bool `json:"paused,omitempty"`
// Partition describe how many Pods should be updated during rollout.
// We use this field to implement partition-style rolling update.
Partition intstr.IntOrString `json:"partition,omitempty"`
}
type RollingStyleType string
const (
// PartitionRollingStyle means rolling in batches just like CloneSet, and will NOT create any extra Deployment;
PartitionRollingStyle RollingStyleType = "Partition"
// CanaryRollingStyle means rolling in canary way, and will create a canary Deployment.
CanaryRollingStyle RollingStyleType = "Canary"
// BlueGreenRollingStyle means rolling in blue-green way, and will NOT create a canary Deployment.
BlueGreenRollingStyle RollingStyleType = "BlueGreen"
)
// DeploymentExtraStatus is extra status field for Advanced Deployment
type DeploymentExtraStatus struct {
// UpdatedReadyReplicas is the number of pods that have been updated and are ready.
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas,omitempty"`
// ExpectedUpdatedReplicas is an absolute number calculated based on Partition
// and Deployment.Spec.Replicas; it means how many pods are expected to be updated under the
// current strategy.
// This field is designed to save users from falling into the details of the algorithm
// for Partition calculation.
ExpectedUpdatedReplicas int32 `json:"expectedUpdatedReplicas,omitempty"`
}
func SetDefaultDeploymentStrategy(strategy *DeploymentStrategy) {
if strategy.RollingStyle != PartitionRollingStyle {
return
}
if strategy.RollingUpdate == nil {
strategy.RollingUpdate = &apps.RollingUpdateDeployment{}
}
if strategy.RollingUpdate.MaxUnavailable == nil {
// Set MaxUnavailable as 25% by default
maxUnavailable := intstr.FromString("25%")
strategy.RollingUpdate.MaxUnavailable = &maxUnavailable
}
if strategy.RollingUpdate.MaxSurge == nil {
// Set MaxSurge as 25% by default
maxSurge := intstr.FromString("25%")
strategy.RollingUpdate.MaxSurge = &maxSurge
}
// Cannot allow maxSurge==0 && MaxUnavailable==0, otherwise, no pod can be updated when rolling update.
maxSurge, _ := intstr.GetScaledValueFromIntOrPercent(strategy.RollingUpdate.MaxSurge, 100, true)
maxUnavailable, _ := intstr.GetScaledValueFromIntOrPercent(strategy.RollingUpdate.MaxUnavailable, 100, true)
if maxSurge == 0 && maxUnavailable == 0 {
strategy.RollingUpdate = &apps.RollingUpdateDeployment{
MaxSurge: &intstr.IntOrString{Type: intstr.Int, IntVal: 0},
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
}
}
}


@@ -15,8 +15,8 @@ limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the apps v1alpha1 API group
//+kubebuilder:object:generate=true
//+groupName=rollouts.kruise.io
// +kubebuilder:object:generate=true
// +groupName=rollouts.kruise.io
package v1alpha1
import (


@@ -20,11 +20,45 @@ import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
const (
// RolloutIDLabel is set to workload labels.
// RolloutIDLabel is designed to distinguish each workload revision publications.
// The value of RolloutIDLabel corresponds Rollout.Spec.RolloutID.
RolloutIDLabel = "rollouts.kruise.io/rollout-id"
// RolloutBatchIDLabel is patched in pod labels.
// RolloutBatchIDLabel is the label key of batch id that will be patched to pods during rollout.
// Only when RolloutIDLabel is set, RolloutBatchIDLabel will be patched.
// Users can use RolloutIDLabel and RolloutBatchIDLabel to select the pods that are upgraded in some certain batch and release.
RolloutBatchIDLabel = "rollouts.kruise.io/rollout-batch-id"
// RollbackInBatchAnnotation is set to rollout annotations.
// RollbackInBatchAnnotation allows users to disable quick rollback and roll back in batch style instead.
RollbackInBatchAnnotation = "rollouts.kruise.io/rollback-in-batch"
// RolloutStyleAnnotation define the rolling behavior for Deployment.
// must be "partition" or "canary":
// * "partition" means rolling in batches just like CloneSet, and will NOT create any extra Workload;
// * "canary" means rolling in canary way, and will create a canary Workload.
// Currently, only Deployment supports both "partition" and "canary" rolling styles.
// Other workload types only support the "partition" style.
// Defaults to "canary" for Deployment.
// Defaults to "partition" for the others.
RolloutStyleAnnotation = "rollouts.kruise.io/rolling-style"
// TrafficRoutingAnnotation is the TrafficRouting Name, and it is the Rollout's TrafficRouting.
// The Rollout release will trigger the TrafficRouting release. For example:
// A microservice consists of three applications, and the invocation relationship is as follows: a -> b -> c,
// and application(a, b, c)'s gateway is trafficRouting. Any application (a, b or c) release will trigger a TrafficRouting release.
TrafficRoutingAnnotation = "rollouts.kruise.io/trafficrouting"
)
// RolloutSpec defines the desired state of Rollout
type RolloutSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
@@ -33,25 +67,23 @@ ObjectRef ObjectRef `json:"objectRef"`
ObjectRef ObjectRef `json:"objectRef"`
// rollout strategy
Strategy RolloutStrategy `json:"strategy"`
// DeprecatedRolloutID is the deprecated field.
// It is recommended that configure RolloutId in workload.annotations[rollouts.kruise.io/rollout-id].
// RolloutID should be changed before each workload revision publication.
// It is to distinguish consecutive multiple workload publications and rollout progress.
DeprecatedRolloutID string `json:"rolloutID,omitempty"`
// if a rollout disabled, then the rollout would not watch changes of workload
//+kubebuilder:validation:Optional
//+kubebuilder:default=false
Disabled bool `json:"disabled"`
}
type ObjectRef struct {
// WorkloadRef contains enough information to let you identify a workload for Rollout
// Batch release of the bypass
WorkloadRef *WorkloadRef `json:"workloadRef,omitempty"`
// revisionRef
// Fully managed batch publishing capability
//RevisionRef *ControllerRevisionRef `json:"revisionRef,omitempty"`
}
type ObjectRefType string
const (
WorkloadRefType = "workloadRef"
RevisionRefType = "revisionRef"
)
// WorkloadRef holds a reference to the Kubernetes object
type WorkloadRef struct {
// API Version of the referent
@@ -62,11 +94,6 @@ type WorkloadRef struct {
Name string `json:"name"`
}
/*type ControllerRevisionRef struct {
TargetRevisionName string `json:"targetRevisionName"`
SourceRevisionName string `json:"sourceRevisionName"`
}*/
// RolloutStrategy defines strategy to apply during next rollout
type RolloutStrategy struct {
// Paused indicates that the Rollout is paused.
@@ -74,39 +101,54 @@ type RolloutStrategy struct {
Paused bool `json:"paused,omitempty"`
// +optional
Canary *CanaryStrategy `json:"canary,omitempty"`
// +optional
// BlueGreen *BlueGreenStrategy `json:"blueGreen,omitempty"`
}
type RolloutStrategyType string
const (
RolloutStrategyCanary RolloutStrategyType = "canary"
RolloutStrategyBlueGreen RolloutStrategyType = "blueGreen"
)
// CanaryStrategy defines parameters for a Replica Based Canary
type CanaryStrategy struct {
// Steps define the order of phases to execute release in batches(20%, 40%, 60%, 80%, 100%)
// +optional
Steps []CanaryStep `json:"steps,omitempty"`
// TrafficRoutings hosts all the supported service meshes to enable more fine-grained traffic routing
// todo current only support one
TrafficRoutings []*TrafficRouting `json:"trafficRoutings,omitempty"`
// MetricsAnalysis *MetricsAnalysisBackground `json:"metricsAnalysis,omitempty"`
// and current only support one TrafficRouting
TrafficRoutings []TrafficRoutingRef `json:"trafficRoutings,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated in all upgraded pods.
// Only when FailureThreshold is satisfied can the Rollout enter the ready state.
// If FailureThreshold is nil, Rollout will use the MaxUnavailable of the workload as its
// FailureThreshold.
// Defaults to nil.
FailureThreshold *intstr.IntOrString `json:"failureThreshold,omitempty"`
// PatchPodTemplateMetadata indicates the patch configuration (e.g. labels, annotations) applied to the canary deployment's podTemplateSpec.metadata
// only supported for canary deployments
// +optional
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// canary service will not be generated if DisableGenerateCanaryService is true
DisableGenerateCanaryService bool `json:"disableGenerateCanaryService,omitempty"`
}
type PatchPodTemplateMetadata struct {
// annotations
Annotations map[string]string `json:"annotations,omitempty"`
// labels
Labels map[string]string `json:"labels,omitempty"`
}
// CanaryStep defines a step of a canary workload.
type CanaryStep struct {
// SetWeight sets what percentage of traffic the canary pods should receive
Weight int32 `json:"weight,omitempty"`
TrafficRoutingStrategy `json:",inline"`
// Replicas is the number of expected canary pods in this batch
// it can be an absolute number (ex: 5) or a percentage of total pods.
Replicas *intstr.IntOrString `json:"replicas,omitempty"`
// Pause defines a pause stage for a rollout, manual or auto
// +optional
Pause RolloutPause `json:"pause,omitempty"`
// MetricsAnalysis *RolloutAnalysis `json:"metricsAnalysis,omitempty"`
}
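A minimal sketch, written as if inside this v1alpha1 package and using the updated CanaryStep and TrafficRoutingRef shown in this diff, of a two-batch canary strategy wired to an Ingress; the service and ingress names are hypothetical:

func exampleCanaryStrategy() CanaryStrategy {
	weight20, weight50 := int32(20), int32(50)
	replicas20 := intstr.FromString("20%")
	replicas50 := intstr.FromString("50%")
	return CanaryStrategy{
		Steps: []CanaryStep{
			{
				TrafficRoutingStrategy: TrafficRoutingStrategy{Weight: &weight20},
				Replicas:               &replicas20,
				// No duration set; in typical usage this means the step waits for manual confirmation.
				Pause: RolloutPause{},
			},
			{
				TrafficRoutingStrategy: TrafficRoutingStrategy{Weight: &weight50},
				Replicas:               &replicas50,
			},
		},
		TrafficRoutings: []TrafficRoutingRef{
			// Hypothetical stable service and ingress names.
			{Service: "echo-server", Ingress: &IngressTrafficRouting{Name: "echo-server"}},
		},
	}
}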
type HttpRouteMatch struct {
// Headers specifies HTTP request header matchers. Multiple match values are
// ANDed together, meaning, a request must match all the specified headers
// to select the route.
// +kubebuilder:validation:MaxItems=16
Headers []gatewayv1beta1.HTTPHeaderMatch `json:"headers,omitempty"`
}
// RolloutPause defines a pause stage for a rollout
@ -116,24 +158,6 @@ type RolloutPause struct {
Duration *int32 `json:"duration,omitempty"`
}
// TrafficRouting hosts all the different configurations for supported service meshes to enable more fine-grained traffic routing
type TrafficRouting struct {
// Service holds the name of a service which selects pods with the stable version and doesn't select any pods with the canary version.
Service string `json:"service"`
// Optional duration in seconds for the traffic provider (e.g. nginx ingress controller) to gracefully consume the service and ingress configuration changes.
GracePeriodSeconds int32 `json:"gracePeriodSeconds,omitempty"`
// nginx, alb, istio etc.
Type string `json:"type"`
// Ingress holds Ingress specific configuration to route traffic, e.g. Nginx, Alb.
Ingress *IngressTrafficRouting `json:"ingress,omitempty"`
}
// IngressTrafficRouting configuration for ingress controller to control traffic routing
type IngressTrafficRouting struct {
// Name refers to the name of an `Ingress` resource in the same namespace as the `Rollout`
Name string `json:"name"`
}
// RolloutStatus defines the observed state of Rollout
type RolloutStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
@ -141,19 +165,12 @@ type RolloutStatus struct {
// observedGeneration is the most recent generation observed for this Rollout.
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// CanaryRevision the hash of the canary pod template
// +optional
//CanaryRevision string `json:"canaryRevision,omitempty"`
// StableRevision indicates the revision of pods that have successfully rolled out
StableRevision string `json:"stableRevision,omitempty"`
// Conditions is a list of conditions a rollout can have.
// +optional
Conditions []RolloutCondition `json:"conditions,omitempty"`
// Canary describes the state of the canary rollout
// +optional
CanaryStatus *CanaryStatus `json:"canaryStatus,omitempty"`
// Conditions is a list of conditions a rollout can have.
// +optional
//BlueGreenStatus *BlueGreenStatus `json:"blueGreenStatus,omitempty"`
Conditions []RolloutCondition `json:"conditions,omitempty"`
// Phase is the rollout phase.
Phase RolloutPhase `json:"phase,omitempty"`
// Message provides details on why the rollout is in its current phase
@ -190,11 +207,13 @@ const (
ProgressingReasonInitializing = "Initializing"
ProgressingReasonInRolling = "InRolling"
ProgressingReasonFinalising = "Finalising"
ProgressingReasonSucceeded = "Succeeded"
ProgressingReasonCompleted = "Completed"
ProgressingReasonCancelling = "Cancelling"
ProgressingReasonCanceled = "Canceled"
ProgressingReasonPaused = "Paused"
// RolloutConditionSucceeded indicates whether rollout is succeeded or failed.
RolloutConditionSucceeded RolloutConditionType = "Succeeded"
// Terminating condition
RolloutConditionTerminating RolloutConditionType = "Terminating"
// Terminating Reason
@ -206,13 +225,14 @@ const (
type CanaryStatus struct {
// ObservedWorkloadGeneration is the most recent generation observed for the workload referenced by this Rollout.
ObservedWorkloadGeneration int64 `json:"observedWorkloadGeneration,omitempty"`
// ObservedRolloutID will record the newest spec.RolloutID if status.canaryRevision equals workload.updateRevision
ObservedRolloutID string `json:"observedRolloutID,omitempty"`
// RolloutHash from rollout.spec object
RolloutHash string `json:"rolloutHash,omitempty"`
// CanaryService holds the name of a service which selects pods with the canary version and doesn't select any pods with the stable version.
CanaryService string `json:"canaryService"`
// StableRevision indicates the revision of stable pods
StableRevision string `json:"stableRevision,omitempty"`
// CanaryRevision is calculated by the rollout based on the podTemplateHash and is used by the internal logic flow.
// It may be different from the rs podTemplateHash in different k8s versions, so it cannot be used as a service selector label
// +optional
CanaryRevision string `json:"canaryRevision"`
// PodTemplateHash is used as the service selector label
PodTemplateHash string `json:"podTemplateHash"`
@ -220,17 +240,28 @@ type CanaryStatus struct {
CanaryReplicas int32 `json:"canaryReplicas"`
// CanaryReadyReplicas is the number of ready canary revision pods
CanaryReadyReplicas int32 `json:"canaryReadyReplicas"`
// CurrentStepIndex defines the current step the rollout is on. If the current step index is null, the
// controller will execute the rollout.
// NextStepIndex defines the next step the rollout will move to.
// In the normal case, NextStepIndex is equal to CurrentStepIndex + 1.
// If the current step is the last step, NextStepIndex is equal to -1.
// Before the release, NextStepIndex is also equal to -1.
// 0 is not used and won't appear in any case.
// It is allowed to patch NextStepIndex by design,
// e.g. if CurrentStepIndex is 2, the user can patch NextStepIndex to 3 (if it exists) to
// achieve a batch jump, or patch NextStepIndex to 1 to re-execute step 1.
// Patching it with a non-positive value is meaningless and will be corrected
// in the next reconciliation.
NextStepIndex int32 `json:"nextStepIndex"`
// +optional
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
// The last time this step's pods were ready.
LastUpdateTime *metav1.Time `json:"lastReadyTime,omitempty"`
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
FinalisingStep FinalizeStateType `json:"finalisingStep"`
}
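To make the NextStepIndex semantics above concrete, a small hypothetical helper (not part of the API) that mirrors the documented rules; patching status.canaryStatus.nextStepIndex to an earlier positive value re-executes that step, while non-positive values are corrected on the next reconciliation:

// defaultNextStepIndex returns CurrentStepIndex+1 in the normal case and -1 when
// the current step is the last one, matching the comment on NextStepIndex.
// Step indexes are 1-based; 0 is never used.
func defaultNextStepIndex(currentStepIndex, totalSteps int32) int32 {
	if currentStepIndex >= totalSteps {
		return -1
	}
	return currentStepIndex + 1
}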
type CanaryStepState string
type FinalizeStateType string
const (
CanaryStepStateUpgrade CanaryStepState = "StepUpgrade"
@ -249,18 +280,14 @@ const (
RolloutPhaseInitial RolloutPhase = "Initial"
// RolloutPhaseHealthy indicates a rollout is healthy
RolloutPhaseHealthy RolloutPhase = "Healthy"
// RolloutPhasePreparing indicates a rollout is preparing for next progress.
RolloutPhasePreparing RolloutPhase = "Preparing"
// RolloutPhaseProgressing indicates a rollout is not yet healthy but still making progress towards a healthy state
RolloutPhaseProgressing RolloutPhase = "Progressing"
// RolloutPhaseFinalizing indicates a rollout is finalizing
RolloutPhaseFinalizing RolloutPhase = "Finalizing"
// RolloutPhaseTerminating indicates a rollout is terminated
RolloutPhaseTerminating RolloutPhase = "Terminating"
// RolloutPhaseCompleted indicates a rollout is completed
RolloutPhaseCompleted RolloutPhase = "Completed"
// RolloutPhaseCancelled indicates a rollout is cancelled
RolloutPhaseCancelled RolloutPhase = "Cancelled"
// RolloutPhaseDisabled indicates a rollout is disabled
RolloutPhaseDisabled RolloutPhase = "Disabled"
// RolloutPhaseDisabling indicates a rollout is disabling and releasing resources
RolloutPhaseDisabling RolloutPhase = "Disabling"
)
// +genclient

View File

@ -0,0 +1,151 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
// RolloutHistorySpec defines the desired state of RolloutHistory
type RolloutHistorySpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Rollout indicates information of the rollout related to this rollouthistory
Rollout RolloutInfo `json:"rollout,omitempty"`
// Workload indicates information of the workload, such as cloneset, deployment, advanced statefulset
Workload WorkloadInfo `json:"workload,omitempty"`
// Service indicates information of the service related to the workload
Service ServiceInfo `json:"service,omitempty"`
// TrafficRouting indicates information of the traffic route related to the workload
TrafficRouting TrafficRoutingInfo `json:"trafficRouting,omitempty"`
}
type NameAndSpecData struct {
// Name indicates the name of object ref, such as rollout name, workload name, ingress name, etc.
Name string `json:"name"`
// Data indicates the spec of the object ref
// +kubebuilder:pruning:PreserveUnknownFields
// +kubebuilder:validation:Schemaless
Data runtime.RawExtension `json:"data,omitempty"`
}
// RolloutInfo indicates information of the rollout related
type RolloutInfo struct {
// RolloutID indicates the new rollout
// if there is no new RolloutID this time, ignore it and do not execute RolloutHistory
RolloutID string `json:"rolloutID"`
NameAndSpecData `json:",inline"`
}
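As a rough illustration of how NameAndSpecData carries an arbitrary spec, a sketch written as if inside this package; the names and the raw JSON are hypothetical:

func exampleRolloutInfo() RolloutInfo {
	return RolloutInfo{
		// RolloutID identifies this publication; RolloutHistory is skipped when there is no new one.
		RolloutID: "rollout-demo-1",
		NameAndSpecData: NameAndSpecData{
			Name: "rollouts-demo",
			// Data preserves the referenced object's spec as raw JSON.
			Data: runtime.RawExtension{Raw: []byte(`{"replicas":3}`)},
		},
	}
}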
// ServiceInfo indicates information of the service related
type ServiceInfo struct {
NameAndSpecData `json:",inline"`
}
// TrafficRoutingInfo indicates information of Gateway API or Ingress
type TrafficRoutingInfo struct {
// IngressRef indicates information of ingress
// +optional
Ingress *IngressInfo `json:"ingress,omitempty"`
// HTTPRouteRef indicates information of Gateway API
// +optional
HTTPRoute *HTTPRouteInfo `json:"httpRoute,omitempty"`
}
// IngressInfo indicates information of the ingress related
type IngressInfo struct {
NameAndSpecData `json:",inline"`
}
// HTTPRouteInfo indicates information of gateway API
type HTTPRouteInfo struct {
NameAndSpecData `json:",inline"`
}
// WorkloadInfo indicates information of the workload, such as cloneset, deployment, advanced statefulset
type WorkloadInfo struct {
metav1.TypeMeta `json:",inline"`
NameAndSpecData `json:",inline"`
}
// RolloutHistoryStatus defines the observed state of RolloutHistory
type RolloutHistoryStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// Phase indicates the phase of RolloutHistory, which is either "" or "completed"
Phase string `json:"phase,omitempty"`
// CanarySteps indicates the pods released in each step
CanarySteps []CanaryStepInfo `json:"canarySteps,omitempty"`
}
// CanaryStepInfo indicates the pods for a revision
type CanaryStepInfo struct {
// CanaryStepIndex indicates the step of this revision
CanaryStepIndex int32 `json:"canaryStepIndex,omitempty"`
// Pods indicates the pods information
Pods []Pod `json:"pods,omitempty"`
}
// Pod indicates the information of a pod, including name, ip, node_name.
type Pod struct {
// Name indicates the pod name
Name string `json:"name,omitempty"`
// IP indicates the pod ip
IP string `json:"ip,omitempty"`
// NodeName indicates the node on which the pod is located
NodeName string `json:"nodeName,omitempty"`
// todo
// State indicates whether the pod is ready or not
// State string `json:"state, omitempty"`
}
// Phase indicates rollouthistory phase
const (
PhaseCompleted string = "completed"
)
// +genclient
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// RolloutHistory is the Schema for the rollouthistories API
type RolloutHistory struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RolloutHistorySpec `json:"spec,omitempty"`
Status RolloutHistoryStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// RolloutHistoryList contains a list of RolloutHistory
type RolloutHistoryList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RolloutHistory `json:"items"`
}
func init() {
SchemeBuilder.Register(&RolloutHistory{}, &RolloutHistoryList{})
}

View File

@ -0,0 +1,163 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)
const (
ProgressingRolloutFinalizerPrefix = "progressing.rollouts.kruise.io"
)
// TrafficRoutingRef hosts all the different configurations for supported service meshes to enable more fine-grained traffic routing
type TrafficRoutingRef struct {
// Service holds the name of a service which selects pods with the stable version and doesn't select any pods with the canary version.
Service string `json:"service"`
// Optional duration in seconds for the traffic provider (e.g. nginx ingress controller) to gracefully consume the service and ingress configuration changes.
// +kubebuilder:default=3
GracePeriodSeconds int32 `json:"gracePeriodSeconds,omitempty"`
// Ingress holds Ingress specific configuration to route traffic, e.g. Nginx, Alb.
Ingress *IngressTrafficRouting `json:"ingress,omitempty"`
// Gateway holds Gateway specific configuration to route traffic
// Gateway configuration only supports Gateway API >= v0.4.0 (v1alpha2).
Gateway *GatewayTrafficRouting `json:"gateway,omitempty"`
// CustomNetworkRefs holds a list of custom providers to route traffic
CustomNetworkRefs []CustomNetworkRef `json:"customNetworkRefs,omitempty"`
}
// IngressTrafficRouting configuration for ingress controller to control traffic routing
type IngressTrafficRouting struct {
// ClassType refers to the type of `Ingress`.
// currently supports nginx and aliyun-alb; the default is nginx.
// +optional
ClassType string `json:"classType,omitempty"`
// Name refers to the name of an `Ingress` resource in the same namespace as the `Rollout`
Name string `json:"name"`
}
// GatewayTrafficRouting configuration for gateway api
type GatewayTrafficRouting struct {
// HTTPRouteName refers to the name of an `HTTPRoute` resource in the same namespace as the `Rollout`
HTTPRouteName *string `json:"httpRouteName,omitempty"`
// TCPRouteName *string `json:"tcpRouteName,omitempty"`
// UDPRouteName *string `json:"udpRouteName,omitempty"`
}
type TrafficRoutingSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// ObjectRef indicates trafficRouting ref
ObjectRef []TrafficRoutingRef `json:"objectRef"`
// trafficrouting strategy
Strategy TrafficRoutingStrategy `json:"strategy"`
}
type TrafficRoutingStrategy struct {
// Weight indicates the percentage of traffic the canary pods should receive
// +optional
Weight *int32 `json:"weight,omitempty"`
// Set overwrites the request with the given header (name, value)
// before the action.
//
// Input:
// GET /foo HTTP/1.1
// my-header: foo
//
// requestHeaderModifier:
// set:
// - name: "my-header"
// value: "bar"
//
// Output:
// GET /foo HTTP/1.1
// my-header: bar
//
// +optional
RequestHeaderModifier *gatewayv1beta1.HTTPHeaderFilter `json:"requestHeaderModifier,omitempty"`
// Matches define conditions used for matching the incoming HTTP requests to canary service.
// Each match is independent, i.e. this rule will be matched if **any** one of the matches is satisfied.
// For Gateway API, currently only one match is supported.
// Weight and matches cannot both take effect; if both are configured, matches takes precedence.
Matches []HttpRouteMatch `json:"matches,omitempty"`
}
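A sketch, written as if inside this package, combining the documented requestHeaderModifier with a header match; the header names and values are hypothetical:

func exampleHeaderBasedRouting() TrafficRoutingStrategy {
	return TrafficRoutingStrategy{
		// Rewrite my-header to "bar" before forwarding, as in the Input/Output example above.
		RequestHeaderModifier: &gatewayv1beta1.HTTPHeaderFilter{
			Set: []gatewayv1beta1.HTTPHeader{{Name: "my-header", Value: "bar"}},
		},
		// Route requests carrying user-group: canary to the canary service;
		// when Matches is set it takes precedence over Weight.
		Matches: []HttpRouteMatch{{
			Headers: []gatewayv1beta1.HTTPHeaderMatch{{Name: "user-group", Value: "canary"}},
		}},
	}
}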
type TrafficRoutingStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// observedGeneration is the most recent generation observed for this Rollout.
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// Phase is the trafficRouting phase.
Phase TrafficRoutingPhase `json:"phase,omitempty"`
// Message provides details on why the trafficRouting is in its current phase
Message string `json:"message,omitempty"`
}
// TrafficRoutingPhase is a set of phases that a trafficRouting can be in
type TrafficRoutingPhase string
const (
// TrafficRoutingPhaseInitial indicates a traffic routing is Initial
TrafficRoutingPhaseInitial TrafficRoutingPhase = "Initial"
// TrafficRoutingPhaseHealthy indicates a traffic routing is healthy.
// This means that Ingress and Service Resources exist.
TrafficRoutingPhaseHealthy TrafficRoutingPhase = "Healthy"
// TrafficRoutingPhaseProgressing indicates a traffic routing is not yet healthy but still making progress towards a healthy state
TrafficRoutingPhaseProgressing TrafficRoutingPhase = "Progressing"
// TrafficRoutingPhaseFinalizing indicates the trafficRouting progress is complete, and is running recycle operations.
TrafficRoutingPhaseFinalizing TrafficRoutingPhase = "Finalizing"
// TrafficRoutingPhaseTerminating indicates a traffic routing is terminated
TrafficRoutingPhaseTerminating TrafficRoutingPhase = "Terminating"
)
// +genclient
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="STATUS",type="string",JSONPath=".status.phase",description="The TrafficRouting status phase"
// +kubebuilder:printcolumn:name="MESSAGE",type="string",JSONPath=".status.message",description="The TrafficRouting canary status message"
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
// TrafficRouting is the Schema for the TrafficRoutings API
type TrafficRouting struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec TrafficRoutingSpec `json:"spec,omitempty"`
Status TrafficRoutingStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// TrafficRoutingList contains a list of TrafficRouting
type TrafficRoutingList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []TrafficRouting `json:"items"`
}
type CustomNetworkRef struct {
APIVersion string `json:"apiVersion"`
Kind string `json:"kind"`
Name string `json:"name"`
}
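For illustration, a CustomNetworkRef pointing at a custom traffic provider object; an Istio VirtualService is used here purely as a plausible example, and the name is hypothetical:

func exampleCustomNetworkRef() CustomNetworkRef {
	return CustomNetworkRef{
		APIVersion: "networking.istio.io/v1alpha3",
		Kind:       "VirtualService",
		Name:       "vs-demo",
	}
}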
func init() {
SchemeBuilder.Register(&TrafficRouting{}, &TrafficRoutingList{})
}

View File

@ -2,7 +2,7 @@
// +build !ignore_autogenerated
/*
Copyright 2022 The Kruise Authors.
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@ -22,8 +22,10 @@ limitations under the License.
package v1alpha1
import (
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/gateway-api/apis/v1beta1"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@ -60,6 +62,11 @@ func (in *BatchReleaseCanaryStatus) DeepCopyInto(out *BatchReleaseCanaryStatus)
in, out := &in.BatchReadyTime, &out.BatchReadyTime
*out = (*in).DeepCopy()
}
if in.NoNeedUpdateReplicas != nil {
in, out := &in.NoNeedUpdateReplicas, &out.NoNeedUpdateReplicas
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BatchReleaseCanaryStatus.
@ -171,6 +178,7 @@ func (in *CanaryStatus) DeepCopy() *CanaryStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStep) DeepCopyInto(out *CanaryStep) {
*out = *in
in.TrafficRoutingStrategy.DeepCopyInto(&out.TrafficRoutingStrategy)
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(intstr.IntOrString)
@ -189,6 +197,26 @@ func (in *CanaryStep) DeepCopy() *CanaryStep {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStepInfo) DeepCopyInto(out *CanaryStepInfo) {
*out = *in
if in.Pods != nil {
in, out := &in.Pods, &out.Pods
*out = make([]Pod, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryStepInfo.
func (in *CanaryStepInfo) DeepCopy() *CanaryStepInfo {
if in == nil {
return nil
}
out := new(CanaryStepInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStrategy) DeepCopyInto(out *CanaryStrategy) {
*out = *in
@ -201,15 +229,21 @@ func (in *CanaryStrategy) DeepCopyInto(out *CanaryStrategy) {
}
if in.TrafficRoutings != nil {
in, out := &in.TrafficRoutings, &out.TrafficRoutings
*out = make([]*TrafficRouting, len(*in))
*out = make([]TrafficRoutingRef, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = new(TrafficRouting)
(*in).DeepCopyInto(*out)
}
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(intstr.IntOrString)
**out = **in
}
if in.PatchPodTemplateMetadata != nil {
in, out := &in.PatchPodTemplateMetadata, &out.PatchPodTemplateMetadata
*out = new(PatchPodTemplateMetadata)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryStrategy.
@ -222,6 +256,131 @@ func (in *CanaryStrategy) DeepCopy() *CanaryStrategy {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomNetworkRef) DeepCopyInto(out *CustomNetworkRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomNetworkRef.
func (in *CustomNetworkRef) DeepCopy() *CustomNetworkRef {
if in == nil {
return nil
}
out := new(CustomNetworkRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentExtraStatus) DeepCopyInto(out *DeploymentExtraStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeploymentExtraStatus.
func (in *DeploymentExtraStatus) DeepCopy() *DeploymentExtraStatus {
if in == nil {
return nil
}
out := new(DeploymentExtraStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentStrategy) DeepCopyInto(out *DeploymentStrategy) {
*out = *in
if in.RollingUpdate != nil {
in, out := &in.RollingUpdate, &out.RollingUpdate
*out = new(v1.RollingUpdateDeployment)
(*in).DeepCopyInto(*out)
}
out.Partition = in.Partition
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeploymentStrategy.
func (in *DeploymentStrategy) DeepCopy() *DeploymentStrategy {
if in == nil {
return nil
}
out := new(DeploymentStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GatewayTrafficRouting) DeepCopyInto(out *GatewayTrafficRouting) {
*out = *in
if in.HTTPRouteName != nil {
in, out := &in.HTTPRouteName, &out.HTTPRouteName
*out = new(string)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GatewayTrafficRouting.
func (in *GatewayTrafficRouting) DeepCopy() *GatewayTrafficRouting {
if in == nil {
return nil
}
out := new(GatewayTrafficRouting)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPRouteInfo) DeepCopyInto(out *HTTPRouteInfo) {
*out = *in
in.NameAndSpecData.DeepCopyInto(&out.NameAndSpecData)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPRouteInfo.
func (in *HTTPRouteInfo) DeepCopy() *HTTPRouteInfo {
if in == nil {
return nil
}
out := new(HTTPRouteInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HttpRouteMatch) DeepCopyInto(out *HttpRouteMatch) {
*out = *in
if in.Headers != nil {
in, out := &in.Headers, &out.Headers
*out = make([]v1beta1.HTTPHeaderMatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HttpRouteMatch.
func (in *HttpRouteMatch) DeepCopy() *HttpRouteMatch {
if in == nil {
return nil
}
out := new(HttpRouteMatch)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *IngressInfo) DeepCopyInto(out *IngressInfo) {
*out = *in
in.NameAndSpecData.DeepCopyInto(&out.NameAndSpecData)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new IngressInfo.
func (in *IngressInfo) DeepCopy() *IngressInfo {
if in == nil {
return nil
}
out := new(IngressInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *IngressTrafficRouting) DeepCopyInto(out *IngressTrafficRouting) {
*out = *in
@ -237,6 +396,22 @@ func (in *IngressTrafficRouting) DeepCopy() *IngressTrafficRouting {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NameAndSpecData) DeepCopyInto(out *NameAndSpecData) {
*out = *in
in.Data.DeepCopyInto(&out.Data)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NameAndSpecData.
func (in *NameAndSpecData) DeepCopy() *NameAndSpecData {
if in == nil {
return nil
}
out := new(NameAndSpecData)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectRef) DeepCopyInto(out *ObjectRef) {
*out = *in
@ -257,6 +432,50 @@ func (in *ObjectRef) DeepCopy() *ObjectRef {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PatchPodTemplateMetadata) DeepCopyInto(out *PatchPodTemplateMetadata) {
*out = *in
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PatchPodTemplateMetadata.
func (in *PatchPodTemplateMetadata) DeepCopy() *PatchPodTemplateMetadata {
if in == nil {
return nil
}
out := new(PatchPodTemplateMetadata)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Pod) DeepCopyInto(out *Pod) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pod.
func (in *Pod) DeepCopy() *Pod {
if in == nil {
return nil
}
out := new(Pod)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ReleaseBatch) DeepCopyInto(out *ReleaseBatch) {
*out = *in
@ -286,6 +505,16 @@ func (in *ReleasePlan) DeepCopyInto(out *ReleasePlan) {
*out = new(int32)
**out = **in
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(intstr.IntOrString)
**out = **in
}
if in.PatchPodTemplateMetadata != nil {
in, out := &in.PatchPodTemplateMetadata, &out.PatchPodTemplateMetadata
*out = new(PatchPodTemplateMetadata)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReleasePlan.
@ -342,6 +571,122 @@ func (in *RolloutCondition) DeepCopy() *RolloutCondition {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutHistory) DeepCopyInto(out *RolloutHistory) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutHistory.
func (in *RolloutHistory) DeepCopy() *RolloutHistory {
if in == nil {
return nil
}
out := new(RolloutHistory)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RolloutHistory) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutHistoryList) DeepCopyInto(out *RolloutHistoryList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]RolloutHistory, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutHistoryList.
func (in *RolloutHistoryList) DeepCopy() *RolloutHistoryList {
if in == nil {
return nil
}
out := new(RolloutHistoryList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RolloutHistoryList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutHistorySpec) DeepCopyInto(out *RolloutHistorySpec) {
*out = *in
in.Rollout.DeepCopyInto(&out.Rollout)
in.Workload.DeepCopyInto(&out.Workload)
in.Service.DeepCopyInto(&out.Service)
in.TrafficRouting.DeepCopyInto(&out.TrafficRouting)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutHistorySpec.
func (in *RolloutHistorySpec) DeepCopy() *RolloutHistorySpec {
if in == nil {
return nil
}
out := new(RolloutHistorySpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutHistoryStatus) DeepCopyInto(out *RolloutHistoryStatus) {
*out = *in
if in.CanarySteps != nil {
in, out := &in.CanarySteps, &out.CanarySteps
*out = make([]CanaryStepInfo, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutHistoryStatus.
func (in *RolloutHistoryStatus) DeepCopy() *RolloutHistoryStatus {
if in == nil {
return nil
}
out := new(RolloutHistoryStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutInfo) DeepCopyInto(out *RolloutInfo) {
*out = *in
in.NameAndSpecData.DeepCopyInto(&out.NameAndSpecData)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutInfo.
func (in *RolloutInfo) DeepCopy() *RolloutInfo {
if in == nil {
return nil
}
out := new(RolloutInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutList) DeepCopyInto(out *RolloutList) {
*out = *in
@ -414,6 +759,11 @@ func (in *RolloutSpec) DeepCopy() *RolloutSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutStatus) DeepCopyInto(out *RolloutStatus) {
*out = *in
if in.CanaryStatus != nil {
in, out := &in.CanaryStatus, &out.CanaryStatus
*out = new(CanaryStatus)
(*in).DeepCopyInto(*out)
}
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]RolloutCondition, len(*in))
@ -421,11 +771,6 @@ func (in *RolloutStatus) DeepCopyInto(out *RolloutStatus) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.CanaryStatus != nil {
in, out := &in.CanaryStatus, &out.CanaryStatus
*out = new(CanaryStatus)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutStatus.
@ -458,14 +803,29 @@ func (in *RolloutStrategy) DeepCopy() *RolloutStrategy {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ServiceInfo) DeepCopyInto(out *ServiceInfo) {
*out = *in
in.NameAndSpecData.DeepCopyInto(&out.NameAndSpecData)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServiceInfo.
func (in *ServiceInfo) DeepCopy() *ServiceInfo {
if in == nil {
return nil
}
out := new(ServiceInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRouting) DeepCopyInto(out *TrafficRouting) {
*out = *in
if in.Ingress != nil {
in, out := &in.Ingress, &out.Ingress
*out = new(IngressTrafficRouting)
**out = **in
}
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
out.Status = in.Status
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRouting.
@ -478,6 +838,188 @@ func (in *TrafficRouting) DeepCopy() *TrafficRouting {
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TrafficRouting) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingInfo) DeepCopyInto(out *TrafficRoutingInfo) {
*out = *in
if in.Ingress != nil {
in, out := &in.Ingress, &out.Ingress
*out = new(IngressInfo)
(*in).DeepCopyInto(*out)
}
if in.HTTPRoute != nil {
in, out := &in.HTTPRoute, &out.HTTPRoute
*out = new(HTTPRouteInfo)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingInfo.
func (in *TrafficRoutingInfo) DeepCopy() *TrafficRoutingInfo {
if in == nil {
return nil
}
out := new(TrafficRoutingInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingList) DeepCopyInto(out *TrafficRoutingList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]TrafficRouting, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingList.
func (in *TrafficRoutingList) DeepCopy() *TrafficRoutingList {
if in == nil {
return nil
}
out := new(TrafficRoutingList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TrafficRoutingList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingRef) DeepCopyInto(out *TrafficRoutingRef) {
*out = *in
if in.Ingress != nil {
in, out := &in.Ingress, &out.Ingress
*out = new(IngressTrafficRouting)
**out = **in
}
if in.Gateway != nil {
in, out := &in.Gateway, &out.Gateway
*out = new(GatewayTrafficRouting)
(*in).DeepCopyInto(*out)
}
if in.CustomNetworkRefs != nil {
in, out := &in.CustomNetworkRefs, &out.CustomNetworkRefs
*out = make([]CustomNetworkRef, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingRef.
func (in *TrafficRoutingRef) DeepCopy() *TrafficRoutingRef {
if in == nil {
return nil
}
out := new(TrafficRoutingRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingSpec) DeepCopyInto(out *TrafficRoutingSpec) {
*out = *in
if in.ObjectRef != nil {
in, out := &in.ObjectRef, &out.ObjectRef
*out = make([]TrafficRoutingRef, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.Strategy.DeepCopyInto(&out.Strategy)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingSpec.
func (in *TrafficRoutingSpec) DeepCopy() *TrafficRoutingSpec {
if in == nil {
return nil
}
out := new(TrafficRoutingSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingStatus) DeepCopyInto(out *TrafficRoutingStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingStatus.
func (in *TrafficRoutingStatus) DeepCopy() *TrafficRoutingStatus {
if in == nil {
return nil
}
out := new(TrafficRoutingStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingStrategy) DeepCopyInto(out *TrafficRoutingStrategy) {
*out = *in
if in.Weight != nil {
in, out := &in.Weight, &out.Weight
*out = new(int32)
**out = **in
}
if in.RequestHeaderModifier != nil {
in, out := &in.RequestHeaderModifier, &out.RequestHeaderModifier
*out = new(v1beta1.HTTPHeaderFilter)
(*in).DeepCopyInto(*out)
}
if in.Matches != nil {
in, out := &in.Matches, &out.Matches
*out = make([]HttpRouteMatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingStrategy.
func (in *TrafficRoutingStrategy) DeepCopy() *TrafficRoutingStrategy {
if in == nil {
return nil
}
out := new(TrafficRoutingStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadInfo) DeepCopyInto(out *WorkloadInfo) {
*out = *in
out.TypeMeta = in.TypeMeta
in.NameAndSpecData.DeepCopyInto(&out.NameAndSpecData)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadInfo.
func (in *WorkloadInfo) DeepCopy() *WorkloadInfo {
if in == nil {
return nil
}
out := new(WorkloadInfo)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadRef) DeepCopyInto(out *WorkloadRef) {
*out = *in

View File

@ -0,0 +1,159 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
// ReleasePlan defines the details of the release plan
type ReleasePlan struct {
// Batches is the details on each batch of the ReleasePlan.
// Users can specify their batch plan in this field, such as:
// batches:
// - canaryReplicas: 1 # batch 0
// - canaryReplicas: 2 # batch 1
// - canaryReplicas: 5 # batch 2
// Note that these canaryReplicas should be a non-decreasing sequence.
// +optional
Batches []ReleaseBatch `json:"batches"`
// All pods in the batches up to the batchPartition (inclusive) will have
// the target resource specification while the rest remain at the stable revision.
// This is designed for operators to manually roll out.
// Default is nil, which means no partition and all batches will be released.
// BatchPartition starts from 0.
// +optional
BatchPartition *int32 `json:"batchPartition,omitempty"`
// RolloutID indicates an id for each rollout progress
RolloutID string `json:"rolloutID,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated in all upgraded pods.
// Only when FailureThreshold is satisfied can the Rollout enter the ready state.
// If FailureThreshold is nil, Rollout will use the MaxUnavailable of the workload as its
// FailureThreshold.
// Defaults to nil.
FailureThreshold *intstr.IntOrString `json:"failureThreshold,omitempty"`
// FinalizingPolicy defines the behavior of the controller when the phase enters Finalizing
// Defaults to "Immediate"
FinalizingPolicy FinalizingPolicyType `json:"finalizingPolicy,omitempty"`
// PatchPodTemplateMetadata indicates the patch configuration (e.g. labels, annotations) applied to the canary deployment's podTemplateSpec.metadata
// only supported for canary deployments
// +optional
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// RollingStyle can be "Canary", "Partition" or "BlueGreen"
RollingStyle RollingStyleType `json:"rollingStyle,omitempty"`
// EnableExtraWorkloadForCanary indicates whether to create an extra workload for the canary.
// True corresponds to RollingStyle "Canary".
// False corresponds to RollingStyle "Partition".
// Ignored in BlueGreen-style.
// This field is about to be deprecated; use RollingStyle instead.
// If both of them are set, the controller will only consider this
// field when RollingStyle is empty
EnableExtraWorkloadForCanary bool `json:"enableExtraWorkloadForCanary"`
}
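A sketch, written as if inside this v1beta1 package, of the batch plan described in the comment above (1, 2, 5 canary replicas), held at batch 0 until an operator advances the partition; the rollout ID is hypothetical:

func exampleReleasePlan() ReleasePlan {
	partition := int32(0) // hold at batch 0 until manually advanced
	return ReleasePlan{
		Batches: []ReleaseBatch{
			{CanaryReplicas: intstr.FromInt(1)},
			{CanaryReplicas: intstr.FromInt(2)},
			{CanaryReplicas: intstr.FromInt(5)},
		},
		BatchPartition:   &partition,
		RolloutID:        "rollout-demo-1",
		FinalizingPolicy: ImmediateFinalizingPolicyType,
	}
}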
type FinalizingPolicyType string
const (
// WaitResumeFinalizingPolicyType will wait for the workload to be resumed, which means the
// controller will hold at the Finalizing phase until all pods of the workload are upgraded.
// WaitResumeFinalizingPolicyType only works in the canary-style BatchRelease controller.
WaitResumeFinalizingPolicyType FinalizingPolicyType = "WaitResume"
// ImmediateFinalizingPolicyType will not wait for the workload to be resumed.
ImmediateFinalizingPolicyType FinalizingPolicyType = "Immediate"
)
// ReleaseBatch is used to describe how each batch release should be performed
type ReleaseBatch struct {
// CanaryReplicas is the number of upgraded pods that this batch should have.
// it can be an absolute number (ex: 5) or a percentage of workload replicas.
// batches[i].canaryReplicas should be less than or equal to batches[j].canaryReplicas if i < j.
CanaryReplicas intstr.IntOrString `json:"canaryReplicas"`
}
// BatchReleaseStatus defines the observed state of a release plan
type BatchReleaseStatus struct {
// Conditions represents the observed process state of each phase during executing the release plan.
Conditions []RolloutCondition `json:"conditions,omitempty"`
// CanaryStatus describes the state of the canary rollout.
CanaryStatus BatchReleaseCanaryStatus `json:"canaryStatus,omitempty"`
// StableRevision is the pod-template-hash of stable revision pod template.
StableRevision string `json:"stableRevision,omitempty"`
// UpdateRevision is the pod-template-hash of update revision pod template.
UpdateRevision string `json:"updateRevision,omitempty"`
// ObservedGeneration is the most recent generation observed for this BatchRelease.
// It corresponds to this BatchRelease's generation, which is updated on mutation
// by the API Server; its generation will only increase by 1 if the BatchRelease Spec was changed.
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// ObservedRolloutID is the most recent rollout-id observed for this BatchRelease.
// If RolloutID was changed, we will restart to roll out from batch 0,
// to ensure the batch-id and rollout-id labels of Pods are correct.
ObservedRolloutID string `json:"observedRolloutID,omitempty"`
// ObservedWorkloadReplicas is observed replicas of target referenced workload.
// This field is designed to deal with scaling event during rollout, if this field changed,
// it means that the workload is scaling during rollout.
ObservedWorkloadReplicas int32 `json:"observedWorkloadReplicas,omitempty"`
// Count of hash collisions for creating canary Deployment. The controller uses this
// field as a collision avoidance mechanism when it needs to create the name for the
// newest canary Deployment.
// +optional
CollisionCount *int32 `json:"collisionCount,omitempty"`
// ObservedReleasePlanHash is a hash code of observed itself spec.releasePlan.
ObservedReleasePlanHash string `json:"observedReleasePlanHash,omitempty"`
// Phase is the release plan phase, which indicates the current state of release
// plan state machine in BatchRelease controller.
Phase RolloutPhase `json:"phase,omitempty"`
// Message provides details on why the rollout is in its current phase
Message string `json:"message,omitempty"`
}
type BatchReleaseCanaryStatus struct {
// CurrentBatchState indicates the release state of the current batch.
CurrentBatchState BatchReleaseBatchStateType `json:"batchState,omitempty"`
// The current batch the rollout is working on or blocked at; it starts from 0
CurrentBatch int32 `json:"currentBatch"`
// BatchReadyTime is the ready timestamp of the current batch or the last batch.
// This field is updated once a batch is ready, and the batches[x].pausedSeconds
// relies on this field to calculate the real-time duration.
BatchReadyTime *metav1.Time `json:"batchReadyTime,omitempty"`
// UpdatedReplicas is the number of upgraded Pods.
UpdatedReplicas int32 `json:"updatedReplicas,omitempty"`
// UpdatedReadyReplicas is the number of upgraded Pods that have a Ready Condition.
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas,omitempty"`
// The number of pods that do not need to be rolled back in a rollback scenario.
NoNeedUpdateReplicas *int32 `json:"noNeedUpdateReplicas,omitempty"`
}
type BatchReleaseBatchStateType string
const (
// UpgradingBatchState indicates that the current batch is in the pod-upgrading state
UpgradingBatchState BatchReleaseBatchStateType = "Upgrading"
// VerifyingBatchState indicates that the current batch is being verified for readiness
VerifyingBatchState BatchReleaseBatchStateType = "Verifying"
// ReadyBatchState indicates that the current batch is in the ready state
ReadyBatchState BatchReleaseBatchStateType = "Ready"
)
const (
// RolloutPhasePreparing indicates a rollout is preparing for next progress.
RolloutPhasePreparing RolloutPhase = "Preparing"
// RolloutPhaseFinalizing indicates a rollout is finalizing
RolloutPhaseFinalizing RolloutPhase = "Finalizing"
// RolloutPhaseCompleted indicates a rollout is completed/cancelled/terminated
RolloutPhaseCompleted RolloutPhase = "Completed"
)

View File

@ -0,0 +1,61 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// +genclient
// +k8s:openapi-gen=true
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:storageversion
// +kubebuilder:printcolumn:name="KIND",type=string,JSONPath=`.spec.targetReference.workloadRef.kind`
// +kubebuilder:printcolumn:name="PHASE",type=string,JSONPath=`.status.phase`
// +kubebuilder:printcolumn:name="BATCH",type=integer,JSONPath=`.status.canaryStatus.currentBatch`
// +kubebuilder:printcolumn:name="BATCH-STATE",type=string,JSONPath=`.status.canaryStatus.batchState`
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
type BatchRelease struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec BatchReleaseSpec `json:"spec,omitempty"`
Status BatchReleaseStatus `json:"status,omitempty"`
}
// BatchReleaseSpec defines how to describe an update between different compRevision
type BatchReleaseSpec struct {
// WorkloadRef contains enough information to let you identify a workload for Rollout
// Batch release in bypass mode
WorkloadRef ObjectRef `json:"workloadRef,omitempty"`
// ReleasePlan is the details on how to rollout the resources
ReleasePlan ReleasePlan `json:"releasePlan"`
}
// BatchReleaseList contains a list of BatchRelease
// +kubebuilder:object:root=true
type BatchReleaseList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BatchRelease `json:"items"`
}
func init() {
SchemeBuilder.Register(&BatchRelease{}, &BatchReleaseList{})
}

21
api/v1beta1/convertion.go Normal file
View File

@ -0,0 +1,21 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
func (*Rollout) Hub() {}
func (*BatchRelease) Hub() {}

View File

@ -0,0 +1,115 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
apps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
const (
// DeploymentStrategyAnnotation is an annotation for the deployment,
// which holds the strategy fields of an Advanced Deployment.
DeploymentStrategyAnnotation = "rollouts.kruise.io/deployment-strategy"
// DeploymentExtraStatusAnnotation is an annotation for the deployment,
// which holds the extra status fields of an Advanced Deployment.
DeploymentExtraStatusAnnotation = "rollouts.kruise.io/deployment-extra-status"
// DeploymentStableRevisionLabel is a label for the deployment,
// which records the stable revision during the current rolling process.
DeploymentStableRevisionLabel = "rollouts.kruise.io/stable-revision"
// AdvancedDeploymentControlLabel is a label for the deployment,
// which indicates whether the deployment is controlled by the advanced-deployment-controller.
AdvancedDeploymentControlLabel = "rollouts.kruise.io/controlled-by-advanced-deployment-controller"
// OriginalDeploymentStrategyAnnotation is an annotation for the workload in a BlueGreen release;
// it stores the original settings of the workload, which are used to restore the workload
OriginalDeploymentStrategyAnnotation = "rollouts.kruise.io/original-deployment-strategy"
// MaxProgressSeconds is the value we set for ProgressDeadlineSeconds
// MaxReadySeconds is the value we set for MinReadySeconds, which is one less than ProgressDeadlineSeconds
// MaxInt32: 2147483647, ≈ 68 years
MaxProgressSeconds = 1<<31 - 1
MaxReadySeconds = MaxProgressSeconds - 1
)
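A sketch, written as if inside this package and assuming (as the annotation name suggests) that the strategy is stored JSON-encoded on the Deployment; it needs "encoding/json" in addition to the imports shown, and is an illustration rather than the controller's actual code path:

func annotateStrategy(d *apps.Deployment, strategy DeploymentStrategy) error {
	raw, err := json.Marshal(strategy)
	if err != nil {
		return err
	}
	if d.Annotations == nil {
		d.Annotations = map[string]string{}
	}
	// Assumption: the advanced deployment controller reads the strategy back from this annotation.
	d.Annotations[DeploymentStrategyAnnotation] = string(raw)
	return nil
}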
// DeploymentStrategy is strategy field for Advanced Deployment
type DeploymentStrategy struct {
// RollingStyle defines the behavior of rolling for the deployment.
RollingStyle RollingStyleType `json:"rollingStyle,omitempty"`
// original deployment strategy rolling update fields
RollingUpdate *apps.RollingUpdateDeployment `json:"rollingUpdate,omitempty"`
// Paused = true will block the upgrade of Pods
Paused bool `json:"paused,omitempty"`
// Partition describes how many Pods should be updated during the rollout.
// We use this field to implement partition-style rolling update.
Partition intstr.IntOrString `json:"partition,omitempty"`
}
type RollingStyleType string
const (
// PartitionRollingStyle means rolling in batches just like CloneSet, and will NOT create any extra Deployment;
PartitionRollingStyle RollingStyleType = "Partition"
// CanaryRollingStyle means rolling in canary way, and will create a canary Deployment.
CanaryRollingStyle RollingStyleType = "Canary"
// BlueGreenRollingStyle means rolling in blue-green way, and will NOT create an extra Deployment.
BlueGreenRollingStyle RollingStyleType = "BlueGreen"
)
// DeploymentExtraStatus is extra status field for Advanced Deployment
type DeploymentExtraStatus struct {
// UpdatedReadyReplicas is the number of pods that have been updated and are ready.
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas,omitempty"`
// ExpectedUpdatedReplicas is an absolute number calculated based on Partition
// and Deployment.Spec.Replicas, meaning how many pods are expected to be updated under
// the current strategy.
// This field is designed to save users from dealing with the details of the
// algorithm for Partition calculation.
ExpectedUpdatedReplicas int32 `json:"expectedUpdatedReplicas,omitempty"`
}
func SetDefaultDeploymentStrategy(strategy *DeploymentStrategy) {
if strategy.RollingStyle != PartitionRollingStyle {
return
}
if strategy.RollingUpdate == nil {
strategy.RollingUpdate = &apps.RollingUpdateDeployment{}
}
if strategy.RollingUpdate.MaxUnavailable == nil {
// Set MaxUnavailable as 25% by default
maxUnavailable := intstr.FromString("25%")
strategy.RollingUpdate.MaxUnavailable = &maxUnavailable
}
if strategy.RollingUpdate.MaxSurge == nil {
// Set MaxSurge as 25% by default
maxSurge := intstr.FromString("25%")
strategy.RollingUpdate.MaxSurge = &maxSurge
}
// Cannot allow maxSurge==0 && maxUnavailable==0, otherwise no Pod can be updated during a rolling update.
maxSurge, _ := intstr.GetScaledValueFromIntOrPercent(strategy.RollingUpdate.MaxSurge, 100, true)
maxUnavailable, _ := intstr.GetScaledValueFromIntOrPercent(strategy.RollingUpdate.MaxUnavailable, 100, true)
if maxSurge == 0 && maxUnavailable == 0 {
strategy.RollingUpdate = &apps.RollingUpdateDeployment{
MaxSurge: &intstr.IntOrString{Type: intstr.Int, IntVal: 0},
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
}
}
}
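// Minimal usage sketch (the function name below is illustrative): defaulting a
// Partition-style strategy. With no rollingUpdate set, both maxUnavailable and
// maxSurge default to 25%; the defaulting is a no-op for Canary and BlueGreen styles.
func exampleDefaultPartitionStrategy() DeploymentStrategy {
	s := DeploymentStrategy{
		RollingStyle: PartitionRollingStyle,
		Partition:    intstr.FromString("20%"),
	}
	SetDefaultDeploymentStrategy(&s)
	// After defaulting: s.RollingUpdate.MaxUnavailable == "25%", s.RollingUpdate.MaxSurge == "25%"
	return s
}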

View File

@ -0,0 +1,43 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1beta1 contains API Schema definitions for the apps v1beta1 API group
// +kubebuilder:object:generate=true
// +groupName=rollouts.kruise.io
package v1beta1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
var (
// GroupVersion is group version used to register these objects
GroupVersion = schema.GroupVersion{Group: "rollouts.kruise.io", Version: "v1beta1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
SchemeGroupVersion = GroupVersion
)
// Resource is required by pkg/client/listers/...
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
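// Minimal usage sketch, assuming a controller-runtime based client (variable
// names are illustrative): register this group-version so the client can decode
// and write Rollout objects.
//
//	scheme := runtime.NewScheme()
//	utilruntime.Must(AddToScheme(scheme))
//	c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
//	// c can now Get/List/Update rollouts.kruise.io/v1beta1 resources.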

View File

@ -0,0 +1,616 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
"reflect"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
const (
// RolloutIDLabel is set to workload labels.
// RolloutIDLabel is designed to distinguish each workload revision publication.
// The value of RolloutIDLabel corresponds to Rollout.Spec.RolloutID.
RolloutIDLabel = "rollouts.kruise.io/rollout-id"
// RolloutBatchIDLabel is patched in pod labels.
// RolloutBatchIDLabel is the label key of batch id that will be patched to pods during rollout.
// Only when RolloutIDLabel is set, RolloutBatchIDLabel will be patched.
// Users can use RolloutIDLabel and RolloutBatchIDLabel together to select the pods that were upgraded in a specific batch of a specific release.
RolloutBatchIDLabel = "rollouts.kruise.io/rollout-batch-id"
// RollbackInBatchAnnotation is set to rollout annotations.
// RollbackInBatchAnnotation allows users to disable quick rollback; when set, the rollback is performed batch by batch instead.
RollbackInBatchAnnotation = "rollouts.kruise.io/rollback-in-batch"
)
// RolloutSpec defines the desired state of Rollout
type RolloutSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
// WorkloadRef contains enough information to let you identify a workload for Rollout
// Batch release of the bypass
WorkloadRef ObjectRef `json:"workloadRef"`
// rollout strategy
Strategy RolloutStrategy `json:"strategy"`
// if a rollout is disabled, it will not watch changes of the workload
//+kubebuilder:validation:Optional
//+kubebuilder:default=false
Disabled bool `json:"disabled"`
}
// ObjectRef holds a reference to the Kubernetes object
type ObjectRef struct {
// API Version of the referent
APIVersion string `json:"apiVersion"`
// Kind of the referent
Kind string `json:"kind"`
// Name of the referent
Name string `json:"name"`
}
// RolloutStrategy defines strategy to apply during next rollout
type RolloutStrategy struct {
// Paused indicates that the Rollout is paused.
// Default value is false
Paused bool `json:"paused,omitempty"`
// +optional
Canary *CanaryStrategy `json:"canary,omitempty"`
// +optional
BlueGreen *BlueGreenStrategy `json:"blueGreen,omitempty" protobuf:"bytes,1,opt,name=blueGreen"`
}
// Get the rolling style based on the strategy
func (r *RolloutStrategy) GetRollingStyle() RollingStyleType {
if r.BlueGreen != nil {
return BlueGreenRollingStyle
}
//NOTE - even if EnableExtraWorkloadForCanary is true, as long as the workload is not a Deployment,
//we won't do a canary release; BatchRelease will treat it as a Partition release
if r.Canary.EnableExtraWorkloadForCanary {
return CanaryRollingStyle
}
return PartitionRollingStyle
}
// Using the single field EnableExtraWorkloadForCanary to distinguish partition-style from canary-style
// is not enough: for example, a v1alpha1 Rollout can be converted to a v1beta1 Rollout
// with EnableExtraWorkloadForCanary set to true, even if the workloadRef is a CloneSet (which doesn't support canary release).
func IsRealPartition(rollout *Rollout) bool {
if rollout.Spec.Strategy.IsEmptyRelease() {
return false
}
estimation := rollout.Spec.Strategy.GetRollingStyle()
if estimation == BlueGreenRollingStyle {
return false
}
targetRef := rollout.Spec.WorkloadRef
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() &&
estimation == CanaryRollingStyle {
return false
}
return true
}
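// Illustrative sketch (the function name is hypothetical): how the rolling style
// is inferred from the strategy. BlueGreen always wins; a Canary strategy counts
// as "Canary" only when EnableExtraWorkloadForCanary is true, otherwise it is
// treated as Partition (and IsRealPartition additionally excludes canary-style Deployments).
func exampleRollingStyles() []RollingStyleType {
	blueGreen := RolloutStrategy{BlueGreen: &BlueGreenStrategy{}}
	canary := RolloutStrategy{Canary: &CanaryStrategy{EnableExtraWorkloadForCanary: true}}
	partition := RolloutStrategy{Canary: &CanaryStrategy{}}
	return []RollingStyleType{
		blueGreen.GetRollingStyle(), // BlueGreenRollingStyle
		canary.GetRollingStyle(),    // CanaryRollingStyle
		partition.GetRollingStyle(), // PartitionRollingStyle
	}
}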
// r.GetRollingStyle() == BlueGreenRollingStyle
func (r *RolloutStrategy) IsBlueGreenRelease() bool {
return r.GetRollingStyle() == BlueGreenRollingStyle
}
// r.GetRollingStyle() == CanaryRollingStyle || r.GetRollingStyle() == PartitionRollingStyle
func (r *RolloutStrategy) IsCanaryStragegy() bool {
return r.GetRollingStyle() == CanaryRollingStyle || r.GetRollingStyle() == PartitionRollingStyle
}
func (r *RolloutStrategy) IsEmptyRelease() bool {
return r.BlueGreen == nil && r.Canary == nil
}
// Get the steps based on the rolling style
func (r *RolloutStrategy) GetSteps() []CanaryStep {
switch r.GetRollingStyle() {
case BlueGreenRollingStyle:
return r.BlueGreen.Steps
case CanaryRollingStyle, PartitionRollingStyle:
return r.Canary.Steps
default:
return nil
}
}
// Get the traffic routing based on the rolling style
func (r *RolloutStrategy) GetTrafficRouting() []TrafficRoutingRef {
switch r.GetRollingStyle() {
case BlueGreenRollingStyle:
return r.BlueGreen.TrafficRoutings
case CanaryRollingStyle, PartitionRollingStyle:
return r.Canary.TrafficRoutings
default:
return nil
}
}
// Check if there are traffic routings
func (r *RolloutStrategy) HasTrafficRoutings() bool {
return len(r.GetTrafficRouting()) > 0
}
// Check the value of DisableGenerateCanaryService
func (r *RolloutStrategy) DisableGenerateCanaryService() bool {
switch r.GetRollingStyle() {
case BlueGreenRollingStyle:
return r.BlueGreen.DisableGenerateCanaryService
case CanaryRollingStyle, PartitionRollingStyle:
return r.Canary.DisableGenerateCanaryService
default:
return false
}
}
// BlueGreenStrategy defines parameters for Blue Green Release
type BlueGreenStrategy struct {
// Steps define the order of phases to execute the release in batches (20%, 40%, 60%, 80%, 100%)
// +optional
Steps []CanaryStep `json:"steps,omitempty"`
// TrafficRoutings supports Ingress, Gateway API and custom network resources (e.g. Istio, APISIX) to enable more fine-grained traffic routing;
// currently only one TrafficRouting is supported
TrafficRoutings []TrafficRoutingRef `json:"trafficRoutings,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated in all upgraded pods.
// Only when FailureThreshold is satisfied can the Rollout enter the ready state.
// If FailureThreshold is nil, Rollout will use the MaxUnavailable of workload as its
// FailureThreshold.
// Defaults to nil.
FailureThreshold *intstr.IntOrString `json:"failureThreshold,omitempty"`
// TrafficRoutingRef is TrafficRouting's Name
TrafficRoutingRef string `json:"trafficRoutingRef,omitempty"`
// canary service will not be generated if DisableGenerateCanaryService is true
DisableGenerateCanaryService bool `json:"disableGenerateCanaryService,omitempty"`
}
// CanaryStrategy defines parameters for a Replica Based Canary
type CanaryStrategy struct {
// Steps define the order of phases to execute the release in batches (20%, 40%, 60%, 80%, 100%)
// +optional
Steps []CanaryStep `json:"steps,omitempty"`
// TrafficRoutings supports Ingress, Gateway API and custom network resources (e.g. Istio, APISIX) to enable more fine-grained traffic routing;
// currently only one TrafficRouting is supported
TrafficRoutings []TrafficRoutingRef `json:"trafficRoutings,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated in all upgraded pods.
// Only when FailureThreshold is satisfied can the Rollout enter the ready state.
// If FailureThreshold is nil, Rollout will use the MaxUnavailable of workload as its
// FailureThreshold.
// Defaults to nil.
FailureThreshold *intstr.IntOrString `json:"failureThreshold,omitempty"`
// PatchPodTemplateMetadata indicates the patch configuration (e.g. labels, annotations) applied to the canary deployment's podTemplateSpec.metadata;
// only supported for canary deployments
// +optional
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// If true, an extra deployment will be created for the canary, such as workload-demo-canary.
// Once the user verifies that the canary version is ready, the canary deployment is removed and the deployment workload-demo is released in full.
// Currently only the native Kubernetes Deployment is supported
EnableExtraWorkloadForCanary bool `json:"enableExtraWorkloadForCanary,omitempty"`
// TrafficRoutingRef is TrafficRouting's Name
TrafficRoutingRef string `json:"trafficRoutingRef,omitempty"`
// canary service will not be generated if DisableGenerateCanaryService is true
DisableGenerateCanaryService bool `json:"disableGenerateCanaryService,omitempty"`
}
type PatchPodTemplateMetadata struct {
// annotations
Annotations map[string]string `json:"annotations,omitempty"`
// labels
Labels map[string]string `json:"labels,omitempty"`
}
// CanaryStep defines a step of a canary workload.
type CanaryStep struct {
TrafficRoutingStrategy `json:",inline"`
// Replicas is the number of expected canary pods in this batch;
// it can be an absolute number (ex: 5) or a percentage of the total pods.
Replicas *intstr.IntOrString `json:"replicas,omitempty"`
// Pause defines a pause stage for a rollout, manual or auto
// +optional
Pause RolloutPause `json:"pause,omitempty"`
}
type TrafficRoutingStrategy struct {
// Traffic indicates what percentage of traffic the canary pods should receive.
// The value is a string-typed percentage, e.g. 5%.
// +optional
Traffic *string `json:"traffic,omitempty"`
// Set overwrites the request with the given header (name, value)
// before the action.
//
// Input:
// GET /foo HTTP/1.1
// my-header: foo
//
// requestHeaderModifier:
// set:
// - name: "my-header"
// value: "bar"
//
// Output:
// GET /foo HTTP/1.1
// my-header: bar
//
// +optional
RequestHeaderModifier *gatewayv1beta1.HTTPHeaderFilter `json:"requestHeaderModifier,omitempty"`
// Matches define conditions used for matching incoming HTTP requests to the canary service.
// Each match is independent, i.e. this rule will be matched as long as **any** one of the matches is satisfied.
//
// Traffic (weight-based routing) and Matches cannot be used simultaneously;
// if both are configured, Matches takes precedence.
Matches []HttpRouteMatch `json:"matches,omitempty"`
}
type HttpRouteMatch struct {
// Path specifies a HTTP request path matcher.
// Supported list:
// - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
// - GatewayAPI: If path is defined, the whole HttpRouteMatch will be used as a standalone matcher
//
// +optional
Path *gatewayv1beta1.HTTPPathMatch `json:"path,omitempty"`
// Headers specifies HTTP request header matchers. Multiple match values are
// ANDed together, meaning, a request must match all the specified headers
// to select the route.
//
// +listType=map
// +listMapKey=name
// +optional
// +kubebuilder:validation:MaxItems=16
Headers []gatewayv1beta1.HTTPHeaderMatch `json:"headers,omitempty"`
// QueryParams specifies HTTP query parameter matchers. Multiple match
// values are ANDed together, meaning, a request must match all the
// specified query parameters to select the route.
// Supported list:
// - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
// - MSE Ingress: https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/annotations-supported-by-mse-ingress-gateways-1
// Header/Cookie > QueryParams
// - Gateway API
//
// +listType=map
// +listMapKey=name
// +optional
// +kubebuilder:validation:MaxItems=16
QueryParams []gatewayv1beta1.HTTPQueryParamMatch `json:"queryParams,omitempty"`
}
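// Illustrative sketch (names and values are placeholders): a step that directs
// 20% of traffic to the canary and, independently, matches requests carrying a
// specific header. As noted above, if both Traffic and Matches are configured,
// Matches takes precedence.
func exampleCanaryStep() CanaryStep {
	traffic := "20%"
	replicas := intstr.FromString("20%")
	pause := int32(60)
	return CanaryStep{
		TrafficRoutingStrategy: TrafficRoutingStrategy{
			Traffic: &traffic,
			Matches: []HttpRouteMatch{{
				Headers: []gatewayv1beta1.HTTPHeaderMatch{{Name: "x-canary", Value: "true"}},
			}},
		},
		Replicas: &replicas,
		// Wait 60 seconds before moving on to the next step.
		Pause: RolloutPause{Duration: &pause},
	}
}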
// RolloutPause defines a pause stage for a rollout
type RolloutPause struct {
// Duration is the amount of time to wait before moving to the next step.
// +optional
Duration *int32 `json:"duration,omitempty"`
}
// RolloutStatus defines the observed state of Rollout
type RolloutStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
// observedGeneration is the most recent generation observed for this Rollout.
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// Canary describes the state of the canary rollout
// +optional
CanaryStatus *CanaryStatus `json:"canaryStatus,omitempty"`
// BlueGreen describes the state of the blueGreen rollout
// +optional
BlueGreenStatus *BlueGreenStatus `json:"blueGreenStatus,omitempty"`
// Conditions a list of conditions a rollout can have.
// +optional
Conditions []RolloutCondition `json:"conditions,omitempty"`
// Phase is the rollout phase.
Phase RolloutPhase `json:"phase,omitempty"`
// Message provides details on why the rollout is in its current phase
Message string `json:"message,omitempty"`
// These two values are synchronized with the same fields in CanaryStatus or BlueGreenStatus;
// they are mainly used to provide info for the kubectl get command
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
}
// RolloutCondition describes the state of a rollout at a certain point.
type RolloutCondition struct {
// Type of rollout condition.
Type RolloutConditionType `json:"type"`
// Status of the condition, one of True, False, Unknown.
Status corev1.ConditionStatus `json:"status"`
// The last time this condition was updated.
LastUpdateTime metav1.Time `json:"lastUpdateTime,omitempty"`
// Last time the condition transitioned from one status to another.
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
// The reason for the condition's last transition.
Reason string `json:"reason"`
// A human readable message indicating details about the transition.
Message string `json:"message"`
}
// RolloutConditionType defines the conditions of Rollout
type RolloutConditionType string
// These are valid conditions of a rollout.
//
//goland:noinspection GoUnusedConst
const (
// RolloutConditionProgressing means the rollout is progressing. Progress for a rollout is
// considered when a new replica set is created or adopted, when pods scale
// up or old pods scale down, or when the services are updated. Progress is not estimated
// for paused rollouts.
RolloutConditionProgressing RolloutConditionType = "Progressing"
// Progressing Reason
ProgressingReasonInitializing = "Initializing"
ProgressingReasonInRolling = "InRolling"
ProgressingReasonFinalising = "Finalising"
ProgressingReasonCompleted = "Completed"
ProgressingReasonCancelling = "Cancelling"
ProgressingReasonPaused = "Paused"
// RolloutConditionSucceeded indicates whether the rollout has succeeded or failed.
RolloutConditionSucceeded RolloutConditionType = "Succeeded"
// Terminating condition
RolloutConditionTerminating RolloutConditionType = "Terminating"
// Terminating Reason
TerminatingReasonInTerminating = "InTerminating"
TerminatingReasonCompleted = "Completed"
// Finalise Reason
// Finalise when the last batch is released and all pods will be updated to the new version
FinaliseReasonSuccess = "Success"
// Finalise when rollback detected
FinaliseReasonRollback = "Rollback"
// Finalise when Continuous Release detected
FinaliseReasonContinuous = "Continuous"
// Finalise when Rollout is disabling
FinaliseReasonDisalbed = "RolloutDisabled"
// Finalise when Rollout is deleting
FinaliseReasonDelete = "RolloutDeleting"
)
// Fields in CommonStatus are shared between the canary status and the blue-green status.
// If a field is accessed in a strategy-agnostic way, e.g. from rollout_progressing.go or rollout_status.go,
// then it can be put into CommonStatus.
// If a field is only accessed in a strategy-specific way, e.g. from rollout_canary.go or rollout_bluegreen.go,
// then it should stay in CanaryStatus or BlueGreenStatus.
type CommonStatus struct {
// observedWorkloadGeneration is the most recent generation observed for this Rollout ref workload generation.
ObservedWorkloadGeneration int64 `json:"observedWorkloadGeneration,omitempty"`
// ObservedRolloutID records the newest spec.RolloutID if status.canaryRevision equals workload.updateRevision
ObservedRolloutID string `json:"observedRolloutID,omitempty"`
// RolloutHash from rollout.spec object
RolloutHash string `json:"rolloutHash,omitempty"`
// StableRevision indicates the revision of stable pods
StableRevision string `json:"stableRevision,omitempty"`
// PodTemplateHash is used as the service selector label
PodTemplateHash string `json:"podTemplateHash"`
// CurrentStepIndex defines which step the rollout is currently on.
// +optional
CurrentStepIndex int32 `json:"currentStepIndex"`
// NextStepIndex defines the next step the rollout will move to.
// In the normal case, NextStepIndex equals CurrentStepIndex + 1.
// If the current step is the last step, NextStepIndex equals -1;
// before the release starts, NextStepIndex also equals -1.
// 0 is not used and won't appear in any case.
// Patching NextStepIndex is allowed by design:
// e.g. if CurrentStepIndex is 2, a user can patch NextStepIndex to 3 (if it exists) to
// jump ahead a batch, or patch NextStepIndex to 1 to re-execute step 1.
// Patching it with a non-positive value is useless and meaningless, and will be corrected
// in the next reconciliation.
NextStepIndex int32 `json:"nextStepIndex"`
// FinalisingStep the step of finalising
FinalisingStep FinalisingStepType `json:"finalisingStep"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
}
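// Illustrative sketch of the batch jump described above, assuming a
// controller-runtime client and a canary-style rollout (variable names are
// placeholders):
//
//	patch := client.MergeFrom(rollout.DeepCopy())
//	// Jump ahead to step 3 (or set 1 to re-execute step 1).
//	rollout.Status.CanaryStatus.NextStepIndex = 3
//	err := k8sClient.Status().Patch(ctx, rollout, patch)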
// CanaryStatus status fields that only pertain to the canary rollout
type CanaryStatus struct {
// must be inline
CommonStatus `json:",inline"`
// CanaryRevision is calculated by the rollout based on podTemplateHash and is used by the internal logic flow.
// It may differ from the ReplicaSet's podTemplateHash across k8s versions, so it cannot be used as the service selector label.
CanaryRevision string `json:"canaryRevision"`
// CanaryReplicas is the number of canary revision pods
CanaryReplicas int32 `json:"canaryReplicas"`
// CanaryReadyReplicas is the number of ready canary revision pods
CanaryReadyReplicas int32 `json:"canaryReadyReplicas"`
}
// BlueGreenStatus status fields that only pertain to the blueGreen rollout
type BlueGreenStatus struct {
CommonStatus `json:",inline"`
// UpdatedRevision is calculated by the rollout based on podTemplateHash and is used by the internal logic flow.
// It may differ from the ReplicaSet's podTemplateHash across k8s versions, so it cannot be used as the service selector label.
UpdatedRevision string `json:"updatedRevision"`
// UpdatedReplicas is the number of updated pods
UpdatedReplicas int32 `json:"updatedReplicas"`
// UpdatedReadyReplicas is the number of updated ready pods
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas"`
}
// GetSubStatus returns either the canary or the blue-green status
func (r *RolloutStatus) GetSubStatus() *CommonStatus {
if r.CanaryStatus == nil && r.BlueGreenStatus == nil {
return nil
}
if r.CanaryStatus != nil {
return &r.CanaryStatus.CommonStatus
}
return &r.BlueGreenStatus.CommonStatus
}
func (r *RolloutStatus) IsSubStatusEmpty() bool {
return r.CanaryStatus == nil && r.BlueGreenStatus == nil
}
func (r *RolloutStatus) Clear() {
r.CanaryStatus = nil
r.BlueGreenStatus = nil
}
//TODO - the following functions seem awkward, is there a better way for our case?
func (r *RolloutStatus) GetCanaryRevision() string {
if r.CanaryStatus != nil {
return r.CanaryStatus.CanaryRevision
}
return r.BlueGreenStatus.UpdatedRevision
}
func (r *RolloutStatus) SetCanaryRevision(revision string) {
if r.CanaryStatus != nil {
r.CanaryStatus.CanaryRevision = revision
}
if r.BlueGreenStatus != nil {
r.BlueGreenStatus.UpdatedRevision = revision
}
}
func (r *RolloutStatus) GetCanaryReplicas() int32 {
if r.CanaryStatus != nil {
return r.CanaryStatus.CanaryReplicas
}
return r.BlueGreenStatus.UpdatedReplicas
}
func (r *RolloutStatus) SetCanaryReplicas(replicas int32) {
if r.CanaryStatus != nil {
r.CanaryStatus.CanaryReplicas = replicas
}
if r.BlueGreenStatus != nil {
r.BlueGreenStatus.UpdatedReplicas = replicas
}
}
func (r *RolloutStatus) GetCanaryReadyReplicas() int32 {
if r.CanaryStatus != nil {
return r.CanaryStatus.CanaryReadyReplicas
}
return r.BlueGreenStatus.UpdatedReadyReplicas
}
func (r *RolloutStatus) SetCanaryReadyReplicas(replicas int32) {
if r.CanaryStatus != nil {
r.CanaryStatus.CanaryReadyReplicas = replicas
}
if r.BlueGreenStatus != nil {
r.BlueGreenStatus.UpdatedReadyReplicas = replicas
}
}
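// Illustrative sketch (the helper name is hypothetical): the accessors above let
// callers read and write the "new revision" replica counts without knowing
// whether the release is canary-style or blue-green-style.
func exampleSyncNewRevisionCounts(status *RolloutStatus, replicas, ready int32) {
	if status.IsSubStatusEmpty() {
		return
	}
	status.SetCanaryReplicas(replicas)   // CanaryStatus.CanaryReplicas or BlueGreenStatus.UpdatedReplicas
	status.SetCanaryReadyReplicas(ready) // CanaryStatus.CanaryReadyReplicas or BlueGreenStatus.UpdatedReadyReplicas
	_ = status.GetSubStatus().PodTemplateHash // shared fields live in CommonStatus
}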
type CanaryStepState string
const (
// the first step; handles some special cases before the step upgrade, to prevent traffic loss
CanaryStepStateInit CanaryStepState = "BeforeStepUpgrade"
CanaryStepStateUpgrade CanaryStepState = "StepUpgrade"
CanaryStepStateTrafficRouting CanaryStepState = "StepTrafficRouting"
CanaryStepStateMetricsAnalysis CanaryStepState = "StepMetricsAnalysis"
CanaryStepStatePaused CanaryStepState = "StepPaused"
CanaryStepStateReady CanaryStepState = "StepReady"
CanaryStepStateCompleted CanaryStepState = "Completed"
)
// RolloutPhase is the set of phases that a rollout can be in
type RolloutPhase string
const (
// RolloutPhaseInitial indicates a rollout is Initial
RolloutPhaseInitial RolloutPhase = "Initial"
// RolloutPhaseHealthy indicates a rollout is healthy
RolloutPhaseHealthy RolloutPhase = "Healthy"
// RolloutPhaseProgressing indicates a rollout is not yet healthy but still making progress towards a healthy state
RolloutPhaseProgressing RolloutPhase = "Progressing"
// RolloutPhaseTerminating indicates a rollout is being terminated
RolloutPhaseTerminating RolloutPhase = "Terminating"
// RolloutPhaseDisabled indicates a rollout is disabled
RolloutPhaseDisabled RolloutPhase = "Disabled"
// RolloutPhaseDisabling indicates a rollout is disabling and releasing resources
RolloutPhaseDisabling RolloutPhase = "Disabling"
)
type FinalisingStepType string
//goland:noinspection GoUnusedConst
const (
// Route all traffic to new version (for bluegreen)
FinalisingStepRouteTrafficToNew FinalisingStepType = "FinalisingStepRouteTrafficToNew"
// Restore the GatewayAPI/Ingress/Istio
FinalisingStepRouteTrafficToStable FinalisingStepType = "FinalisingStepRouteTrafficToStable"
// Restore the stable Service, i.e. remove corresponding selector
FinalisingStepRestoreStableService FinalisingStepType = "RestoreStableService"
// Remove the Canary Service
FinalisingStepRemoveCanaryService FinalisingStepType = "RemoveCanaryService"
// Patch Batch Release to scale down (exception: the canary Deployment will be
// scaled down in FinalisingStepTypeDeleteBR step)
// For Both BlueGreenStrategy and CanaryStrategy:
// set workload.pause=false, set workload.partition=0
FinalisingStepResumeWorkload FinalisingStepType = "ResumeWorkload"
// Delete Batch Release
FinalisingStepReleaseWorkloadControl FinalisingStepType = "ReleaseWorkloadControl"
// All needed work done
FinalisingStepTypeEnd FinalisingStepType = "END"
// Only for debugging use
FinalisingStepWaitEndless FinalisingStepType = "WaitEndless"
)
// +genclient
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:storageversion
// +kubebuilder:printcolumn:name="STATUS",type="string",JSONPath=".status.phase",description="The rollout status phase"
// +kubebuilder:printcolumn:name="CANARY_STEP",type="integer",JSONPath=".status.currentStepIndex",description="The rollout canary status step"
// +kubebuilder:printcolumn:name="CANARY_STATE",type="string",JSONPath=".status.currentStepState",description="The rollout canary status step state"
// +kubebuilder:printcolumn:name="MESSAGE",type="string",JSONPath=".status.message",description="The rollout canary status message"
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"
// Rollout is the Schema for the rollouts API
type Rollout struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RolloutSpec `json:"spec,omitempty"`
Status RolloutStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// RolloutList contains a list of Rollout
type RolloutList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Rollout `json:"items"`
}
func init() {
SchemeBuilder.Register(&Rollout{}, &RolloutList{})
}

View File

@ -0,0 +1,51 @@
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
// TrafficRoutingRef hosts all the different configurations for supported service meshes to enable more fine-grained traffic routing
type TrafficRoutingRef struct {
// Service holds the name of a service which selects pods of the stable version and does not select any pods of the canary version.
Service string `json:"service"`
// Optional duration in seconds for the traffic provider (e.g. the nginx ingress controller) to consume the service and ingress configuration changes gracefully.
// +kubebuilder:default=3
GracePeriodSeconds int32 `json:"gracePeriodSeconds,omitempty"`
// Ingress holds Ingress specific configuration to route traffic, e.g. Nginx, Alb.
Ingress *IngressTrafficRouting `json:"ingress,omitempty"`
// Gateway holds Gateway specific configuration to route traffic
// Gateway configuration only supports >= v0.4.0 (v1alpha2).
Gateway *GatewayTrafficRouting `json:"gateway,omitempty"`
// CustomNetworkRefs hold a list of custom providers to route traffic
CustomNetworkRefs []ObjectRef `json:"customNetworkRefs,omitempty"`
}
// IngressTrafficRouting configuration for ingress controller to control traffic routing
type IngressTrafficRouting struct {
// ClassType refers to the type of `Ingress`.
// Currently supports nginx and aliyun-alb; defaults to nginx.
// +optional
ClassType string `json:"classType,omitempty"`
// Name refers to the name of an `Ingress` resource in the same namespace as the `Rollout`
Name string `json:"name"`
}
// GatewayTrafficRouting configuration for gateway api
type GatewayTrafficRouting struct {
// HTTPRouteName refers to the name of an `HTTPRoute` resource in the same namespace as the `Rollout`
HTTPRouteName *string `json:"httpRouteName,omitempty"`
// TCPRouteName *string `json:"tcpRouteName,omitempty"`
// UDPRouteName *string `json:"udpRouteName,omitempty"`
}
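// Illustrative sketch: a TrafficRoutingRef that routes canary traffic through an
// nginx Ingress. The service and ingress names here are placeholders, not values
// taken from this repository.
func exampleIngressTrafficRouting() TrafficRoutingRef {
	return TrafficRoutingRef{
		Service:            "echoserver",
		GracePeriodSeconds: 3,
		Ingress: &IngressTrafficRouting{
			ClassType: "nginx",
			Name:      "echoserver",
		},
	}
}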

View File

@ -0,0 +1,735 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1beta1
import (
"k8s.io/api/apps/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
apisv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BatchRelease) DeepCopyInto(out *BatchRelease) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BatchRelease.
func (in *BatchRelease) DeepCopy() *BatchRelease {
if in == nil {
return nil
}
out := new(BatchRelease)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BatchRelease) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BatchReleaseCanaryStatus) DeepCopyInto(out *BatchReleaseCanaryStatus) {
*out = *in
if in.BatchReadyTime != nil {
in, out := &in.BatchReadyTime, &out.BatchReadyTime
*out = (*in).DeepCopy()
}
if in.NoNeedUpdateReplicas != nil {
in, out := &in.NoNeedUpdateReplicas, &out.NoNeedUpdateReplicas
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BatchReleaseCanaryStatus.
func (in *BatchReleaseCanaryStatus) DeepCopy() *BatchReleaseCanaryStatus {
if in == nil {
return nil
}
out := new(BatchReleaseCanaryStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BatchReleaseList) DeepCopyInto(out *BatchReleaseList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]BatchRelease, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BatchReleaseList.
func (in *BatchReleaseList) DeepCopy() *BatchReleaseList {
if in == nil {
return nil
}
out := new(BatchReleaseList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BatchReleaseList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BatchReleaseSpec) DeepCopyInto(out *BatchReleaseSpec) {
*out = *in
out.WorkloadRef = in.WorkloadRef
in.ReleasePlan.DeepCopyInto(&out.ReleasePlan)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BatchReleaseSpec.
func (in *BatchReleaseSpec) DeepCopy() *BatchReleaseSpec {
if in == nil {
return nil
}
out := new(BatchReleaseSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BatchReleaseStatus) DeepCopyInto(out *BatchReleaseStatus) {
*out = *in
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]RolloutCondition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.CanaryStatus.DeepCopyInto(&out.CanaryStatus)
if in.CollisionCount != nil {
in, out := &in.CollisionCount, &out.CollisionCount
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BatchReleaseStatus.
func (in *BatchReleaseStatus) DeepCopy() *BatchReleaseStatus {
if in == nil {
return nil
}
out := new(BatchReleaseStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BlueGreenStatus) DeepCopyInto(out *BlueGreenStatus) {
*out = *in
in.CommonStatus.DeepCopyInto(&out.CommonStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BlueGreenStatus.
func (in *BlueGreenStatus) DeepCopy() *BlueGreenStatus {
if in == nil {
return nil
}
out := new(BlueGreenStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BlueGreenStrategy) DeepCopyInto(out *BlueGreenStrategy) {
*out = *in
if in.Steps != nil {
in, out := &in.Steps, &out.Steps
*out = make([]CanaryStep, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.TrafficRoutings != nil {
in, out := &in.TrafficRoutings, &out.TrafficRoutings
*out = make([]TrafficRoutingRef, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(intstr.IntOrString)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BlueGreenStrategy.
func (in *BlueGreenStrategy) DeepCopy() *BlueGreenStrategy {
if in == nil {
return nil
}
out := new(BlueGreenStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStatus) DeepCopyInto(out *CanaryStatus) {
*out = *in
in.CommonStatus.DeepCopyInto(&out.CommonStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryStatus.
func (in *CanaryStatus) DeepCopy() *CanaryStatus {
if in == nil {
return nil
}
out := new(CanaryStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStep) DeepCopyInto(out *CanaryStep) {
*out = *in
in.TrafficRoutingStrategy.DeepCopyInto(&out.TrafficRoutingStrategy)
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(intstr.IntOrString)
**out = **in
}
in.Pause.DeepCopyInto(&out.Pause)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryStep.
func (in *CanaryStep) DeepCopy() *CanaryStep {
if in == nil {
return nil
}
out := new(CanaryStep)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStrategy) DeepCopyInto(out *CanaryStrategy) {
*out = *in
if in.Steps != nil {
in, out := &in.Steps, &out.Steps
*out = make([]CanaryStep, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.TrafficRoutings != nil {
in, out := &in.TrafficRoutings, &out.TrafficRoutings
*out = make([]TrafficRoutingRef, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(intstr.IntOrString)
**out = **in
}
if in.PatchPodTemplateMetadata != nil {
in, out := &in.PatchPodTemplateMetadata, &out.PatchPodTemplateMetadata
*out = new(PatchPodTemplateMetadata)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryStrategy.
func (in *CanaryStrategy) DeepCopy() *CanaryStrategy {
if in == nil {
return nil
}
out := new(CanaryStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CommonStatus) DeepCopyInto(out *CommonStatus) {
*out = *in
if in.LastUpdateTime != nil {
in, out := &in.LastUpdateTime, &out.LastUpdateTime
*out = (*in).DeepCopy()
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CommonStatus.
func (in *CommonStatus) DeepCopy() *CommonStatus {
if in == nil {
return nil
}
out := new(CommonStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentExtraStatus) DeepCopyInto(out *DeploymentExtraStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeploymentExtraStatus.
func (in *DeploymentExtraStatus) DeepCopy() *DeploymentExtraStatus {
if in == nil {
return nil
}
out := new(DeploymentExtraStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentStrategy) DeepCopyInto(out *DeploymentStrategy) {
*out = *in
if in.RollingUpdate != nil {
in, out := &in.RollingUpdate, &out.RollingUpdate
*out = new(v1.RollingUpdateDeployment)
(*in).DeepCopyInto(*out)
}
out.Partition = in.Partition
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeploymentStrategy.
func (in *DeploymentStrategy) DeepCopy() *DeploymentStrategy {
if in == nil {
return nil
}
out := new(DeploymentStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GatewayTrafficRouting) DeepCopyInto(out *GatewayTrafficRouting) {
*out = *in
if in.HTTPRouteName != nil {
in, out := &in.HTTPRouteName, &out.HTTPRouteName
*out = new(string)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GatewayTrafficRouting.
func (in *GatewayTrafficRouting) DeepCopy() *GatewayTrafficRouting {
if in == nil {
return nil
}
out := new(GatewayTrafficRouting)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HttpRouteMatch) DeepCopyInto(out *HttpRouteMatch) {
*out = *in
if in.Path != nil {
in, out := &in.Path, &out.Path
*out = new(apisv1beta1.HTTPPathMatch)
(*in).DeepCopyInto(*out)
}
if in.Headers != nil {
in, out := &in.Headers, &out.Headers
*out = make([]apisv1beta1.HTTPHeaderMatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.QueryParams != nil {
in, out := &in.QueryParams, &out.QueryParams
*out = make([]apisv1beta1.HTTPQueryParamMatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HttpRouteMatch.
func (in *HttpRouteMatch) DeepCopy() *HttpRouteMatch {
if in == nil {
return nil
}
out := new(HttpRouteMatch)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *IngressTrafficRouting) DeepCopyInto(out *IngressTrafficRouting) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new IngressTrafficRouting.
func (in *IngressTrafficRouting) DeepCopy() *IngressTrafficRouting {
if in == nil {
return nil
}
out := new(IngressTrafficRouting)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectRef) DeepCopyInto(out *ObjectRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ObjectRef.
func (in *ObjectRef) DeepCopy() *ObjectRef {
if in == nil {
return nil
}
out := new(ObjectRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PatchPodTemplateMetadata) DeepCopyInto(out *PatchPodTemplateMetadata) {
*out = *in
if in.Annotations != nil {
in, out := &in.Annotations, &out.Annotations
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PatchPodTemplateMetadata.
func (in *PatchPodTemplateMetadata) DeepCopy() *PatchPodTemplateMetadata {
if in == nil {
return nil
}
out := new(PatchPodTemplateMetadata)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ReleaseBatch) DeepCopyInto(out *ReleaseBatch) {
*out = *in
out.CanaryReplicas = in.CanaryReplicas
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReleaseBatch.
func (in *ReleaseBatch) DeepCopy() *ReleaseBatch {
if in == nil {
return nil
}
out := new(ReleaseBatch)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ReleasePlan) DeepCopyInto(out *ReleasePlan) {
*out = *in
if in.Batches != nil {
in, out := &in.Batches, &out.Batches
*out = make([]ReleaseBatch, len(*in))
copy(*out, *in)
}
if in.BatchPartition != nil {
in, out := &in.BatchPartition, &out.BatchPartition
*out = new(int32)
**out = **in
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(intstr.IntOrString)
**out = **in
}
if in.PatchPodTemplateMetadata != nil {
in, out := &in.PatchPodTemplateMetadata, &out.PatchPodTemplateMetadata
*out = new(PatchPodTemplateMetadata)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReleasePlan.
func (in *ReleasePlan) DeepCopy() *ReleasePlan {
if in == nil {
return nil
}
out := new(ReleasePlan)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Rollout) DeepCopyInto(out *Rollout) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Rollout.
func (in *Rollout) DeepCopy() *Rollout {
if in == nil {
return nil
}
out := new(Rollout)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Rollout) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutCondition) DeepCopyInto(out *RolloutCondition) {
*out = *in
in.LastUpdateTime.DeepCopyInto(&out.LastUpdateTime)
in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutCondition.
func (in *RolloutCondition) DeepCopy() *RolloutCondition {
if in == nil {
return nil
}
out := new(RolloutCondition)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutList) DeepCopyInto(out *RolloutList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Rollout, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutList.
func (in *RolloutList) DeepCopy() *RolloutList {
if in == nil {
return nil
}
out := new(RolloutList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *RolloutList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutPause) DeepCopyInto(out *RolloutPause) {
*out = *in
if in.Duration != nil {
in, out := &in.Duration, &out.Duration
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutPause.
func (in *RolloutPause) DeepCopy() *RolloutPause {
if in == nil {
return nil
}
out := new(RolloutPause)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutSpec) DeepCopyInto(out *RolloutSpec) {
*out = *in
out.WorkloadRef = in.WorkloadRef
in.Strategy.DeepCopyInto(&out.Strategy)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutSpec.
func (in *RolloutSpec) DeepCopy() *RolloutSpec {
if in == nil {
return nil
}
out := new(RolloutSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutStatus) DeepCopyInto(out *RolloutStatus) {
*out = *in
if in.CanaryStatus != nil {
in, out := &in.CanaryStatus, &out.CanaryStatus
*out = new(CanaryStatus)
(*in).DeepCopyInto(*out)
}
if in.BlueGreenStatus != nil {
in, out := &in.BlueGreenStatus, &out.BlueGreenStatus
*out = new(BlueGreenStatus)
(*in).DeepCopyInto(*out)
}
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]RolloutCondition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutStatus.
func (in *RolloutStatus) DeepCopy() *RolloutStatus {
if in == nil {
return nil
}
out := new(RolloutStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RolloutStrategy) DeepCopyInto(out *RolloutStrategy) {
*out = *in
if in.Canary != nil {
in, out := &in.Canary, &out.Canary
*out = new(CanaryStrategy)
(*in).DeepCopyInto(*out)
}
if in.BlueGreen != nil {
in, out := &in.BlueGreen, &out.BlueGreen
*out = new(BlueGreenStrategy)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutStrategy.
func (in *RolloutStrategy) DeepCopy() *RolloutStrategy {
if in == nil {
return nil
}
out := new(RolloutStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingRef) DeepCopyInto(out *TrafficRoutingRef) {
*out = *in
if in.Ingress != nil {
in, out := &in.Ingress, &out.Ingress
*out = new(IngressTrafficRouting)
**out = **in
}
if in.Gateway != nil {
in, out := &in.Gateway, &out.Gateway
*out = new(GatewayTrafficRouting)
(*in).DeepCopyInto(*out)
}
if in.CustomNetworkRefs != nil {
in, out := &in.CustomNetworkRefs, &out.CustomNetworkRefs
*out = make([]ObjectRef, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingRef.
func (in *TrafficRoutingRef) DeepCopy() *TrafficRoutingRef {
if in == nil {
return nil
}
out := new(TrafficRoutingRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TrafficRoutingStrategy) DeepCopyInto(out *TrafficRoutingStrategy) {
*out = *in
if in.Traffic != nil {
in, out := &in.Traffic, &out.Traffic
*out = new(string)
**out = **in
}
if in.RequestHeaderModifier != nil {
in, out := &in.RequestHeaderModifier, &out.RequestHeaderModifier
*out = new(apisv1beta1.HTTPHeaderFilter)
(*in).DeepCopyInto(*out)
}
if in.Matches != nil {
in, out := &in.Matches, &out.Matches
*out = make([]HttpRouteMatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TrafficRoutingStrategy.
func (in *TrafficRoutingStrategy) DeepCopy() *TrafficRoutingStrategy {
if in == nil {
return nil
}
out := new(TrafficRoutingStrategy)
in.DeepCopyInto(out)
return out
}

View File

@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: batchreleases.rollouts.kruise.io
spec:
@ -59,7 +58,7 @@ spec:
description: All pods in the batches up to the batchPartition
(included) will have the target resource specification while
the rest still is the stable revision. This is designed for
the operators to manually rollout Default is nil, which means
the operators to manually rollout. Default is nil, which means
no partition and will release all batches. BatchPartition start
from 0.
format: int32
@ -84,19 +83,56 @@ spec:
should be less than or equal to batches[j].canaryReplicas
if i < j.'
x-kubernetes-int-or-string: true
pauseSeconds:
description: The wait time, in seconds, between instances
batches, default = 0
format: int64
type: integer
required:
- canaryReplicas
type: object
type: array
paused:
description: Paused the rollout, the release progress will be
paused util paused is false. default is false
enableExtraWorkloadForCanary:
description: EnableExtraWorkloadForCanary indicates whether to
create an extra workload for canary. True corresponds to RollingStyle
"Canary". False corresponds to RollingStyle "Partition". Ignored
in BlueGreen-style. This field is about to be deprecated, use RollingStyle
instead. If both of them are set, the controller will only consider
this field when RollingStyle is empty
type: boolean
failureThreshold:
anyOf:
- type: integer
- type: string
description: FailureThreshold indicates how many failed pods can
be tolerated in all upgraded pods. Only when FailureThreshold
are satisfied, Rollout can enter ready state. If FailureThreshold
is nil, Rollout will use the MaxUnavailable of workload as its
FailureThreshold. Defaults to nil.
x-kubernetes-int-or-string: true
finalizingPolicy:
description: FinalizingPolicy defines the behavior of the controller
when the phase enters Finalizing. Defaults to "Immediate"
type: string
patchPodTemplateMetadata:
description: PatchPodTemplateMetadata indicates patch configuration(e.g.
labels, annotations) to the canary deployment podTemplateSpec.metadata
only support for canary deployment
properties:
annotations:
additionalProperties:
type: string
description: annotations
type: object
labels:
additionalProperties:
type: string
description: labels
type: object
type: object
rollingStyle:
description: RollingStyle can be "Canary", "Partition" or "BlueGreen"
type: string
rolloutID:
description: RolloutID indicates an id for each rollout progress
type: string
required:
- enableExtraWorkloadForCanary
type: object
targetReference:
description: TargetRef contains the GVK and name of the workload that
@ -148,6 +184,11 @@ spec:
it starts from 0
format: int32
type: integer
noNeedUpdateReplicas:
description: the number of pods that do not need to be rolled back
in the rollback scenario.
format: int32
type: integer
updatedReadyReplicas:
description: UpdatedReadyReplicas is the number upgraded Pods
that have a Ready Condition.
@ -211,7 +252,281 @@ spec:
type: integer
observedReleasePlanHash:
description: ObservedReleasePlanHash is a hash code of observed itself
releasePlan.Batches.
spec.releasePlan.
type: string
observedRolloutID:
description: ObservedRolloutID is the most recent rollout-id observed
for this BatchRelease. If RolloutID was changed, we will restart
to roll out from batch 0, to ensure the batch-id and rollout-id
labels of Pods are correct.
type: string
observedWorkloadReplicas:
description: ObservedWorkloadReplicas is observed replicas of target
referenced workload. This field is designed to deal with scaling
event during rollout, if this field changed, it means that the workload
is scaling during rollout.
format: int32
type: integer
phase:
description: Phase is the release plan phase, which indicates the
current state of release plan state machine in BatchRelease controller.
type: string
stableRevision:
description: StableRevision is the pod-template-hash of stable revision
pod template.
type: string
updateRevision:
description: UpdateRevision is the pod-template-hash of update revision
pod template.
type: string
type: object
type: object
served: true
storage: false
subresources:
status: {}
- additionalPrinterColumns:
- jsonPath: .spec.targetReference.workloadRef.kind
name: KIND
type: string
- jsonPath: .status.phase
name: PHASE
type: string
- jsonPath: .status.canaryStatus.currentBatch
name: BATCH
type: integer
- jsonPath: .status.canaryStatus.batchState
name: BATCH-STATE
type: string
- jsonPath: .metadata.creationTimestamp
name: AGE
type: date
name: v1beta1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: BatchReleaseSpec defines how to describe an update between
different compRevision
properties:
releasePlan:
description: ReleasePlan is the details on how to rollout the resources
properties:
batchPartition:
description: All pods in the batches up to the batchPartition
(included) will have the target resource specification while
the rest still is the stable revision. This is designed for
the operators to manually rollout. Default is nil, which means
no partition and will release all batches. BatchPartition start
from 0.
format: int32
type: integer
batches:
description: 'Batches is the details on each batch of the ReleasePlan.
Users can specify their batch plan in this field, such as: batches:
- canaryReplicas: 1 # batches 0 - canaryReplicas: 2 # batches
1 - canaryReplicas: 5 # batches 2 Note that these canaryReplicas
should be a non-decreasing sequence.'
items:
description: ReleaseBatch is used to describe how each batch
release should be
properties:
canaryReplicas:
anyOf:
- type: integer
- type: string
description: 'CanaryReplicas is the number of upgraded pods
that this batch should have. It can be an absolute
number (ex: 5) or a percentage of workload replicas. batches[i].canaryReplicas
should be less than or equal to batches[j].canaryReplicas
if i < j.'
x-kubernetes-int-or-string: true
required:
- canaryReplicas
type: object
type: array
enableExtraWorkloadForCanary:
description: EnableExtraWorkloadForCanary indicates whether to
create an extra workload for canary. True corresponds to RollingStyle
"Canary". False corresponds to RollingStyle "Partition". Ignored
in BlueGreen-style. This field is about to be deprecated, use RollingStyle
instead. If both of them are set, the controller will only consider
this field when RollingStyle is empty
type: boolean
failureThreshold:
anyOf:
- type: integer
- type: string
description: FailureThreshold indicates how many failed pods can
be tolerated among all upgraded pods. Only when the FailureThreshold
is satisfied can the Rollout enter the ready state. If FailureThreshold
is nil, Rollout will use the MaxUnavailable of the workload as its
FailureThreshold. Defaults to nil.
x-kubernetes-int-or-string: true
finalizingPolicy:
description: FinalizingPolicy defines the behavior of the controller
when the phase enters Finalizing. Defaults to "Immediate"
type: string
patchPodTemplateMetadata:
description: PatchPodTemplateMetadata indicates the patch configuration
(e.g. labels, annotations) applied to the canary deployment's podTemplateSpec.metadata;
only supported for canary deployments
properties:
annotations:
additionalProperties:
type: string
description: annotations
type: object
labels:
additionalProperties:
type: string
description: labels
type: object
type: object
rollingStyle:
description: RollingStyle can be "Canary", "Partition" or "BlueGreen"
type: string
rolloutID:
description: RolloutID indicates an id for each rollout progress
type: string
required:
- enableExtraWorkloadForCanary
type: object
workloadRef:
description: WorkloadRef contains enough information to let you identify
a workload for Rollout Batch release of the bypass
properties:
apiVersion:
description: API Version of the referent
type: string
kind:
description: Kind of the referent
type: string
name:
description: Name of the referent
type: string
required:
- apiVersion
- kind
- name
type: object
required:
- releasePlan
type: object
status:
description: BatchReleaseStatus defines the observed state of a release
plan
properties:
canaryStatus:
description: CanaryStatus describes the state of the canary rollout.
properties:
batchReadyTime:
description: BatchReadyTime is the ready timestamp of the current
batch or the last batch. This field is updated once a batch is
ready, and the batches[x].pausedSeconds relies on this field
to calculate the real-time duration.
format: date-time
type: string
batchState:
description: CurrentBatchState indicates the release state of
the current batch.
type: string
currentBatch:
description: The current batch the rollout is working on or blocked
at; it starts from 0
format: int32
type: integer
noNeedUpdateReplicas:
description: the number of pods that do not need to be rolled back
in a rollback scenario.
format: int32
type: integer
updatedReadyReplicas:
description: UpdatedReadyReplicas is the number of upgraded Pods
that have a Ready Condition.
format: int32
type: integer
updatedReplicas:
description: UpdatedReplicas is the number of upgraded Pods.
format: int32
type: integer
required:
- currentBatch
type: object
collisionCount:
description: Count of hash collisions for creating canary Deployment.
The controller uses this field as a collision avoidance mechanism
when it needs to create the name for the newest canary Deployment.
format: int32
type: integer
conditions:
description: Conditions represents the observed process state of each
phase during executing the release plan.
items:
description: RolloutCondition describes the state of a rollout at
a certain point.
properties:
lastTransitionTime:
description: Last time the condition transitioned from one status
to another.
format: date-time
type: string
lastUpdateTime:
description: The last time this condition was updated.
format: date-time
type: string
message:
description: A human readable message indicating details about
the transition.
type: string
reason:
description: The reason for the condition's last transition.
type: string
status:
description: Phase of the condition, one of True, False, Unknown.
type: string
type:
description: Type of rollout condition.
type: string
required:
- message
- reason
- status
- type
type: object
type: array
message:
description: Message provides details on why the rollout is in its
current phase
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed
for this BatchRelease. It corresponds to this BatchRelease's generation,
which is updated on mutation by the API Server; the generation increases
by 1 only when the BatchRelease Spec is changed.
format: int64
type: integer
observedReleasePlanHash:
description: ObservedReleasePlanHash is a hash code of the observed
spec.releasePlan.
type: string
observedRolloutID:
description: ObservedRolloutID is the most recent rollout-id observed
for this BatchRelease. If RolloutID was changed, we will restart
to roll out from batch 0, to ensure the batch-id and rollout-id
labels of Pods are correct.
type: string
observedWorkloadReplicas:
description: ObservedWorkloadReplicas is observed replicas of target
@ -238,9 +553,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
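For orientation, below is a minimal sketch of a BatchRelease object conforming to the v1beta1 schema above. All names are hypothetical, and in practice such objects are typically created by the Rollout controller from a Rollout's release plan rather than written by hand:

```yaml
apiVersion: rollouts.kruise.io/v1beta1
kind: BatchRelease
metadata:
  name: batchrelease-demo            # hypothetical name
spec:
  workloadRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workload-demo              # hypothetical workload
  releasePlan:
    rollingStyle: Partition          # in-place partition style, no extra canary workload
    enableExtraWorkloadForCanary: false
    batchPartition: 0                # only batch 0 is released until this value is raised
    batches:                         # canaryReplicas must be a non-decreasing sequence
    - canaryReplicas: 1
    - canaryReplicas: "50%"
    - canaryReplicas: "100%"
```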

View File

@ -0,0 +1,169 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: rollouthistories.rollouts.kruise.io
spec:
group: rollouts.kruise.io
names:
kind: RolloutHistory
listKind: RolloutHistoryList
plural: rollouthistories
singular: rollouthistory
scope: Namespaced
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: RolloutHistory is the Schema for the rollouthistories API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: RolloutHistorySpec defines the desired state of RolloutHistory
properties:
rollout:
description: Rollout indicates information of the rollout related
with rollouthistory
properties:
data:
description: Data indicates the spec of object ref
x-kubernetes-preserve-unknown-fields: true
name:
description: Name indicates the name of object ref, such as rollout
name, workload name, ingress name, etc.
type: string
rolloutID:
description: RolloutID indicates the new rollout; if there is no
new RolloutID this time, it is ignored and no RolloutHistory is executed
type: string
required:
- name
- rolloutID
type: object
service:
description: Service indicates information of the service related
with workload
properties:
data:
description: Data indicates the spec of object ref
x-kubernetes-preserve-unknown-fields: true
name:
description: Name indicates the name of object ref, such as rollout
name, workload name, ingress name, etc.
type: string
required:
- name
type: object
trafficRouting:
description: TrafficRouting indicates information of traffic route
related with workload
properties:
httpRoute:
description: HTTPRouteRef indicates information of Gateway API
properties:
data:
description: Data indicates the spec of object ref
x-kubernetes-preserve-unknown-fields: true
name:
description: Name indicates the name of object ref, such as
rollout name, workload name, ingress name, etc.
type: string
required:
- name
type: object
ingress:
description: IngressRef indicates information of ingress
properties:
data:
description: Data indicates the spec of object ref
x-kubernetes-preserve-unknown-fields: true
name:
description: Name indicates the name of object ref, such as
rollout name, workload name, ingress name, etc.
type: string
required:
- name
type: object
type: object
workload:
description: Workload indicates information of the workload, such
as cloneset, deployment, advanced statefulset
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this
representation of an object. Servers should convert recognized
schemas to the latest internal value, and may reject unrecognized
values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
data:
description: Data indicates the spec of object ref
x-kubernetes-preserve-unknown-fields: true
kind:
description: 'Kind is a string value representing the REST resource
this object represents. Servers may infer this from the endpoint
the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: Name indicates the name of object ref, such as rollout
name, workload name, ingress name, etc.
type: string
required:
- name
type: object
type: object
status:
description: RolloutHistoryStatus defines the observed state of RolloutHistory
properties:
canarySteps:
description: CanarySteps indicates the pods released each step
items:
description: CanaryStepInfo indicates the pods for a revision
properties:
canaryStepIndex:
description: CanaryStepIndex indicates the step of this revision
format: int32
type: integer
pods:
description: Pods indicates the pods information
items:
description: Pod indicates the information of a pod, including
name, ip, node_name.
properties:
ip:
description: IP indicates the pod ip
type: string
name:
description: Name indicates the pod name
type: string
nodeName:
description: NodeName indicates the node on which the pod is
located
type: string
type: object
type: array
type: object
type: array
phase:
description: Phase indicates phase of RolloutHistory, just "" or "completed"
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
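For orientation, here is a sketch of what a recorded RolloutHistory might look like under the schema above. Every name, IP, and node below is hypothetical, and the raw `data` payloads are omitted for brevity:

```yaml
apiVersion: rollouts.kruise.io/v1alpha1
kind: RolloutHistory
metadata:
  name: rollouts-demo-history          # hypothetical name
spec:
  rollout:
    name: rollouts-demo                # hypothetical rollout
    rolloutID: "1"
  workload:
    apiVersion: apps/v1
    kind: Deployment
    name: workload-demo
  service:
    name: service-demo
  trafficRouting:
    ingress:
      name: ingress-demo
status:
  phase: completed
  canarySteps:
  - canaryStepIndex: 1
    pods:
    - name: workload-demo-5d4f7b9c6-abcde   # hypothetical canary pod
      ip: 10.0.0.12
      nodeName: worker-1
```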

File diff suppressed because it is too large.

View File

@ -0,0 +1,309 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: trafficroutings.rollouts.kruise.io
spec:
group: rollouts.kruise.io
names:
kind: TrafficRouting
listKind: TrafficRoutingList
plural: trafficroutings
singular: trafficrouting
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: The TrafficRouting status phase
jsonPath: .status.phase
name: STATUS
type: string
- description: The TrafficRouting canary status message
jsonPath: .status.message
name: MESSAGE
type: string
- jsonPath: .metadata.creationTimestamp
name: AGE
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: TrafficRouting is the Schema for the TrafficRoutings API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
properties:
objectRef:
description: ObjectRef indicates trafficRouting ref
items:
description: TrafficRoutingRef hosts all the different configuration
for supported service meshes to enable more fine-grained traffic
routing
properties:
customNetworkRefs:
description: CustomNetworkRefs hold a list of custom providers
to route traffic
items:
properties:
apiVersion:
type: string
kind:
type: string
name:
type: string
required:
- apiVersion
- kind
- name
type: object
type: array
gateway:
description: Gateway holds Gateway specific configuration to
route traffic. Gateway configuration only supports >= v0.4.0
(v1alpha2).
properties:
httpRouteName:
description: HTTPRouteName refers to the name of an `HTTPRoute`
resource in the same namespace as the `Rollout`
type: string
type: object
gracePeriodSeconds:
default: 3
description: Optional duration in seconds for the traffic provider
(e.g. nginx ingress controller) to gracefully consume the service
and ingress configuration changes.
format: int32
type: integer
ingress:
description: Ingress holds Ingress specific configuration to
route traffic, e.g. Nginx, Alb.
properties:
classType:
description: ClassType refers to the type of `Ingress`.
Currently supports nginx and aliyun-alb. Defaults to nginx.
type: string
name:
description: Name refers to the name of an `Ingress` resource
in the same namespace as the `Rollout`
type: string
required:
- name
type: object
service:
description: Service holds the name of a service which selects
pods with the stable version and does not select any pods with the
canary version.
type: string
required:
- service
type: object
type: array
strategy:
description: trafficrouting strategy
properties:
matches:
description: Matches define conditions used for matching incoming
HTTP requests to the canary service. Each match is independent,
i.e. this rule will be matched if **any** one of the matches
is satisfied. With the Gateway API, currently only one match is supported.
Weight and matches cannot both be used; if both are configured,
then matches takes precedence.
items:
properties:
headers:
description: Headers specifies HTTP request header matchers.
Multiple match values are ANDed together, meaning, a request
must match all the specified headers to select the route.
items:
description: HTTPHeaderMatch describes how to select a
HTTP route by matching HTTP request headers.
properties:
name:
description: "Name is the name of the HTTP Header
to be matched. Name matching MUST be case insensitive.
(See https://tools.ietf.org/html/rfc7230#section-3.2).
\n If multiple entries specify equivalent header
names, only the first entry with an equivalent name
MUST be considered for a match. Subsequent entries
with an equivalent header name MUST be ignored.
Due to the case-insensitivity of header names, \"foo\"
and \"Foo\" are considered equivalent. \n When a
header is repeated in an HTTP request, it is implementation-specific
behavior as to how this is represented. Generally,
proxies should follow the guidance from the RFC:
https://www.rfc-editor.org/rfc/rfc7230.html#section-3.2.2
regarding processing a repeated header, with special
handling for \"Set-Cookie\"."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
type:
default: Exact
description: "Type specifies how to match against
the value of the header. \n Support: Core (Exact)
\n Support: Implementation-specific (RegularExpression)
\n Since RegularExpression HeaderMatchType has implementation-specific
conformance, implementations can support POSIX,
PCRE or any other dialects of regular expressions.
Please read the implementation's documentation to
determine the supported dialect."
enum:
- Exact
- RegularExpression
type: string
value:
description: Value is the value of HTTP Header to
be matched.
maxLength: 4096
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
type: object
type: array
requestHeaderModifier:
description: "Set overwrites the request with the given header
(name, value) before the action. \n Input: GET /foo HTTP/1.1
my-header: foo \n requestHeaderModifier: set: - name: \"my-header\"
value: \"bar\" \n Output: GET /foo HTTP/1.1 my-header: bar"
properties:
add:
description: "Add adds the given header(s) (name, value) to
the request before the action. It appends to any existing
values associated with the header name. \n Input: GET /foo
HTTP/1.1 my-header: foo \n Config: add: - name: \"my-header\"
value: \"bar,baz\" \n Output: GET /foo HTTP/1.1 my-header:
foo,bar,baz"
items:
description: HTTPHeader represents an HTTP Header name and
value as defined by RFC 7230.
properties:
name:
description: "Name is the name of the HTTP Header to
be matched. Name matching MUST be case insensitive.
(See https://tools.ietf.org/html/rfc7230#section-3.2).
\n If multiple entries specify equivalent header names,
the first entry with an equivalent name MUST be considered
for a match. Subsequent entries with an equivalent
header name MUST be ignored. Due to the case-insensitivity
of header names, \"foo\" and \"Foo\" are considered
equivalent."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
value:
description: Value is the value of HTTP Header to be
matched.
maxLength: 4096
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
remove:
description: "Remove the given header(s) from the HTTP request
before the action. The value of Remove is a list of HTTP
header names. Note that the header names are case-insensitive
(see https://datatracker.ietf.org/doc/html/rfc2616#section-4.2).
\n Input: GET /foo HTTP/1.1 my-header1: foo my-header2:
bar my-header3: baz \n Config: remove: [\"my-header1\",
\"my-header3\"] \n Output: GET /foo HTTP/1.1 my-header2:
bar"
items:
type: string
maxItems: 16
type: array
set:
description: "Set overwrites the request with the given header
(name, value) before the action. \n Input: GET /foo HTTP/1.1
my-header: foo \n Config: set: - name: \"my-header\" value:
\"bar\" \n Output: GET /foo HTTP/1.1 my-header: bar"
items:
description: HTTPHeader represents an HTTP Header name and
value as defined by RFC 7230.
properties:
name:
description: "Name is the name of the HTTP Header to
be matched. Name matching MUST be case insensitive.
(See https://tools.ietf.org/html/rfc7230#section-3.2).
\n If multiple entries specify equivalent header names,
the first entry with an equivalent name MUST be considered
for a match. Subsequent entries with an equivalent
header name MUST be ignored. Due to the case-insensitivity
of header names, \"foo\" and \"Foo\" are considered
equivalent."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
value:
description: Value is the value of HTTP Header to be
matched.
maxLength: 4096
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
weight:
description: Weight indicates the percentage of traffic the
canary pods should receive
format: int32
type: integer
type: object
required:
- objectRef
- strategy
type: object
status:
properties:
message:
description: Message provides details on why the rollout is in its
current phase
type: string
observedGeneration:
description: observedGeneration is the most recent generation observed
for this Rollout.
format: int64
type: integer
phase:
description: Phase is the trafficRouting phase.
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}
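Similarly, a minimal sketch of a TrafficRouting object matching the v1alpha1 schema above; the resource names and the header used in requestHeaderModifier are assumptions for illustration:

```yaml
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
  name: trafficrouting-demo            # hypothetical name
spec:
  objectRef:
  - service: service-demo              # stable service (required)
    ingress:
      classType: nginx
      name: ingress-demo
    gracePeriodSeconds: 3
  strategy:
    # matches takes precedence over weight if both are configured
    matches:
    - headers:
      - name: user-agent
        type: Exact
        value: pc
    requestHeaderModifier:
      set:
      - name: x-canary-tag             # hypothetical header name
        value: gray
```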

View File

@ -4,13 +4,15 @@
resources:
- bases/rollouts.kruise.io_rollouts.yaml
- bases/rollouts.kruise.io_batchreleases.yaml
- bases/rollouts.kruise.io_rollouthistories.yaml
- bases/rollouts.kruise.io_trafficroutings.yaml
#+kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_rollouts.yaml
#- patches/webhook_in_batchreleases.yaml
- patches/webhook_in_rollouts.yaml
- patches/webhook_in_batchreleases.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable webhook, uncomment all the sections with [CERTMANAGER] prefix.

View File

@ -13,4 +13,4 @@ spec:
name: webhook-service
path: /convert
conversionReviewVersions:
- v1
- v1beta1

View File

@ -13,4 +13,4 @@ spec:
name: webhook-service
path: /convert
conversionReviewVersions:
- v1
- v1beta1

View File

@ -14,4 +14,8 @@ spec:
- "--health-probe-bind-address=:8081"
- "--metrics-bind-address=127.0.0.1:8080"
- "--leader-elect"
- "--v=3"
- "--feature-gates=AdvancedDeployment=true"
- "--v=5"
env:
- name: KUBE_CACHE_MUTATION_DETECTOR
value: "true"

View File

@ -8,6 +8,7 @@ spec:
spec:
containers:
- name: manager
imagePullPolicy: Always
args:
- "--config=controller_manager_config.yaml"
volumeMounts:

View File

@ -29,6 +29,7 @@ spec:
- /manager
args:
- --leader-elect
- --feature-gates=AdvancedDeployment=true
image: controller:latest
name: manager
securityContext:
@ -48,9 +49,9 @@ spec:
resources:
limits:
cpu: 100m
memory: 30Mi
memory: 100Mi
requests:
cpu: 100m
memory: 20Mi
memory: 100Mi
serviceAccountName: controller-manager
terminationGracePeriodSeconds: 10

View File

@ -1,4 +1,3 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@ -84,6 +83,26 @@ rules:
- get
- patch
- update
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets/status
verbs:
- get
- patch
- update
- apiGroups:
- apps.kruise.io
resources:
@ -104,6 +123,62 @@ rules:
- get
- patch
- update
- apiGroups:
- apps.kruise.io
resources:
- daemonsets
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- apps.kruise.io
resources:
- daemonsets/status
verbs:
- get
- patch
- update
- apiGroups:
- apps.kruise.io
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps.kruise.io
resources:
- statefulsets/status
verbs:
- get
- patch
- update
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
@ -130,18 +205,6 @@ rules:
- get
- patch
- update
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
@ -162,6 +225,37 @@ rules:
- get
- patch
- update
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- gateway.networking.k8s.io
resources:
- httproutes/status
verbs:
- get
- patch
- update
- apiGroups:
- networking.istio.io
resources:
- destinationrules
- virtualservices
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- networking.k8s.io
resources:
@ -202,6 +296,32 @@ rules:
- get
- patch
- update
- apiGroups:
- rollouts.kruise.io
resources:
- rollouthistories
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- rollouts.kruise.io
resources:
- rollouthistories/finalizers
verbs:
- update
- apiGroups:
- rollouts.kruise.io
resources:
- rollouthistories/status
verbs:
- get
- patch
- update
- apiGroups:
- rollouts.kruise.io
resources:
@ -228,3 +348,48 @@ rules:
- get
- patch
- update
- apiGroups:
- rollouts.kruise.io
resources:
- trafficroutings
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- rollouts.kruise.io
resources:
- trafficroutings/finalizers
verbs:
- update
- apiGroups:
- rollouts.kruise.io
resources:
- trafficroutings/status
verbs:
- get
- patch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: manager-role
namespace: system
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch

View File

@ -10,3 +10,17 @@ subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: manager-rolebinding
namespace: system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: manager-role
subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system

View File

@ -2,5 +2,8 @@ resources:
- manifests.yaml
- service.yaml
patchesStrategicMerge:
- patch_manifests.yaml
configurations:
- kustomizeconfig.yaml

View File

@ -1,4 +1,3 @@
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
@ -14,7 +13,7 @@ webhooks:
name: webhook-service
namespace: system
path: /mutate-apps-kruise-io-v1alpha1-cloneset
failurePolicy: Ignore
failurePolicy: Fail
name: mcloneset.kb.io
rules:
- apiGroups:
@ -26,6 +25,26 @@ webhooks:
resources:
- clonesets
sideEffects: None
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-apps-kruise-io-v1alpha1-daemonset
failurePolicy: Fail
name: mdaemonset.kb.io
rules:
- apiGroups:
- apps.kruise.io
apiVersions:
- v1alpha1
operations:
- UPDATE
resources:
- daemonsets
sideEffects: None
- admissionReviewVersions:
- v1
- v1beta1
@ -34,7 +53,7 @@ webhooks:
name: webhook-service
namespace: system
path: /mutate-apps-v1-deployment
failurePolicy: Ignore
failurePolicy: Fail
name: mdeployment.kb.io
rules:
- apiGroups:
@ -46,7 +65,27 @@ webhooks:
resources:
- deployments
sideEffects: None
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-unified-workload
failurePolicy: Fail
name: munifiedworload.kb.io
rules:
- apiGroups:
- '*'
apiVersions:
- '*'
operations:
- CREATE
- UPDATE
resources:
- '*'
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
@ -69,6 +108,7 @@ webhooks:
- rollouts.kruise.io
apiVersions:
- v1alpha1
- v1beta1
operations:
- CREATE
- UPDATE

View File

@ -0,0 +1,39 @@
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: mutating-webhook-configuration
webhooks:
- name: munifiedworload.kb.io
objectSelector:
matchExpressions:
- key: rollouts.kruise.io/workload-type
operator: Exists
- name: mcloneset.kb.io
objectSelector:
matchExpressions:
- key: rollouts.kruise.io/workload-type
operator: Exists
- name: mdaemonset.kb.io
objectSelector:
matchExpressions:
- key: rollouts.kruise.io/workload-type
operator: Exists
# - name: mstatefulset.kb.io
# objectSelector:
# matchExpressions:
# - key: rollouts.kruise.io/workload-type
# operator: Exists
# - name: madvancedstatefulset.kb.io
# objectSelector:
# matchExpressions:
# - key: rollouts.kruise.io/workload-type
# operator: Exists
- name: mdeployment.kb.io
objectSelector:
matchExpressions:
- key: control-plane
operator: NotIn
values:
- controller-manager
- key: rollouts.kruise.io/workload-type
operator: Exists
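The objectSelector patches above restrict the mutating webhooks to objects that carry the `rollouts.kruise.io/workload-type` label (the mdeployment webhook additionally excludes the controller-manager itself). As a reading aid, here is a minimal sketch of a Deployment that the patched selectors would match; all names and the label value are illustrative, since `operator: Exists` only checks for the presence of the key:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-demo                         # hypothetical name
  labels:
    rollouts.kruise.io/workload-type: ""      # key presence is what the selector checks
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: main
        image: nginx:1.25                     # placeholder image
```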

View File

@ -0,0 +1,92 @@
# How to Debug Your Kruise Rollout
## Way 1: Debug your kruise Rollout with Pod (Recommended)
### First Step: Start your kubernetes cluster
#### Requirements:
- Linux system, MacOS, or Windows Subsystem for Linux 2.0 (WSL 2)
- Docker installed (follow the [official docker installation guide](https://docs.docker.com/get-docker/) to install if need be)
- Kubernetes cluster >= v1.19.0
- Kubectl installed and configured
Kruise Rollout relies on Kubernetes as control plane. The control plane could be any managed Kubernetes offering or your own cluster.
For local deployment and test, you could use [kind](https://kind.sigs.k8s.io/) or [minikube](https://minikube.sigs.k8s.io/docs/start/). For production usage, you could use Kubernetes services provided by cloud providers.
#### Option 1: Start your kubernetes cluster with kind (Recommended)
Follow [this guide](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) to install kind.
Then spin up a kind cluster:
```shell
cat <<EOF | kind create cluster --image=kindest/node:v1.22.15 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
```
#### Option 2: Start your kubernetes cluster with minikube
Follow the [minikube installation guide](https://minikube.sigs.k8s.io/docs/start/).
Then spin up a minikube cluster:
```shell
minikube start
```
### Second Step (Optional): Install Kruise
If you want to use workloads such as CloneSet, Advanced StatefulSet, and Advanced DaemonSet, you have to [install Kruise](https://openkruise.io/docs/installation):
```bash
# First, add the openkruise charts repository if you haven't done so already.
$ helm repo add openkruise https://openkruise.github.io/charts/
# [Optional]
$ helm repo update
# Install the latest version.
$ helm install kruise openkruise/kruise --version 1.3.0
```
### Third Step: Deploy your kruise Rollout with a deployment
#### 1. Generate code and manifests in your branch
Make your own code changes and validate the build by running `make build` in the rollouts directory.
#### 2. Deploy customized controller manager
The deployment can be done with the following steps.
- Prerequisites: prepare an image registry; it can be [docker hub](https://hub.docker.com/) or your private registry.
- step 1: `export IMG=<image_name>` to specify the target image name, e.g., `export IMG=$DOCKERID/kruise-rollout:test`;
- step 2: `make docker-build` to build the image locally and `make docker-push` to push the image to the registry;
- step 3: `export KUBECONFIG=<your_k8s_config>` to specify the k8s cluster config, e.g., `export KUBECONFIG=~/.kube/config`;
- step 4:
- 4.1: `make deploy` to deploy Rollout to the k8s cluster with the `IMG` you have packaged, if the cluster does not have Rollout installed or had it installed via `make deploy`;
- 4.2: if the cluster has Rollout installed via the helm chart, we suggest just updating the image in place with `kubectl set image -n kruise-rollout deployment kruise-rollout-controller-manager manager=${IMG}`;
Tips:
- You have to run `./scripts/uninstall.sh` to uninstall Rollout if you installed it using `make deploy`.
#### 3. View logs of your Rollout
You can perform manual tests and use `kubectl logs -n kruise-rollout <kruise-rollout-pod-name>` to check controller logs for debugging; you can find your `<kruise-rollout-pod-name>` by running `kubectl get pod -n kruise-rollout`.
## Way 2: Debug your Rollout locally (NOT Recommended)
Kubebuilder's default `make run` does not work for webhooks since its scaffolding code starts the webhook server
behind a kubernetes service, and that service usually does not work in a local dev environment.
We work around this problem by allowing the webhook server to be started on the local host directly.
With this fix, one can start/debug the Rollout process locally
while connecting to a local or remote Kubernetes cluster. Several extra steps are needed to make it work:
**Setup host and run your Rollout**
First, make sure your kubernetes cluster is running.
Second, **make sure the `kube-apiserver` can reach your local machine.**
Then, run Rollout locally with the `WEBHOOK_HOST` env variable:
```bash
export KUBECONFIG=${PATH_TO_CONFIG}
export WEBHOOK_HOST=${YOUR_LOCAL_IP}
make install
make run
```

View File

@ -38,3 +38,4 @@ The native Kubernetes Deployment Object supports the **RollingUpdate** strategy
Here are some recommended next steps:
- Start to [Install Kruise Rollout](./installation.md).
- Learn Kruise Rollout's [Basic Usage](../tutorials/basic_usage.md).
- [Demo video](https://www.bilibili.com/video/BV1wT4y1Q7eD?spm_id_from=333.880.my_history.page.click)

Binary file not shown (image changed: 232 KiB before, 260 KiB after).

Binary file not shown (image added: 123 KiB).

View File

@ -0,0 +1,216 @@
---
title: RolloutHistory-Proposal
authors:
- "@yike21"
reviewers:
- "@zmberg"
- "@hantmac"
- "@veophi"
creation-date: 2022-04-24
last-updated: 2022-10-28
status: implementable
---
# RolloutHistory
- Record information such as rollout status, pod names, pod IPs, rollout strategy... when a user performs a rollout.
- Record the information of the workload, service, and ingress related to the rollout.
## Table of Contents
A table of contents is helpful for quickly jumping to sections of a proposal and for highlighting
any additional information provided beyond the standard proposal template.
[Tools for generating](https://github.com/ekalinin/github-markdown-toc) a table of contents from markdown are available.
- [RolloutHistory](#RolloutHistory)
- [Table of Contents](#table-of-contents)
- [Motivation](#motivation)
- [Proposal](#proposal)
- [User Stories](#user-stories)
- [Story 1](#story-1)
- [Story 2](#story-2)
- [Implementation Details/Notes/Constraints](#implementation-detailsnotesconstraints)
- [Risks and Mitigations](#risks-and-mitigations)
- [Implementation History](#implementation-history)
## Motivation
From the design doc of rollouts, we know that the rollouts resource is bound to a deployment or cloneset in a one-to-one relationship.
After users finish some rollout actions, they cannot look up information about those past actions.
A history record of rollouts is very useful for rollout tracing.
So we should record information such as pod names, pod IPs, and the rollout strategy when users perform a rollout.
It is useful for some user scenarios, such as:
1. Help users know what happened in the past few rollout actions.
2. Record the pods released during a rollout action.
3. Record the information of the workload, service, and ingress/gateway related to the rollout.
## Proposal
Define a CRD named `RolloutHistory`, and implement a controller to record the information. The Golang types are as follows:
```go
// RolloutHistorySpec defines the desired state of RolloutHistory
type RolloutHistorySpec struct {
// Rollout indicates information of the rollout related with rollouthistory
Rollout RolloutInfo `json:"rollout,omitempty"`
// Workload indicates information of the workload, such as cloneset, deployment, advanced statefulset
Workload WorkloadInfo `json:"workload,omitempty"`
// Service indicates information of the service related with workload
Service ServiceInfo `json:"service,omitempty"`
// TrafficRouting indicates information of traffic route related with workload
TrafficRouting TrafficRoutingInfo `json:"trafficRouting,omitempty"`
}
type NameAndSpecData struct {
// Name indicates the name of object ref, such as rollout name, workload name, ingress name, etc.
Name string `json:"name"`
// Data indicates the spec of object ref
Data runtime.RawExtension `json:"data,omitempty"`
}
// RolloutInfo indicates information of the rollout related
type RolloutInfo struct {
// RolloutID indicates the new rollout
// if there is no new RolloutID this time, it is ignored and no RolloutHistory is executed
RolloutID string `json:"rolloutID"`
NameAndSpecData `json:",inline"`
}
// ServiceInfo indicates information of the service related
type ServiceInfo struct {
NameAndSpecData `json:",inline"`
}
// TrafficRoutingInfo indicates information of Gateway API or Ingress
type TrafficRoutingInfo struct {
// IngressRef indicates information of ingress
Ingress *IngressInfo `json:"ingress,omitempty"`
// HTTPRouteRef indicates information of Gateway API
HTTPRoute *HTTPRouteInfo `json:"httpRoute,omitempty"`
}
// IngressInfo indicates information of the ingress related
type IngressInfo struct {
NameAndSpecData `json:",inline"`
}
// HTTPRouteInfo indicates information of gateway API
type HTTPRouteInfo struct {
NameAndSpecData `json:",inline"`
}
// WorkloadInfo indicates information of the workload, such as cloneset, deployment, advanced statefulset
type WorkloadInfo struct {
metav1.TypeMeta `json:",inline"`
NameAndSpecData `json:",inline"`
}
// RolloutHistoryStatus defines the observed state of RolloutHistory
type RolloutHistoryStatus struct {
// Phase indicates phase of RolloutHistory, such as "pending", "updated", "completed"
Phase string `json:"phase,omitempty"`
// CanarySteps indicates the pods released each step
CanarySteps []CanaryStepInfo `json:"canarySteps,omitempty"`
}
// CanaryStepInfo indicates the pods for a revision
type CanaryStepInfo struct {
// CanaryStepIndex indicates the step of this revision
CanaryStepIndex int32 `json:"canaryStepIndex,omitempty"`
// Pods indicates the pods information
Pods []Pod `json:"pods,omitempty"`
}
// Pod indicates the information of a pod, including name, ip, node_name.
type Pod struct {
// Name indicates the pod name
Name string `json:"name,omitempty"`
// IP indicates the pod ip
IP string `json:"ip,omitempty"`
// Node indicates the node on which the pod is located
Node string `json:"node,omitempty"`
}
// Phase indicates rollouthistory status/phase
const (
PhaseCompleted string = "completed"
)
// RolloutHistory is the Schema for the rollouthistories API
type RolloutHistory struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec RolloutHistorySpec `json:"spec,omitempty"`
Status RolloutHistoryStatus `json:"status,omitempty"`
}
// RolloutHistoryList contains a list of RolloutHistory
type RolloutHistoryList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []RolloutHistory `json:"items"`
}
```
1. When users start a new `rollout` process and the phase of the `rollout` is `RolloutPhaseProgressing`, a `RolloutHistory` is generated to record information for this `rollout` if there is no other `RolloutHistory` for it. The phase of the `RolloutHistory` will be empty.
2. When a `rollout` is completed and its phase is `RolloutPhaseHealthy`, the `RolloutHistory` begins to record `.spec` (such as workload information and service information) and `.status` (canary pods information). After that, the phase of the `RolloutHistory` is `PhaseCompleted`.
3. When the `rollout` process has completed, the `.status.canarySteps` of the `RolloutHistory` records information about the released pods.
4. When a RolloutHistory is generated, the RolloutHistory controller labels it with `rolloutIDLabel` and `rolloutNameLabel`, which indicate the related rollout's name and rolloutID (see the sketch after this list).
5. At any time, there is at most one RolloutHistory for a rollout with a specified pair `{RolloutID + RolloutName}`. If a user performs a rollout again without changing its `RolloutID`, and there is already a RolloutHistory for it, a new `RolloutHistory` won't be generated.
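A minimal sketch of point 4, assuming the label keys resolve to `rollouts.kruise.io/rollout-id` and `rollouts.kruise.io/rollout-name` (the exact keys are defined by the controller; all names and values are illustrative):

```yaml
apiVersion: rollouts.kruise.io/v1alpha1
kind: RolloutHistory
metadata:
  name: rollouts-demo-history-x7k2q                # hypothetical generated name
  labels:
    rollouts.kruise.io/rollout-id: "20221028-1"    # assumed rolloutIDLabel key
    rollouts.kruise.io/rollout-name: rollouts-demo # assumed rolloutNameLabel key
spec:
  rollout:
    name: rollouts-demo
    rolloutID: "20221028-1"
```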
### User Stories
#### Story 1
Users want to know what happened in the past few rollout actions, for example, the strategy used in that rollout (the rollout strategy may have changed since the last release),
the detailed conditions, and so on.
Related [Issue](https://github.com/openkruise/rollouts/issues/10).
#### Story 2
Users want to know the information of deployment/cloneset, service, ingress, gateway for this rollout.
### Requirements
`RolloutHistory` relies on `Rollout` resources, but a `RolloutHistory` shouldn't be deleted together with its Rollout.
`Rollout` should have `.spec.rolloutID`. When a user performs a rollout, the `RolloutHistory` will not be generated until the Rollout's `.status.canaryStatus.observedRolloutID` is set.
### Implementation Details/Notes/Constraints
1. What's the RolloutHistory phase designed?
`.status.phase` indicates the phase of a RolloutHistory. If the phase is empty, it means that this RolloutHistory has been generated and is waiting for the rollout to complete. If the phase is `PhaseCompleted`, it means the RolloutHistory has completed and recorded the information.
2. What's the rollout step state in a rollout step?
When a rollout processes, its step state `.status.canaryStatus.canaryStepState` can be
`null(*)` -> `StepUpgrade` -> `StepTrafficRouting` -> `StepMetricsAnalysis` -> `StepPaused` -> `StepReady` -> `StepUpgrade`/`Completed`.
3. When should RolloutHistory record the rollout step information?
RolloutHistory should record this rollout information when the phase of rollout is `RolloutPhaseHealthy` which means that this rollout has completed and the pods have been generated.
4. How is the RolloutHistory phase changed?
* `""` --a--> `PhaseCompleted`
- `""`(empty) means that Rollout is progressing, and RolloutHistory will be generated. A RolloutHistory will be generated only if the rollout have `.spec.rolloutID`, `.status.canaryStatus.observedRolloutID` and there is no other RolloutHistory for this rollout.
- `completed` means that this rollout is completed, and RolloutHistory have a record of this rollout's information, including the information of service, ingress/httpRoute, workload, rollout related and pods released.
- `event a` means that the phase of rollout have been `RolloutPhaseHealthy`
### Risks and Mitigations
- Currently, we don't want to list-watch pods in kruise-rollout, so how to record the pod names is a problem. We know that the CloneSet workload has a `spec.selector`, which is a label query over the pods that should match the replica count. What's more, the rollout controller labels the canary pods with `RolloutIDLabel` and `RolloutBatchIDLabel`.
- There are many kinds of release types, such as `canary rollout`, `blue-green rollout`, `batch release` and so on; the status of RolloutHistory may not cover all the status information of every release type.
- RolloutHistory records pods by labels; it depends on the `RolloutBatchIDLabel` and `RolloutIDLabel` applied by the rollout controller, as sketched below.
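A minimal sketch of the pod labels this relies on, assuming the keys are `rollouts.kruise.io/rollout-id` and `rollouts.kruise.io/rollout-batch-id` (the exact keys are defined by the rollout controller; the pod name, image, and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo-5d4f7b9c6-abcde         # hypothetical canary pod
  labels:
    rollouts.kruise.io/rollout-id: "1"        # assumed RolloutIDLabel key
    rollouts.kruise.io/rollout-batch-id: "2"  # assumed RolloutBatchIDLabel key
spec:
  containers:
  - name: main
    image: nginx:1.25                         # placeholder image
```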
## Implementation History
- [ ] 24/04/2022: Proposal submission
- [ ] 03/08/2022: Proposal updated submission
- [ ] 21/10/2022: add RolloutHistory API
- [ ] 28/10/2022: add RolloutHistory controller

View File

@ -0,0 +1,224 @@
---
title: v1beta1-apis-proposal
authors:
- "@zmberg"
creation-date: 2023-11-07
---
## Motivation
The Kruise Rollout project has been stable for a year. Recently we plan to upgrade the APIs from v1alpha1 to v1beta1 and optimize some of the fields in response to past questions and community feedback.
This proposal organizes the v1beta1 APIs and discusses them with the community.
## Proposal
To make it easier to understand, I'm going to introduce the v1beta1 fields through 6 scenarios.
### Canary Release
```
apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
name: rollouts-demo
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: workload-demo
strategy:
canary:
# If true, an extra deployment will be created for the canary, such as: workload-demo-canary.
# When the user verifies that the canary version is OK, we will remove the canary deployment and release the deployment workload-demo in full.
# Currently only supports the k8s native Deployment
enableExtraWorkloadForCanary: true
steps:
- trafficWeight: 20%
desiredReplicas: 2
trafficRoutings:
- service: service-demo
ingress:
classType: nginx
name: ingress-demo
```
### A/B Testing Release
```
apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
name: rollouts-demo
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: workload-demo
strategy:
canary:
enableExtraWorkloadForCanary: true
steps:
- desiredReplicas: 2
trafficMatches:
- headers:
- name: user-agent
type: Exact
value: pc
trafficRoutings:
- service: service-demo
ingress:
classType: nginx
name: ingress-demo
```
### Only Batch Release
```
apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
name: rollouts-demo
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: workload-demo
strategy:
canary:
steps:
- desiredReplicas: 1
- desiredReplicas: 10%
# After desiredReplicas Pods are ready, sleep 60s and then continue to release the later batches.
# If you don't configure it, manual confirmation is required by default.
pause: {duration: 60}
- desiredReplicas: 30%
pause: {duration: 60}
- desiredReplicas: 60%
pause: {duration: 60}
- desiredReplicas: 100%
pause: {duration: 60}
```
### Batch Release + Traffic Weight
```
apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
name: rollouts-demo
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: workload-demo
strategy:
canary:
steps:
- trafficWeight: 5%
desiredReplicas: 2
- desiredReplicas: 30%
- desiredReplicas: 60%
- desiredReplicas: 100%
trafficRoutings:
- service: service-demo
ingress:
classType: nginx
name: ingress-demo
```
### Batch Release + Traffic A/B Testing
```
apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
name: rollouts-demo
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: workload-demo
strategy:
canary:
steps:
- trafficMatches:
- headers:
- name: user-agent
type: Exact
value: pc
desiredReplicas: 2
- desiredReplicas: 30%
- desiredReplicas: 60%
- desiredReplicas: 100%
trafficRoutings:
- service: service-demo
ingress:
classType: nginx
name: ingress-demo
```
### End-to-End progressive delivery for microservice application
```
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
name: mse-traffic
spec:
objectRef:
- service: spring-cloud-a
ingress:
classType: mse
name: spring-cloud-a
strategy:
matches:
- headers:
- type: Exact
name: User-Agent
value: xiaoming
# for http requests via the ingress, add header x-mse-tag=gray
# so that mse or istio routes the gray traffic to the gray application
requestHeaderModifier:
set:
- name: x-mse-tag
value: gray
---
apiVersion: rollouts.kruise.io/v1alpha1
kind: Rollout
metadata:
name: rollout-a
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: spring-cloud-a
strategy:
canary:
enableExtraWorkloadForCanary: true
# Type TrafficRouting's name
trafficRoutingRef: mse-traffic
steps:
- desiredReplicas: 1
# patch pod template metadata to canary workload
# current only support deployment, and when enableExtraWorkloadForCanary=true
patchPodTemplateMetadata:
labels:
alicloud.service.tag: gray
opensergo.io/canary-gray: gray
---
apiVersion: rollouts.kruise.io/v1alpha1
kind: Rollout
metadata:
name: rollout-a
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: spring-cloud-a
strategy:
canary:
enableExtraWorkloadForCanary: true
# Type TrafficRouting's name
trafficRoutingRef: mse-traffic
steps:
- desiredReplicas: 1
# patch pod template metadata to canary workload
patchPodTemplateMetadata:
labels:
alicloud.service.tag: gray
opensergo.io/canary-gray: gray
```

View File

@ -30,6 +30,7 @@ spec:
spec:
containers:
- name: echoserver
# on Mac M1, choose an image that supports arm64, such as e2eteam/echoserver:2.2-linux-arm64
image: cilium/echoserver:1.10.2
imagePullPolicy: IfNotPresent
ports:
@ -88,14 +89,12 @@ metadata:
# namespace: xxxx
spec:
objectRef:
type: workloadRef
# rollout of published workloads, currently only supports Deployment, CloneSet
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: echoserver
strategy:
type: canary
canary:
# canary published, e.g. 20%, 40%, 60% ...
steps:
@ -108,9 +107,7 @@ spec:
trafficRoutings:
# echoserver service name
- service: echoserver
# nginx ingress
type: nginx
# echoserver ingress name
# echoserver ingress name; currently only nginx ingress is supported
ingress:
name: echoserver
```
@ -129,6 +126,7 @@ spec:
...
containers:
- name: echoserver
# on Mac M1, you can use the image e2eteam/echoserver:2.2-linux-arm
image: cilium/echoserver:1.10.3
imagePullPolicy: IfNotPresent
```
@ -142,7 +140,7 @@ As shown in the figure below, replicas(5)*replicas(20%)=1 new versions of Pods a
**The Rollout status shows that the current rollout status is *StepPaused*, which means that the first 20% of Pods are released success and 5% of traffic is routed to the new version.**
After that, developers can use other methods, such as Prometheus business metrics,
to determine that the release meets expectations and then continue the subsequent releases via **kubectl-kruise rollout approve rollout/rollouts-demo -n default** and wait deployment release is complete, as follows:
to determine that the release meets expectations and then continue the subsequent releases via **[kubectl-kruise](https://github.com/openkruise/kruise-tools) rollout approve rollout/rollouts-demo -n default** and wait for the deployment release to complete, as follows:
![approve](../images/approve_rollout.png)
@ -181,6 +179,7 @@ spec:
...
containers:
- name: echoserver
# on M1, roll back to e2eteam/echoserver:2.2-linux-arm64
image: cilium/echoserver:1.10.2
imagePullPolicy: IfNotPresent
```
@ -199,6 +198,7 @@ spec:
...
containers:
- name: echoserver
# on M1, you can use the image e2eteam/echoserver:2.2-linux-arm
image: cilium/echoserver:1.10.3
imagePullPolicy: IfNotPresent
```

84
go.mod
View File

@ -1,19 +1,81 @@
module github.com/openkruise/rollouts
go 1.16
go 1.19
require (
github.com/davecgh/go-spew v1.1.1
github.com/evanphx/json-patch v4.12.0+incompatible
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.17.0
github.com/openkruise/kruise-api v1.0.0
github.com/onsi/gomega v1.24.1
github.com/openkruise/kruise-api v1.3.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.2
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64
golang.org/x/time v0.3.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.22.6
k8s.io/apiextensions-apiserver v0.22.6
k8s.io/apimachinery v0.22.6
k8s.io/client-go v0.22.6
k8s.io/klog/v2 v2.9.0
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a
sigs.k8s.io/controller-runtime v0.10.3
sigs.k8s.io/yaml v1.2.0
k8s.io/api v0.26.3
k8s.io/apiextensions-apiserver v0.26.3
k8s.io/apimachinery v0.26.3
k8s.io/apiserver v0.26.3
k8s.io/client-go v0.26.3
k8s.io/component-base v0.26.3
k8s.io/klog/v2 v2.100.1
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448
layeh.com/gopher-json v0.0.0-20201124131017-552bb3c4c3bf
sigs.k8s.io/controller-runtime v0.14.6
sigs.k8s.io/gateway-api v0.7.1
sigs.k8s.io/yaml v1.3.0
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/zapr v1.2.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.14.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.24.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/term v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
)

546
go.sum
View File

@ -8,162 +8,104 @@ cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0 h1:3ithwDMr7/3vpAMXiH+ZQnYbuIsh+OPhUPMFC9enmn0=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20210608223527-2377c96fe795/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw=
github.com/Azure/go-autorest/autorest v0.11.18 h1:90Y4srNYrwOtAgVo3ndrQkTYn6kf1Eg/AjTFJ8Is2aM=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/adal v0.9.13 h1:Mp5hbtOePIzM8pJVRa3YLrWWmZtoxRXqUEzCfJt3+/Q=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.11.0+incompatible h1:glyUF9yIYtMHzn8xaKw5rMhdWcwsYV8dZHIq5567/xs=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible h1:7ZaBxOI7TMoYBfyA3cQHErNNyAWIKUMIwqxEtgHOs5c=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJCLunww=
github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/zapr v0.4.0 h1:uc1uML3hRYL9/ZZPdgHS/n8Nzo+eaYL/Efxkkamf7OM=
github.com/go-logr/zapr v0.4.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/zapr v1.2.3 h1:a9vnzlIBPQBBkeaR9IuMUfmVOrQlkoC4YfPoFkX3T7A=
github.com/go-logr/zapr v1.2.3/go.mod h1:eIauM6P8qSvTw5o2ez6UEAfGjQKrxQTl5EoK+Qa2oG4=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14 h1:gm3vOOXfiuw5i9p5N9xJvfjvuofpyvLA9Wr6QfK5Fng=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -174,11 +116,14 @@ github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfb
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
@@ -188,90 +133,59 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54=
github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
github.com/googleapis/gnostic v0.5.5 h1:9fHAtK0uDfpveeqqo1hkEZJcFvYXAiCN3UutL8F9xHw=
github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11 h1:uVUAXhF2To8cbw/3xN3pxj6kk7TYKs98NIrTqPlMWAQ=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -280,194 +194,122 @@ github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFB
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
github.com/matttproud/golang_protobuf_extensions v1.0.2 h1:hAHbPm5IJGijwng3PWk09JkG9WeqChjprR5s9bBZ+OM=
github.com/matttproud/golang_protobuf_extensions v1.0.2/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/ginkgo/v2 v2.6.0 h1:9t9b9vRUbFq3C4qKFCGkVuq/fIHji802N1nrtkh1mNc=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.15.0/go.mod h1:cIuvLEne0aoVhAgh/O6ac0Op8WWw9H6eYCriF+tEHG0=
github.com/onsi/gomega v1.17.0 h1:9Luw4uT5HTjHTN8+aNcSThgH1vdXnmdJ8xIfZ4wyTRE=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/openkruise/kruise-api v1.0.0 h1:ScA0LxRRNBsgbcyLhTzR9B+KpGNWsIMptzzmjTqfYQo=
github.com/openkruise/kruise-api v1.0.0/go.mod h1:kxV/UA/vrf/hz3z+kL21c0NOawC6K1ZjaKcJFgiOwsE=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/onsi/gomega v1.24.1 h1:KORJXNNTzJXzu4ScJWssJfJMnJ+2QJqhoQSRwNlze9E=
github.com/onsi/gomega v1.24.1/go.mod h1:3AOiACssS3/MajrniINInwbfOOtfZvplPzuRSmvt1jM=
github.com/openkruise/kruise-api v1.3.0 h1:yfEy64uXgSuX/5RwePLbwUK/uX8RRM8fHJkccel5ZIQ=
github.com/openkruise/kruise-api v1.3.0/go.mod h1:9ZX+ycdHKNzcA5ezAf35xOa2Mwfa2BYagWr0lKgi5dU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0 h1:iMAkS2TDoNWnKM+Kopnx/8tnEStIfpYA0ur0xQzzhMQ=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4=
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64 h1:5mLPGnFdSsevFRFc9q3yYbBkB6tsm4aCwwQV/j1JQAQ=
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0/go.mod h1:oVGt1LRbBOBq1A5BQLlUg9UaU/54aiHw8cgjV3aWZ/E=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0/go.mod h1:2AboqHi0CiIZU0qwhtUfCYD1GeUzvvIXWNkhDt7ZMG4=
go.opentelemetry.io/otel v0.20.0/go.mod h1:Y3ugLH2oa81t5QO+Lty+zXf8zC9L26ax4Nzoxm/dooo=
go.opentelemetry.io/otel/exporters/otlp v0.20.0/go.mod h1:YIieizyaN77rtLJra0buKiNBOm9XQfkPEKBeuhoMwAM=
go.opentelemetry.io/otel/metric v0.20.0/go.mod h1:598I5tYlH1vzBjn+BTuhzTCSb/9debfNp6R3s7Pr1eU=
go.opentelemetry.io/otel/oteltest v0.20.0/go.mod h1:L7bgKf9ZB7qCwT9Up7i9/pn0PWIa9FqQ2IQ8LoxiGnw=
go.opentelemetry.io/otel/sdk v0.20.0/go.mod h1:g/IcepuwNsoiX5Byy2nNV0ySUF1em498m7hBWC279Yc=
go.opentelemetry.io/otel/sdk/export/metric v0.20.0/go.mod h1:h7RBNMsDJ5pmI1zExLi+bJK+Dr8NQCh0qGhm1KDnNlE=
go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4/0TjTXukfxjzSTpHE=
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.0 h1:mZQZefskPPCMIBCSEH0v2/iUqqLrYtaeqwD6FUGUnFE=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83 h1:/ZScEX8SfEmUGRHs0gxpqteO5nfNW6axyZbBdw9A12g=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -490,8 +332,6 @@ golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
@@ -500,14 +340,10 @@ golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -517,8 +353,8 @@ golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -526,38 +362,41 @@ golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211209124913-491a49abca63 h1:iocB37TsdFuN6IBRZ+ry36wrkoV51/tl5vOWqkcPGvY=
golang.org/x/net v0.0.0-20211209124913-491a49abca63/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d h1:TzXSXBo42m9gQenoE3b9BGiEpg5IG2JkU5FkPIawgtw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -566,13 +405,11 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -584,66 +421,60 @@ golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210817190340-bfb29a6856f2 h1:c8PlLMqBbOHoqtjteWm5/kbe6rNY2pbRfbIMVnepueo=
golang.org/x/sys v0.0.0-20210817190340-bfb29a6856f2/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@ -658,17 +489,23 @@ golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.2 h1:kRBLX7v7Af8W7Gdbbc908OJcdgtK8bOz9Uaj8/F1ACA=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.2.0 h1:4pT439QV83L+G9FkcCriY6EkpcK6r6bK+A5FBUMI7qY=
gomodules.xyz/jsonpatch/v2 v2.2.0/go.mod h1:WXp+iVDkoLQqPudfQ9GBlwB2eZ5DKOnjQZCYdOS8GPY=
@ -681,12 +518,19 @@ google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsb
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
@ -706,12 +550,19 @@ google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvx
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@ -720,11 +571,10 @@ google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQ
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -736,8 +586,9 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@ -748,16 +599,10 @@ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@ -766,65 +611,46 @@ gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.20.10/go.mod h1:0kei3F6biGjtRQBo5dUeujq6Ji3UCh9aOSfp/THYd7I=
k8s.io/api v0.22.2/go.mod h1:y3ydYpLJAaDI+BbSe2xmGcqxiWHmWjkEeIbiwHvnPR8=
k8s.io/api v0.22.6 h1:acjE5ABt0KpsBI9QCtLqaQEPSF94jOtE/LoFxSYasSE=
k8s.io/api v0.22.6/go.mod h1:q1F7IfaNrbi/83ebLy3YFQYLjPSNyunZ/IXQxMmbwCg=
k8s.io/apiextensions-apiserver v0.22.2/go.mod h1:2E0Ve/isxNl7tWLSUDgi6+cmwHi5fQRdwGVCxbC+KFA=
k8s.io/apiextensions-apiserver v0.22.6 h1:TH+9+EGtoVzzbrlfSDnObzFTnyXKqw1NBfT5XFATeJI=
k8s.io/apiextensions-apiserver v0.22.6/go.mod h1:wNsLwy8mfIkGThiv4Qq/Hy4qRazViKXqmH5pfYiRKyY=
k8s.io/apimachinery v0.20.10/go.mod h1:kQa//VOAwyVwJ2+L9kOREbsnryfsGSkSM1przND4+mw=
k8s.io/apimachinery v0.22.2/go.mod h1:O3oNtNadZdeOMxHFVxOreoznohCpy0z6mocxbZr7oJ0=
k8s.io/apimachinery v0.22.6 h1:z7vxNRkFX0NToA+8D17kzLZ/T4t+DqwzUlqqbqRepRs=
k8s.io/apimachinery v0.22.6/go.mod h1:ZvVLP5iLhwVFg2Yx9Gh5W0um0DUauExbRhe+2Z8I1EU=
k8s.io/apiserver v0.22.2/go.mod h1:vrpMmbyjWrgdyOvZTSpsusQq5iigKNWv9o9KlDAbBHI=
k8s.io/apiserver v0.22.6/go.mod h1:OlL1rGa2kKWGj2JEXnwBcul/BwC9Twe95gm4ohtiIIs=
k8s.io/client-go v0.20.10/go.mod h1:fFg+aLoasv/R+xiVaWjxeqGFYltzgQcOQzkFaSRfnJ0=
k8s.io/client-go v0.22.2/go.mod h1:sAlhrkVDf50ZHx6z4K0S40wISNTarf1r800F+RlCF6U=
k8s.io/client-go v0.22.6 h1:ugAXeC312xeGXsn7zTRz+btgtLBnW3qYhtUUpVQL7YE=
k8s.io/client-go v0.22.6/go.mod h1:TffU4AV2idZGeP+g3kdFZP+oHVHWPL1JYFySOALriw0=
k8s.io/code-generator v0.20.10/go.mod h1:i6FmG+QxaLxvJsezvZp0q/gAEzzOz3U53KFibghWToU=
k8s.io/code-generator v0.22.2/go.mod h1:eV77Y09IopzeXOJzndrDyCI88UBok2h6WxAlBwpxa+o=
k8s.io/code-generator v0.22.6/go.mod h1:iOZwYADSgFPNGWfqHFfg1V0TNJnl1t0WyZluQp4baqU=
k8s.io/component-base v0.22.2/go.mod h1:5Br2QhI9OTe79p+TzPe9JKNQYvEKbq9rTJDWllunGug=
k8s.io/component-base v0.22.6 h1:YgGMDVnr97rhn0eljuYIU/9XFyz8JVDM30slMYrDgPc=
k8s.io/component-base v0.22.6/go.mod h1:ngHLefY4J5fq2fApNdbWyj4yh0lvw36do4aAjNN8rc8=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo v0.0.0-20201214224949-b6c5ce23f027/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.9.0 h1:D7HV+n1V57XeZ0m6tdRkfknthUaM06VFbWldOFh8kzM=
k8s.io/klog/v2 v2.9.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c h1:jvamsI1tn9V0S8jicyX82qaFC0H/NKxv2e5mbqsgR80=
k8s.io/kube-openapi v0.0.0-20211109043538-20434351676c/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a h1:8dYfu/Fc9Gz2rNJKB9IQRGgQOh2clmRzNIPPY1xLY5g=
k8s.io/utils v0.0.0-20210819203725-bdf08cb9a70a/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.26.3 h1:emf74GIQMTik01Aum9dPP0gAypL8JTLl/lHa4V9RFSU=
k8s.io/api v0.26.3/go.mod h1:PXsqwPMXBSBcL1lJ9CYDKy7kIReUydukS5JiRlxC3qE=
k8s.io/apiextensions-apiserver v0.26.3 h1:5PGMm3oEzdB1W/FTMgGIDmm100vn7IaUP5er36dB+YE=
k8s.io/apiextensions-apiserver v0.26.3/go.mod h1:jdA5MdjNWGP+njw1EKMZc64xAT5fIhN6VJrElV3sfpQ=
k8s.io/apimachinery v0.26.3 h1:dQx6PNETJ7nODU3XPtrwkfuubs6w7sX0M8n61zHIV/k=
k8s.io/apimachinery v0.26.3/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
k8s.io/apiserver v0.26.3 h1:blBpv+yOiozkPH2aqClhJmJY+rp53Tgfac4SKPDJnU4=
k8s.io/apiserver v0.26.3/go.mod h1:CJe/VoQNcXdhm67EvaVjYXxR3QyfwpceKPuPaeLibTA=
k8s.io/client-go v0.26.3 h1:k1UY+KXfkxV2ScEL3gilKcF7761xkYsSD6BC9szIu8s=
k8s.io/client-go v0.26.3/go.mod h1:ZPNu9lm8/dbRIPAgteN30RSXea6vrCpFvq+MateTUuQ=
k8s.io/component-base v0.26.3 h1:oC0WMK/ggcbGDTkdcqefI4wIZRYdK3JySx9/HADpV0g=
k8s.io/component-base v0.26.3/go.mod h1:5kj1kZYwSC6ZstHJN7oHBqcJC6yyn41eR+Sqa/mQc8E=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 h1:+70TFaan3hfJzs+7VK2o+OGxg8HsuBr/5f6tVAjDu6E=
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448 h1:KTgPnR10d5zhztWptI952TNtt/4u5h3IzDXkdIMuo2Y=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
layeh.com/gopher-json v0.0.0-20201124131017-552bb3c4c3bf h1:rRz0YsF7VXj9fXRF6yQgFI7DzST+hsI3TeFSGupntu0=
layeh.com/gopher-json v0.0.0-20201124131017-552bb3c4c3bf/go.mod h1:ivKkcY8Zxw5ba0jldhZCYYQfGdb2K6u9tbYK1AwMIBc=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.22/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.27/go.mod h1:tq2nT0Kx7W+/f2JVE+zxYtUhdjuELJkVpNz+x/QN5R4=
sigs.k8s.io/controller-runtime v0.10.3 h1:s5Ttmw/B4AuIbwrXD3sfBkXwnPMMWrqpVj4WRt1dano=
sigs.k8s.io/controller-runtime v0.10.3/go.mod h1:CQp8eyUQZ/Q7PJvnIrB6/hgfTC1kBkGylwsLgOQi1WY=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/controller-runtime v0.14.6 h1:oxstGVvXGNnMvY7TAESYk+lzr6S3V5VFxQ6d92KcwQA=
sigs.k8s.io/controller-runtime v0.14.6/go.mod h1:WqIdsAY6JBsjfc/CqO0CORmNtoCtE4S6qbPc9s68h+0=
sigs.k8s.io/gateway-api v0.7.1 h1:Tts2jeepVkPA5rVG/iO+S43s9n7Vp7jCDhZDQYtPigQ=
sigs.k8s.io/gateway-api v0.7.1/go.mod h1:Xv0+ZMxX0lu1nSSDIIPEfbVztgNZ+3cfiYrJsa2Ooso=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=

View File

@ -1,5 +1,5 @@
/*
Copyright 2022 The Kruise Authors.
Copyright 2023 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@ -0,0 +1,241 @@
package main
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
custom "github.com/openkruise/rollouts/pkg/trafficrouting/network/customNetworkProvider"
"github.com/openkruise/rollouts/pkg/util/luamanager"
lua "github.com/yuin/gopher-lua"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
utilpointer "k8s.io/utils/pointer"
"sigs.k8s.io/yaml"
)
type TestCase struct {
Rollout *v1beta1.Rollout `json:"rollout,omitempty"`
TrafficRouting *v1alpha1.TrafficRouting `json:"trafficRouting,omitempty"`
Original *unstructured.Unstructured `json:"original,omitempty"`
Expected []*unstructured.Unstructured `json:"expected,omitempty"`
}
// this program converts the testdata into lua objects for debugging
// run `go run lua.go`; it will walk all testdata and convert each case into a lua object
// copy the generated objects into your lua scripts and then you can start debugging them
func main() {
err := convertTestCaseToLuaObject()
if err != nil {
fmt.Println(err)
}
}
func convertTestCaseToLuaObject() error {
err := filepath.Walk("./", func(path string, f os.FileInfo, err error) error {
if !strings.Contains(path, "trafficRouting.lua") {
return nil
}
if err != nil {
return fmt.Errorf("failed to walk path: %s", err.Error())
}
dir := filepath.Dir(path)
if _, err := os.Stat(filepath.Join(dir, "testdata")); err != nil {
fmt.Printf("testdata not found in %s\n", dir)
return nil
}
err = filepath.Walk(filepath.Join(dir, "testdata"), func(path string, info os.FileInfo, err error) error {
if !info.IsDir() && (filepath.Ext(path) == ".yaml" || filepath.Ext(path) == ".yml") {
fmt.Printf("--- walking path: %s ---\n", path)
err = objectToTable(path)
if err != nil {
return fmt.Errorf("failed to convert object to table: %s", err)
}
}
return nil
})
if err != nil {
return fmt.Errorf("failed to walk path: %s", err.Error())
}
return nil
})
if err != nil {
return fmt.Errorf("failed to walk path: %s", err)
}
return nil
}
// convert a testcase object to a lua table for debugging
func objectToTable(path string) error {
dir, file := filepath.Split(path)
testCase, err := getLuaTestCase(path)
if err != nil {
return fmt.Errorf("failed to get lua testcase: %s", err)
}
uList := make(map[string]interface{})
rollout := testCase.Rollout
trafficRouting := testCase.TrafficRouting
if rollout != nil {
steps := rollout.Spec.Strategy.GetSteps()
for i, step := range steps {
var weight *int32
if step.TrafficRoutingStrategy.Traffic != nil {
is := intstr.FromString(*step.TrafficRoutingStrategy.Traffic)
weightInt, _ := intstr.GetScaledValueFromIntOrPercent(&is, 100, true)
weight = utilpointer.Int32(int32(weightInt))
} else {
weight = utilpointer.Int32(-1)
}
var canaryService string
stableService := rollout.Spec.Strategy.GetTrafficRouting()[0].Service
canaryService = fmt.Sprintf("%s-canary", stableService)
data := &custom.LuaData{
Data: custom.Data{
Labels: testCase.Original.GetLabels(),
Annotations: testCase.Original.GetAnnotations(),
Spec: testCase.Original.Object["spec"],
},
Matches: step.TrafficRoutingStrategy.Matches,
CanaryWeight: *weight,
StableWeight: 100 - *weight,
CanaryService: canaryService,
StableService: stableService,
RequestHeaderModifier: step.TrafficRoutingStrategy.RequestHeaderModifier,
}
uList[fmt.Sprintf("step_%d", i)] = data
}
} else if trafficRouting != nil {
weight := trafficRouting.Spec.Strategy.Weight
if weight == nil {
weight = utilpointer.Int32(-1)
}
var canaryService string
stableService := trafficRouting.Spec.ObjectRef[0].Service
canaryService = stableService
matches := make([]v1beta1.HttpRouteMatch, 0)
for _, match := range trafficRouting.Spec.Strategy.Matches {
obj := v1beta1.HttpRouteMatch{}
obj.Headers = match.Headers
matches = append(matches, obj)
}
data := &custom.LuaData{
Data: custom.Data{
Labels: testCase.Original.GetLabels(),
Annotations: testCase.Original.GetAnnotations(),
Spec: testCase.Original.Object["spec"],
},
Matches: matches,
CanaryWeight: *weight,
StableWeight: 100 - *weight,
CanaryService: canaryService,
StableService: stableService,
RequestHeaderModifier: trafficRouting.Spec.Strategy.RequestHeaderModifier,
}
uList["steps_0"] = data
} else {
return fmt.Errorf("neither rollout nor trafficRouting defined in test case: %s", path)
}
objStr, err := executeLua(uList)
if err != nil {
return fmt.Errorf("failed to execute lua: %s", err.Error())
}
filePath := fmt.Sprintf("%s%s_obj.lua", dir, strings.Split(file, ".")[0])
fileStream, err := os.OpenFile(filePath, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0666)
if err != nil {
return fmt.Errorf("failed to open file: %s", err)
}
defer fileStream.Close()
header := "-- THIS IS GENERATED BY CONVERT_TEST_CASE_TO_LUA_OBJECT.GO FOR DEBUGGING --\n"
_, err = io.WriteString(fileStream, header+objStr)
if err != nil {
return fmt.Errorf("failed to WriteString %s", err)
}
return nil
}
func getLuaTestCase(path string) (*TestCase, error) {
yamlFile, err := os.ReadFile(path)
if err != nil {
return nil, err
}
luaTestCase := &TestCase{}
err = yaml.Unmarshal(yamlFile, luaTestCase)
if err != nil {
return nil, err
}
return luaTestCase, nil
}
func executeLua(steps map[string]interface{}) (string, error) {
luaManager := &luamanager.LuaManager{}
unObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&steps)
if err != nil {
return "", fmt.Errorf("failed to convert to unstructured: %s", err)
}
u := &unstructured.Unstructured{Object: unObj}
script := `
function serialize(obj, isKey)
local lua = ""
local t = type(obj)
if t == "number" then
lua = lua .. obj
elseif t == "boolean" then
lua = lua .. tostring(obj)
elseif t == "string" then
if isKey then
lua = lua .. string.format("%s", obj)
else
lua = lua .. string.format("%q", obj)
end
elseif t == "table" then
lua = lua .. "{"
for k, v in pairs(obj) do
if type(k) == "string" then
lua = lua .. serialize(k, true) .. "=" .. serialize(v, false) .. ","
else
lua = lua .. serialize(v, false) .. ","
end
end
local metatable = getmetatable(obj)
if metatable ~= nil and type(metatable.__index) == "table" then
for k, v in pairs(metatable.__index) do
if type(k) == "string" then
lua = lua .. serialize(k, true) .. "=" .. serialize(v, false) .. ","
else
lua = lua .. serialize(v, false) .. ","
end
end
end
lua = lua .. "}"
elseif t == "nil" then
return nil
else
error("can not serialize a " .. t .. " type.")
end
return lua
end
function table2string(tablevalue)
local stringtable = "steps=" .. serialize(tablevalue)
print(stringtable)
return stringtable
end
return table2string(obj)
`
l, err := luaManager.RunLuaScript(u, script)
if err != nil {
return "", fmt.Errorf("failed to run lua script: %s", err)
}
returnValue := l.Get(-1)
if returnValue.Type() == lua.LTString {
return returnValue.String(), nil
} else {
return "", fmt.Errorf("unexpected lua output type")
}
}
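For reference: this tool writes each converted case to a `<testcase>_obj.lua` file beside the testdata, whose body is a single serialized `steps` table keyed by step_0, step_1, ... and carrying the LuaData fields the traffic-routing scripts read (data, matches, canaryWeight, stableWeight, canaryService, stableService, requestHeaderModifier). A minimal sketch of what such a generated file might contain for a single 50% step (illustrative values only, not taken from this PR's testdata):
-- THIS IS GENERATED BY CONVERT_TEST_CASE_TO_LUA_OBJECT.GO FOR DEBUGGING --
steps={step_0={canaryWeight=50,stableWeight=50,canaryService="svc-demo-canary",stableService="svc-demo",data={spec={hosts={"*",},},},},}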

View File

@ -0,0 +1,49 @@
trafficRouting:
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
name: tr-demo
spec:
strategy:
matches:
- headers:
- type: Exact
name: version
value: canary
objectRef:
- service: svc-demo
customNetworkRefs:
- apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
name: ds-demo
original:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: ds-demo
spec:
host: svc-demo
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
subsets:
- labels:
version: base
name: version-base
expected:
- apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: ds-demo
spec:
host: svc-demo
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
subsets:
- labels:
version: base
name: version-base
- labels:
istio.service.tag: gray
name: canary

View File

@ -0,0 +1,8 @@
local spec = obj.data.spec
local canary = {}
canary.labels = {}
canary.name = "canary"
local podLabelKey = "istio.service.tag"
canary.labels[podLabelKey] = "gray"
table.insert(spec.subsets, canary)
return obj.data

View File

@ -0,0 +1,126 @@
rollout:
apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
name: rollouts-demo
spec:
workloadRef:
apiVersion: apps/v1
kind: Deployment
name: deploy-demo
strategy:
canary:
steps:
- matches:
- headers:
- type: Exact
name: user-agent
value: pc
- type: RegularExpression
name: name
value: ".*demo"
requestHeaderModifier:
set:
- name: "header-foo"
value: "bar"
- matches:
- headers:
- type: Exact
name: user-agent
value: pc
- headers:
- type: RegularExpression
name: name
value: ".*demo"
- traffic: "50%"
trafficRoutings:
- service: svc-demo
customNetworkRefs:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
name: vs-demo
original:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- route:
- destination:
host: svc-demo
expected:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- match:
- headers:
user-agent:
exact: pc
name:
regex: .*demo
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo-canary
- route:
- destination:
host: svc-demo
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- match:
- headers:
name:
regex: .*demo
route:
- destination:
host: svc-demo-canary
- match:
- headers:
user-agent:
exact: pc
route:
- destination:
host: svc-demo-canary
- route:
- destination:
host: svc-demo
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- route:
- destination:
host: svc-demo
weight: 50
- destination:
host: svc-demo-canary
weight: 50

View File

@ -0,0 +1,69 @@
trafficRouting:
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
name: tr-demo
spec:
strategy:
matches:
- headers:
- type: Exact
name: user-agent
value: pc
- type: RegularExpression
name: name
value: ".*demo"
requestHeaderModifier:
set:
- name: "header-foo"
value: "bar"
objectRef:
- service: svc-demo
customNetworkRefs:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
name: vs-demo
original:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- route:
- destination:
host: svc-demo
subset: base
expected:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- match:
- headers:
user-agent:
exact: pc
name:
regex: .*demo
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo
subset: canary
- route:
- destination:
host: svc-demo
subset: base

View File

@ -0,0 +1,80 @@
trafficRouting:
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
name: tr-demo
spec:
strategy:
matches:
- headers:
- type: Exact
name: user-agent
value: pc
- headers:
- type: RegularExpression
name: name
value: ".*demo"
requestHeaderModifier:
set:
- name: "header-foo"
value: "bar"
objectRef:
- service: svc-demo
customNetworkRefs:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
name: vs-demo
original:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- route:
- destination:
host: svc-demo
subset: base
expected:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- match:
- headers:
name:
regex: .*demo
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo
subset: canary
- match:
- headers:
user-agent:
exact: pc
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo
subset: canary
- route:
- destination:
host: svc-demo
subset: base

View File

@ -0,0 +1,50 @@
trafficRouting:
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
name: tr-demo
spec:
strategy:
weight: 50
objectRef:
- service: svc-demo
customNetworkRefs:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
name: vs-demo
original:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- route:
- destination:
host: svc-demo
subset: base
expected:
- apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx-vs
namespace: demo
spec:
hosts:
- "*"
gateways:
- nginx-gateway
http:
- route:
- destination:
host: svc-demo
subset: base
weight: 50
- destination:
host: svc-demo
subset: canary
weight: 50

View File

@ -0,0 +1,156 @@
spec = obj.data.spec
if obj.canaryWeight == -1 then
obj.canaryWeight = 100
obj.stableWeight = 0
end
function GetHost(destination)
local host = destination.destination.host
dot_position = string.find(host, ".", 1, true)
if (dot_position) then
host = string.sub(host, 1, dot_position - 1)
end
return host
end
-- find routes of VirtualService with stableService
function GetRulesToPatch(spec, stableService, protocol)
local matchedRoutes = {}
if (spec[protocol] ~= nil) then
for _, rule in ipairs(spec[protocol]) do
-- skip rules that contain matches
if (rule.match == nil) then
for _, route in ipairs(rule.route) do
if GetHost(route) == stableService then
table.insert(matchedRoutes, rule)
end
end
end
end
end
return matchedRoutes
end
function CalculateWeight(route, stableWeight, n)
local weight
if (route.weight) then
weight = math.floor(route.weight * stableWeight / 100)
else
weight = math.floor(stableWeight / n)
end
return weight
end
-- generate routes with matches: insert a new rule before the other rules (only http headers, cookies, etc. are supported)
function GenerateRoutesWithMatches(spec, matches, stableService, canaryService, requestHeaderModifier)
for _, match in ipairs(matches) do
local route = {}
route["match"] = {}
local vsMatch = {}
for key, value in pairs(match) do
if key == "path" then
vsMatch["uri"] = {}
local rule = value
if rule["type"] == "RegularExpression" then
matchType = "regex"
elseif rule["type"] == "Exact" then
matchType = "exact"
elseif rule["type"] == "PathPrefix" then
matchType = "prefix"
end
vsMatch["uri"][matchType] = rule.value
else
vsMatch[key] = {}
for _, rule in ipairs(value) do
if rule["type"] == "RegularExpression" then
matchType = "regex"
elseif rule["type"] == "Exact" then
matchType = "exact"
elseif rule["type"] == "Prefix" then
matchType = "prefix"
end
if key == "headers" or key == "queryParams" then
vsMatch[key][rule["name"]] = {}
vsMatch[key][rule["name"]][matchType] = rule.value
else
vsMatch[key][matchType] = rule.value
end
end
end
end
table.insert(route["match"], vsMatch)
if requestHeaderModifier then
route["headers"] = {}
route["headers"]["request"] = {}
for action, headers in pairs(requestHeaderModifier) do
if action == "set" or action == "add" then
route["headers"]["request"][action] = {}
for _, header in ipairs(headers) do
route["headers"]["request"][action][header["name"]] = header["value"]
end
elseif action == "remove" then
route["headers"]["request"]["remove"] = {}
for _, rHeader in ipairs(headers) do
table.insert(route["headers"]["request"]["remove"], rHeader)
end
end
end
end
route.route = {
{
destination = {}
}
}
-- stableService == canaryService indicates DestinationRule exists and subset is set to be canary by default
if stableService == canaryService then
route.route[1].destination.host = stableService
route.route[1].destination.subset = "canary"
else
route.route[1].destination.host = canaryService
end
table.insert(spec.http, 1, route)
end
end
-- generate routes without matches, change every rule whose host is stableService
function GenerateRoutes(spec, stableService, canaryService, stableWeight, canaryWeight, protocol)
local matchedRules = GetRulesToPatch(spec, stableService, protocol)
for _, rule in ipairs(matchedRules) do
local canary
if stableService ~= canaryService then
canary = {
destination = {
host = canaryService,
},
weight = canaryWeight,
}
else
canary = {
destination = {
host = stableService,
subset = "canary",
},
weight = canaryWeight,
}
end
-- in case multiple versions already receive traffic, loop over every existing route
for _, route in ipairs(rule.route) do
-- update stable service weight
route.weight = CalculateWeight(route, stableWeight, #rule.route)
end
table.insert(rule.route, canary)
end
end
if (obj.matches and next(obj.matches) ~= nil)
then
GenerateRoutesWithMatches(spec, obj.matches, obj.stableService, obj.canaryService, obj.requestHeaderModifier)
else
GenerateRoutes(spec, obj.stableService, obj.canaryService, obj.stableWeight, obj.canaryWeight, "http")
GenerateRoutes(spec, obj.stableService, obj.canaryService, obj.stableWeight, obj.canaryWeight, "tcp")
GenerateRoutes(spec, obj.stableService, obj.canaryService, obj.stableWeight, obj.canaryWeight, "tls")
end
return obj.data
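A quick worked example of the weight handling above (a standalone sketch, not part of the shipped script): when an existing route carries no explicit weight, CalculateWeight falls back to floor(stableWeight / n) over the n routes in the rule, and GenerateRoutes appends a canary destination carrying canaryWeight. That is how the 50/50 testdata above ends up with two destinations weighted 50 each.
-- standalone sketch of the CalculateWeight fallback (illustration only)
local function CalculateWeight(route, stableWeight, n)
    if route.weight then
        -- an existing explicit weight is scaled by the stable share
        return math.floor(route.weight * stableWeight / 100)
    end
    -- no explicit weight: split the stable share evenly across the n routes
    return math.floor(stableWeight / n)
end
print(CalculateWeight({}, 50, 1))              -- 50: one stable route in a 50/50 step
print(CalculateWeight({ weight = 80 }, 50, 2)) -- 40: an 80-weight route scaled to a 50 stable share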

View File

@ -0,0 +1,36 @@
annotations = {}
if ( obj.annotations )
then
annotations = obj.annotations
end
annotations["alb.ingress.kubernetes.io/canary"] = "true"
annotations["alb.ingress.kubernetes.io/canary-by-cookie"] = nil
annotations["alb.ingress.kubernetes.io/canary-by-header"] = nil
annotations["alb.ingress.kubernetes.io/canary-by-header-pattern"] = nil
annotations["alb.ingress.kubernetes.io/canary-by-header-value"] = nil
annotations["alb.ingress.kubernetes.io/canary-weight"] = nil
annotations["alb.ingress.kubernetes.io/order"] = "1"
if ( obj.weight ~= "-1" )
then
annotations["alb.ingress.kubernetes.io/canary-weight"] = obj.weight
end
if ( not obj.matches )
then
return annotations
end
for _,match in ipairs(obj.matches) do
local header = match.headers[1]
if ( header.name == "canary-by-cookie" )
then
annotations["alb.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["alb.ingress.kubernetes.io/canary-by-header"] = header.name
if ( header.type == "RegularExpression" )
then
annotations["alb.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
else
annotations["alb.ingress.kubernetes.io/canary-by-header-value"] = header.value
end
end
end
return annotations

View File

@ -0,0 +1,46 @@
annotations = {}
-- obj.annotations holds the ingress annotations; this part of the lua script must be kept
if ( obj.annotations )
then
annotations = obj.annotations
end
-- indicates that this ingress is the nginx canary ingress
annotations["nginx.ingress.kubernetes.io/canary"] = "true"
-- First, reset all canary annotations to nil
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = nil
annotations["nginx.ingress.kubernetes.io/canary-weight"] = nil
-- if rollout.spec.strategy.canary.steps.weight is nil, obj.weight will be -1,
-- then we need to remove the canary-weight annotation
if ( obj.weight ~= "-1" )
then
annotations["nginx.ingress.kubernetes.io/canary-weight"] = obj.weight
end
-- if there are no header matches, return the annotations immediately
if ( not obj.matches )
then
return annotations
end
-- headers & cookie apis
-- traverse matches
for _,match in ipairs(obj.matches) do
local header = match.headers[1]
-- cookie
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
-- if regular expression
if ( header.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
end
end
end
-- the annotations must be returned
return annotations
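To make the branches above concrete, here is a rough sketch of the annotations this script would return for a step that matches a single Exact header and sets no weight (the input is hypothetical and only mirrors the testdata style used elsewhere in this PR):
-- standalone sketch (illustration only), not part of the shipped script
local obj = {
    weight = "-1",
    annotations = {},
    matches = { { headers = { { name = "user-agent", type = "Exact", value = "pc" } } } },
}
-- feeding this obj to the script above would yield, roughly:
local expected = {
    ["nginx.ingress.kubernetes.io/canary"] = "true",
    ["nginx.ingress.kubernetes.io/canary-by-header"] = "user-agent",
    ["nginx.ingress.kubernetes.io/canary-by-header-value"] = "pc",
    -- canary-weight stays unset because obj.weight is "-1"
}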

View File

@ -0,0 +1,66 @@
function split(input, delimiter)
local arr = {}
string.gsub(input, '[^' .. delimiter ..']+', function(w) table.insert(arr, w) end)
return arr
end
annotations = obj.annotations
annotations["nginx.ingress.kubernetes.io/canary"] = "true"
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = nil
-- MSE extended annotations
annotations["mse.ingress.kubernetes.io/canary-by-query"] = nil
annotations["mse.ingress.kubernetes.io/canary-by-query-pattern"] = nil
annotations["mse.ingress.kubernetes.io/canary-by-query-value"] = nil
annotations["nginx.ingress.kubernetes.io/canary-weight"] = nil
if ( obj.weight ~= "-1" )
then
annotations["nginx.ingress.kubernetes.io/canary-weight"] = obj.weight
end
if ( annotations["mse.ingress.kubernetes.io/service-subset"] )
then
annotations["mse.ingress.kubernetes.io/service-subset"] = "gray"
end
if ( obj.requestHeaderModifier )
then
local str = ''
for _,header in ipairs(obj.requestHeaderModifier.set) do
str = str..string.format("%s %s", header.name, header.value)
end
annotations["mse.ingress.kubernetes.io/request-header-control-update"] = str
end
if ( not obj.matches )
then
return annotations
end
for _,match in ipairs(obj.matches) do
if match.headers and next(match.headers) ~= nil then
header = match.headers[1]
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
if ( header.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
end
end
end
if match.queryParams and next(match.queryParams) ~= nil then
queryParam = match.queryParams[1]
annotations["nginx.ingress.kubernetes.io/canary-by-query"] = queryParam.name
if ( queryParam.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-query-pattern"] = queryParam.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-query-value"] = queryParam.value
end
end
end
return annotations
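The MSE-specific pieces above (query-parameter matches and requestHeaderModifier) can be illustrated the same way; a rough sketch of input and output under the script as written, with hypothetical values not taken from this PR's testdata:
-- standalone sketch (illustration only), not part of the shipped script
local obj = {
    weight = "20",
    annotations = {},
    requestHeaderModifier = { set = { { name = "header-foo", value = "bar" } } },
    matches = { { queryParams = { { name = "uid", type = "Exact", value = "123" } } } },
}
-- feeding this obj to the script above would yield, roughly:
local expected = {
    ["nginx.ingress.kubernetes.io/canary"] = "true",
    ["nginx.ingress.kubernetes.io/canary-weight"] = "20",
    ["mse.ingress.kubernetes.io/request-header-control-update"] = "header-foo bar",
    ["nginx.ingress.kubernetes.io/canary-by-query"] = "uid",
    ["nginx.ingress.kubernetes.io/canary-by-query-value"] = "123",
}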

View File

@ -0,0 +1,48 @@
annotations = {}
-- obj.annotations holds the ingress annotations; this part of the lua script must be kept
if ( obj.annotations )
then
annotations = obj.annotations
end
-- indicates that this ingress is the nginx canary ingress
annotations["nginx.ingress.kubernetes.io/canary"] = "true"
-- First, reset all canary annotations to nil
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = nil
annotations["nginx.ingress.kubernetes.io/canary-weight"] = nil
-- if rollout.spec.strategy.canary.steps.weight is nil, obj.weight will be -1,
-- then we need to remove the canary-weight annotation
if ( obj.weight ~= "-1" )
then
annotations["nginx.ingress.kubernetes.io/canary-weight"] = obj.weight
end
-- if there are no header matches, return the annotations immediately
if ( not obj.matches )
then
return annotations
end
-- headers & cookie apis
-- traverse matches
for _,match in ipairs(obj.matches) do
if match.headers and next(match.headers) ~= nil then
local header = match.headers[1]
-- cookie
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
-- if regular expression
if ( header.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
end
end
end
end
-- the annotations must be returned
return annotations

53
main.go
View File

@ -20,12 +20,19 @@ import (
"flag"
"os"
kruisev1aplphal "github.com/openkruise/kruise-api/apps/v1alpha1"
rolloutsv1alpha1 "github.com/openkruise/rollouts/api/v1alpha1"
kruisev1aplphal1 "github.com/openkruise/kruise-api/apps/v1alpha1"
kruisev1beta1 "github.com/openkruise/kruise-api/apps/v1beta1"
rolloutapi "github.com/openkruise/rollouts/api"
br "github.com/openkruise/rollouts/pkg/controller/batchrelease"
"github.com/openkruise/rollouts/pkg/controller/deployment"
"github.com/openkruise/rollouts/pkg/controller/rollout"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/controller/rollouthistory"
"github.com/openkruise/rollouts/pkg/controller/trafficrouting"
utilclient "github.com/openkruise/rollouts/pkg/util/client"
utilfeature "github.com/openkruise/rollouts/pkg/util/feature"
"github.com/openkruise/rollouts/pkg/webhook"
"github.com/spf13/pflag"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
@ -33,6 +40,7 @@ import (
"k8s.io/klog/v2/klogr"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
@ -47,8 +55,11 @@ var (
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(kruisev1aplphal.AddToScheme(scheme))
utilruntime.Must(rolloutsv1alpha1.AddToScheme(scheme))
utilruntime.Must(kruisev1aplphal1.AddToScheme(scheme))
utilruntime.Must(kruisev1beta1.AddToScheme(scheme))
utilruntime.Must(rolloutapi.AddToScheme(scheme))
utilruntime.Must(gatewayv1beta1.AddToScheme(scheme))
utilruntime.Must(admissionregistrationv1.AddToScheme(scheme))
//+kubebuilder:scaffold:scheme
}
@ -61,24 +72,30 @@ func main() {
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
utilfeature.DefaultMutableFeatureGate.AddFlag(pflag.CommandLine)
klog.InitFlags(nil)
flag.Parse()
pflag.CommandLine.AddGoFlagSet(flag.CommandLine)
pflag.Parse()
ctrl.SetLogger(klogr.New())
cfg := ctrl.GetConfigOrDie()
cfg.UserAgent = "kruise-rollout"
setupLog.Info("new clientset registry")
err := util.NewRegistry(ctrl.GetConfigOrDie())
err := utilclient.NewRegistry(cfg)
if err != nil {
setupLog.Error(err, "unable to init clientset and informer")
os.Exit(1)
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
mgr, err := ctrl.NewManager(cfg, ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "71ddec2c.kruise.io",
NewClient: utilclient.NewClient,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
@ -88,10 +105,19 @@ func main() {
if err = (&rollout.RolloutReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Finder: util.NewControllerFinder(mgr.GetClient()),
Recorder: mgr.GetEventRecorderFor("rollout-controller"),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Rollout")
os.Exit(1)
}
if err = (&trafficrouting.TrafficRoutingReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("trafficrouting-controller"),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "TrafficRouting")
os.Exit(1)
}
if err = br.Add(mgr); err != nil {
@ -99,6 +125,15 @@ func main() {
os.Exit(1)
}
if err = rollouthistory.Add(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "rollouthistory")
os.Exit(1)
}
if err = deployment.Add(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "advanceddeployment")
os.Exit(1)
}
//+kubebuilder:scaffold:builder
setupLog.Info("setup webhook")
if err = webhook.SetupWithManager(mgr); err != nil {
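
Taken together, the main.go changes fold the standard-library flags into pflag so the feature-gate and klog flags are parsed in one pass, and build a single rest.Config (tagged with a UserAgent) that is reused by both the clientset registry and the manager. The following is a minimal, stand-alone sketch of that pattern; only the leader-election flag is kept, and the UserAgent value is a placeholder, not the project's.

package main

import (
	"flag"

	"github.com/spf13/pflag"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	var leaderElect bool
	// Standard-library flags; klog and the feature gate register theirs the same way.
	flag.BoolVar(&leaderElect, "leader-elect", false, "enable leader election")

	// Fold the Go flag set into pflag and parse once, so flag-based and pflag-based
	// options are handled by a single Parse call.
	pflag.CommandLine.AddGoFlagSet(flag.CommandLine)
	pflag.Parse()

	// Build one rest.Config, tag it with a UserAgent, and reuse it for both the
	// clientset registry and the manager instead of calling GetConfigOrDie twice.
	cfg := ctrl.GetConfigOrDie()
	cfg.UserAgent = "example-manager" // placeholder; the diff above uses "kruise-rollout"

	if _, err := ctrl.NewManager(cfg, ctrl.Options{LeaderElection: leaderElect}); err != nil {
		panic(err)
	}
}
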


@ -21,14 +21,15 @@ import (
"encoding/json"
"flag"
"reflect"
"sync"
"time"
kruiseappsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/util"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/validation/field"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
"k8s.io/klog/v2"
@ -46,14 +47,25 @@ import (
var (
concurrentReconciles = 3
workloadHandler handler.EventHandler
runtimeController controller.Controller
watchedWorkload sync.Map
)
const ReleaseFinalizer = "rollouts.kruise.io/batch-release-finalizer"
func init() {
flag.IntVar(&concurrentReconciles, "batchrelease-workers", concurrentReconciles, "Max concurrent workers for BatchRelease controller.")
flag.IntVar(&concurrentReconciles, "batchrelease-workers", 3, "Max concurrent workers for batchRelease controller.")
watchedWorkload = sync.Map{}
watchedWorkload.LoadOrStore(util.ControllerKindDep.String(), struct{}{})
watchedWorkload.LoadOrStore(util.ControllerKindSts.String(), struct{}{})
watchedWorkload.LoadOrStore(util.ControllerKruiseKindDS.String(), struct{}{})
watchedWorkload.LoadOrStore(util.ControllerKruiseKindCS.String(), struct{}{})
watchedWorkload.LoadOrStore(util.ControllerKruiseKindSts.String(), struct{}{})
watchedWorkload.LoadOrStore(util.ControllerKruiseOldKindSts.String(), struct{}{})
}
const ReleaseFinalizer = "rollouts.kruise.io/batch-release-finalizer"
// Add creates a new Rollout Controller and adds it to the Manager with default RBAC. The Manager will set fields on the Controller
// and Start it when the Manager is Started.
func Add(mgr manager.Manager) error {
@ -82,14 +94,18 @@ func add(mgr manager.Manager, r reconcile.Reconciler) error {
}
// Watch for changes to BatchRelease
err = c.Watch(&source.Kind{Type: &v1alpha1.BatchRelease{}}, &handler.EnqueueRequestForObject{}, predicate.Funcs{
err = c.Watch(&source.Kind{Type: &v1beta1.BatchRelease{}}, &handler.EnqueueRequestForObject{}, predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
oldObject := e.ObjectOld.(*v1alpha1.BatchRelease)
newObject := e.ObjectNew.(*v1alpha1.BatchRelease)
oldObject := e.ObjectOld.(*v1beta1.BatchRelease)
newObject := e.ObjectNew.(*v1beta1.BatchRelease)
if oldObject.Generation != newObject.Generation || newObject.DeletionTimestamp != nil {
klog.V(3).Infof("Observed updated Spec for BatchRelease: %s/%s", newObject.Namespace, newObject.Name)
return true
}
if len(oldObject.Annotations) != len(newObject.Annotations) || !reflect.DeepEqual(oldObject.Annotations, newObject.Annotations) {
klog.V(3).Infof("Observed updated Annotation for BatchRelease: %s/%s", newObject.Namespace, newObject.Name)
return true
}
return false
},
})
@ -97,20 +113,14 @@ func add(mgr manager.Manager, r reconcile.Reconciler) error {
return err
}
if util.DiscoverGVK(util.CloneSetGVK) {
// Watch changes to CloneSet
err = c.Watch(&source.Kind{Type: &kruiseappsv1alpha1.CloneSet{}}, &workloadEventHandler{Reader: mgr.GetCache()})
if err != nil {
return err
}
}
// Watch changes to Deployment
err = c.Watch(&source.Kind{Type: &apps.Deployment{}}, &workloadEventHandler{Reader: mgr.GetCache()})
err = c.Watch(&source.Kind{Type: &corev1.Pod{}}, &podEventHandler{Reader: mgr.GetCache()})
if err != nil {
return err
}
return nil
runtimeController = c
workloadHandler = &workloadEventHandler{Reader: mgr.GetCache()}
return util.AddWorkloadWatcher(c, workloadHandler)
}
var _ reconcile.Reconciler = &BatchReleaseReconciler{}
@ -132,11 +142,18 @@ type BatchReleaseReconciler struct {
// +kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=replicasets,verbs=get;list;watch
// +kubebuilder:rbac:groups=apps,resources=replicasets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps,resources=statefulsets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps.kruise.io,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps.kruise.io,resources=statefulsets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps.kruise.io,resources=daemonsets,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups=apps.kruise.io,resources=daemonsets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=autoscaling,resources=horizontalpodautoscalers,verbs=get;list;watch;update;patch
// Reconcile reads the state of the cluster for a BatchRelease object and makes changes based on the state read
// and what is in the BatchRelease.Spec
func (r *BatchReleaseReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
release := new(v1alpha1.BatchRelease)
release := new(v1beta1.BatchRelease)
err := r.Get(context.TODO(), req.NamespacedName, release)
if err != nil {
if errors.IsNotFound(err) {
@ -150,6 +167,23 @@ func (r *BatchReleaseReconciler) Reconcile(ctx context.Context, req ctrl.Request
klog.Infof("Begin to reconcile BatchRelease(%v/%v), release-phase: %v", release.Namespace, release.Name, release.Status.Phase)
// If workload watcher does not exist, then add the watcher dynamically
workloadRef := release.Spec.WorkloadRef
workloadGVK := util.GetGVKFrom(&workloadRef)
_, exists := watchedWorkload.Load(workloadGVK.String())
if !exists {
succeeded, err := util.AddWatcherDynamically(runtimeController, workloadHandler, workloadGVK)
if err != nil {
return ctrl.Result{}, err
} else if succeeded {
watchedWorkload.LoadOrStore(workloadGVK.String(), struct{}{})
klog.Infof("Rollout controller begin to watch workload type: %s", workloadGVK.String())
// return, and wait for the informer cache to be synced
return ctrl.Result{}, nil
}
}
// finalizer will block the deletion of batchRelease
// until all canary resources and settings are cleaned up.
reconcileDone, err := r.handleFinalizer(release)
@ -157,12 +191,14 @@ func (r *BatchReleaseReconciler) Reconcile(ctx context.Context, req ctrl.Request
return reconcile.Result{}, err
}
// set the release info for executor before executing.
r.executor.SetReleaseInfo(release)
errList := field.ErrorList{}
// executor start to execute the batch release plan.
startTimestamp := time.Now()
result, currentStatus := r.executor.Do()
result, currentStatus, err := r.executor.Do(release)
if err != nil {
errList = append(errList, field.InternalError(field.NewPath("do-release"), err))
}
defer func() {
klog.InfoS("Finished one round of reconciling release plan",
@ -173,10 +209,15 @@ func (r *BatchReleaseReconciler) Reconcile(ctx context.Context, req ctrl.Request
"reconcile-result ", result, "time-cost", time.Since(startTimestamp))
}()
return result, r.updateStatus(release, currentStatus)
err = r.updateStatus(release, currentStatus)
if err != nil {
errList = append(errList, field.InternalError(field.NewPath("update-status"), err))
}
return result, errList.ToAggregate()
}
func (r *BatchReleaseReconciler) updateStatus(release *v1alpha1.BatchRelease, newStatus *v1alpha1.BatchReleaseStatus) error {
// updateStatus updates the BatchRelease status to newStatus
func (r *BatchReleaseReconciler) updateStatus(release *v1beta1.BatchRelease, newStatus *v1beta1.BatchReleaseStatus) error {
var err error
defer func() {
if err != nil {
@ -194,7 +235,7 @@ func (r *BatchReleaseReconciler) updateStatus(release *v1alpha1.BatchRelease, ne
objectKey := client.ObjectKeyFromObject(release)
if !reflect.DeepEqual(release.Status, *newStatus) {
err = retry.RetryOnConflict(retry.DefaultBackoff, func() error {
clone := &v1alpha1.BatchRelease{}
clone := &v1beta1.BatchRelease{}
getErr := r.Get(context.TODO(), objectKey, clone)
if getErr != nil {
return getErr
@ -206,7 +247,8 @@ func (r *BatchReleaseReconciler) updateStatus(release *v1alpha1.BatchRelease, ne
return err
}
func (r *BatchReleaseReconciler) handleFinalizer(release *v1alpha1.BatchRelease) (bool, error) {
// handleFinalizer will remove finalizer in finalized phase and add finalizer in the other phases.
func (r *BatchReleaseReconciler) handleFinalizer(release *v1beta1.BatchRelease) (bool, error) {
var err error
defer func() {
if err != nil {
@ -216,7 +258,7 @@ func (r *BatchReleaseReconciler) handleFinalizer(release *v1alpha1.BatchRelease)
// remove the release finalizer if needed
if !release.DeletionTimestamp.IsZero() &&
HasTerminatingCondition(release.Status) &&
release.Status.Phase == v1beta1.RolloutPhaseCompleted &&
controllerutil.ContainsFinalizer(release, ReleaseFinalizer) {
err = util.UpdateFinalizer(r.Client, release, util.RemoveFinalizerOpType, ReleaseFinalizer)
if client.IgnoreNotFound(err) != nil {
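
The controller changes above replace the hard-coded CloneSet/Deployment watches with a sync.Map of already-watched GVKs plus an on-demand call to util.AddWatcherDynamically inside Reconcile. Below is a rough sketch of that register-on-first-use idea; addWatch stands in for the project's dynamic-watch helper, and the names are illustrative rather than the actual API.

package sketch

import (
	"sync"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// watchedGVKs records which workload kinds already have a watch registered.
var watchedGVKs sync.Map

// ensureWatched registers a watch for gvk the first time a BatchRelease references it.
// addWatch stands in for the real helper (which also takes the controller and event
// handler). A true result tells the caller to requeue and let the freshly added
// informer cache sync before acting on the release.
func ensureWatched(gvk schema.GroupVersionKind, addWatch func(schema.GroupVersionKind) error) (bool, error) {
	if _, ok := watchedGVKs.Load(gvk.String()); ok {
		return false, nil // this workload kind is already being watched
	}
	if err := addWatch(gvk); err != nil {
		return false, err
	}
	watchedGVKs.LoadOrStore(gvk.String(), struct{}{})
	return true, nil
}
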


@ -27,7 +27,8 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
kruiseappsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1alpha1"
rolloutapi "github.com/openkruise/rollouts/api"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/util"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
@ -49,9 +50,9 @@ const TIME_LAYOUT = "2006-01-02 15:04:05"
var (
scheme *runtime.Scheme
releaseDeploy = &v1alpha1.BatchRelease{
releaseDeploy = &v1beta1.BatchRelease{
TypeMeta: metav1.TypeMeta{
APIVersion: v1alpha1.GroupVersion.String(),
APIVersion: v1beta1.GroupVersion.String(),
Kind: "BatchRelease",
},
ObjectMeta: metav1.ObjectMeta{
@ -59,27 +60,24 @@ var (
Namespace: "application",
UID: types.UID("87076677"),
},
Spec: v1alpha1.BatchReleaseSpec{
TargetRef: v1alpha1.ObjectRef{
WorkloadRef: &v1alpha1.WorkloadRef{
APIVersion: "apps/v1",
Kind: "Deployment",
Name: "sample",
},
Spec: v1beta1.BatchReleaseSpec{
WorkloadRef: v1beta1.ObjectRef{
APIVersion: "apps/v1",
Kind: "Deployment",
Name: "sample",
},
ReleasePlan: v1alpha1.ReleasePlan{
Batches: []v1alpha1.ReleaseBatch{
ReleasePlan: v1beta1.ReleasePlan{
RollingStyle: v1beta1.CanaryRollingStyle,
BatchPartition: pointer.Int32(0),
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("10%"),
PauseSeconds: 100,
},
{
CanaryReplicas: intstr.FromString("50%"),
PauseSeconds: 100,
},
{
CanaryReplicas: intstr.FromString("80%"),
PauseSeconds: 100,
},
},
},
@ -102,7 +100,7 @@ var (
},
},
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32Ptr(100),
Replicas: pointer.Int32(100),
Strategy: apps.DeploymentStrategy{
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: &apps.RollingUpdateDeployment{
@ -131,9 +129,9 @@ var (
)
var (
releaseClone = &v1alpha1.BatchRelease{
releaseClone = &v1beta1.BatchRelease{
TypeMeta: metav1.TypeMeta{
APIVersion: v1alpha1.GroupVersion.String(),
APIVersion: v1beta1.GroupVersion.String(),
Kind: "BatchRelease",
},
ObjectMeta: metav1.ObjectMeta{
@ -141,27 +139,24 @@ var (
Namespace: "application",
UID: types.UID("87076677"),
},
Spec: v1alpha1.BatchReleaseSpec{
TargetRef: v1alpha1.ObjectRef{
WorkloadRef: &v1alpha1.WorkloadRef{
APIVersion: "apps.kruise.io/v1alpha1",
Kind: "CloneSet",
Name: "sample",
},
Spec: v1beta1.BatchReleaseSpec{
WorkloadRef: v1beta1.ObjectRef{
APIVersion: "apps.kruise.io/v1alpha1",
Kind: "CloneSet",
Name: "sample",
},
ReleasePlan: v1alpha1.ReleasePlan{
Batches: []v1alpha1.ReleaseBatch{
ReleasePlan: v1beta1.ReleasePlan{
BatchPartition: pointer.Int32(0),
RollingStyle: v1beta1.PartitionRollingStyle,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("10%"),
PauseSeconds: 100,
},
{
CanaryReplicas: intstr.FromString("50%"),
PauseSeconds: 100,
},
{
CanaryReplicas: intstr.FromString("80%"),
PauseSeconds: 100,
},
},
},
@ -183,7 +178,7 @@ var (
},
},
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32Ptr(100),
Replicas: pointer.Int32(100),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
Partition: &intstr.IntOrString{Type: intstr.Int, IntVal: int32(1)},
MaxSurge: &intstr.IntOrString{Type: intstr.Int, IntVal: int32(2)},
@ -213,7 +208,7 @@ var (
func init() {
scheme = runtime.NewScheme()
apimachineryruntime.Must(apps.AddToScheme(scheme))
apimachineryruntime.Must(v1alpha1.AddToScheme(scheme))
apimachineryruntime.Must(rolloutapi.AddToScheme(scheme))
apimachineryruntime.Must(kruiseappsv1alpha1.AddToScheme(scheme))
controlInfo, _ := json.Marshal(metav1.NewControllerRef(releaseDeploy, releaseDeploy.GroupVersionKind()))
@ -236,54 +231,13 @@ func TestReconcile_CloneSet(t *testing.T) {
GetRelease func() client.Object
GetCloneSet func() []client.Object
ExpectedBatch int32
ExpectedPhase v1alpha1.RolloutPhase
ExpectedState v1alpha1.BatchReleaseBatchStateType
ExpectedPhase v1beta1.RolloutPhase
ExpectedState v1beta1.BatchReleaseBatchStateType
}{
// The following cases exercise linear transitions of the state machine
{
Name: "IfNeedProgress=false, Input-Phase=Initial, Output-Phase=Healthy",
GetRelease: func() client.Object {
return setPhase(releaseClone, v1alpha1.RolloutPhaseInitial)
},
GetCloneSet: func() []client.Object {
clone := stableClone.DeepCopy()
clone.Annotations = nil
return []client.Object{
clone,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseHealthy,
},
{
Name: "IfNeedProgress=false, Input-Phase=Healthy, Output-Phase=Healthy",
GetRelease: func() client.Object {
return setPhase(releaseClone, v1alpha1.RolloutPhaseHealthy)
},
GetCloneSet: func() []client.Object {
return []client.Object{
stableClone.DeepCopy(),
}
},
ExpectedPhase: v1alpha1.RolloutPhaseHealthy,
},
{
Name: "IfNeedProgress=true, Input-Phase=Healthy, Output-Phase=Preparing",
GetRelease: func() client.Object {
return setPhase(releaseClone, v1alpha1.RolloutPhaseHealthy)
},
GetCloneSet: func() []client.Object {
stable := getStableWithReady(stableClone, "v2")
canary := getCanaryWithStage(stable, "v2", -1, true)
return []client.Object{
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhasePreparing,
},
{
Name: "Preparing, Input-Phase=Preparing, Output-Phase=Progressing",
GetRelease: func() client.Object {
release := setPhase(releaseClone, v1alpha1.RolloutPhasePreparing)
release := setPhase(releaseClone, v1beta1.RolloutPhasePreparing)
stableTemplate := stableClone.Spec.Template.DeepCopy()
canaryTemplate := stableClone.Spec.Template.DeepCopy()
stableTemplate.Spec.Containers = containers("v1")
@ -299,12 +253,12 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
},
{
Name: "Progressing, stage=0, Input-State=Upgrade, Output-State=Verify",
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.UpgradingBatchState)
release := setState(releaseClone, v1beta1.UpgradingBatchState)
stableTemplate := stableClone.Spec.Template.DeepCopy()
canaryTemplate := stableClone.Spec.Template.DeepCopy()
stableTemplate.Spec.Containers = containers("v1")
@ -320,13 +274,13 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.VerifyingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.VerifyingBatchState,
},
{
Name: "Progressing, stage=0, Input-State=Upgrade, Output-State=Verify",
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.UpgradingBatchState)
release := setState(releaseClone, v1beta1.UpgradingBatchState)
stableTemplate := stableClone.Spec.Template.DeepCopy()
canaryTemplate := stableClone.Spec.Template.DeepCopy()
stableTemplate.Spec.Containers = containers("v1")
@ -342,19 +296,21 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.VerifyingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.VerifyingBatchState,
},
{
Name: "Progressing, stage=0, Input-State=Verify, Output-State=BatchReady",
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.VerifyingBatchState)
release := setState(releaseClone, v1beta1.VerifyingBatchState)
stableTemplate := stableClone.Spec.Template.DeepCopy()
canaryTemplate := stableClone.Spec.Template.DeepCopy()
stableTemplate.Spec.Containers = containers("v1")
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
return release
},
GetCloneSet: func() []client.Object {
@ -364,13 +320,13 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: "Progressing, stage=0->1, Input-State=BatchReady, Output-State=Upgrade",
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.ReadyBatchState)
release := setState(releaseClone, v1beta1.ReadyBatchState)
release.Status.CanaryStatus.BatchReadyTime = getOldTime()
stableTemplate := stableClone.Spec.Template.DeepCopy()
canaryTemplate := stableClone.Spec.Template.DeepCopy()
@ -378,6 +334,9 @@ func TestReconcile_CloneSet(t *testing.T) {
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release.Spec.ReleasePlan.BatchPartition = pointer.Int32(1)
return release
},
GetCloneSet: func() []client.Object {
@ -387,14 +346,14 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.UpgradingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.UpgradingBatchState,
ExpectedBatch: 1,
},
{
Name: "Progressing, stage=0->1, Input-State=BatchReady, Output-State=BatchReady",
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.ReadyBatchState)
release := setState(releaseClone, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableClone.Spec.Template.DeepCopy()
@ -403,6 +362,8 @@ func TestReconcile_CloneSet(t *testing.T) {
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
return release
},
GetCloneSet: func() []client.Object {
@ -412,13 +373,13 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: "Special Case: Scaling, Input-State=BatchReady, Output-State=Upgrade",
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.ReadyBatchState)
release := setState(releaseClone, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableClone.Spec.Template.DeepCopy()
@ -427,23 +388,25 @@ func TestReconcile_CloneSet(t *testing.T) {
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
return release
},
GetCloneSet: func() []client.Object {
stable := getStableWithReady(stableClone, "v2").(*kruiseappsv1alpha1.CloneSet)
stable.Spec.Replicas = pointer.Int32Ptr(200)
stable.Spec.Replicas = pointer.Int32(200)
canary := getCanaryWithStage(stable, "v2", 0, true)
return []client.Object{
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.UpgradingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.UpgradingBatchState,
},
{
Name: `Special Case: RollBack, Input-Phase=Progressing, Output-Phase=Abort`,
Name: `Special Case: RollBack, Input-Phase=Progressing, Output-Phase=Progressing`,
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.ReadyBatchState)
release := setState(releaseClone, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableClone.Spec.Template.DeepCopy()
@ -452,6 +415,8 @@ func TestReconcile_CloneSet(t *testing.T) {
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
return release
},
GetCloneSet: func() []client.Object {
@ -465,13 +430,13 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseFinalizing,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: `Special Case: Deletion, Input-Phase=Progressing, Output-Phase=Terminating`,
Name: `Special Case: Deletion, Input-Phase=Progressing, Output-Phase=Finalizing`,
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.ReadyBatchState)
release := setState(releaseClone, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableClone.Spec.Template.DeepCopy()
@ -482,6 +447,8 @@ func TestReconcile_CloneSet(t *testing.T) {
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.DeletionTimestamp = &metav1.Time{Time: time.Now()}
release.Finalizers = append(release.Finalizers, ReleaseFinalizer)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
return release
},
GetCloneSet: func() []client.Object {
@ -491,13 +458,13 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseTerminating,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseFinalizing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: `Special Case: Continuous Release, Input-Phase=Progressing, Output-Phase=Initial`,
Name: `Special Case: Continuous Release, Input-Phase=Progressing, Output-Phase=Progressing`,
GetRelease: func() client.Object {
release := setState(releaseClone, v1alpha1.ReadyBatchState)
release := setState(releaseClone, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableClone.Spec.Template.DeepCopy()
@ -506,6 +473,10 @@ func TestReconcile_CloneSet(t *testing.T) {
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release.Spec.ReleasePlan.BatchPartition = pointer.Int32(1)
release.Status.ObservedReleasePlanHash = util.HashReleasePlanBatches(&release.Spec.ReleasePlan)
return release
},
GetCloneSet: func() []client.Object {
@ -521,13 +492,41 @@ func TestReconcile_CloneSet(t *testing.T) {
canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseInitial,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: `Special Case: BatchPartition=nil, Input-Phase=Progressing, Output-Phase=Finalizing`,
GetRelease: func() client.Object {
release := setState(releaseClone, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableClone.Spec.Template.DeepCopy()
canaryTemplate := stableClone.Spec.Template.DeepCopy()
stableTemplate.Spec.Containers = containers("v1")
canaryTemplate.Spec.Containers = containers("v2")
release.Status.StableRevision = util.ComputeHash(stableTemplate, nil)
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Finalizers = append(release.Finalizers, ReleaseFinalizer)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release.Spec.ReleasePlan.BatchPartition = nil
return release
},
GetCloneSet: func() []client.Object {
stable := getStableWithReady(stableClone, "v2")
canary := getCanaryWithStage(stable, "v2", 0, true)
return []client.Object{
canary,
}
},
ExpectedPhase: v1beta1.RolloutPhaseFinalizing,
ExpectedState: v1beta1.ReadyBatchState,
},
}
for _, cs := range cases {
t.Run(cs.Name, func(t *testing.T) {
defer GinkgoRecover()
release := cs.GetRelease()
clonesets := cs.GetCloneSet()
rec := record.NewFakeRecorder(100)
@ -545,7 +544,7 @@ func TestReconcile_CloneSet(t *testing.T) {
Expect(err).NotTo(HaveOccurred())
Expect(result.RequeueAfter).Should(BeNumerically(">=", int64(0)))
newRelease := v1alpha1.BatchRelease{}
newRelease := v1beta1.BatchRelease{}
err = cli.Get(context.TODO(), key, &newRelease)
Expect(err).NotTo(HaveOccurred())
Expect(newRelease.Status.Phase).Should(Equal(cs.ExpectedPhase))
@ -563,38 +562,14 @@ func TestReconcile_Deployment(t *testing.T) {
GetRelease func() client.Object
GetDeployments func() []client.Object
ExpectedBatch int32
ExpectedPhase v1alpha1.RolloutPhase
ExpectedState v1alpha1.BatchReleaseBatchStateType
ExpectedPhase v1beta1.RolloutPhase
ExpectedState v1beta1.BatchReleaseBatchStateType
}{
// The following cases exercise linear transitions of the state machine
{
Name: "IfNeedProgress=false, Input-Phase=Initial, Output-Phase=Healthy",
Name: "IfNeedProgress=true, Input-Phase=Healthy, Output-Phase=Progressing",
GetRelease: func() client.Object {
return setPhase(releaseDeploy, v1alpha1.RolloutPhaseInitial)
},
GetDeployments: func() []client.Object {
return []client.Object{
stableDeploy.DeepCopy(),
}
},
ExpectedPhase: v1alpha1.RolloutPhaseHealthy,
},
{
Name: "IfNeedProgress=false, Input-Phase=Healthy, Output-Phase=Healthy",
GetRelease: func() client.Object {
return setPhase(releaseDeploy, v1alpha1.RolloutPhaseHealthy)
},
GetDeployments: func() []client.Object {
return []client.Object{
stableDeploy.DeepCopy(),
}
},
ExpectedPhase: v1alpha1.RolloutPhaseHealthy,
},
{
Name: "IfNeedProgress=true, Input-Phase=Healthy, Output-Phase=Preparing",
GetRelease: func() client.Object {
return setPhase(releaseDeploy, v1alpha1.RolloutPhaseHealthy)
return setPhase(releaseDeploy, v1beta1.RolloutPhaseHealthy)
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
@ -603,12 +578,12 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhasePreparing,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
},
{
Name: "Preparing, Input-Phase=Preparing, Output-Phase=Progressing",
GetRelease: func() client.Object {
return setPhase(releaseDeploy, v1alpha1.RolloutPhasePreparing)
return setPhase(releaseDeploy, v1beta1.RolloutPhasePreparing)
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2")
@ -617,12 +592,12 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
},
{
Name: "Progressing, stage=0, Input-State=Upgrade, Output-State=Verify",
GetRelease: func() client.Object {
return setState(releaseDeploy, v1alpha1.UpgradingBatchState)
return setState(releaseDeploy, v1beta1.UpgradingBatchState)
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2")
@ -631,28 +606,34 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.VerifyingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.VerifyingBatchState,
},
{
Name: "Progressing, stage=0, Input-State=Upgrade, Output-State=Verify",
Name: "Progressing, stage=0, Input-State=Verify, Output-State=Upgrade",
GetRelease: func() client.Object {
return setState(releaseDeploy, v1alpha1.UpgradingBatchState)
release := releaseDeploy.DeepCopy()
release.Status.CanaryStatus.UpdatedReplicas = 5
release.Status.CanaryStatus.UpdatedReadyReplicas = 5
return setState(release, v1beta1.VerifyingBatchState)
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2")
canary := getCanaryWithStage(stable, "v2", -1, true)
canary := getCanaryWithStage(stable, "v2", 0, false)
return []client.Object{
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.VerifyingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.UpgradingBatchState,
},
{
Name: "Progressing, stage=0, Input-State=Verify, Output-State=BatchReady",
GetRelease: func() client.Object {
return setState(releaseDeploy, v1alpha1.VerifyingBatchState)
release := releaseDeploy.DeepCopy()
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
return setState(release, v1beta1.VerifyingBatchState)
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2")
@ -661,15 +642,17 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: "Progressing, stage=0->1, Input-State=BatchReady, Output-State=Upgrade",
GetRelease: func() client.Object {
release := setState(releaseDeploy, v1alpha1.ReadyBatchState)
release.Status.CanaryStatus.BatchReadyTime = getOldTime()
return release
release := releaseDeploy.DeepCopy()
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release.Spec.ReleasePlan.BatchPartition = pointer.Int32(1)
return setState(release, v1beta1.ReadyBatchState)
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2")
@ -678,16 +661,17 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.UpgradingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.UpgradingBatchState,
ExpectedBatch: 1,
},
{
Name: "Progressing, stage=0->1, Input-State=BatchReady, Output-State=BatchReady",
GetRelease: func() client.Object {
release := setState(releaseDeploy, v1alpha1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
release := releaseDeploy.DeepCopy()
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release = setState(release, v1beta1.ReadyBatchState)
return release
},
GetDeployments: func() []client.Object {
@ -697,32 +681,32 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: "Special Case: Scaling, Input-State=BatchReady, Output-State=Upgrade",
GetRelease: func() client.Object {
release := setState(releaseDeploy, v1alpha1.ReadyBatchState)
release := setState(releaseDeploy, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
return release
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
stable.Spec.Replicas = pointer.Int32Ptr(200)
stable.Spec.Replicas = pointer.Int32(200)
canary := getCanaryWithStage(stable, "v2", 0, true)
return []client.Object{
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseProgressing,
ExpectedState: v1alpha1.UpgradingBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.UpgradingBatchState,
},
{
Name: `Special Case: RollBack, Input-Phase=Progressing, Output-Phase=Abort`,
Name: `Special Case: RollBack, Input-Phase=Progressing, Output-Phase=Progressing`,
GetRelease: func() client.Object {
release := setState(releaseDeploy, v1alpha1.ReadyBatchState)
release := setState(releaseDeploy, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableDeploy.Spec.Template.DeepCopy()
@ -740,13 +724,13 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseFinalizing,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: `Special Case: Deletion, Input-Phase=Progressing, Output-Phase=Terminating`,
Name: `Special Case: Deletion, Input-Phase=Progressing, Output-Phase=Finalizing`,
GetRelease: func() client.Object {
release := setState(releaseDeploy, v1alpha1.ReadyBatchState)
release := setState(releaseDeploy, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableDeploy.Spec.Template.DeepCopy()
@ -766,13 +750,13 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseTerminating,
ExpectedState: v1alpha1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseFinalizing,
ExpectedState: v1beta1.ReadyBatchState,
},
{
Name: `Special Case: Continuous Release, Input-Phase=Progressing, Output-Phase=Initial`,
Name: `Special Case: Continuous Release, Input-Phase=Progressing, Output-Phase=Progressing`,
GetRelease: func() client.Object {
release := setState(releaseDeploy, v1alpha1.ReadyBatchState)
release := setState(releaseDeploy, v1beta1.ReadyBatchState)
now := metav1.Now()
release.Status.CanaryStatus.BatchReadyTime = &now
stableTemplate := stableDeploy.Spec.Template.DeepCopy()
@ -790,13 +774,13 @@ func TestReconcile_Deployment(t *testing.T) {
stable, canary,
}
},
ExpectedPhase: v1alpha1.RolloutPhaseInitial,
ExpectedState: v1beta1.ReadyBatchState,
ExpectedPhase: v1beta1.RolloutPhaseProgressing,
},
}
for _, cs := range cases {
t.Run(cs.Name, func(t *testing.T) {
defer GinkgoRecover()
release := cs.GetRelease()
deployments := cs.GetDeployments()
rec := record.NewFakeRecorder(100)
@ -820,12 +804,11 @@ func TestReconcile_Deployment(t *testing.T) {
key := client.ObjectKeyFromObject(release)
request := reconcile.Request{NamespacedName: key}
result, err := reconciler.Reconcile(context.TODO(), request)
Expect(err).NotTo(HaveOccurred())
result, _ := reconciler.Reconcile(context.TODO(), request)
Expect(result.RequeueAfter).Should(BeNumerically(">=", int64(0)))
newRelease := v1alpha1.BatchRelease{}
err = cli.Get(context.TODO(), key, &newRelease)
newRelease := v1beta1.BatchRelease{}
err := cli.Get(context.TODO(), key, &newRelease)
Expect(err).NotTo(HaveOccurred())
Expect(newRelease.Status.Phase).Should(Equal(cs.ExpectedPhase))
Expect(newRelease.Status.CanaryStatus.CurrentBatch).Should(Equal(cs.ExpectedBatch))
@ -843,21 +826,17 @@ func containers(version string) []corev1.Container {
}
}
func setPhase(release *v1alpha1.BatchRelease, phase v1alpha1.RolloutPhase) *v1alpha1.BatchRelease {
func setPhase(release *v1beta1.BatchRelease, phase v1beta1.RolloutPhase) *v1beta1.BatchRelease {
r := release.DeepCopy()
r.Status.Phase = phase
switch phase {
case v1alpha1.RolloutPhaseInitial, v1alpha1.RolloutPhaseHealthy:
default:
r.Status.ObservedWorkloadReplicas = 100
r.Status.ObservedReleasePlanHash = util.HashReleasePlanBatches(&release.Spec.ReleasePlan)
}
r.Status.ObservedWorkloadReplicas = 100
r.Status.ObservedReleasePlanHash = util.HashReleasePlanBatches(&release.Spec.ReleasePlan)
return r
}
func setState(release *v1alpha1.BatchRelease, state v1alpha1.BatchReleaseBatchStateType) *v1alpha1.BatchRelease {
func setState(release *v1beta1.BatchRelease, state v1beta1.BatchReleaseBatchStateType) *v1beta1.BatchRelease {
r := release.DeepCopy()
r.Status.Phase = v1alpha1.RolloutPhaseProgressing
r.Status.Phase = v1beta1.RolloutPhaseProgressing
r.Status.CanaryStatus.CurrentBatchState = state
r.Status.ObservedWorkloadReplicas = 100
r.Status.ObservedReleasePlanHash = util.HashReleasePlanBatches(&release.Spec.ReleasePlan)
@ -910,9 +889,9 @@ func getCanaryWithStage(workload client.Object, version string, stage int, ready
d.UID = uuid.NewUUID()
d.Spec.Paused = false
d.ResourceVersion = strconv.Itoa(rand.Intn(100000000000))
d.Labels[util.CanaryDeploymentLabelKey] = "87076677"
d.Labels[util.CanaryDeploymentLabel] = "87076677"
d.Finalizers = []string{util.CanaryDeploymentFinalizer}
d.Spec.Replicas = pointer.Int32Ptr(int32(stageReplicas))
d.Spec.Replicas = pointer.Int32(int32(stageReplicas))
d.Spec.Template.Spec.Containers = containers(version)
d.Status.Replicas = int32(stageReplicas)
d.Status.ReadyReplicas = int32(stageReplicas)
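
Both test tables above repeat the same round trip: seed a fake client with the BatchRelease and its workloads, run one Reconcile, then re-read the release and compare its phase, batch and state. The helper below is a condensed, purely illustrative version of that shape; the real tests construct the reconciler inline with its Client, Scheme, Recorder and executor, which the build callback stands in for here.

package sketch

import (
	"context"
	"testing"

	"github.com/openkruise/rollouts/api/v1beta1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// reconcileOnce seeds a fake client with the release and its workloads, runs a single
// Reconcile, and returns the re-read BatchRelease so the caller can assert on
// Status.Phase, CurrentBatch and CurrentBatchState.
func reconcileOnce(t *testing.T, scheme *runtime.Scheme, build func(client.Client) reconcile.Reconciler,
	release client.Object, workloads ...client.Object) *v1beta1.BatchRelease {
	objs := append([]client.Object{release}, workloads...)
	cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(objs...).Build()
	r := build(cli) // in the real tests this wires Client, Scheme, Recorder and executor
	key := client.ObjectKeyFromObject(release)
	if _, err := r.Reconcile(context.TODO(), reconcile.Request{NamespacedName: key}); err != nil {
		// The CloneSet cases above require a nil error; the Deployment cases ignore it.
		t.Logf("reconcile returned: %v", err)
	}
	got := &v1beta1.BatchRelease{}
	if err := cli.Get(context.TODO(), key, got); err != nil {
		t.Fatalf("get release: %v", err)
	}
	return got
}
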


@ -19,13 +19,18 @@ package batchrelease
import (
"context"
"encoding/json"
"reflect"
kruiseappsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/workloads"
kruiseappsv1beta1 "github.com/openkruise/kruise-api/apps/v1beta1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/util"
utilclient "github.com/openkruise/rollouts/pkg/util/client"
expectations "github.com/openkruise/rollouts/pkg/util/expectation"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/workqueue"
@ -43,121 +48,116 @@ const (
DeleteEventAction EventAction = "Delete"
)
var (
controllerKruiseKindCS = kruiseappsv1alpha1.SchemeGroupVersion.WithKind("CloneSet")
controllerKindDep = appsv1.SchemeGroupVersion.WithKind("Deployment")
)
var _ handler.EventHandler = &workloadEventHandler{}
var _ handler.EventHandler = &podEventHandler{}
type podEventHandler struct {
client.Reader
}
func (p podEventHandler) Create(evt event.CreateEvent, q workqueue.RateLimitingInterface) {
pod, ok := evt.Object.(*corev1.Pod)
if !ok {
return
}
p.enqueue(pod, q)
}
func (p podEventHandler) Generic(evt event.GenericEvent, q workqueue.RateLimitingInterface) {
}
func (p podEventHandler) Delete(evt event.DeleteEvent, q workqueue.RateLimitingInterface) {
}
func (p podEventHandler) Update(evt event.UpdateEvent, q workqueue.RateLimitingInterface) {
oldPod, oldOK := evt.ObjectOld.(*corev1.Pod)
newPod, newOK := evt.ObjectNew.(*corev1.Pod)
if !oldOK || !newOK {
return
}
if oldPod.ResourceVersion == newPod.ResourceVersion || (util.IsEqualRevision(oldPod, newPod) && util.IsPodReady(oldPod) == util.IsPodReady(newPod)) {
return
}
klog.Infof("Pod %v ready condition changed, then enqueue", client.ObjectKeyFromObject(newPod))
p.enqueue(newPod, q)
}
func (p podEventHandler) enqueue(pod *corev1.Pod, q workqueue.RateLimitingInterface) {
owner := metav1.GetControllerOfNoCopy(pod)
if owner == nil {
return
}
workloadNamespacedName := types.NamespacedName{
Name: owner.Name, Namespace: pod.Namespace,
}
workloadGVK := schema.FromAPIVersionAndKind(owner.APIVersion, owner.Kind)
workloadObj, err := util.GetOwnerWorkload(p.Reader, pod)
if err != nil || workloadObj == nil {
//klog.Errorf("Failed to get owner workload for pod %v, err: %v", client.ObjectKeyFromObject(pod), err)
return
}
controlInfo, ok := workloadObj.GetAnnotations()[util.BatchReleaseControlAnnotation]
// only consider enqueue during rollout progressing
if !ok || controlInfo == "" {
return
}
brNsn, err := getBatchRelease(p.Reader, workloadNamespacedName, workloadGVK, controlInfo)
if err != nil {
klog.Errorf("unable to get BatchRelease related with %s (%s/%s), error: %v",
workloadGVK.Kind, workloadNamespacedName.Namespace, workloadNamespacedName.Name, err)
return
}
if len(brNsn.Name) != 0 {
klog.V(3).Infof("Pod (%s/%s) ready condition changed, managed by BatchRelease (%v)",
workloadNamespacedName.Namespace, workloadNamespacedName.Name, brNsn)
q.Add(reconcile.Request{NamespacedName: brNsn})
}
}
type workloadEventHandler struct {
client.Reader
}
func (w workloadEventHandler) Create(evt event.CreateEvent, q workqueue.RateLimitingInterface) {
expectationObserved(evt.Object)
w.handleWorkload(q, evt.Object, CreateEventAction)
}
func (w workloadEventHandler) Update(evt event.UpdateEvent, q workqueue.RateLimitingInterface) {
var oldAccessor, newAccessor *workloads.WorkloadInfo
var gvk schema.GroupVersionKind
switch evt.ObjectNew.(type) {
switch obj := evt.ObjectNew.(type) {
case *kruiseappsv1alpha1.CloneSet:
gvk = controllerKruiseKindCS
oldClone := evt.ObjectOld.(*kruiseappsv1alpha1.CloneSet)
newClone := evt.ObjectNew.(*kruiseappsv1alpha1.CloneSet)
var oldReplicas, newReplicas int32
if oldClone.Spec.Replicas != nil {
oldReplicas = *oldClone.Spec.Replicas
}
if newClone.Spec.Replicas != nil {
newReplicas = *newClone.Spec.Replicas
}
oldAccessor = &workloads.WorkloadInfo{
Replicas: &oldReplicas,
Paused: oldClone.Spec.UpdateStrategy.Paused,
Status: &workloads.WorkloadStatus{
Replicas: oldClone.Status.Replicas,
ReadyReplicas: oldClone.Status.ReadyReplicas,
UpdatedReplicas: oldClone.Status.UpdatedReplicas,
UpdatedReadyReplicas: oldClone.Status.UpdatedReadyReplicas,
ObservedGeneration: oldClone.Status.ObservedGeneration,
},
Metadata: &oldClone.ObjectMeta,
}
newAccessor = &workloads.WorkloadInfo{
Replicas: &newReplicas,
Paused: newClone.Spec.UpdateStrategy.Paused,
Status: &workloads.WorkloadStatus{
Replicas: newClone.Status.Replicas,
ReadyReplicas: newClone.Status.ReadyReplicas,
UpdatedReplicas: newClone.Status.UpdatedReplicas,
UpdatedReadyReplicas: newClone.Status.UpdatedReadyReplicas,
ObservedGeneration: newClone.Status.ObservedGeneration,
},
Metadata: &newClone.ObjectMeta,
}
gvk = util.ControllerKruiseKindCS
case *kruiseappsv1alpha1.DaemonSet:
gvk = util.ControllerKruiseKindDS
case *appsv1.Deployment:
gvk = controllerKindDep
oldDeploy := evt.ObjectOld.(*appsv1.Deployment)
newDeploy := evt.ObjectNew.(*appsv1.Deployment)
var oldReplicas, newReplicas int32
if oldDeploy.Spec.Replicas != nil {
oldReplicas = *oldDeploy.Spec.Replicas
}
if newDeploy.Spec.Replicas != nil {
newReplicas = *newDeploy.Spec.Replicas
}
oldAccessor = &workloads.WorkloadInfo{
Replicas: &oldReplicas,
Paused: oldDeploy.Spec.Paused,
Status: &workloads.WorkloadStatus{
Replicas: oldDeploy.Status.Replicas,
ReadyReplicas: oldDeploy.Status.AvailableReplicas,
UpdatedReplicas: oldDeploy.Status.UpdatedReplicas,
ObservedGeneration: oldDeploy.Status.ObservedGeneration,
},
Metadata: &oldDeploy.ObjectMeta,
}
newAccessor = &workloads.WorkloadInfo{
Replicas: &newReplicas,
Paused: newDeploy.Spec.Paused,
Status: &workloads.WorkloadStatus{
Replicas: newDeploy.Status.Replicas,
ReadyReplicas: newDeploy.Status.AvailableReplicas,
UpdatedReplicas: newDeploy.Status.UpdatedReplicas,
ObservedGeneration: newDeploy.Status.ObservedGeneration,
},
Metadata: &newDeploy.ObjectMeta,
}
gvk = util.ControllerKindDep
case *appsv1.StatefulSet:
gvk = util.ControllerKindSts
case *kruiseappsv1beta1.StatefulSet:
gvk = util.ControllerKruiseKindSts
case *unstructured.Unstructured:
gvk = obj.GroupVersionKind()
default:
return
}
if newAccessor.Metadata.ResourceVersion == oldAccessor.Metadata.ResourceVersion {
newObject := evt.ObjectNew
oldObject := evt.ObjectOld
expectationObserved(newObject)
if newObject.GetResourceVersion() == oldObject.GetResourceVersion() {
return
}
if observeGenerationChanged(newAccessor, oldAccessor) ||
observeLatestGeneration(newAccessor, oldAccessor) ||
observeScaleEventDone(newAccessor, oldAccessor) ||
observeReplicasChanged(newAccessor, oldAccessor) {
workloadNamespacedName := types.NamespacedName{
Namespace: newAccessor.Metadata.Namespace,
Name: newAccessor.Metadata.Name,
}
brNsn, err := w.getBatchRelease(workloadNamespacedName, gvk, newAccessor.Metadata.Annotations[util.BatchReleaseControlAnnotation])
oldStatus := util.ParseWorkloadStatus(oldObject)
newStatus := util.ParseWorkloadStatus(newObject)
if oldObject.GetGeneration() != newObject.GetGeneration() || !reflect.DeepEqual(oldStatus, newStatus) {
workloadNamespacedName := client.ObjectKeyFromObject(newObject)
controllerInfo := newObject.GetAnnotations()[util.BatchReleaseControlAnnotation]
brNsn, err := getBatchRelease(w.Reader, workloadNamespacedName, gvk, controllerInfo)
if err != nil {
klog.Errorf("unable to get BatchRelease related with %s (%s/%s), error: %v",
gvk.Kind, workloadNamespacedName.Namespace, workloadNamespacedName.Name, err)
@ -166,7 +166,7 @@ func (w workloadEventHandler) Update(evt event.UpdateEvent, q workqueue.RateLimi
if len(brNsn.Name) != 0 {
klog.V(3).Infof("%s (%s/%s) changed generation from %d to %d managed by BatchRelease (%v)",
gvk.Kind, workloadNamespacedName.Namespace, workloadNamespacedName.Name, oldAccessor.Metadata.Generation, newAccessor.Metadata.Generation, brNsn)
gvk.Kind, workloadNamespacedName.Namespace, workloadNamespacedName.Name, oldObject.GetGeneration(), newObject.GetGeneration(), brNsn)
q.Add(reconcile.Request{NamespacedName: brNsn})
}
}
@ -179,39 +179,44 @@ func (w workloadEventHandler) Delete(evt event.DeleteEvent, q workqueue.RateLimi
func (w workloadEventHandler) Generic(evt event.GenericEvent, q workqueue.RateLimitingInterface) {
}
func (w *workloadEventHandler) handleWorkload(q workqueue.RateLimitingInterface,
obj client.Object, action EventAction) {
var controlInfo string
func (w *workloadEventHandler) handleWorkload(q workqueue.RateLimitingInterface, obj client.Object, action EventAction) {
var gvk schema.GroupVersionKind
switch obj.(type) {
switch o := obj.(type) {
case *kruiseappsv1alpha1.CloneSet:
gvk = controllerKruiseKindCS
controlInfo = obj.(*kruiseappsv1alpha1.CloneSet).Annotations[util.BatchReleaseControlAnnotation]
gvk = util.ControllerKruiseKindCS
case *kruiseappsv1alpha1.DaemonSet:
gvk = util.ControllerKruiseKindDS
case *appsv1.Deployment:
gvk = controllerKindDep
controlInfo = obj.(*appsv1.Deployment).Annotations[util.BatchReleaseControlAnnotation]
gvk = util.ControllerKindDep
case *appsv1.StatefulSet:
gvk = util.ControllerKindSts
case *kruiseappsv1beta1.StatefulSet:
gvk = util.ControllerKruiseKindSts
case *unstructured.Unstructured:
gvk = o.GroupVersionKind()
default:
return
}
controlInfo := obj.GetAnnotations()[util.BatchReleaseControlAnnotation]
workloadNamespacedName := types.NamespacedName{
Namespace: obj.GetNamespace(),
Name: obj.GetName(),
}
brNsn, err := w.getBatchRelease(workloadNamespacedName, gvk, controlInfo)
brNsn, err := getBatchRelease(w.Reader, workloadNamespacedName, gvk, controlInfo)
if err != nil {
klog.Errorf("Unable to get BatchRelease related with %s (%s/%s), err: %v",
gvk.Kind, workloadNamespacedName.Namespace, workloadNamespacedName.Name, err)
return
}
if len(brNsn.Name) != 0 {
klog.V(5).Infof("Something related %s %s (%s/%s) happen and will reconcile BatchRelease (%v)",
klog.V(3).Infof("Something related %s %s (%s/%s) happen and will reconcile BatchRelease (%v)",
action, gvk.Kind, workloadNamespacedName.Namespace, workloadNamespacedName.Name, brNsn)
q.Add(reconcile.Request{NamespacedName: brNsn})
}
}
func (w *workloadEventHandler) getBatchRelease(workloadNamespaceName types.NamespacedName, gvk schema.GroupVersionKind, controlInfo string) (nsn types.NamespacedName, err error) {
func getBatchRelease(c client.Reader, workloadNamespaceName types.NamespacedName, gvk schema.GroupVersionKind, controlInfo string) (nsn types.NamespacedName, err error) {
if len(controlInfo) > 0 {
br := &metav1.OwnerReference{}
err = json.Unmarshal([]byte(controlInfo), br)
@ -219,30 +224,30 @@ func (w *workloadEventHandler) getBatchRelease(workloadNamespaceName types.Names
klog.Errorf("Failed to unmarshal controller info annotations for %v(%v)", gvk, workloadNamespaceName)
}
if br.APIVersion == v1alpha1.GroupVersion.String() && br.Kind == "BatchRelease" {
if br.APIVersion == v1beta1.GroupVersion.String() && br.Kind == "BatchRelease" {
klog.V(3).Infof("%s (%v) is managed by BatchRelease (%s), append queue and will reconcile BatchRelease", gvk.Kind, workloadNamespaceName, br.Name)
nsn = types.NamespacedName{Namespace: workloadNamespaceName.Namespace, Name: br.Name}
return
}
}
brList := &v1alpha1.BatchReleaseList{}
listOptions := &client.ListOptions{Namespace: workloadNamespaceName.Namespace}
if err = w.List(context.TODO(), brList, listOptions); err != nil {
brList := &v1beta1.BatchReleaseList{}
namespace := workloadNamespaceName.Namespace
if err = c.List(context.TODO(), brList, client.InNamespace(namespace), utilclient.DisableDeepCopy); err != nil {
klog.Errorf("List BatchRelease failed: %s", err.Error())
return
}
for i := range brList.Items {
br := &brList.Items[i]
targetRef := br.Spec.TargetRef
targetGV, err := schema.ParseGroupVersion(targetRef.WorkloadRef.APIVersion)
targetRef := br.Spec.WorkloadRef
targetGV, err := schema.ParseGroupVersion(targetRef.APIVersion)
if err != nil {
klog.Errorf("Failed to parse targetRef's group version: %s for BatchRelease(%v)", targetRef.WorkloadRef.APIVersion, client.ObjectKeyFromObject(br))
klog.Errorf("Failed to parse targetRef's group version: %s for BatchRelease(%v)", targetRef.APIVersion, client.ObjectKeyFromObject(br))
continue
}
if targetRef.WorkloadRef.Kind == gvk.Kind && targetGV.Group == gvk.Group && targetRef.WorkloadRef.Name == workloadNamespaceName.Name {
if targetRef.Kind == gvk.Kind && targetGV.Group == gvk.Group && targetRef.Name == workloadNamespaceName.Name {
nsn = client.ObjectKeyFromObject(br)
}
}
@ -250,38 +255,23 @@ func (w *workloadEventHandler) getBatchRelease(workloadNamespaceName types.Names
return
}
func observeGenerationChanged(newOne, oldOne *workloads.WorkloadInfo) bool {
return newOne.Metadata.Generation != oldOne.Metadata.Generation
}
func observeLatestGeneration(newOne, oldOne *workloads.WorkloadInfo) bool {
oldNot := oldOne.Metadata.Generation != oldOne.Status.ObservedGeneration
newDid := newOne.Metadata.Generation == newOne.Status.ObservedGeneration
return oldNot && newDid
}
func observeScaleEventDone(newOne, oldOne *workloads.WorkloadInfo) bool {
_, controlled := newOne.Metadata.Annotations[util.BatchReleaseControlAnnotation]
if !controlled {
return false
func expectationObserved(object client.Object) {
controllerKey := getControllerKey(object)
if controllerKey != nil {
klog.V(3).Infof("observed %v, remove from expectation %s: %s",
klog.KObj(object), *controllerKey, string(object.GetUID()))
expectations.ResourceExpectations.Observe(*controllerKey, expectations.Create, string(object.GetUID()))
}
oldScaling := *oldOne.Replicas != *newOne.Replicas ||
*oldOne.Replicas != oldOne.Status.Replicas
newDone := newOne.Metadata.Generation == newOne.Status.ObservedGeneration &&
*newOne.Replicas == newOne.Status.Replicas
return oldScaling && newDone
}
func observeReplicasChanged(newOne, oldOne *workloads.WorkloadInfo) bool {
_, controlled := newOne.Metadata.Annotations[util.BatchReleaseControlAnnotation]
if !controlled {
return false
func getControllerKey(object client.Object) *string {
owner := metav1.GetControllerOfNoCopy(object)
if owner == nil {
return nil
}
return *oldOne.Replicas != *newOne.Replicas ||
oldOne.Status.Replicas != newOne.Status.Replicas ||
oldOne.Status.ReadyReplicas != newOne.Status.ReadyReplicas ||
oldOne.Status.UpdatedReplicas != newOne.Status.UpdatedReplicas ||
oldOne.Status.UpdatedReadyReplicas != newOne.Status.UpdatedReadyReplicas
if owner.Kind == "BatchRelease" {
key := types.NamespacedName{Namespace: object.GetNamespace(), Name: owner.Name}.String()
return &key
}
return nil
}
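
The event-handler diff above resolves the owning BatchRelease in two steps: decode the control-info annotation (a JSON-encoded OwnerReference) when present, otherwise list the BatchReleases in the namespace and match Spec.WorkloadRef. The sketch below shows only the annotation fast path, with the APIVersion check reduced to a comment; it is illustrative code, not the project's helper.

package sketch

import (
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// releaseFromControlInfo decodes the batch-release control annotation carried by the
// workload. When it parses into an OwnerReference pointing at a BatchRelease, the
// owning release is known directly; otherwise the caller falls back to listing
// BatchReleases and matching their WorkloadRef.
func releaseFromControlInfo(namespace, controlInfo string) (types.NamespacedName, bool) {
	if controlInfo == "" {
		return types.NamespacedName{}, false
	}
	ref := &metav1.OwnerReference{}
	if err := json.Unmarshal([]byte(controlInfo), ref); err != nil {
		return types.NamespacedName{}, false
	}
	// The real handler also checks ref.APIVersion against the v1beta1 group version.
	if ref.Kind != "BatchRelease" {
		return types.NamespacedName{}, false
	}
	return types.NamespacedName{Namespace: namespace, Name: ref.Name}, true
}
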


@ -23,10 +23,12 @@ import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/util"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
@ -34,7 +36,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/event"
)
func TestEventHandler_Update(t *testing.T) {
func TestWorkloadEventHandler_Update(t *testing.T) {
RegisterFailHandler(Fail)
cases := []struct {
@ -89,14 +91,14 @@ func TestEventHandler_Update(t *testing.T) {
oldObject := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
oldObject.SetGeneration(2)
oldObject.Status.ObservedGeneration = 2
oldObject.Spec.Replicas = pointer.Int32Ptr(1000)
oldObject.Spec.Replicas = pointer.Int32(1000)
return oldObject
},
GetNewWorkload: func() client.Object {
newObject := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
newObject.SetGeneration(2)
newObject.Status.ObservedGeneration = 2
newObject.Spec.Replicas = pointer.Int32Ptr(1000)
newObject.Spec.Replicas = pointer.Int32(1000)
newObject.Status.Replicas = 1000
return newObject
},
@ -141,7 +143,7 @@ func TestEventHandler_Update(t *testing.T) {
}
}
func TestEventHandler_Create(t *testing.T) {
func TestWorkloadEventHandler_Create(t *testing.T) {
RegisterFailHandler(Fail)
cases := []struct {
@ -170,7 +172,7 @@ func TestEventHandler_Create(t *testing.T) {
GetNewWorkload: func() client.Object {
object := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
controlInfo, _ := json.Marshal(&metav1.OwnerReference{
APIVersion: v1alpha1.GroupVersion.String(),
APIVersion: v1beta1.GroupVersion.String(),
Kind: "Rollout",
Name: "whatever",
})
@ -197,7 +199,7 @@ func TestEventHandler_Create(t *testing.T) {
}
}
func TestEventHandler_Delete(t *testing.T) {
func TestWorkloadEventHandler_Delete(t *testing.T) {
RegisterFailHandler(Fail)
cases := []struct {
@ -226,7 +228,7 @@ func TestEventHandler_Delete(t *testing.T) {
GetNewWorkload: func() client.Object {
object := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
controlInfo, _ := json.Marshal(&metav1.OwnerReference{
APIVersion: v1alpha1.GroupVersion.String(),
APIVersion: v1beta1.GroupVersion.String(),
Kind: "Rollout",
Name: "whatever",
})
@ -252,3 +254,245 @@ func TestEventHandler_Delete(t *testing.T) {
})
}
}
func TestPodEventHandler_Update(t *testing.T) {
RegisterFailHandler(Fail)
cases := []struct {
Name string
GetOldPod func() client.Object
GetNewPod func() client.Object
GetWorkload func() client.Object
ExpectedQueueLen int
}{
{
Name: "CloneSet Pod NotReady -> Ready",
GetOldPod: func() client.Object {
return generatePod(false, true, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, true, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 1,
},
{
Name: "CloneSet Pod Ready -> Ready",
GetOldPod: func() client.Object {
return generatePod(true, true, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, true, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 0,
},
{
Name: "Orphan Pod NotReady -> Ready",
GetOldPod: func() client.Object {
return generatePod(false, false, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, false, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 0,
},
{
Name: "Orphan Pod Ready -> Ready",
GetOldPod: func() client.Object {
return generatePod(true, false, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, false, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 0,
},
{
Name: "Free CloneSet Pod NotReady -> Ready",
GetOldPod: func() client.Object {
return generatePod(false, false, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, false, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
return clone
},
ExpectedQueueLen: 0,
},
{
Name: "Free CloneSet Pod Ready -> Ready",
GetOldPod: func() client.Object {
return generatePod(true, false, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, false, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
return clone
},
ExpectedQueueLen: 0,
},
{
Name: "CloneSet Pod V1 -> V2",
GetOldPod: func() client.Object {
return generatePod(true, true, "version-1")
},
GetNewPod: func() client.Object {
return generatePod(true, true, "version-2")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 1,
},
}
for _, cs := range cases {
t.Run(cs.Name, func(t *testing.T) {
oldObject := cs.GetOldPod()
newObject := cs.GetNewPod()
workload := cs.GetWorkload()
cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(releaseDeploy.DeepCopy(), workload).Build()
handler := podEventHandler{Reader: cli}
updateQ := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
updateEvt := event.UpdateEvent{
ObjectOld: oldObject,
ObjectNew: newObject,
}
handler.Update(updateEvt, updateQ)
Expect(updateQ.Len()).Should(Equal(cs.ExpectedQueueLen))
})
}
}
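// TestPodEventHandler_Create verifies that a newly created Pod enqueues a reconcile request only
// when it is owned by a workload annotated with the BatchRelease control annotation.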
func TestPodEventHandler_Create(t *testing.T) {
RegisterFailHandler(Fail)
cases := []struct {
Name string
GetNewPod func() client.Object
GetWorkload func() client.Object
ExpectedQueueLen int
}{
{
Name: "CloneSet Pod",
GetNewPod: func() client.Object {
return generatePod(false, true, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 1,
},
{
Name: "Orphan Pod",
GetNewPod: func() client.Object {
return generatePod(false, false, "version-1")
},
GetWorkload: func() client.Object {
clone := stableClone.DeepCopy()
owner, _ := json.Marshal(metav1.NewControllerRef(releaseClone, releaseClone.GetObjectKind().GroupVersionKind()))
clone.Annotations = map[string]string{
util.BatchReleaseControlAnnotation: string(owner),
}
return clone
},
ExpectedQueueLen: 0,
},
{
Name: "Free CloneSet Pod",
GetNewPod: func() client.Object {
return generatePod(false, true, "version-1")
},
GetWorkload: func() client.Object {
return stableClone.DeepCopy()
},
ExpectedQueueLen: 0,
},
}
for _, cs := range cases {
t.Run(cs.Name, func(t *testing.T) {
newObject := cs.GetNewPod()
workload := cs.GetWorkload()
cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(releaseDeploy.DeepCopy(), workload).Build()
handler := podEventHandler{Reader: cli}
createQ := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
createEvt := event.CreateEvent{
Object: newObject,
}
handler.Create(createEvt, createQ)
Expect(createQ.Len()).Should(Equal(cs.ExpectedQueueLen))
})
}
}
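// generatePod builds a minimal Pod fixture: ready adds a Running phase with a PodReady=True
// condition, owned attaches a controller owner reference to stableClone, and version sets the
// controller-revision-hash label.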
func generatePod(ready, owned bool, version string) *corev1.Pod {
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod",
Namespace: stableClone.Namespace,
ResourceVersion: string(uuid.NewUUID()),
Labels: map[string]string{
apps.ControllerRevisionHashLabelKey: version,
},
},
}
if ready {
pod.Status.Phase = corev1.PodRunning
pod.Status.Conditions = append(pod.Status.Conditions, corev1.PodCondition{
Type: corev1.PodReady,
Status: corev1.ConditionTrue,
})
}
if owned {
pod.OwnerReferences = append(pod.OwnerReferences,
*metav1.NewControllerRef(stableClone, stableClone.GroupVersionKind()))
}
return pod
}


@@ -0,0 +1,287 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package batchrelease
import (
"fmt"
"reflect"
"time"
appsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle"
bgcloneset "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle/cloneset"
bgdeplopyment "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle/deployment"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/canarystyle"
canarydeployment "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/canarystyle/deployment"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle/cloneset"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle/daemonset"
partitiondeployment "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle/deployment"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle/statefulset"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/errors"
apps "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
const (
DefaultDuration = 2 * time.Second
)
// Executor is the controller that drives the release plan resource
type Executor struct {
client client.Client
recorder record.EventRecorder
}
// NewReleasePlanExecutor creates an Executor for a release plan
func NewReleasePlanExecutor(cli client.Client, recorder record.EventRecorder) *Executor {
return &Executor{
client: cli,
recorder: recorder,
}
}
// Do executes the release plan
func (r *Executor) Do(release *v1beta1.BatchRelease) (reconcile.Result, *v1beta1.BatchReleaseStatus, error) {
klog.InfoS("Starting one round of reconciling release plan",
"BatchRelease", client.ObjectKeyFromObject(release),
"phase", release.Status.Phase,
"current-batch", release.Status.CanaryStatus.CurrentBatch,
"current-batch-state", release.Status.CanaryStatus.CurrentBatchState)
newStatus := getInitializedStatus(&release.Status)
workloadController, err := r.getReleaseController(release, newStatus)
if err != nil || workloadController == nil {
return reconcile.Result{}, nil, nil
}
stop, result, err := r.syncStatusBeforeExecuting(release, newStatus, workloadController)
if stop || err != nil {
return result, newStatus, err
}
return r.executeBatchReleasePlan(release, newStatus, workloadController)
}
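// executeBatchReleasePlan drives the top-level phase state machine:
// Preparing -> Progressing -> Finalizing -> Completed. Unknown phases are reset to Preparing
// for compatibility, and the Progressing phase delegates per-batch work to progressBatches.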
func (r *Executor) executeBatchReleasePlan(release *v1beta1.BatchRelease, newStatus *v1beta1.BatchReleaseStatus, workloadController control.Interface) (reconcile.Result, *v1beta1.BatchReleaseStatus, error) {
var err error
result := reconcile.Result{}
klog.V(3).Infof("BatchRelease(%v) State Machine into '%s' state", klog.KObj(release), newStatus.Phase)
switch newStatus.Phase {
default:
// for compatibility: if it is an unknown phase, start from the beginning.
newStatus.Phase = v1beta1.RolloutPhasePreparing
fallthrough
case v1beta1.RolloutPhasePreparing:
// prepare and initialize something before progressing in this state.
err = workloadController.Initialize()
switch {
case err == nil:
newStatus.Phase = v1beta1.RolloutPhaseProgressing
result = reconcile.Result{RequeueAfter: DefaultDuration}
default:
klog.Warningf("Failed to initialize %v, err %v", klog.KObj(release), err)
}
case v1beta1.RolloutPhaseProgressing:
// progress the release plan in this state.
result, err = r.progressBatches(release, newStatus, workloadController)
case v1beta1.RolloutPhaseFinalizing:
err = workloadController.Finalize()
switch {
case err == nil:
newStatus.Phase = v1beta1.RolloutPhaseCompleted
default:
klog.Warningf("Failed to finalize %v, err %v", klog.KObj(release), err)
}
case v1beta1.RolloutPhaseCompleted:
// this state indicates that the plan has been executed or cancelled successfully; nothing more to do here.
}
return result, newStatus, err
}
// progressBatches contains the reconcile logic when we are in the middle of the release; we have to go through the Finalizing state before we can succeed or fail
func (r *Executor) progressBatches(release *v1beta1.BatchRelease, newStatus *v1beta1.BatchReleaseStatus, workloadController control.Interface) (reconcile.Result, error) {
var err error
result := reconcile.Result{}
klog.V(3).Infof("BatchRelease(%v) Canary Batch State Machine into '%s' state", klog.KObj(release), newStatus.CanaryStatus.CurrentBatchState)
switch newStatus.CanaryStatus.CurrentBatchState {
default:
// for compatibility: if it is an unknown state, start from the beginning.
newStatus.CanaryStatus.CurrentBatchState = v1beta1.UpgradingBatchState
fallthrough
case v1beta1.UpgradingBatchState:
// modify workload replicas/partition based on release plan in this state.
err = workloadController.UpgradeBatch()
switch {
case err == nil:
result = reconcile.Result{RequeueAfter: DefaultDuration}
removeProgressingCondition(newStatus)
newStatus.CanaryStatus.CurrentBatchState = v1beta1.VerifyingBatchState
case errors.IsBadRequest(err):
progressingStateTransition(newStatus, v1.ConditionTrue, v1beta1.ProgressingReasonInRolling, err.Error())
fallthrough
default:
klog.Warningf("Failed to upgrade %v, err %v", klog.KObj(release), err)
}
case v1beta1.VerifyingBatchState:
// replicas/partition has been modified, should wait pod ready in this state.
err = workloadController.EnsureBatchPodsReadyAndLabeled()
switch {
case err != nil:
// should go to upgrade state to do again to avoid dead wait.
newStatus.CanaryStatus.CurrentBatchState = v1beta1.UpgradingBatchState
klog.Warningf("%v current batch is not ready, err %v", klog.KObj(release), err)
default:
now := metav1.Now()
newStatus.CanaryStatus.BatchReadyTime = &now
result = reconcile.Result{RequeueAfter: DefaultDuration}
newStatus.CanaryStatus.CurrentBatchState = v1beta1.ReadyBatchState
}
case v1beta1.ReadyBatchState:
// replicas/partition may be modified even though ready, should recheck in this state.
err = workloadController.EnsureBatchPodsReadyAndLabeled()
switch {
case err != nil:
// if the batch ready condition changed due to some reasons, just recalculate the current batch.
newStatus.CanaryStatus.BatchReadyTime = nil
newStatus.CanaryStatus.CurrentBatchState = v1beta1.UpgradingBatchState
klog.Warningf("%v current batch is not ready, err %v", klog.KObj(release), err)
case !isPartitioned(release):
r.moveToNextBatch(release, newStatus)
result = reconcile.Result{RequeueAfter: DefaultDuration}
}
}
return result, err
}
// getReleaseController picks the right workload controller to work on the workload
func (r *Executor) getReleaseController(release *v1beta1.BatchRelease, newStatus *v1beta1.BatchReleaseStatus) (control.Interface, error) {
targetRef := release.Spec.WorkloadRef
gvk := schema.FromAPIVersionAndKind(targetRef.APIVersion, targetRef.Kind)
if !util.IsSupportedWorkload(gvk) {
message := fmt.Sprintf("the workload type '%v' is not supported", gvk)
r.recorder.Event(release, v1.EventTypeWarning, "UnsupportedWorkload", message)
return nil, fmt.Errorf("%s", message)
}
targetKey := types.NamespacedName{
Namespace: release.Namespace,
Name: targetRef.Name,
}
rollingStyle := release.Spec.ReleasePlan.RollingStyle
if len(rollingStyle) == 0 && release.Spec.ReleasePlan.EnableExtraWorkloadForCanary {
rollingStyle = v1beta1.CanaryRollingStyle
}
klog.Infof("BatchRelease(%v) using %s-style release controller for this batch release", klog.KObj(release), rollingStyle)
switch rollingStyle {
case v1beta1.BlueGreenRollingStyle:
if targetRef.APIVersion == appsv1alpha1.GroupVersion.String() && targetRef.Kind == reflect.TypeOf(appsv1alpha1.CloneSet{}).Name() {
klog.InfoS("Using CloneSet bluegreen-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return bluegreenstyle.NewControlPlane(bgcloneset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
klog.InfoS("Using Deployment bluegreen-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return bluegreenstyle.NewControlPlane(bgdeplopyment.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
case v1beta1.CanaryRollingStyle:
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
klog.InfoS("Using Deployment canary-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return canarystyle.NewControlPlane(canarydeployment.NewController, r.client, r.recorder, release, newStatus, targetKey), nil
}
fallthrough
case v1beta1.PartitionRollingStyle, "":
if targetRef.APIVersion == appsv1alpha1.GroupVersion.String() && targetRef.Kind == reflect.TypeOf(appsv1alpha1.CloneSet{}).Name() {
klog.InfoS("Using CloneSet partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(cloneset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
if targetRef.APIVersion == appsv1alpha1.GroupVersion.String() && targetRef.Kind == reflect.TypeOf(appsv1alpha1.DaemonSet{}).Name() {
klog.InfoS("Using DaemonSet partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(daemonset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
klog.InfoS("Using Deployment partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(partitiondeployment.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
klog.Info("Partition, but use StatefulSet-Like partition-style release controller for this batch release")
}
// try to use StatefulSet-like rollout controller by default
klog.InfoS("Using StatefulSet-Like partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(statefulset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
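// moveToNextBatch advances CanaryStatus.CurrentBatch unless spec.releasePlan.batchPartition
// pins the release at the current batch, and resets the batch state to UpgradingBatchState.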
func (r *Executor) moveToNextBatch(release *v1beta1.BatchRelease, status *v1beta1.BatchReleaseStatus) {
currentBatch := int(status.CanaryStatus.CurrentBatch)
if currentBatch >= len(release.Spec.ReleasePlan.Batches)-1 {
klog.V(3).Infof("BatchRelease(%v) finished all batch, release current batch: %v", klog.KObj(release), status.CanaryStatus.CurrentBatch)
}
if release.Spec.ReleasePlan.BatchPartition == nil || *release.Spec.ReleasePlan.BatchPartition > status.CanaryStatus.CurrentBatch {
status.CanaryStatus.CurrentBatch++
}
status.CanaryStatus.CurrentBatchState = v1beta1.UpgradingBatchState
klog.V(3).Infof("BatchRelease(%v) finished one batch, release current batch: %v", klog.KObj(release), status.CanaryStatus.CurrentBatch)
}
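// isPartitioned reports whether spec.releasePlan.batchPartition currently holds the release
// at (or before) the current batch, preventing automatic progression to the next batch.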
func isPartitioned(release *v1beta1.BatchRelease) bool {
return release.Spec.ReleasePlan.BatchPartition != nil &&
*release.Spec.ReleasePlan.BatchPartition <= release.Status.CanaryStatus.CurrentBatch
}
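// progressingStateTransition creates or updates the Progressing condition on the BatchRelease
// status and mirrors the condition message into status.Message.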
func progressingStateTransition(status *v1beta1.BatchReleaseStatus, condStatus v1.ConditionStatus, reason, message string) {
cond := util.GetBatchReleaseCondition(*status, v1beta1.RolloutConditionProgressing)
if cond == nil {
cond = util.NewRolloutCondition(v1beta1.RolloutConditionProgressing, condStatus, reason, message)
} else {
cond.Status = condStatus
cond.Reason = reason
if message != "" {
cond.Message = message
}
}
util.SetBatchReleaseCondition(status, *cond)
status.Message = cond.Message
}
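// removeProgressingCondition drops the Progressing condition and clears status.Message.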
func removeProgressingCondition(status *v1beta1.BatchReleaseStatus) {
util.RemoveBatchReleaseCondition(status, v1beta1.RolloutConditionProgressing)
status.Message = ""
}
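// Illustrative sketch (not part of this change): a BatchRelease reconciler is expected to drive
// the Executor roughly as follows and then persist the returned status. The names `executor` and
// `patchStatus` below are placeholders for this sketch, not APIs defined in this file.
//
//  result, newStatus, err := executor.Do(release)
//  if newStatus != nil {
//      _ = patchStatus(ctx, cli, release, newStatus) // hypothetical helper that updates release.Status
//  }
//  return result, err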

Some files were not shown because too many files have changed in this diff.