Compare commits

...

35 Commits

Author SHA1 Message Date
Ai Ranthem 6554cfcf08
Changelog: v0.6.1 (#270)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-05-08 19:12:37 +08:00
Ai Ranthem 402cb9cd90
Chore: upgrade e2e ubuntu version from 20.04 to 22.04 (#268)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-05-08 12:41:00 +08:00
handagou 2aa692dc2f
Fix order of object in batchrelease event handler (#265)
Signed-off-by: z760087139 <z760087139@gmail.com>
2025-04-10 16:24:51 +08:00
Ai Ranthem ca0a71ff52
Chore: add e2e workflows for k8s 1.26 (#263)
* Chore: add e2e workflows for k8s 1.26

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* Chore: add e2e workflows for k8s 1.26

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* Chore: add e2e workflows for k8s 1.26

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

---------

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-04-08 18:27:46 +08:00
Ai Ranthem 744430356d
Fix: blue-green batch-id e2e fails sometimes (#261)
* Fix: blue-green batch-id e2e fails sometimes

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-04-01 13:20:48 +08:00
PersistentJZH 6094e966ac
support patch batch id in canary release (#251)
Signed-off-by: zhihao jian <zhihao.jian@shopee.com>
Co-authored-by: zhihao jian <zhihao.jian@shopee.com>
2025-04-01 10:09:45 +08:00
Ai Ranthem 3e66fa1ad8
Feature: support batch-id labeling for bluegreen strategy (#250)
* Feature: support batch-id labeling for blue-green strategy

---------

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-03-20 19:26:46 +08:00
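
For context on the two batch-id commits above (#250 and #251): when a Rollout declares a rolloutID, the controller labels each Pod upgraded in a given release step with the batch that produced it, for canary and (after #250) blue-green strategies. A minimal sketch of the resulting Pod metadata, assuming the rollouts.kruise.io label keys used by Kruise Rollout:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app-7d9f8-xk2lp                        # hypothetical Pod name
  labels:
    app: demo-app
    rollouts.kruise.io/rollout-id: release-2025-03  # the rolloutID declared on the Rollout
    rollouts.kruise.io/rollout-batch-id: "2"        # release step (batch) that upgraded this Pod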
liheng.zms 7baf47d70e v0.5.1 changelog
Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2025-03-20 18:03:41 +08:00
Ai Ranthem 334fa1cbf3
add docker-image workflow (#253)
* add docker-image workflow

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* add docker-image workflow

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

* fix typo

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>

---------

Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-02-07 11:16:27 +08:00
Ai Ranthem 3562934ae5
Release note 0.6.0 (#252)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2025-01-21 15:51:12 +08:00
zhihao jian faa2d03338 fix patch rollout batch id
Signed-off-by: zhihao jian <zhihao.jian@shopee.com>

add rollback prefix to identify rollback pods

fix patch rollout id

fix test

fix

add prefix when rollout id is empty

fix test
2025-01-03 09:55:23 +08:00
yunbo 5bbbc046b0 fix the pod-recreate issue in partition style
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
2024-12-27 10:00:08 +08:00
myname4423 efbd8ba8f9
support bluegreen release: support workload of deployment and cloneSet (#238)
* support bluegreen release: Deployment and CloneSet

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* support bluegreen release: webhook update

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* add unit test & split workload mutating webhook

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* fix a bug caused by previous merged PR

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* improve some log information

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* fix kruise version problem

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-12-24 19:38:52 +08:00
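
#238 introduces blue-green release for Deployment and CloneSet. A sketch of the strategy shape, assuming the v1beta1 blueGreen field mirrors the canary strategy's steps (new Pods are scaled up first, traffic is switched on promotion):

apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
metadata:
  name: rollout-bluegreen-demo      # hypothetical name
spec:
  workloadRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  strategy:
    blueGreen:                      # strategy added by #238; exact field layout is an assumption
      steps:
      - replicas: 50%               # scale up half of the new (green) Pods
        traffic: 0%                 # all traffic stays on the old (blue) Pods
      - replicas: 100%
        traffic: 100%               # switch traffic once green is fully up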
Jiajing LU 056c77dbd2
Patch canary service selector from PodTemplateMetadata (#243)
* patch canary service selector

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* check null

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix nil check

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* remove len check

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

---------

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
2024-12-09 19:25:49 +08:00
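
#243 makes the generated canary Service select on labels declared via patchPodTemplateMetadata, so traffic targets exactly the canary Pods. A sketch, assuming the v1beta1 field names:

apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
spec:                                # fragment; workloadRef omitted
  strategy:
    canary:
      patchPodTemplateMetadata:      # metadata stamped onto canary Pods
        labels:
          deploy-stage: canary       # hypothetical label; #243 copies it into the canary Service selector
      steps:
      - traffic: 20%
        replicas: 20%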
yunbo 3f66aae0ae support bluegreen release: release logic
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
2024-11-13 11:26:44 +08:00
Ai Ranthem 09e01cb95b
upgrade: gateway-api(0.5.1=>0.7.1), along with controller-runtime(0.12.1=>0.14.6) (#237)
Signed-off-by: AiRanthem <zhongtianyun.zty@alibaba-inc.com>
2024-11-13 09:38:49 +08:00
Jiajing LU 6854752435
Add composite provider to support multiple network providers (#224)
* add composite provider

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix nginx lua script and add E2E

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix test case

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* revert image

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix indent

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* follow latest interface change

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* move e2e to v1beta1 file and add workflow

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

---------

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
2024-11-01 11:28:58 +08:00
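
#224 lets one rollout drive several traffic providers at once (the E2E-Multiple-NetworkProvider workflow added below exercises this with Istio CRDs plus a Lua-script ConfigMap). A sketch of a traffic routing that combines a built-in provider with custom refs; field names are assumptions based on the customNetworkProvider test fixtures:

apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
spec:                                      # fragment
  strategy:
    canary:
      trafficRoutings:
      - service: demo-svc                  # stable Service fronting the workload
        ingress:
          name: demo-ingress               # built-in nginx ingress provider
        customNetworkRefs:                 # extra provider(s) reconciled in the same release
        - apiVersion: networking.istio.io/v1beta1
          kind: VirtualService
          name: demo-vs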
ls-2018 f0363f28c0
fix: lua encode structural error (#209)
Signed-off-by: acejilam <acejilam@gmail.com>
2024-09-06 19:38:09 +08:00
myname4423 d2613132aa
improve finalising logic for canary release (#229)
improve finalising logic for canary release-2

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-09-04 17:29:58 +08:00
myname4423 5378dc2cf7
refactor the grace system (#226)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-08-13 15:17:38 +08:00
myname4423 78273c2998
add restriction for traffic configuration of partition-style step (#225)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-08-02 13:42:28 +08:00
myname4423 16a3f0acc1
update runCanary traffic step for special cases (#219)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-07-30 10:30:26 +08:00
Jiajing LU 6fae7085e5
* inject headerModifier to the luaData (#223)
* fix lua script and test case
* use ptr instead of struct
* add requestHeaderModifier to testcase debugging toolkit

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
2024-07-22 10:47:18 +08:00
myname4423 e7652cbc7c
traffic: Refactor continuous logic (#222)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-07-15 11:18:11 +08:00
myname4423 db761a979c
add 2 fields to status to support showing result of kubectl get (#220)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-25 14:58:53 +08:00
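
#220 adds two status fields so that kubectl get shows a useful summary for rollouts. Columns like that are normally declared through additionalPrinterColumns on the CRD; a hypothetical fragment (the column names and JSONPaths here are assumptions, not the PR's exact values):

# CustomResourceDefinition fragment (hypothetical)
additionalPrinterColumns:
- name: STATUS
  type: string
  jsonPath: .status.phase
- name: CURRENT_STEP
  type: integer
  jsonPath: .status.canaryStatus.currentStepIndex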
myname4423 aa28f4e12e
Allow to jump between steps (#218)
* allow jump between steps

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

allow jump between steps

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

allow jump among steps

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

safe index check

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

add e2e test for step jump

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

amend: style-agnostic reference for webhook

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

improve existing e2e logic to avoid unexpected behaviour

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

jump: nextStepIndex default value from 0 to -1

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

after rebase

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* jump: fix out of range

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-25 13:21:54 +08:00
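
#218 allows a release to jump forward or backward between steps instead of only advancing one at a time; per the commit notes, the pending target lives in nextStepIndex, whose default moved from 0 to -1 (meaning no jump requested). A sketch of the status fragment, assuming it sits under canaryStatus:

status:                      # fragment of a Rollout's status
  canaryStatus:
    currentStepIndex: 2
    nextStepIndex: 4         # set to the step to jump to; -1 means no jump pending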
myname4423 1e8af4a4c1
update api for future bluegreen (#214)
add status conversion



nextStepIndex default value from 0 to -1



restore the enableExtra field in BR

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-12 13:27:41 +08:00
Jiajing LU 62794dc883
Support Path and QueryParams in http route matches (#204)
* support queryparams for gateway api

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* support mse

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* support path and queryParams

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix lint

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix testcase

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix manifests

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* do not provide default value for path

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* Allow Not generating Canary Service && Fixed a bug caused by NOT considering case-insensitivity. (#200)

* Fixed a bug caused by NOT considering case-insensitivity.

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* add DisableGenerateCanaryService for CanaryStrategy

amend1: update crd yaml

amend2: add DisableGenerateCanaryService for v1alpha1

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* revert test images

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* polish comments

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* add gateway api tests

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix MSE cases

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* update golang lint ci

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* regenerate manifests

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* remove generic usage

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* update istio lua script

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* update v1alpha1 in e2e to v1beta1

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix cases

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* refactor istio case to include queryParams and path

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix cloneset issue

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* fix typo

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

* revert images

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>

---------

Signed-off-by: Megrez Lu <lujiajing1126@gmail.com>
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: myname4423 <57184070+myname4423@users.noreply.github.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-06-12 13:21:42 +08:00
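
#204 extends step-level traffic matching beyond headers to the URL path and query parameters, following the Gateway API HTTPRouteMatch shape. A sketch, assuming the v1beta1 match fields mirror the Gateway API types:

apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
spec:                            # fragment
  strategy:
    canary:
      steps:
      - traffic: 10%
        matches:
        - path:                  # added by #204
            type: PathPrefix
            value: /api/v2
          queryParams:           # added by #204
          - name: canary         # hypothetical parameter
            type: Exact
            value: "true"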
myname4423 3eeb7b4ddc
modify the helm version from latest to v3.14.0 (#215)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-05-25 09:45:24 +08:00
pnr 07c1731e8a
fix: don't check for strategy when finalize (#198)
Co-authored-by: nwaiyatharee <nattadej.waiyatharee@agoda.com>
2024-04-02 13:45:36 +08:00
myname4423 25b053b8be
update unit test for PR #200 (#206)
Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-04-02 10:34:36 +08:00
myname4423 0ff23f6636
Allow Not generating Canary Service && Fixed a bug caused by NOT considering case-insensitivity. (#200)
* Fixed a bug caused by NOT considering case-insensitivity.

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

* add DisableGenerateCanaryService for CanaryStrategy

amend1: update crd yaml

amend2: add DisableGenerateCanaryService for v1alpha1

Signed-off-by: yunbo <yunbo10124scut@gmail.com>

---------

Signed-off-by: yunbo <yunbo10124scut@gmail.com>
Co-authored-by: yunbo <yunbo10124scut@gmail.com>
2024-03-12 16:37:29 +08:00
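
#200 adds disableGenerateCanaryService, which tells the controller to route canary traffic through the existing Service instead of generating a separate canary Service. A sketch, assuming the lowerCamelCase field name implied by the commit:

apiVersion: rollouts.kruise.io/v1beta1
kind: Rollout
spec:                                      # fragment
  strategy:
    canary:
      disableGenerateCanaryService: true   # reuse demo-svc rather than creating a demo-svc-canary Service
      trafficRoutings:
      - service: demo-svc
        ingress:
          name: demo-ingress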
Wei-Xiang Sun 678d4d2b34
refresh observed rollout-id for BatchRelease (#193)
Signed-off-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
Co-authored-by: mingzhou.swx <mingzhou.swx@alibaba-inc.com>
2023-12-27 11:16:07 +08:00
zhengjr9 1e84129ff1
bugfix: Filter rs that are not part of the current Deployment (#191)
Signed-off-by: zhengjr <zhengjiarui_pro@163.com>
2023-12-26 17:54:07 +08:00
berg 83eedb354e
rollout v0.5.0 changelog (#190)
* rollout v0.5.0 changelog

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify rollout types description

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* limit secret & configmaps namespace rbac

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

* modify rollout v0.5.0 changelog

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>

---------

Signed-off-by: liheng.zms <liheng.zms@alibaba-inc.com>
2023-12-21 15:23:02 +08:00
163 changed files with 19939 additions and 4782 deletions

View File

@@ -11,12 +11,12 @@ on:
 env:
 # Common versions
 GO_VERSION: '1.19'
-GOLANGCI_VERSION: 'v1.42'
+GOLANGCI_VERSION: 'v1.52'
 jobs:
 golangci-lint:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - name: Checkout
 uses: actions/checkout@v2
@@ -27,7 +27,7 @@ jobs:
 with:
 go-version: ${{ env.GO_VERSION }}
 - name: Cache Go Dependencies
-uses: actions/cache@v2
+uses: actions/cache@v4
 with:
 path: ~/go/pkg/mod
 key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@@ -36,13 +36,13 @@
 run: |
 make generate
 - name: Lint golang code
-uses: golangci/golangci-lint-action@v2
+uses: golangci/golangci-lint-action@v6
 with:
 version: ${{ env.GOLANGCI_VERSION }}
 args: --verbose
 unit-tests:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -54,7 +54,7 @@ jobs:
 with:
 go-version: ${{ env.GO_VERSION }}
 - name: Cache Go Dependencies
-uses: actions/cache@v2
+uses: actions/cache@v4
 with:
 path: ~/go/pkg/mod
 key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}

.github/workflows/docker-image.yaml vendored Normal file (28 additions)
View File

@@ -0,0 +1,28 @@
name: Docker Image CI
on:
workflow_dispatch:
# Declare default permissions as read only.
permissions: read-all
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.HUB_KRUISE }}
- name: Build the Docker image
run: |
docker buildx create --use --platform=linux/amd64,linux/arm64,linux/ppc64le --name multi-platform-builder
docker buildx ls
IMG=openkruise/kruise-rollout:${{ github.ref_name }} make docker-multiarch

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -0,0 +1,128 @@
name: E2E-Advanced-Deployment-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests For Deployment Controller
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Run E2E Tests For Control Plane
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Advanced Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

.github/workflows/e2e-cloneset-1.26.yaml vendored Normal file (110 additions)
View File

@@ -0,0 +1,110 @@
name: E2E-CloneSet-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='CloneSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -10,14 +10,15 @@ on:
 env:
 # Common versions
-GO_VERSION: '1.17'
-KIND_IMAGE: 'kindest/node:v1.23.3'
+GO_VERSION: '1.19'
+KIND_VERSION: 'v0.18.0'
+KIND_IMAGE: 'kindest/node:v1.26.3'
 KIND_CLUSTER_NAME: 'ci-testing'
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -29,6 +30,7 @@ jobs:
 - name: Setup Kind Cluster
 uses: helm/kind-action@v1.2.0
 with:
+version: ${{ env.KIND_VERSION }}
 node_image: ${{ env.KIND_IMAGE }}
 cluster_name: ${{ env.KIND_CLUSTER_NAME }}
 config: ./test/kind-conf.yaml
@@ -70,7 +72,7 @@ jobs:
 kubectl apply -f ./test/e2e/test_data/customNetworkProvider/lua_script_configmap.yaml
 make ginkgo
 set +e
-./bin/ginkgo -timeout 60m -v --focus='Canary rollout with custon network provider' test/e2e
+./bin/ginkgo -timeout 60m -v --focus='Canary rollout with custom network provider' test/e2e
 retVal=$?
 # kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
 restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -0,0 +1,110 @@
name: E2E-DaemonSet-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='DaemonSet canary rollout' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -0,0 +1,110 @@
name: E2E-Deployment-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Deployment canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:

View File

@@ -0,0 +1,87 @@
name: E2E-Multiple-NetworkProvider
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_VERSION: 'v0.18.0'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
version: ${{ env.KIND_VERSION }}
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
kubectl apply -f ./test/e2e/test_data/customNetworkProvider/istio_crd.yaml
kubectl apply -f ./test/e2e/test_data/customNetworkProvider/lua_script_configmap.yaml
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Canary rollout with multiple network providers' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

.github/workflows/e2e-others-1.26.yaml vendored Normal file (110 additions)
View File

@@ -0,0 +1,110 @@
name: E2E-Others-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Others' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -17,7 +17,7 @@ env:
 jobs:
 rollout:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
 steps:
 - uses: actions/checkout@v2
 with:
@@ -44,7 +44,7 @@ jobs:
 make helm
 helm repo add openkruise https://openkruise.github.io/charts/
 helm repo update
-helm install kruise openkruise/kruise
+helm install kruise openkruise/kruise --version 1.7.0
 for ((i=1;i<10;i++));
 do
 set +e

View File

@@ -0,0 +1,110 @@
name: E2E-StatefulSet-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='StatefulSet canary rollout with Ingress' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -0,0 +1,146 @@
name: E2E-V1Beta1-BlueGreen-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Bluegreen Release Disable HPA
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='bluegreen disable hpa test case - autoscaling/v1 for v1.19' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Deployment Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Deployment - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: CloneSet Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Cloneset - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -0,0 +1,146 @@
name: E2E-V1Beta1-BlueGreen-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Bluegreen Release Disable HPA
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='bluegreen disable hpa test case - autoscaling/v2 for v1.23' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Deployment Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Deployment - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: CloneSet Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Cloneset - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -0,0 +1,146 @@
name: E2E-V1Beta1-BlueGreen-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Bluegreen Release Disable HPA
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='bluegreen disable hpa test case - autoscaling/v2 for v1.26' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: Deployment Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Deployment - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal
- name: CloneSet Bluegreen Release
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Bluegreen Release - Cloneset - Ingress' test/e2e
retVal=$?
kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -0,0 +1,110 @@
name: E2E-V1Beta1-JUMP-1.19
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.19.16'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Step Jump' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

View File

@@ -0,0 +1,110 @@
name: E2E-V1Beta1-JUMP-1.23
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.23.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.2.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Step Jump' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal


@ -0,0 +1,110 @@
name: E2E-V1Beta1-JUMP-1.26
on:
push:
branches:
- master
- release-*
pull_request: {}
workflow_dispatch: {}
env:
# Common versions
GO_VERSION: '1.19'
KIND_IMAGE: 'kindest/node:v1.26.3'
KIND_CLUSTER_NAME: 'ci-testing'
jobs:
rollout:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Setup Kind Cluster
uses: helm/kind-action@v1.6.0
with:
node_image: ${{ env.KIND_IMAGE }}
cluster_name: ${{ env.KIND_CLUSTER_NAME }}
config: ./test/kind-conf.yaml
- name: Build image
run: |
export IMAGE="openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID}"
docker build --pull --no-cache . -t $IMAGE
kind load docker-image --name=${KIND_CLUSTER_NAME} $IMAGE || { echo >&2 "kind not installed or error loading image: $IMAGE"; exit 1; }
- name: Install Kruise
run: |
set -ex
kubectl cluster-info
make helm
helm repo add openkruise https://openkruise.github.io/charts/
helm repo update
helm install kruise openkruise/kruise --version 1.7.0
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-system | grep '1/1' | grep kruise-controller-manager | wc -l)
set -e
if [ "$PODS" -eq "2" ]; then
echo "Wait for kruise-manager ready successfully"
else
echo "Timeout to wait for kruise-manager ready"
exit 1
fi
- name: Install Kruise Rollout
run: |
set -ex
kubectl cluster-info
IMG=openkruise/kruise-rollout:e2e-${GITHUB_RUN_ID} ./scripts/deploy_kind.sh
for ((i=1;i<10;i++));
do
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
set -e
if [ "$PODS" -eq "1" ]; then
break
fi
sleep 3
done
set +e
PODS=$(kubectl get pod -n kruise-rollout | grep '1/1' | wc -l)
kubectl get node -o yaml
kubectl get all -n kruise-rollout -o yaml
set -e
if [ "$PODS" -eq "1" ]; then
echo "Wait for kruise-rollout ready successfully"
else
echo "Timeout to wait for kruise-rollout ready"
exit 1
fi
- name: Run E2E Tests
run: |
export KUBECONFIG=/home/runner/.kube/config
make ginkgo
set +e
./bin/ginkgo -timeout 60m -v --focus='Step Jump' test/e2e
retVal=$?
# kubectl get pod -n kruise-rollout --no-headers | grep manager | awk '{print $1}' | xargs kubectl logs -n kruise-rollout
restartCount=$(kubectl get pod -n kruise-rollout --no-headers | awk '{print $4}')
if [ "${restartCount}" -eq "0" ];then
echo "Kruise-rollout has not restarted"
else
kubectl get pod -n kruise-rollout --no-headers
echo "Kruise-rollout has restarted, abort!!!"
kubectl get pod -n kruise-rollout --no-headers| awk '{print $1}' | xargs kubectl logs -p -n kruise-rollout
exit 1
fi
exit $retVal

.gitignore

@ -29,3 +29,5 @@ test/e2e/generated/bindata.go
.vscode
.DS_Store
lua_configuration/networking.istio.io/**/testdata/*.lua


@ -1,5 +1,78 @@
# Change Log
## v0.6.1
### Key Features:
- Introduced `rollout-batch-id` labeling for blue-green and canary-style releases ([#250](https://github.com/openkruise/rollouts/pull/250),[#251](https://github.com/openkruise/rollouts/pull/251),[#261](https://github.com/openkruise/rollouts/pull/261),[@PersistentJZH](https://github.com/PersistentJZH),[@AiRanthem](https://github.com/AiRanthem))
### Bugfix:
- Fixed the ordering of objects in the BatchRelease event handler ([#265](https://github.com/openkruise/rollouts/pull/265),[@z760087139](https://github.com/z760087139))
## v0.5.1
### Bugfix:
- Fixed the Rollout v1alpha1/v1beta1 version conversion, which incorrectly converted partition fields. ([#200](https://github.com/openkruise/rollouts/pull/200),[@myname4423](https://github.com/myname4423))
## v0.6.0
### Key Features:
- 🎊 Support for blue-green style releases has been added ([#214](https://github.com/openkruise/rollouts/pull/214),[#229](https://github.com/openkruise/rollouts/pull/229),[#238](https://github.com/openkruise/rollouts/pull/238),[#220](https://github.com/openkruise/rollouts/pull/220),[@myname4423](https://github.com/myname4423))
- ⭐ Traffic loss during rollouts in certain scenarios is avoided ([#226](https://github.com/openkruise/rollouts/pull/226),[#219](https://github.com/openkruise/rollouts/pull/219),[#222](https://github.com/openkruise/rollouts/pull/222),[@myname4423](https://github.com/myname4423))
### Other Features:
- Enhanced flexibility by allowing free navigation between different steps within a rollout ([#218](https://github.com/openkruise/rollouts/pull/218),[@myname4423](https://github.com/myname4423))
- Added support for HTTPQueryParamMatch and HTTPPathMatch of the Gateway API in traffic management ([#204](https://github.com/openkruise/rollouts/pull/204),[@lujiajing1126](https://github.com/lujiajing1126))
- Integrated RequestHeaderModifier into LuaData, facilitating its use with custom network references like Istio ([#223](https://github.com/openkruise/rollouts/pull/223),[@lujiajing1126](https://github.com/lujiajing1126))
- Introduced a composite provider to support multiple network providers ([#224](https://github.com/openkruise/rollouts/pull/224),[@lujiajing1126](https://github.com/lujiajing1126))
- Patched the canary service selector from pod template metadata ([#243](https://github.com/openkruise/rollouts/pull/243),[@lujiajing1126](https://github.com/lujiajing1126))
- Patched the `rollout-batch-id` label to pods even when the workload has no `rollout-id` ([#248](https://github.com/openkruise/rollouts/pull/248),[@PersistentJZH](https://github.com/PersistentJZH))
- Upgraded Gateway API version from 0.5.1 to 0.7.1 and Kubernetes version from 1.24 to 1.26 ([#237](https://github.com/openkruise/rollouts/pull/237),[@AiRanthem](https://github.com/AiRanthem))
- Enabled the option to skip canary service generation when using `trafficRoutings.customNetworkRefs` ([#200](https://github.com/openkruise/rollouts/pull/200),[@myname4423](https://github.com/myname4423))
### Bugfix:
- Filtered out ReplicaSets not part of the current Deployment to prevent frequent scaling issues when multiple deployments share the same selectors ([#191](https://github.com/openkruise/rollouts/pull/191),[@zhengjr9](https://github.com/zhengjr9))
- Synced the observed `rollout-id` to the BatchRelease CR status ([#193](https://github.com/openkruise/rollouts/pull/193),[@veophi](https://github.com/veophi))
- Checked deployment strategy during finalization to prevent random stuck states when used with KubeVela ([#198](https://github.com/openkruise/rollouts/pull/198),[@phantomnat](https://github.com/phantomnat))
- Resolved a Lua encoding structural error ([#209](https://github.com/openkruise/rollouts/pull/209),[@ls-2018](https://github.com/ls-2018))
- Corrected batch ID labeling in partition-style releases when pod recreation happens ([#246](https://github.com/openkruise/rollouts/pull/246),[@myname4423](https://github.com/myname4423))
### Breaking Changes:
- Restricted setting a traffic percentage or match selectors in a partition-style release step whose replica count exceeds 30%. Use the `rollouts.kruise.io/partition-replicas-limit` annotation to override this default threshold; setting the threshold to 100% restores the previous behavior ([#225](https://github.com/openkruise/rollouts/pull/225),[@myname4423](https://github.com/myname4423))
## v0.5.0
### Resources Graduating to BETA
After more than a year of development, we have decided to graduate the following resources to v1beta1:
- Rollout
- BatchRelease
Please refer to the [community documentation](https://openkruise.io/rollouts/user-manuals/api-specifications) for detailed API definitions.
**Note:** The v1alpha1 API is still available in v0.5.0.
Nevertheless, we recommend migrating to v1beta1 gradually, as some new features are only available in v1beta1,
e.g., [Extensible Traffic Routing Based on Lua Script](https://openkruise.io/rollouts/developer-manuals/custom-network-provider/).
### Bump To V1beta1 Gateway API
Gateway API support has been bumped from v1alpha2 to v1beta1, so you can now use the v1beta1 Gateway API.
### Extensible Traffic Routing Based on Lua Script
The Gateway API is the standard gateway resource defined by the Kubernetes community, but a large number of users still rely on customized gateway resources such as VirtualService, Apisix, and so on.
To accommodate this and meet the community's diverse demands for gateway resources, we support a traffic routing scheme based on Lua scripts.
Kruise Rollout takes a Lua-script-based customization approach to API gateway resources (Istio VirtualService, Apisix ApisixRoute, Kuma TrafficRoute, etc.).
It invokes Lua scripts to compute and update the desired configuration of a resource, based on the release strategy and the resource's original configuration (including spec, labels, and annotations); a sketch of this flow follows the list below.
This enables users to easily adapt and integrate various types of API gateway resources without modifying existing code and configurations.
By using Kruise Rollout, users can:
- Customize Lua scripts for handling API Gateway resources, allowing for flexible implementation of resource processing and providing support for a wider range of resources.
- Utilize a common Rollout configuration template to configure different resources, reducing configuration complexity and facilitating user configuration.
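To make that flow concrete, here is a minimal, hypothetical sketch in Go: it embeds a Lua function and calls it with a release parameter to compute a desired routing description. It assumes a gopher-lua-style embedding and a made-up `GetDesired` script; it is not the project's actual Lua contract.

```go
package main

import (
	"fmt"

	lua "github.com/yuin/gopher-lua"
)

func main() {
	L := lua.NewState()
	defer L.Close()

	// Hypothetical script: compute a canary route description from a weight.
	script := `
	function GetDesired(weight)
	  return "route " .. weight .. "% of traffic to the canary service"
	end`
	if err := L.DoString(script); err != nil {
		panic(err)
	}
	// Call GetDesired(20) and read the single return value.
	if err := L.CallByParam(lua.P{
		Fn:      L.GetGlobal("GetDesired"),
		NRet:    1,
		Protect: true,
	}, lua.LNumber(20)); err != nil {
		panic(err)
	}
	fmt.Println(L.Get(-1)) // route 20% of traffic to the canary service
}
```

In the real controller the script receives the full release context (spec, labels, annotations) rather than a single number, but the embed-call-read shape is the same.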
### Traffic Routing with Istio
Building on the Lua-script approach, we now add built-in support for the Istio VirtualService resource,
so you can use Kruise Rollout directly for canary and A/B-testing release scenarios on Istio.
### Others
- Bug fix: wait grace period seconds after pod creation/upgrade. ([#185](https://github.com/openkruise/rollouts/pull/185), [@veophi](https://github.com/veophi))
## v0.4.0
### Kruise-Rollout-Controller
- Rollout supports Kruise Advanced DaemonSet. ([#134](https://github.com/openkruise/rollouts/pull/134), [@Yadan-Wei](https://github.com/Yadan-Wei))


@ -1,5 +1,5 @@
# Build the manager binary
FROM golang:1.18 as builder
FROM golang:1.19-alpine3.17 AS builder
WORKDIR /workspace


@ -1,7 +1,7 @@
# Build the manager binary
ARG BASE_IMAGE=alpine
ARG BASE_IMAGE_VERION=3.17
FROM --platform=$BUILDPLATFORM golang:1.18-alpine3.17 as builder
FROM --platform=$BUILDPLATFORM golang:1.19-alpine3.17 as builder
WORKDIR /workspace
@ -23,12 +23,25 @@ ARG BASE_IMAGE
ARG BASE_IMAGE_VERION
FROM ${BASE_IMAGE}:${BASE_IMAGE_VERION}
RUN apk add --no-cache ca-certificates=~20220614-r4 bash=~5.2.15-r0 expat=~2.5.0-r0 \
&& rm -rf /var/cache/apk/*
RUN set -eux; \
apk --no-cache --update upgrade && \
apk --no-cache add ca-certificates && \
apk --no-cache add tzdata && \
rm -rf /var/cache/apk/* && \
update-ca-certificates && \
echo "only include root and nobody user" && \
echo -e "root:x:0:0:root:/root:/bin/ash\nnobody:x:65534:65534:nobody:/:/sbin/nologin" | tee /etc/passwd && \
echo -e "root:x:0:root\nnobody:x:65534:" | tee /etc/group && \
rm -rf /usr/local/sbin/* && \
rm -rf /usr/local/bin/* && \
rm -rf /usr/sbin/* && \
rm -rf /usr/bin/* && \
rm -rf /sbin/* && \
rm -rf /bin/*
WORKDIR /
COPY --from=builder /workspace/manager .
COPY lua_configuration /lua_configuration
USER 1000
USER 65534
ENTRYPOINT ["/manager"]


@ -91,11 +91,12 @@ undeploy: ## Undeploy controller from the K8s cluster specified in ~/.kube/confi
CONTROLLER_GEN = $(shell pwd)/bin/controller-gen
CONTROLLER_GEN_VERSION = v0.11.0
controller-gen: ## Download controller-gen locally if necessary.
ifeq ("$(shell $(CONTROLLER_GEN) --version)", "Version: v0.7.0")
ifeq ("$(shell $(CONTROLLER_GEN) --version)", "Version: ${CONTROLLER_GEN_VERSION}")
else
rm -rf $(CONTROLLER_GEN)
$(call go-get-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.7.0)
$(call go-get-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@${CONTROLLER_GEN_VERSION})
endif
KUSTOMIZE = $(shell pwd)/bin/kustomize
@ -108,7 +109,7 @@ ginkgo: ## Download ginkgo locally if necessary.
HELM = $(shell pwd)/bin/helm
helm: ## Download helm locally if necessary.
$(call go-get-tool,$(HELM),helm.sh/helm/v3/cmd/helm@latest)
$(call go-get-tool,$(HELM),helm.sh/helm/v3/cmd/helm@v3.14.0)
# go-get-tool will 'go get' any package $2 and install it to $1.
PROJECT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))


@ -54,6 +54,16 @@ type ReleasePlan struct {
// only support for canary deployment
// +optional
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// RollingStyle can be "Canary", "Partition" or "BlueGreen"
RollingStyle RollingStyleType `json:"rollingStyle,omitempty"`
// EnableExtraWorkloadForCanary indicates whether to create an extra workload for canary.
// True corresponds to RollingStyle "Canary".
// False corresponds to RollingStyle "Partition".
// Ignored in BlueGreen-style.
// This field is about to be deprecated; use RollingStyle instead.
// If both are set, the controller only considers this
// field when RollingStyle is empty
EnableExtraWorkloadForCanary bool `json:"enableExtraWorkloadForCanary"`
}
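A small sketch of the precedence documented above (the helper `effectiveStyle` and the local type definitions are illustrative, not part of the codebase): RollingStyle wins whenever it is set, and the deprecated flag is consulted only when RollingStyle is empty.

```go
package main

import "fmt"

type RollingStyleType string

const (
	PartitionRollingStyle RollingStyleType = "Partition"
	CanaryRollingStyle    RollingStyleType = "Canary"
)

func effectiveStyle(rollingStyle RollingStyleType, enableExtraWorkload bool) RollingStyleType {
	if rollingStyle != "" {
		return rollingStyle // RollingStyle wins whenever it is set
	}
	if enableExtraWorkload {
		return CanaryRollingStyle // true -> "Canary"
	}
	return PartitionRollingStyle // false -> "Partition"
}

func main() {
	// EnableExtraWorkloadForCanary is only consulted when RollingStyle is empty.
	fmt.Println(effectiveStyle("BlueGreen", true)) // BlueGreen
	fmt.Println(effectiveStyle("", true))          // Canary
	fmt.Println(effectiveStyle("", false))         // Partition
}
```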
type FinalizingPolicyType string


@ -19,6 +19,8 @@ package v1alpha1
import (
"fmt"
"strings"
"github.com/openkruise/rollouts/api/v1beta1"
"k8s.io/apimachinery/pkg/util/intstr"
utilpointer "k8s.io/utils/pointer"
@ -74,7 +76,7 @@ func (src *Rollout) ConvertTo(dst conversion.Hub) error {
obj.Spec.Strategy.Canary.PatchPodTemplateMetadata.Labels[k] = v
}
}
if src.Annotations[RolloutStyleAnnotation] != string(PartitionRollingStyle) {
if !strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(PartitionRollingStyle)) {
obj.Spec.Strategy.Canary.EnableExtraWorkloadForCanary = true
}
if src.Annotations[TrafficRoutingAnnotation] != "" {
@ -102,18 +104,22 @@ func (src *Rollout) ConvertTo(dst conversion.Hub) error {
return nil
}
obj.Status.CanaryStatus = &v1beta1.CanaryStatus{
ObservedWorkloadGeneration: src.Status.CanaryStatus.ObservedWorkloadGeneration,
ObservedRolloutID: src.Status.CanaryStatus.ObservedRolloutID,
RolloutHash: src.Status.CanaryStatus.RolloutHash,
StableRevision: src.Status.CanaryStatus.StableRevision,
CanaryRevision: src.Status.CanaryStatus.CanaryRevision,
PodTemplateHash: src.Status.CanaryStatus.PodTemplateHash,
CanaryReplicas: src.Status.CanaryStatus.CanaryReplicas,
CanaryReadyReplicas: src.Status.CanaryStatus.CanaryReadyReplicas,
CurrentStepIndex: src.Status.CanaryStatus.CurrentStepIndex,
CurrentStepState: v1beta1.CanaryStepState(src.Status.CanaryStatus.CurrentStepState),
Message: src.Status.CanaryStatus.Message,
LastUpdateTime: src.Status.CanaryStatus.LastUpdateTime,
CommonStatus: v1beta1.CommonStatus{
ObservedWorkloadGeneration: src.Status.CanaryStatus.ObservedWorkloadGeneration,
ObservedRolloutID: src.Status.CanaryStatus.ObservedRolloutID,
RolloutHash: src.Status.CanaryStatus.RolloutHash,
StableRevision: src.Status.CanaryStatus.StableRevision,
PodTemplateHash: src.Status.CanaryStatus.PodTemplateHash,
CurrentStepIndex: src.Status.CanaryStatus.CurrentStepIndex,
CurrentStepState: v1beta1.CanaryStepState(src.Status.CanaryStatus.CurrentStepState),
Message: src.Status.CanaryStatus.Message,
LastUpdateTime: src.Status.CanaryStatus.LastUpdateTime,
FinalisingStep: v1beta1.FinalisingStepType(src.Status.CanaryStatus.FinalisingStep),
NextStepIndex: src.Status.CanaryStatus.NextStepIndex,
},
CanaryRevision: src.Status.CanaryStatus.CanaryRevision,
CanaryReplicas: src.Status.CanaryStatus.CanaryReplicas,
CanaryReadyReplicas: src.Status.CanaryStatus.CanaryReadyReplicas,
}
return nil
default:
@ -148,7 +154,7 @@ func ConversionToV1beta1TrafficRoutingRef(src TrafficRoutingRef) (dst v1beta1.Tr
func ConversionToV1beta1TrafficRoutingStrategy(src TrafficRoutingStrategy) (dst v1beta1.TrafficRoutingStrategy) {
if src.Weight != nil {
dst.Traffic = utilpointer.StringPtr(fmt.Sprintf("%d", *src.Weight) + "%")
dst.Traffic = utilpointer.String(fmt.Sprintf("%d", *src.Weight) + "%")
}
dst.RequestHeaderModifier = src.RequestHeaderModifier
for _, match := range src.Matches {
@ -165,7 +171,11 @@ func (dst *Rollout) ConvertFrom(src conversion.Hub) error {
case *v1beta1.Rollout:
srcV1beta1 := src.(*v1beta1.Rollout)
dst.ObjectMeta = srcV1beta1.ObjectMeta
if !srcV1beta1.Spec.Strategy.IsCanaryStragegy() {
// only v1beta1 supports bluegreen strategy
// Don't log the message because it will print too often
return nil
}
// spec
dst.Spec = RolloutSpec{
ObjectRef: ObjectRef{
@ -211,9 +221,9 @@ func (dst *Rollout) ConvertFrom(src conversion.Hub) error {
dst.Annotations = map[string]string{}
}
if srcV1beta1.Spec.Strategy.Canary.EnableExtraWorkloadForCanary {
dst.Annotations[RolloutStyleAnnotation] = string(CanaryRollingStyle)
dst.Annotations[RolloutStyleAnnotation] = strings.ToLower(string(CanaryRollingStyle))
} else {
dst.Annotations[RolloutStyleAnnotation] = string(PartitionRollingStyle)
dst.Annotations[RolloutStyleAnnotation] = strings.ToLower(string(PartitionRollingStyle))
}
if srcV1beta1.Spec.Strategy.Canary.TrafficRoutingRef != "" {
dst.Annotations[TrafficRoutingAnnotation] = srcV1beta1.Spec.Strategy.Canary.TrafficRoutingRef
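The net effect of this hunk, sketched below under the assumption that the read path uses `strings.EqualFold` (as in the ConvertTo changes above): annotation values are now written lower-cased, yet either casing still resolves to the same style when read back.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// ConvertFrom now stores the lower-cased style name...
	written := strings.ToLower("Canary") // "canary"
	// ...and ConvertTo compares case-insensitively, so the round trip holds.
	fmt.Println(strings.EqualFold(written, "Canary")) // true
}
```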
@ -252,6 +262,8 @@ func (dst *Rollout) ConvertFrom(src conversion.Hub) error {
CurrentStepState: CanaryStepState(srcV1beta1.Status.CanaryStatus.CurrentStepState),
Message: srcV1beta1.Status.CanaryStatus.Message,
LastUpdateTime: srcV1beta1.Status.CanaryStatus.LastUpdateTime,
FinalisingStep: FinalizeStateType(srcV1beta1.Status.CanaryStatus.FinalisingStep),
NextStepIndex: srcV1beta1.Status.CanaryStatus.NextStepIndex,
}
return nil
default:
@ -336,9 +348,18 @@ func (src *BatchRelease) ConvertTo(dst conversion.Hub) error {
obj.Spec.ReleasePlan.PatchPodTemplateMetadata.Labels[k] = v
}
}
if src.Annotations[RolloutStyleAnnotation] != string(PartitionRollingStyle) {
obj.Spec.ReleasePlan.EnableExtraWorkloadForCanary = true
if strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(PartitionRollingStyle)) {
obj.Spec.ReleasePlan.RollingStyle = v1beta1.PartitionRollingStyle
}
if strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(CanaryRollingStyle)) {
obj.Spec.ReleasePlan.RollingStyle = v1beta1.CanaryRollingStyle
}
if strings.EqualFold(src.Annotations[RolloutStyleAnnotation], string(BlueGreenRollingStyle)) {
obj.Spec.ReleasePlan.RollingStyle = v1beta1.BlueGreenRollingStyle
}
obj.Spec.ReleasePlan.EnableExtraWorkloadForCanary = srcSpec.ReleasePlan.EnableExtraWorkloadForCanary
// status
obj.Status = v1beta1.BatchReleaseStatus{
@ -415,11 +436,9 @@ func (dst *BatchRelease) ConvertFrom(src conversion.Hub) error {
if dst.Annotations == nil {
dst.Annotations = map[string]string{}
}
if srcV1beta1.Spec.ReleasePlan.EnableExtraWorkloadForCanary {
dst.Annotations[RolloutStyleAnnotation] = string(CanaryRollingStyle)
} else {
dst.Annotations[RolloutStyleAnnotation] = string(PartitionRollingStyle)
}
dst.Annotations[RolloutStyleAnnotation] = strings.ToLower(string(srcV1beta1.Spec.ReleasePlan.RollingStyle))
dst.Spec.ReleasePlan.RollingStyle = RollingStyleType(srcV1beta1.Spec.ReleasePlan.RollingStyle)
dst.Spec.ReleasePlan.EnableExtraWorkloadForCanary = srcV1beta1.Spec.ReleasePlan.EnableExtraWorkloadForCanary
// status
dst.Status = BatchReleaseStatus{


@ -59,6 +59,8 @@ const (
PartitionRollingStyle RollingStyleType = "Partition"
// CanaryRollingStyle means rolling in canary way, and will create a canary Deployment.
CanaryRollingStyle RollingStyleType = "Canary"
// BlueGreenRollingStyle means rolling in blue-green way, and will NOT create a canary Deployment.
BlueGreenRollingStyle RollingStyleType = "BlueGreen"
)
// DeploymentExtraStatus is extra status field for Advanced Deployment
@ -74,7 +76,7 @@ type DeploymentExtraStatus struct {
}
func SetDefaultDeploymentStrategy(strategy *DeploymentStrategy) {
if strategy.RollingStyle == CanaryRollingStyle {
if strategy.RollingStyle != PartitionRollingStyle {
return
}
if strategy.RollingUpdate == nil {


@ -121,6 +121,8 @@ type CanaryStrategy struct {
// only support for canary deployment
// +optional
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// canary service will not be generated if DisableGenerateCanaryService is true
DisableGenerateCanaryService bool `json:"disableGenerateCanaryService,omitempty"`
}
type PatchPodTemplateMetadata struct {
@ -169,8 +171,6 @@ type RolloutStatus struct {
// Conditions a list of conditions a rollout can have.
// +optional
Conditions []RolloutCondition `json:"conditions,omitempty"`
// +optional
//BlueGreenStatus *BlueGreenStatus `json:"blueGreenStatus,omitempty"`
// Phase is the rollout phase.
Phase RolloutPhase `json:"phase,omitempty"`
// Message provides details on why the rollout is in its current phase
@ -240,16 +240,28 @@ type CanaryStatus struct {
CanaryReplicas int32 `json:"canaryReplicas"`
// CanaryReadyReplicas the numbers of ready canary revision pods
CanaryReadyReplicas int32 `json:"canaryReadyReplicas"`
// CurrentStepIndex defines the current step of the rollout is on. If the current step index is null, the
// controller will execute the rollout.
// NextStepIndex defines the next step that the rollout will move to.
// In the normal case, NextStepIndex equals CurrentStepIndex + 1.
// If the current step is the last step, NextStepIndex equals -1.
// Before the release, NextStepIndex also equals -1.
// 0 is never used and won't appear in any case.
// It is allowed to patch NextStepIndex by design:
// e.g. if CurrentStepIndex is 2, a user can patch NextStepIndex to 3 (if it exists) to
// achieve a batch jump, or patch NextStepIndex to 1 to re-execute step 1.
// Patching it with a non-positive value is meaningless and will be corrected
// in the next reconciliation.
NextStepIndex int32 `json:"nextStepIndex"`
// +optional
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
FinalisingStep FinalizeStateType `json:"finalisingStep"`
}
type CanaryStepState string
type FinalizeStateType string
const (
CanaryStepStateUpgrade CanaryStepState = "StepUpgrade"


@ -30,6 +30,7 @@ type TrafficRoutingRef struct {
// Service holds the name of a service which selects pods with stable version and don't select any pods with canary version.
Service string `json:"service"`
// Optional duration in seconds the traffic provider(e.g. nginx ingress controller) consumes the service, ingress configuration changes gracefully.
// +kubebuilder:default=3
GracePeriodSeconds int32 `json:"gracePeriodSeconds,omitempty"`
// Ingress holds Ingress specific configuration to route traffic, e.g. Nginx, Alb.
Ingress *IngressTrafficRouting `json:"ingress,omitempty"`
@ -89,7 +90,7 @@ type TrafficRoutingStrategy struct {
// my-header: bar
//
// +optional
RequestHeaderModifier *gatewayv1beta1.HTTPRequestHeaderFilter `json:"requestHeaderModifier,omitempty"`
RequestHeaderModifier *gatewayv1beta1.HTTPHeaderFilter `json:"requestHeaderModifier,omitempty"`
// Matches define conditions used for matching the incoming HTTP requests to canary service.
// Each match is independent, i.e. this rule will be matched if **any** one of the matches is satisfied.
// If Gateway API, current only support one match.


@ -981,7 +981,7 @@ func (in *TrafficRoutingStrategy) DeepCopyInto(out *TrafficRoutingStrategy) {
}
if in.RequestHeaderModifier != nil {
in, out := &in.RequestHeaderModifier, &out.RequestHeaderModifier
*out = new(v1beta1.HTTPRequestHeaderFilter)
*out = new(v1beta1.HTTPHeaderFilter)
(*in).DeepCopyInto(*out)
}
if in.Matches != nil {


@ -54,9 +54,15 @@ type ReleasePlan struct {
// only support for canary deployment
// +optional
PatchPodTemplateMetadata *PatchPodTemplateMetadata `json:"patchPodTemplateMetadata,omitempty"`
// If true, then it will create new deployment for canary, such as: workload-demo-canary.
// When user verifies that the canary version is ready, we will remove the canary deployment and release the deployment workload-demo in full.
// Current only support k8s native deployment
// RollingStyle can be "Canary", "Partition" or "BlueGreen"
RollingStyle RollingStyleType `json:"rollingStyle,omitempty"`
// EnableExtraWorkloadForCanary indicates whether to create an extra workload for canary.
// True corresponds to RollingStyle "Canary".
// False corresponds to RollingStyle "Partition".
// Ignored in BlueGreen-style.
// This field is about to be deprecated; use RollingStyle instead.
// If both are set, the controller only considers this
// field when RollingStyle is empty
EnableExtraWorkloadForCanary bool `json:"enableExtraWorkloadForCanary"`
}
@ -64,7 +70,7 @@ type FinalizingPolicyType string
const (
// WaitResumeFinalizingPolicyType will wait workload to be resumed, which means
// controller will be hold at Finalizing phase util all pods of workload is upgraded.
// controller will be held at the Finalizing phase until all pods of the workload are upgraded.
// WaitResumeFinalizingPolicyType only works in canary-style BatchRelease controller.
WaitResumeFinalizingPolicyType FinalizingPolicyType = "WaitResume"
// ImmediateFinalizingPolicyType will not wait for the workload to be resumed.
@ -111,6 +117,8 @@ type BatchReleaseStatus struct {
// Phase is the release plan phase, which indicates the current state of release
// plan state machine in BatchRelease controller.
Phase RolloutPhase `json:"phase,omitempty"`
// Message provides details on why the rollout is in its current phase
Message string `json:"message,omitempty"`
}
type BatchReleaseCanaryStatus struct {


@ -37,6 +37,16 @@ const (
// AdvancedDeploymentControlLabel is label for deployment,
// which labels whether the deployment is controlled by advanced-deployment-controller.
AdvancedDeploymentControlLabel = "rollouts.kruise.io/controlled-by-advanced-deployment-controller"
// OriginalDeploymentStrategyAnnotation is an annotation for the workload in a BlueGreen release;
// it stores the original settings of the workload, which are used to restore the workload
OriginalDeploymentStrategyAnnotation = "rollouts.kruise.io/original-deployment-strategy"
// MaxProgressSeconds is the value we set for ProgressDeadlineSeconds
// MaxReadySeconds is the value we set for MinReadySeconds, which is one less than ProgressDeadlineSeconds
// MaxInt32: 2147483647, ≈ 68 years
MaxProgressSeconds = 1<<31 - 1
MaxReadySeconds = MaxProgressSeconds - 1
)
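As a quick check on the "≈ 68 years" comment above:

$$\frac{2^{31}-1\ \mathrm{s}}{365.25 \times 86400\ \mathrm{s/yr}} = \frac{2147483647}{31557600} \approx 68.05\ \mathrm{yr}$$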
// DeploymentStrategy is strategy field for Advanced Deployment
@ -59,6 +69,8 @@ const (
PartitionRollingStyle RollingStyleType = "Partition"
// CanaryRollingStyle means rolling in canary way, and will create a canary Deployment.
CanaryRollingStyle RollingStyleType = "Canary"
// BlueGreenRollingStyle means rolling in blue-green way, and will NOT create an extra Deployment.
BlueGreenRollingStyle RollingStyleType = "BlueGreen"
)
// DeploymentExtraStatus is extra status field for Advanced Deployment
@ -74,7 +86,7 @@ type DeploymentExtraStatus struct {
}
func SetDefaultDeploymentStrategy(strategy *DeploymentStrategy) {
if strategy.RollingStyle == CanaryRollingStyle {
if strategy.RollingStyle != PartitionRollingStyle {
return
}
if strategy.RollingUpdate == nil {


@ -17,6 +17,9 @@ limitations under the License.
package v1beta1
import (
"reflect"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
@ -75,6 +78,115 @@ type RolloutStrategy struct {
Paused bool `json:"paused,omitempty"`
// +optional
Canary *CanaryStrategy `json:"canary,omitempty"`
// +optional
BlueGreen *BlueGreenStrategy `json:"blueGreen,omitempty" protobuf:"bytes,1,opt,name=blueGreen"`
}
// Get the rolling style based on the strategy
func (r *RolloutStrategy) GetRollingStyle() RollingStyleType {
if r.BlueGreen != nil {
return BlueGreenRollingStyle
}
// NOTE - even if EnableExtraWorkloadForCanary is true, as long as the workload is not a Deployment,
// we won't do a canary release; BatchRelease will treat it as a Partition release
if r.Canary.EnableExtraWorkloadForCanary {
return CanaryRollingStyle
}
return PartitionRollingStyle
}
// Using the single field EnableExtraWorkloadForCanary to distinguish partition-style from canary-style
// is not enough: for example, a v1alpha1 Rollout can be converted to a v1beta1 Rollout
// with EnableExtraWorkloadForCanary set to true even if the objectRef is a CloneSet (which doesn't support canary release)
func IsRealPartition(rollout *Rollout) bool {
if rollout.Spec.Strategy.IsEmptyRelease() {
return false
}
estimation := rollout.Spec.Strategy.GetRollingStyle()
if estimation == BlueGreenRollingStyle {
return false
}
targetRef := rollout.Spec.WorkloadRef
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() &&
estimation == CanaryRollingStyle {
return false
}
return true
}
// r.GetRollingStyle() == BlueGreenRollingStyle
func (r *RolloutStrategy) IsBlueGreenRelease() bool {
return r.GetRollingStyle() == BlueGreenRollingStyle
}
// r.GetRollingStyle() == CanaryRollingStyle || r.GetRollingStyle() == PartitionRollingStyle
func (r *RolloutStrategy) IsCanaryStragegy() bool {
return r.GetRollingStyle() == CanaryRollingStyle || r.GetRollingStyle() == PartitionRollingStyle
}
func (r *RolloutStrategy) IsEmptyRelease() bool {
return r.BlueGreen == nil && r.Canary == nil
}
// Get the steps based on the rolling style
func (r *RolloutStrategy) GetSteps() []CanaryStep {
switch r.GetRollingStyle() {
case BlueGreenRollingStyle:
return r.BlueGreen.Steps
case CanaryRollingStyle, PartitionRollingStyle:
return r.Canary.Steps
default:
return nil
}
}
// Get the traffic routing based on the rolling style
func (r *RolloutStrategy) GetTrafficRouting() []TrafficRoutingRef {
switch r.GetRollingStyle() {
case BlueGreenRollingStyle:
return r.BlueGreen.TrafficRoutings
case CanaryRollingStyle, PartitionRollingStyle:
return r.Canary.TrafficRoutings
default:
return nil
}
}
// Check if there are traffic routings
func (r *RolloutStrategy) HasTrafficRoutings() bool {
return len(r.GetTrafficRouting()) > 0
}
// Check the value of DisableGenerateCanaryService
func (r *RolloutStrategy) DisableGenerateCanaryService() bool {
switch r.GetRollingStyle() {
case BlueGreenRollingStyle:
return r.BlueGreen.DisableGenerateCanaryService
case CanaryRollingStyle, PartitionRollingStyle:
return r.Canary.DisableGenerateCanaryService
default:
return false
}
}
// BlueGreenStrategy defines parameters for Blue Green Release
type BlueGreenStrategy struct {
// Steps define the order of phases to execute the release in batches (20%, 40%, 60%, 80%, 100%)
// +optional
Steps []CanaryStep `json:"steps,omitempty"`
// TrafficRoutings supports Ingress, Gateway API and custom network resources (e.g. Istio, Apisix) to enable more fine-grained traffic routing;
// currently only one TrafficRouting is supported
TrafficRoutings []TrafficRoutingRef `json:"trafficRoutings,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated among all upgraded pods.
// Only when FailureThreshold is satisfied can the Rollout enter the ready state.
// If FailureThreshold is nil, the Rollout will use the workload's MaxUnavailable as its
// FailureThreshold.
// Defaults to nil.
FailureThreshold *intstr.IntOrString `json:"failureThreshold,omitempty"`
// TrafficRoutingRef is TrafficRouting's Name
TrafficRoutingRef string `json:"trafficRoutingRef,omitempty"`
// canary service will not be generated if DisableGenerateCanaryService is true
DisableGenerateCanaryService bool `json:"disableGenerateCanaryService,omitempty"`
}
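For orientation, a hedged wiring example using the types above (the service name and threshold value are placeholders, and Steps are omitted for brevity):

```go
package main

import (
	"fmt"

	"github.com/openkruise/rollouts/api/v1beta1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	threshold := intstr.FromString("10%")
	strategy := v1beta1.RolloutStrategy{
		BlueGreen: &v1beta1.BlueGreenStrategy{
			// Only one TrafficRouting is currently supported.
			TrafficRoutings:  []v1beta1.TrafficRoutingRef{{Service: "demo-stable"}},
			FailureThreshold: &threshold,
		},
	}
	// A non-nil BlueGreen field is what GetRollingStyle keys on.
	fmt.Println(strategy.GetRollingStyle()) // BlueGreen
}
```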
// CanaryStrategy defines parameters for a Replica Based Canary
@ -82,7 +194,7 @@ type CanaryStrategy struct {
// Steps define the order of phases to execute the release in batches (20%, 40%, 60%, 80%, 100%)
// +optional
Steps []CanaryStep `json:"steps,omitempty"`
// TrafficRoutings hosts all the supported service meshes supported to enable more fine-grained traffic routing
// TrafficRoutings supports Ingress, Gateway API and custom network resources (e.g. Istio, Apisix) to enable more fine-grained traffic routing;
// currently only one TrafficRouting is supported
TrafficRoutings []TrafficRoutingRef `json:"trafficRoutings,omitempty"`
// FailureThreshold indicates how many failed pods can be tolerated in all upgraded pods.
@ -101,6 +213,8 @@ type CanaryStrategy struct {
EnableExtraWorkloadForCanary bool `json:"enableExtraWorkloadForCanary,omitempty"`
// TrafficRoutingRef is TrafficRouting's Name
TrafficRoutingRef string `json:"trafficRoutingRef,omitempty"`
// canary service will not be generated if DisableGenerateCanaryService is true
DisableGenerateCanaryService bool `json:"disableGenerateCanaryService,omitempty"`
}
type PatchPodTemplateMetadata struct {
@ -123,6 +237,7 @@ type CanaryStep struct {
type TrafficRoutingStrategy struct {
// Traffic indicates what percentage of traffic the canary pods should receive
// Value is of string type and is a percentage, e.g. 5%.
// +optional
Traffic *string `json:"traffic,omitempty"`
// Set overwrites the request with the given header (name, value)
@ -142,20 +257,48 @@ type TrafficRoutingStrategy struct {
// my-header: bar
//
// +optional
RequestHeaderModifier *gatewayv1beta1.HTTPRequestHeaderFilter `json:"requestHeaderModifier,omitempty"`
// Matches define conditions used for matching the incoming HTTP requests to canary service.
// Each match is independent, i.e. this rule will be matched if **any** one of the matches is satisfied.
// If Gateway API, current only support one match.
// And cannot support both weight and matches, if both are configured, then matches takes precedence.
RequestHeaderModifier *gatewayv1beta1.HTTPHeaderFilter `json:"requestHeaderModifier,omitempty"`
// Matches define conditions used for matching incoming HTTP requests to the canary service.
// Each match is independent, i.e. this rule will be matched as long as **any** one of the matches is satisfied.
//
// Traffic (weight-based routing) and Matches cannot be used simultaneously; if both are configured,
// Matches takes precedence.
Matches []HttpRouteMatch `json:"matches,omitempty"`
}
type HttpRouteMatch struct {
// Path specifies a HTTP request path matcher.
// Supported list:
// - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
// - GatewayAPI: If path is defined, the whole HttpRouteMatch will be used as a standalone matcher
//
// +optional
Path *gatewayv1beta1.HTTPPathMatch `json:"path,omitempty"`
// Headers specifies HTTP request header matchers. Multiple match values are
// ANDed together, meaning, a request must match all the specified headers
// to select the route.
//
// +listType=map
// +listMapKey=name
// +optional
// +kubebuilder:validation:MaxItems=16
Headers []gatewayv1beta1.HTTPHeaderMatch `json:"headers,omitempty"`
// QueryParams specifies HTTP query parameter matchers. Multiple match
// values are ANDed together, meaning, a request must match all the
// specified query parameters to select the route.
// Supported list:
// - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
// - MSE Ingress: https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/annotations-supported-by-mse-ingress-gateways-1
// Header/Cookie > QueryParams
// - Gateway API
//
// +listType=map
// +listMapKey=name
// +optional
// +kubebuilder:validation:MaxItems=16
QueryParams []gatewayv1beta1.HTTPQueryParamMatch `json:"queryParams,omitempty"`
}
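An illustrative matcher for the struct above (the header names and values are made up): because header matchers are ANDed together, a request is matched only if it carries both headers.

```go
package main

import (
	"fmt"

	"github.com/openkruise/rollouts/api/v1beta1"
	gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
)

func main() {
	match := v1beta1.HttpRouteMatch{
		Headers: []gatewayv1beta1.HTTPHeaderMatch{
			{Name: "user-tier", Value: "beta"}, // AND
			{Name: "region", Value: "eu"},      // AND
		},
	}
	fmt.Printf("%d header conditions, all must hold\n", len(match.Headers))
}
```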
// RolloutPause defines a pause stage for a rollout
@ -175,6 +318,9 @@ type RolloutStatus struct {
// Canary describes the state of the canary rollout
// +optional
CanaryStatus *CanaryStatus `json:"canaryStatus,omitempty"`
// BlueGreen describes the state of the blueGreen rollout
// +optional
BlueGreenStatus *BlueGreenStatus `json:"blueGreenStatus,omitempty"`
// Conditions a list of conditions a rollout can have.
// +optional
Conditions []RolloutCondition `json:"conditions,omitempty"`
@ -184,6 +330,10 @@ type RolloutStatus struct {
Phase RolloutPhase `json:"phase,omitempty"`
// Message provides details on why the rollout is in its current phase
Message string `json:"message,omitempty"`
// These two values are synchronized with the same fields in CanaryStatus or BlueGreenStatus,
// mainly to provide info for the kubectl get command
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
}
// RolloutCondition describes the state of a rollout at a certain point.
@ -206,6 +356,8 @@ type RolloutCondition struct {
type RolloutConditionType string
// These are valid conditions of a rollout.
//
//goland:noinspection GoUnusedConst
const (
// RolloutConditionProgressing means the rollout is progressing. Progress for a rollout is
// considered when a new replica set is created or adopted, when pods scale
@ -228,10 +380,26 @@ const (
// Terminating Reason
TerminatingReasonInTerminating = "InTerminating"
TerminatingReasonCompleted = "Completed"
// Finalise Reason
// Finalise when the last batch is released and all pods will be updated to the new version
FinaliseReasonSuccess = "Success"
// Finalise when rollback detected
FinaliseReasonRollback = "Rollback"
// Finalise when Continuous Release detected
FinaliseReasonContinuous = "Continuous"
// Finalise when Rollout is disabling
FinaliseReasonDisalbed = "RolloutDisabled"
// Finalise when Rollout is deleting
FinaliseReasonDelete = "RolloutDeleting"
)
// CanaryStatus status fields that only pertain to the canary rollout
type CanaryStatus struct {
// Fields in CommonStatus are shared between canary status and blue-green status.
// If a field is accessed in a strategy-agnostic way, e.g. from rollout_progressing.go or rollout_status.go,
// then it can be put into CommonStatus.
// If a field is only accessed in a strategy-specific way, e.g. from rollout_canary.go or rollout_bluegreen.go,
// then it should stay in CanaryStatus or BlueGreenStatus.
type CommonStatus struct {
// observedWorkloadGeneration is the most recent generation observed for this Rollout ref workload generation.
ObservedWorkloadGeneration int64 `json:"observedWorkloadGeneration,omitempty"`
// ObservedRolloutID will record the newest spec.RolloutID if status.canaryRevision equals to workload.updateRevision
@ -240,27 +408,129 @@ type CanaryStatus struct {
RolloutHash string `json:"rolloutHash,omitempty"`
// StableRevision indicates the revision of stable pods
StableRevision string `json:"stableRevision,omitempty"`
// pod template hash is used as service selector label
PodTemplateHash string `json:"podTemplateHash"`
// CurrentStepIndex defines the step that the rollout is currently on.
// +optional
CurrentStepIndex int32 `json:"currentStepIndex"`
// NextStepIndex defines the next step that the rollout will move to.
// In the normal case, NextStepIndex equals CurrentStepIndex + 1.
// If the current step is the last step, NextStepIndex equals -1.
// Before the release, NextStepIndex also equals -1.
// 0 is never used and won't appear in any case.
// It is allowed to patch NextStepIndex by design:
// e.g. if CurrentStepIndex is 2, a user can patch NextStepIndex to 3 (if it exists) to
// achieve a batch jump, or patch NextStepIndex to 1 to re-execute step 1.
// Patching it with a non-positive value is useless and will be corrected
// in the next reconciliation.
NextStepIndex int32 `json:"nextStepIndex"`
// FinalisingStep the step of finalising
FinalisingStep FinalisingStepType `json:"finalisingStep"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
}
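A sketch of the correction rule described above (`normalizeNextStep` is a hypothetical helper, not taken from the codebase):

```go
package main

import "fmt"

// normalizeNextStep applies the documented convention: a valid patched value
// (1..totalSteps) is kept; otherwise the controller falls back to
// currentStep+1, or to -1 when the rollout is already on the last step.
func normalizeNextStep(currentStep, patchedNext, totalSteps int32) int32 {
	if patchedNext >= 1 && patchedNext <= totalSteps {
		return patchedNext // batch jump or re-execution target
	}
	if currentStep >= totalSteps {
		return -1 // last step: no next step
	}
	return currentStep + 1 // normal case
}

func main() {
	fmt.Println(normalizeNextStep(2, 0, 5))  // 3: non-positive patch corrected
	fmt.Println(normalizeNextStep(2, 1, 5))  // 1: re-execute step 1
	fmt.Println(normalizeNextStep(5, -3, 5)) // -1: already at the last step
}
```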
// CanaryStatus status fields that only pertain to the canary rollout
type CanaryStatus struct {
// must be inline
CommonStatus `json:",inline"`
// CanaryRevision is calculated by the rollout based on podTemplateHash and is used by the internal logic flow.
// It may differ from the ReplicaSet's podTemplateHash in different k8s versions, so it cannot be used as a service selector label
CanaryRevision string `json:"canaryRevision"`
// pod template hash is used as service selector label
PodTemplateHash string `json:"podTemplateHash"`
// CanaryReplicas the numbers of canary revision pods
CanaryReplicas int32 `json:"canaryReplicas"`
// CanaryReadyReplicas the numbers of ready canary revision pods
CanaryReadyReplicas int32 `json:"canaryReadyReplicas"`
// CurrentStepIndex defines the current step of the rollout is on. If the current step index is null, the
// controller will execute the rollout.
// +optional
CurrentStepIndex int32 `json:"currentStepIndex"`
CurrentStepState CanaryStepState `json:"currentStepState"`
Message string `json:"message,omitempty"`
LastUpdateTime *metav1.Time `json:"lastUpdateTime,omitempty"`
}
// BlueGreenStatus status fields that only pertain to the blueGreen rollout
type BlueGreenStatus struct {
CommonStatus `json:",inline"`
// UpdatedRevision is calculated by the rollout based on podTemplateHash and is used by the internal logic flow.
// It may differ from the ReplicaSet's podTemplateHash in different k8s versions, so it cannot be used as a service selector label
UpdatedRevision string `json:"updatedRevision"`
// UpdatedReplicas the numbers of updated pods
UpdatedReplicas int32 `json:"updatedReplicas"`
// UpdatedReadyReplicas the numbers of updated ready pods
UpdatedReadyReplicas int32 `json:"updatedReadyReplicas"`
}
// GetSubStatus returns either the canary or the blue-green status
func (r *RolloutStatus) GetSubStatus() *CommonStatus {
if r.CanaryStatus == nil && r.BlueGreenStatus == nil {
return nil
}
if r.CanaryStatus != nil {
return &r.CanaryStatus.CommonStatus
}
return &r.BlueGreenStatus.CommonStatus
}
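A short usage sketch of this strategy-agnostic accessor (the field values are illustrative): the same read path works whether the rollout populated CanaryStatus or BlueGreenStatus.

```go
package main

import (
	"fmt"

	"github.com/openkruise/rollouts/api/v1beta1"
)

func main() {
	status := v1beta1.RolloutStatus{
		BlueGreenStatus: &v1beta1.BlueGreenStatus{
			CommonStatus: v1beta1.CommonStatus{CurrentStepIndex: 2, NextStepIndex: 3},
		},
	}
	if sub := status.GetSubStatus(); sub != nil { // nil only if neither status is set
		fmt.Println(sub.CurrentStepIndex, sub.NextStepIndex) // 2 3
	}
}
```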
func (r *RolloutStatus) IsSubStatusEmpty() bool {
return r.CanaryStatus == nil && r.BlueGreenStatus == nil
}
func (r *RolloutStatus) Clear() {
r.CanaryStatus = nil
r.BlueGreenStatus = nil
}
//TODO - the following functions seem awkward, is there better way for our case?
func (r *RolloutStatus) GetCanaryRevision() string {
if r.CanaryStatus != nil {
return r.CanaryStatus.CanaryRevision
}
return r.BlueGreenStatus.UpdatedRevision
}
func (r *RolloutStatus) SetCanaryRevision(revision string) {
if r.CanaryStatus != nil {
r.CanaryStatus.CanaryRevision = revision
}
if r.BlueGreenStatus != nil {
r.BlueGreenStatus.UpdatedRevision = revision
}
}
func (r *RolloutStatus) GetCanaryReplicas() int32 {
if r.CanaryStatus != nil {
return r.CanaryStatus.CanaryReplicas
}
return r.BlueGreenStatus.UpdatedReplicas
}
func (r *RolloutStatus) SetCanaryReplicas(replicas int32) {
if r.CanaryStatus != nil {
r.CanaryStatus.CanaryReplicas = replicas
}
if r.BlueGreenStatus != nil {
r.BlueGreenStatus.UpdatedReplicas = replicas
}
}
func (r *RolloutStatus) GetCanaryReadyReplicas() int32 {
if r.CanaryStatus != nil {
return r.CanaryStatus.CanaryReadyReplicas
}
return r.BlueGreenStatus.UpdatedReadyReplicas
}
func (r *RolloutStatus) SetCanaryReadyReplicas(replicas int32) {
if r.CanaryStatus != nil {
r.CanaryStatus.CanaryReadyReplicas = replicas
}
if r.BlueGreenStatus != nil {
r.BlueGreenStatus.UpdatedReadyReplicas = replicas
}
}
type CanaryStepState string
const (
// the first step, handle some special cases before step upgrade, to prevent traffic loss
CanaryStepStateInit CanaryStepState = "BeforeStepUpgrade"
CanaryStepStateUpgrade CanaryStepState = "StepUpgrade"
CanaryStepStateTrafficRouting CanaryStepState = "StepTrafficRouting"
CanaryStepStateMetricsAnalysis CanaryStepState = "StepMetricsAnalysis"
@ -287,13 +557,39 @@ const (
RolloutPhaseDisabling RolloutPhase = "Disabling"
)
type FinalisingStepType string
//goland:noinspection GoUnusedConst
const (
// Route all traffic to new version (for bluegreen)
FinalisingStepRouteTrafficToNew FinalisingStepType = "FinalisingStepRouteTrafficToNew"
// Restore the GatewayAPI/Ingress/Istio
FinalisingStepRouteTrafficToStable FinalisingStepType = "FinalisingStepRouteTrafficToStable"
// Restore the stable Service, i.e. remove corresponding selector
FinalisingStepRestoreStableService FinalisingStepType = "RestoreStableService"
// Remove the Canary Service
FinalisingStepRemoveCanaryService FinalisingStepType = "RemoveCanaryService"
// Patch Batch Release to scale down (exception: the canary Deployment will be
// scaled down in FinalisingStepTypeDeleteBR step)
// For Both BlueGreenStrategy and CanaryStrategy:
// set workload.pause=false, set workload.partition=0
FinalisingStepResumeWorkload FinalisingStepType = "ResumeWorkload"
// Delete Batch Release
FinalisingStepReleaseWorkloadControl FinalisingStepType = "ReleaseWorkloadControl"
// All needed work done
FinalisingStepTypeEnd FinalisingStepType = "END"
// Only for debugging use
FinalisingStepWaitEndless FinalisingStepType = "WaitEndless"
)
// +genclient
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// +kubebuilder:storageversion
// +kubebuilder:printcolumn:name="STATUS",type="string",JSONPath=".status.phase",description="The rollout status phase"
// +kubebuilder:printcolumn:name="CANARY_STEP",type="integer",JSONPath=".status.canaryStatus.currentStepIndex",description="The rollout canary status step"
// +kubebuilder:printcolumn:name="CANARY_STATE",type="string",JSONPath=".status.canaryStatus.currentStepState",description="The rollout canary status step state"
// +kubebuilder:printcolumn:name="CANARY_STEP",type="integer",JSONPath=".status.currentStepIndex",description="The rollout canary status step"
// +kubebuilder:printcolumn:name="CANARY_STATE",type="string",JSONPath=".status.currentStepState",description="The rollout canary status step state"
// +kubebuilder:printcolumn:name="MESSAGE",type="string",JSONPath=".status.message",description="The rollout canary status message"
// +kubebuilder:printcolumn:name="AGE",type=date,JSONPath=".metadata.creationTimestamp"


@ -21,6 +21,7 @@ type TrafficRoutingRef struct {
// Service holds the name of a service which selects pods with stable version and don't select any pods with canary version.
Service string `json:"service"`
// Optional duration in seconds the traffic provider(e.g. nginx ingress controller) consumes the service, ingress configuration changes gracefully.
// +kubebuilder:default=3
GracePeriodSeconds int32 `json:"gracePeriodSeconds,omitempty"`
// Ingress holds Ingress specific configuration to route traffic, e.g. Nginx, Alb.
Ingress *IngressTrafficRouting `json:"ingress,omitempty"`


@ -156,13 +156,60 @@ func (in *BatchReleaseStatus) DeepCopy() *BatchReleaseStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BlueGreenStatus) DeepCopyInto(out *BlueGreenStatus) {
*out = *in
in.CommonStatus.DeepCopyInto(&out.CommonStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BlueGreenStatus.
func (in *BlueGreenStatus) DeepCopy() *BlueGreenStatus {
if in == nil {
return nil
}
out := new(BlueGreenStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BlueGreenStrategy) DeepCopyInto(out *BlueGreenStrategy) {
*out = *in
if in.Steps != nil {
in, out := &in.Steps, &out.Steps
*out = make([]CanaryStep, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.TrafficRoutings != nil {
in, out := &in.TrafficRoutings, &out.TrafficRoutings
*out = make([]TrafficRoutingRef, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(intstr.IntOrString)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BlueGreenStrategy.
func (in *BlueGreenStrategy) DeepCopy() *BlueGreenStrategy {
if in == nil {
return nil
}
out := new(BlueGreenStrategy)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CanaryStatus) DeepCopyInto(out *CanaryStatus) {
*out = *in
if in.LastUpdateTime != nil {
in, out := &in.LastUpdateTime, &out.LastUpdateTime
*out = (*in).DeepCopy()
}
in.CommonStatus.DeepCopyInto(&out.CommonStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CanaryStatus.
@ -236,6 +283,25 @@ func (in *CanaryStrategy) DeepCopy() *CanaryStrategy {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CommonStatus) DeepCopyInto(out *CommonStatus) {
*out = *in
if in.LastUpdateTime != nil {
in, out := &in.LastUpdateTime, &out.LastUpdateTime
*out = (*in).DeepCopy()
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CommonStatus.
func (in *CommonStatus) DeepCopy() *CommonStatus {
if in == nil {
return nil
}
out := new(CommonStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentExtraStatus) DeepCopyInto(out *DeploymentExtraStatus) {
*out = *in
@ -295,6 +361,11 @@ func (in *GatewayTrafficRouting) DeepCopy() *GatewayTrafficRouting {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HttpRouteMatch) DeepCopyInto(out *HttpRouteMatch) {
*out = *in
if in.Path != nil {
in, out := &in.Path, &out.Path
*out = new(apisv1beta1.HTTPPathMatch)
(*in).DeepCopyInto(*out)
}
if in.Headers != nil {
in, out := &in.Headers, &out.Headers
*out = make([]apisv1beta1.HTTPHeaderMatch, len(*in))
@ -302,6 +373,13 @@ func (in *HttpRouteMatch) DeepCopyInto(out *HttpRouteMatch) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.QueryParams != nil {
in, out := &in.QueryParams, &out.QueryParams
*out = make([]apisv1beta1.HTTPQueryParamMatch, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HttpRouteMatch.
@ -545,6 +623,11 @@ func (in *RolloutStatus) DeepCopyInto(out *RolloutStatus) {
*out = new(CanaryStatus)
(*in).DeepCopyInto(*out)
}
if in.BlueGreenStatus != nil {
in, out := &in.BlueGreenStatus, &out.BlueGreenStatus
*out = new(BlueGreenStatus)
(*in).DeepCopyInto(*out)
}
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]RolloutCondition, len(*in))
@ -572,6 +655,11 @@ func (in *RolloutStrategy) DeepCopyInto(out *RolloutStrategy) {
*out = new(CanaryStrategy)
(*in).DeepCopyInto(*out)
}
if in.BlueGreen != nil {
in, out := &in.BlueGreen, &out.BlueGreen
*out = new(BlueGreenStrategy)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RolloutStrategy.
@ -624,7 +712,7 @@ func (in *TrafficRoutingStrategy) DeepCopyInto(out *TrafficRoutingStrategy) {
}
if in.RequestHeaderModifier != nil {
in, out := &in.RequestHeaderModifier, &out.RequestHeaderModifier
*out = new(apisv1beta1.HTTPRequestHeaderFilter)
*out = new(apisv1beta1.HTTPHeaderFilter)
(*in).DeepCopyInto(*out)
}
if in.Matches != nil {


@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: batchreleases.rollouts.kruise.io
spec:
@ -88,6 +87,14 @@ spec:
- canaryReplicas
type: object
type: array
enableExtraWorkloadForCanary:
description: EnableExtraWorkloadForCanary indicates whether to
create an extra workload for canary. True corresponds to RollingStyle
"Canary". False corresponds to RollingStyle "Partition". Ignored
in BlueGreen-style. This field is about to be deprecated; use RollingStyle
instead. If both are set, the controller only considers
this field when RollingStyle is empty
type: boolean
failureThreshold:
anyOf:
- type: integer
@ -118,9 +125,14 @@ spec:
description: labels
type: object
type: object
rollingStyle:
description: RollingStyle can be "Canary", "Partition" or "BlueGreen"
type: string
rolloutID:
description: RolloutID indicates an id for each rollout progress
type: string
required:
- enableExtraWorkloadForCanary
type: object
targetReference:
description: TargetRef contains the GVK and name of the workload that
@ -346,11 +358,12 @@ spec:
type: object
type: array
enableExtraWorkloadForCanary:
description: 'If true, then it will create new deployment for
canary, such as: workload-demo-canary. When user verifies that
the canary version is ready, we will remove the canary deployment
and release the deployment workload-demo in full. Current only
support k8s native deployment'
description: EnableExtraWorkloadForCanary indicates whether to
create an extra workload for canary. True corresponds to RollingStyle
"Canary". False corresponds to RollingStyle "Partition". Ignored
in BlueGreen-style. This field is about to be deprecated; use RollingStyle
instead. If both are set, the controller only considers
this field when RollingStyle is empty
type: boolean
failureThreshold:
anyOf:
@ -382,6 +395,9 @@ spec:
description: labels
type: object
type: object
rollingStyle:
description: RollingStyle can be "Canary", "Partition" or "BlueGreen"
type: string
rolloutID:
description: RolloutID indicates an id for each rollout progress
type: string
@ -491,6 +507,10 @@ spec:
- type
type: object
type: array
message:
description: Message provides details on why the rollout is in its
current phase
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed
for this BatchRelease. It corresponds to this BatchRelease's generation,
@ -533,9 +553,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: rollouthistories.rollouts.kruise.io
spec:
@ -168,9 +167,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: rollouts.rollouts.kruise.io
spec:
@ -99,6 +98,10 @@ spec:
description: CanaryStrategy defines parameters for a Replica Based
Canary
properties:
disableGenerateCanaryService:
description: canary service will not be generated if DisableGenerateCanaryService
is true
type: boolean
failureThreshold:
anyOf:
- type: integer
@ -177,13 +180,14 @@ spec:
default: Exact
description: "Type specifies how to match
against the value of the header. \n Support:
Core (Exact) \n Support: Custom (RegularExpression)
\n Since RegularExpression HeaderMatchType
has custom conformance, implementations
can support POSIX, PCRE or any other dialects
of regular expressions. Please read the
implementation's documentation to determine
the supported dialect."
Core (Exact) \n Support: Implementation-specific
(RegularExpression) \n Since RegularExpression
HeaderMatchType has implementation-specific
conformance, implementations can support
POSIX, PCRE or any other dialects of regular
expressions. Please read the implementation's
documentation to determine the supported
dialect."
enum:
- Exact
- RegularExpression
@ -223,18 +227,17 @@ spec:
requestHeaderModifier:
description: "Set overwrites the request with the given
header (name, value) before the action. \n Input:
\ GET /foo HTTP/1.1 my-header: foo \n requestHeaderModifier:
\ set: - name: \"my-header\" value: \"bar\"
\n Output: GET /foo HTTP/1.1 my-header: bar"
GET /foo HTTP/1.1 my-header: foo \n requestHeaderModifier:
set: - name: \"my-header\" value: \"bar\" \n Output:
GET /foo HTTP/1.1 my-header: bar"
properties:
add:
description: "Add adds the given header(s) (name,
value) to the request before the action. It appends
to any existing values associated with the header
name. \n Input: GET /foo HTTP/1.1 my-header:
foo \n Config: add: - name: \"my-header\"
\ value: \"bar\" \n Output: GET /foo HTTP/1.1
\ my-header: foo my-header: bar"
name. \n Input: GET /foo HTTP/1.1 my-header: foo
\n Config: add: - name: \"my-header\" value: \"bar,baz\"
\n Output: GET /foo HTTP/1.1 my-header: foo,bar,baz"
items:
description: HTTPHeader represents an HTTP Header
name and value as defined by RFC 7230.
@ -274,10 +277,10 @@ spec:
HTTP request before the action. The value of Remove
is a list of HTTP header names. Note that the
header names are case-insensitive (see https://datatracker.ietf.org/doc/html/rfc2616#section-4.2).
\n Input: GET /foo HTTP/1.1 my-header1: foo
\ my-header2: bar my-header3: baz \n Config:
\ remove: [\"my-header1\", \"my-header3\"] \n
Output: GET /foo HTTP/1.1 my-header2: bar"
\n Input: GET /foo HTTP/1.1 my-header1: foo my-header2:
bar my-header3: baz \n Config: remove: [\"my-header1\",
\"my-header3\"] \n Output: GET /foo HTTP/1.1 my-header2:
bar"
items:
type: string
maxItems: 16
@ -285,10 +288,9 @@ spec:
set:
description: "Set overwrites the request with the
given header (name, value) before the action.
\n Input: GET /foo HTTP/1.1 my-header: foo
\n Config: set: - name: \"my-header\" value:
\"bar\" \n Output: GET /foo HTTP/1.1 my-header:
bar"
\n Input: GET /foo HTTP/1.1 my-header: foo \n
Config: set: - name: \"my-header\" value: \"bar\"
\n Output: GET /foo HTTP/1.1 my-header: bar"
items:
description: HTTPHeader represents an HTTP Header
name and value as defined by RFC 7230.
@ -369,6 +371,7 @@ spec:
type: string
type: object
gracePeriodSeconds:
default: 3
description: Optional duration in seconds for the traffic
provider (e.g. nginx ingress controller) to gracefully consume
the service and ingress configuration changes.
@ -431,18 +434,32 @@ spec:
so it cannot be used as service selector label
type: string
currentStepIndex:
description: CurrentStepIndex defines the current step the
rollout is on. If the current step index is null, the controller
will execute the rollout.
format: int32
type: integer
currentStepState:
type: string
finalisingStep:
type: string
lastUpdateTime:
format: date-time
type: string
message:
type: string
nextStepIndex:
description: NextStepIndex defines the next step of the rollout.
In the normal case, NextStepIndex is equal to CurrentStepIndex
+ 1. If the current step is the last step, NextStepIndex is equal
to -1. Before the release, NextStepIndex is also equal to -1.
0 is not used and won't appear in any case. It is allowed to
patch NextStepIndex by design, e.g. if CurrentStepIndex is 2, a
user can patch NextStepIndex to 3 (if it exists) to achieve a batch
jump, or patch NextStepIndex to 1 to implement a re-execution
of step 1. Patching it with a non-positive value is meaningless
and will be corrected in the next reconciliation
format: int32
type: integer
observedRolloutID:
description: ObservedRolloutID will record the newest spec.RolloutID
if status.canaryRevision equals to workload.updateRevision
@ -466,6 +483,8 @@ spec:
- canaryReplicas
- canaryRevision
- currentStepState
- finalisingStep
- nextStepIndex
- podTemplateHash
type: object
conditions:
@ -513,8 +532,7 @@ spec:
format: int64
type: integer
phase:
description: BlueGreenStatus *BlueGreenStatus `json:"blueGreenStatus,omitempty"`
Phase is the rollout phase.
description: Phase is the rollout phase.
type: string
type: object
type: object
@ -528,11 +546,11 @@ spec:
name: STATUS
type: string
- description: The rollout canary status step
jsonPath: .status.canaryStatus.currentStepIndex
jsonPath: .status.currentStepIndex
name: CANARY_STEP
type: integer
- description: The rollout canary status step state
jsonPath: .status.canaryStatus.currentStepState
jsonPath: .status.currentStepState
name: CANARY_STATE
type: string
- description: The rollout canary status message
@ -570,10 +588,414 @@ spec:
strategy:
description: rollout strategy
properties:
blueGreen:
description: BlueGreenStrategy defines parameters for Blue Green
Release
properties:
disableGenerateCanaryService:
description: canary service will not be generated if DisableGenerateCanaryService
is true
type: boolean
failureThreshold:
anyOf:
- type: integer
- type: string
description: FailureThreshold indicates how many failed pods
can be tolerated in all upgraded pods. Only when FailureThreshold
are satisfied, Rollout can enter ready state. If FailureThreshold
is nil, Rollout will use the MaxUnavailable of workload
as its FailureThreshold. Defaults to nil.
x-kubernetes-int-or-string: true
steps:
description: Steps define the order of phases to execute the
release in batches (20%, 40%, 60%, 80%, 100%)
items:
description: CanaryStep defines a step of a canary workload.
properties:
matches:
description: "Matches define conditions used for matching
incoming HTTP requests to the canary service. Each
match is independent, i.e. this rule will be matched
as long as **any** one of the matches is satisfied.
\n It cannot support Traffic (weight-based routing)
and Matches simultaneously, if both are configured.
In such cases, Matches takes precedence."
items:
properties:
headers:
description: Headers specifies HTTP request header
matchers. Multiple match values are ANDed together,
meaning, a request must match all the specified
headers to select the route.
items:
description: HTTPHeaderMatch describes how to
select a HTTP route by matching HTTP request
headers.
properties:
name:
description: "Name is the name of the HTTP
Header to be matched. Name matching MUST
be case insensitive. (See https://tools.ietf.org/html/rfc7230#section-3.2).
\n If multiple entries specify equivalent
header names, only the first entry with
an equivalent name MUST be considered
for a match. Subsequent entries with an
equivalent header name MUST be ignored.
Due to the case-insensitivity of header
names, \"foo\" and \"Foo\" are considered
equivalent. \n When a header is repeated
in an HTTP request, it is implementation-specific
behavior as to how this is represented.
Generally, proxies should follow the guidance
from the RFC: https://www.rfc-editor.org/rfc/rfc7230.html#section-3.2.2
regarding processing a repeated header,
with special handling for \"Set-Cookie\"."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
type:
default: Exact
description: "Type specifies how to match
against the value of the header. \n Support:
Core (Exact) \n Support: Implementation-specific
(RegularExpression) \n Since RegularExpression
HeaderMatchType has implementation-specific
conformance, implementations can support
POSIX, PCRE or any other dialects of regular
expressions. Please read the implementation's
documentation to determine the supported
dialect."
enum:
- Exact
- RegularExpression
type: string
value:
description: Value is the value of HTTP
Header to be matched.
maxLength: 4096
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
path:
description: 'Path specifies a HTTP request path
matcher. Supported list: - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
- GatewayAPI: If path is defined, the whole
HttpRouteMatch will be used as a standalone
matcher'
properties:
type:
default: PathPrefix
description: "Type specifies how to match
against the path Value. \n Support: Core
(Exact, PathPrefix) \n Support: Implementation-specific
(RegularExpression)"
enum:
- Exact
- PathPrefix
- RegularExpression
type: string
value:
default: /
description: Value of the HTTP path to match
against.
maxLength: 1024
type: string
type: object
queryParams:
description: 'QueryParams specifies HTTP query
parameter matchers. Multiple match values are
ANDed together, meaning, a request must match
all the specified query parameters to select
the route. Supported list: - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
- MSE Ingress: https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/annotations-supported-by-mse-ingress-gateways-1
Header/Cookie > QueryParams - Gateway API'
items:
description: HTTPQueryParamMatch describes how
to select a HTTP route by matching HTTP query
parameters.
properties:
name:
description: "Name is the name of the HTTP
query param to be matched. This must be
an exact string match. (See https://tools.ietf.org/html/rfc7230#section-2.7.3).
\n If multiple entries specify equivalent
query param names, only the first entry
with an equivalent name MUST be considered
for a match. Subsequent entries with an
equivalent query param name MUST be ignored.
\n If a query param is repeated in an
HTTP request, the behavior is purposely
left undefined, since different data planes
have different capabilities. However,
it is *recommended* that implementations
should match against the first value of
the param if the data plane supports it,
as this behavior is expected in other
load balancing contexts outside of the
Gateway API. \n Users SHOULD NOT route
traffic based on repeated query params
to guard themselves against potential
differences in the implementations."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
type:
default: Exact
description: "Type specifies how to match
against the value of the query parameter.
\n Support: Extended (Exact) \n Support:
Implementation-specific (RegularExpression)
\n Since RegularExpression QueryParamMatchType
has Implementation-specific conformance,
implementations can support POSIX, PCRE
or any other dialects of regular expressions.
Please read the implementation's documentation
to determine the supported dialect."
enum:
- Exact
- RegularExpression
type: string
value:
description: Value is the value of HTTP
query param to be matched.
maxLength: 1024
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
type: array
pause:
description: Pause defines a pause stage for a rollout,
manual or auto
properties:
duration:
description: Duration the amount of time to wait
before moving to the next step.
format: int32
type: integer
type: object
replicas:
anyOf:
- type: integer
- type: string
description: 'Replicas is the number of expected canary
pods in this batch; it can be an absolute number (ex:
5) or a percentage of total pods.'
x-kubernetes-int-or-string: true
requestHeaderModifier:
description: "Set overwrites the request with the given
header (name, value) before the action. \n Input:
GET /foo HTTP/1.1 my-header: foo \n requestHeaderModifier:
set: - name: \"my-header\" value: \"bar\" \n Output:
GET /foo HTTP/1.1 my-header: bar"
properties:
add:
description: "Add adds the given header(s) (name,
value) to the request before the action. It appends
to any existing values associated with the header
name. \n Input: GET /foo HTTP/1.1 my-header: foo
\n Config: add: - name: \"my-header\" value: \"bar,baz\"
\n Output: GET /foo HTTP/1.1 my-header: foo,bar,baz"
items:
description: HTTPHeader represents an HTTP Header
name and value as defined by RFC 7230.
properties:
name:
description: "Name is the name of the HTTP
Header to be matched. Name matching MUST
be case insensitive. (See https://tools.ietf.org/html/rfc7230#section-3.2).
\n If multiple entries specify equivalent
header names, the first entry with an equivalent
name MUST be considered for a match. Subsequent
entries with an equivalent header name MUST
be ignored. Due to the case-insensitivity
of header names, \"foo\" and \"Foo\" are
considered equivalent."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
value:
description: Value is the value of HTTP Header
to be matched.
maxLength: 4096
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
remove:
description: "Remove the given header(s) from the
HTTP request before the action. The value of Remove
is a list of HTTP header names. Note that the
header names are case-insensitive (see https://datatracker.ietf.org/doc/html/rfc2616#section-4.2).
\n Input: GET /foo HTTP/1.1 my-header1: foo my-header2:
bar my-header3: baz \n Config: remove: [\"my-header1\",
\"my-header3\"] \n Output: GET /foo HTTP/1.1 my-header2:
bar"
items:
type: string
maxItems: 16
type: array
set:
description: "Set overwrites the request with the
given header (name, value) before the action.
\n Input: GET /foo HTTP/1.1 my-header: foo \n
Config: set: - name: \"my-header\" value: \"bar\"
\n Output: GET /foo HTTP/1.1 my-header: bar"
items:
description: HTTPHeader represents an HTTP Header
name and value as defined by RFC 7230.
properties:
name:
description: "Name is the name of the HTTP
Header to be matched. Name matching MUST
be case insensitive. (See https://tools.ietf.org/html/rfc7230#section-3.2).
\n If multiple entries specify equivalent
header names, the first entry with an equivalent
name MUST be considered for a match. Subsequent
entries with an equivalent header name MUST
be ignored. Due to the case-insensitivity
of header names, \"foo\" and \"Foo\" are
considered equivalent."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
value:
description: Value is the value of HTTP Header
to be matched.
maxLength: 4096
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
traffic:
description: Traffic indicates what percentage of
traffic the canary pods should receive. The value is
a string percentage, e.g. 5%.
type: string
type: object
type: array
trafficRoutingRef:
description: TrafficRoutingRef is TrafficRouting's Name
type: string
trafficRoutings:
description: TrafficRoutings supports ingress, gateway api
and custom network resources (e.g. istio, apisix) to enable
more fine-grained traffic routing; currently only one TrafficRouting
is supported
items:
description: TrafficRoutingRef hosts all the different configuration
for supported service meshes to enable more fine-grained
traffic routing
properties:
customNetworkRefs:
description: CustomNetworkRefs hold a list of custom
providers to route traffic
items:
description: ObjectRef holds a references to the Kubernetes
object
properties:
apiVersion:
description: API Version of the referent
type: string
kind:
description: Kind of the referent
type: string
name:
description: Name of the referent
type: string
required:
- apiVersion
- kind
- name
type: object
type: array
gateway:
description: Gateway holds Gateway-specific configuration
to route traffic. Gateway configuration only supports Gateway
API >= v0.4.0 (v1alpha2).
properties:
httpRouteName:
description: HTTPRouteName refers to the name of
an `HTTPRoute` resource in the same namespace
as the `Rollout`
type: string
type: object
gracePeriodSeconds:
default: 3
description: Optional duration in seconds for the traffic
provider (e.g. nginx ingress controller) to gracefully consume
the service and ingress configuration changes.
format: int32
type: integer
ingress:
description: Ingress holds Ingress specific configuration
to route traffic, e.g. Nginx, Alb.
properties:
classType:
description: ClassType refers to the type of `Ingress`;
currently nginx and aliyun-alb are supported. The default
is nginx.
type: string
name:
description: Name refers to the name of an `Ingress`
resource in the same namespace as the `Rollout`
type: string
required:
- name
type: object
service:
description: Service holds the name of a service which
selects pods with the stable version and does not select
any pods with the canary version.
type: string
required:
- service
type: object
type: array
type: object
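Read as a whole, a spec.strategy.blueGreen block conforming to the schema above could look like the following sketch (the step sizes, service and ingress names are illustrative assumptions, not values from this diff):

strategy:
  blueGreen:
    steps:
    - replicas: 50%     # bring up half of the updated pods
      traffic: 0%       # keep all traffic on the stable revision
      pause: {}         # wait for manual confirmation
    - replicas: 100%
      traffic: 50%
      pause:
        duration: 60    # auto-continue after 60 seconds
    - replicas: 100%
      traffic: 100%
    trafficRoutings:
    - service: service-demo
      ingress:
        classType: nginx
        name: ingress-demo
      gracePeriodSeconds: 3   # now defaulted to 3 by the CRD

Weight-based traffic and matches cannot be used in the same step; if both are configured, matches takes precedence, as noted in the field description.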
canary:
description: CanaryStrategy defines parameters for a Replica Based
Canary
properties:
disableGenerateCanaryService:
description: canary service will not be generated if DisableGenerateCanaryService
is true
type: boolean
enableExtraWorkloadForCanary:
description: 'If true, then it will create new deployment
for canary, such as: workload-demo-canary. When user verifies
@ -614,13 +1036,13 @@ spec:
description: CanaryStep defines a step of a canary workload.
properties:
matches:
description: Matches define conditions used for matching
the incoming HTTP requests to canary service. Each
description: "Matches define conditions used for matching
incoming HTTP requests to the canary service. Each
match is independent, i.e. this rule will be matched
if **any** one of the matches is satisfied. If Gateway
API, current only support one match. And cannot support
both weight and matches, if both are configured, then
matches takes precedence.
as long as **any** one of the matches is satisfied.
\n It cannot support Traffic (weight-based routing)
and Matches simultaneously, if both are configured.
In such cases, Matches takes precedence."
items:
properties:
headers:
@ -659,13 +1081,14 @@ spec:
default: Exact
description: "Type specifies how to match
against the value of the header. \n Support:
Core (Exact) \n Support: Custom (RegularExpression)
\n Since RegularExpression HeaderMatchType
has custom conformance, implementations
can support POSIX, PCRE or any other dialects
of regular expressions. Please read the
implementation's documentation to determine
the supported dialect."
Core (Exact) \n Support: Implementation-specific
(RegularExpression) \n Since RegularExpression
HeaderMatchType has implementation-specific
conformance, implementations can support
POSIX, PCRE or any other dialects of regular
expressions. Please read the implementation's
documentation to determine the supported
dialect."
enum:
- Exact
- RegularExpression
@ -682,6 +1105,104 @@ spec:
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
path:
description: 'Path specifies a HTTP request path
matcher. Supported list: - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
- GatewayAPI: If path is defined, the whole
HttpRouteMatch will be used as a standalone
matcher'
properties:
type:
default: PathPrefix
description: "Type specifies how to match
against the path Value. \n Support: Core
(Exact, PathPrefix) \n Support: Implementation-specific
(RegularExpression)"
enum:
- Exact
- PathPrefix
- RegularExpression
type: string
value:
default: /
description: Value of the HTTP path to match
against.
maxLength: 1024
type: string
type: object
queryParams:
description: 'QueryParams specifies HTTP query
parameter matchers. Multiple match values are
ANDed together, meaning, a request must match
all the specified query parameters to select
the route. Supported list: - Istio: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
- MSE Ingress: https://help.aliyun.com/zh/ack/ack-managed-and-ack-dedicated/user-guide/annotations-supported-by-mse-ingress-gateways-1
Header/Cookie > QueryParams - Gateway API'
items:
description: HTTPQueryParamMatch describes how
to select a HTTP route by matching HTTP query
parameters.
properties:
name:
description: "Name is the name of the HTTP
query param to be matched. This must be
an exact string match. (See https://tools.ietf.org/html/rfc7230#section-2.7.3).
\n If multiple entries specify equivalent
query param names, only the first entry
with an equivalent name MUST be considered
for a match. Subsequent entries with an
equivalent query param name MUST be ignored.
\n If a query param is repeated in an
HTTP request, the behavior is purposely
left undefined, since different data planes
have different capabilities. However,
it is *recommended* that implementations
should match against the first value of
the param if the data plane supports it,
as this behavior is expected in other
load balancing contexts outside of the
Gateway API. \n Users SHOULD NOT route
traffic based on repeated query params
to guard themselves against potential
differences in the implementations."
maxLength: 256
minLength: 1
pattern: ^[A-Za-z0-9!#$%&'*+\-.^_\x60|~]+$
type: string
type:
default: Exact
description: "Type specifies how to match
against the value of the query parameter.
\n Support: Extended (Exact) \n Support:
Implementation-specific (RegularExpression)
\n Since RegularExpression QueryParamMatchType
has Implementation-specific conformance,
implementations can support POSIX, PCRE
or any other dialects of regular expressions.
Please read the implementation's documentation
to determine the supported dialect."
enum:
- Exact
- RegularExpression
type: string
value:
description: Value is the value of HTTP
query param to be matched.
maxLength: 1024
minLength: 1
type: string
required:
- name
- value
type: object
maxItems: 16
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
type: array
pause:
@ -705,18 +1226,17 @@ spec:
requestHeaderModifier:
description: "Set overwrites the request with the given
header (name, value) before the action. \n Input:
\ GET /foo HTTP/1.1 my-header: foo \n requestHeaderModifier:
\ set: - name: \"my-header\" value: \"bar\"
\n Output: GET /foo HTTP/1.1 my-header: bar"
GET /foo HTTP/1.1 my-header: foo \n requestHeaderModifier:
set: - name: \"my-header\" value: \"bar\" \n Output:
GET /foo HTTP/1.1 my-header: bar"
properties:
add:
description: "Add adds the given header(s) (name,
value) to the request before the action. It appends
to any existing values associated with the header
name. \n Input: GET /foo HTTP/1.1 my-header:
foo \n Config: add: - name: \"my-header\"
\ value: \"bar\" \n Output: GET /foo HTTP/1.1
\ my-header: foo my-header: bar"
name. \n Input: GET /foo HTTP/1.1 my-header: foo
\n Config: add: - name: \"my-header\" value: \"bar,baz\"
\n Output: GET /foo HTTP/1.1 my-header: foo,bar,baz"
items:
description: HTTPHeader represents an HTTP Header
name and value as defined by RFC 7230.
@ -756,10 +1276,10 @@ spec:
HTTP request before the action. The value of Remove
is a list of HTTP header names. Note that the
header names are case-insensitive (see https://datatracker.ietf.org/doc/html/rfc2616#section-4.2).
\n Input: GET /foo HTTP/1.1 my-header1: foo
\ my-header2: bar my-header3: baz \n Config:
\ remove: [\"my-header1\", \"my-header3\"] \n
Output: GET /foo HTTP/1.1 my-header2: bar"
\n Input: GET /foo HTTP/1.1 my-header1: foo my-header2:
bar my-header3: baz \n Config: remove: [\"my-header1\",
\"my-header3\"] \n Output: GET /foo HTTP/1.1 my-header2:
bar"
items:
type: string
maxItems: 16
@ -767,10 +1287,9 @@ spec:
set:
description: "Set overwrites the request with the
given header (name, value) before the action.
\n Input: GET /foo HTTP/1.1 my-header: foo
\n Config: set: - name: \"my-header\" value:
\"bar\" \n Output: GET /foo HTTP/1.1 my-header:
bar"
\n Input: GET /foo HTTP/1.1 my-header: foo \n
Config: set: - name: \"my-header\" value: \"bar\"
\n Output: GET /foo HTTP/1.1 my-header: bar"
items:
description: HTTPHeader represents an HTTP Header
name and value as defined by RFC 7230.
@ -808,7 +1327,8 @@ spec:
type: object
traffic:
description: Traffic indicate how many percentage of
traffic the canary pods should receive
traffic the canary pods should receive Value is of
string type and is a percentage, e.g. 5%.
type: string
type: object
type: array
@ -816,9 +1336,10 @@ spec:
description: TrafficRoutingRef is TrafficRouting's Name
type: string
trafficRoutings:
description: TrafficRoutings hosts all the supported service
meshes supported to enable more fine-grained traffic routing
and current only support one TrafficRouting
description: TrafficRoutings supports ingress, gateway api
and custom network resources (e.g. istio, apisix) to enable
more fine-grained traffic routing; currently only one TrafficRouting
is supported
items:
description: TrafficRoutingRef hosts all the different configuration
for supported service meshes to enable more fine-grained
@ -858,6 +1379,7 @@ spec:
type: string
type: object
gracePeriodSeconds:
default: 3
description: Optional duration in seconds for the traffic
provider (e.g. nginx ingress controller) to gracefully consume
the service and ingress configuration changes.
@ -921,6 +1443,79 @@ spec:
status:
description: RolloutStatus defines the observed state of Rollout
properties:
blueGreenStatus:
description: BlueGreen describes the state of the blueGreen rollout
properties:
currentStepIndex:
description: CurrentStepIndex defines the current step the
rollout is on.
format: int32
type: integer
currentStepState:
type: string
finalisingStep:
description: FinalisingStep is the current finalising step
type: string
lastUpdateTime:
format: date-time
type: string
message:
type: string
nextStepIndex:
description: NextStepIndex defines the next step of the rollout.
In the normal case, NextStepIndex is equal to CurrentStepIndex
+ 1. If the current step is the last step, NextStepIndex is equal
to -1. Before the release, NextStepIndex is also equal to -1.
0 is not used and won't appear in any case. It is allowed to
patch NextStepIndex by design, e.g. if CurrentStepIndex is 2, a
user can patch NextStepIndex to 3 (if it exists) to achieve a batch
jump, or patch NextStepIndex to 1 to implement a re-execution
of step 1. Patching it with a non-positive value is meaningless
and will be corrected in the next reconciliation
format: int32
type: integer
observedRolloutID:
description: ObservedRolloutID will record the newest spec.RolloutID
if status.canaryRevision equals to workload.updateRevision
type: string
observedWorkloadGeneration:
description: observedWorkloadGeneration is the most recent generation
observed for this Rollout ref workload generation.
format: int64
type: integer
podTemplateHash:
description: pod template hash is used as service selector label
type: string
rolloutHash:
description: RolloutHash from rollout.spec object
type: string
stableRevision:
description: StableRevision indicates the revision of stable pods
type: string
updatedReadyReplicas:
description: UpdatedReadyReplicas the numbers of updated ready
pods
format: int32
type: integer
updatedReplicas:
description: UpdatedReplicas the numbers of updated pods
format: int32
type: integer
updatedRevision:
description: UpdatedRevision is calculated by the rollout based
on podTemplateHash and used by the internal logic flow. It may
differ from the rs podTemplateHash in different k8s versions,
so it cannot be used as a service selector label
type: string
required:
- currentStepState
- finalisingStep
- nextStepIndex
- podTemplateHash
- updatedReadyReplicas
- updatedReplicas
- updatedRevision
type: object
canaryStatus:
description: Canary describes the state of the canary rollout
properties:
@ -941,17 +1536,32 @@ spec:
type: string
currentStepIndex:
description: CurrentStepIndex defines the current step of the
rollout is on. If the current step index is null, the controller
will execute the rollout.
rollout is on.
format: int32
type: integer
currentStepState:
type: string
finalisingStep:
description: FinalisingStep is the current finalising step
type: string
lastUpdateTime:
format: date-time
type: string
message:
type: string
nextStepIndex:
description: NextStepIndex defines the next step of the rollout.
In the normal case, NextStepIndex is equal to CurrentStepIndex
+ 1. If the current step is the last step, NextStepIndex is equal
to -1. Before the release, NextStepIndex is also equal to -1.
0 is not used and won't appear in any case. It is allowed to
patch NextStepIndex by design, e.g. if CurrentStepIndex is 2, a
user can patch NextStepIndex to 3 (if it exists) to achieve a batch
jump, or patch NextStepIndex to 1 to implement a re-execution
of step 1. Patching it with a non-positive value is meaningless
and will be corrected in the next reconciliation
format: int32
type: integer
observedRolloutID:
description: ObservedRolloutID will record the newest spec.RolloutID
if status.canaryRevision equals to workload.updateRevision
@ -975,6 +1585,8 @@ spec:
- canaryReplicas
- canaryRevision
- currentStepState
- finalisingStep
- nextStepIndex
- podTemplateHash
type: object
conditions:
@ -1012,6 +1624,14 @@ spec:
- type
type: object
type: array
currentStepIndex:
description: These two values will be synchronized with the same
fields in CanaryStatus or BlueGreenStatus; they are mainly used
to provide info for the kubectl get command
format: int32
type: integer
currentStepState:
type: string
message:
description: Message provides details on why the rollout is in its
current phase
@ -1025,15 +1645,12 @@ spec:
description: BlueGreenStatus *BlueGreenStatus `json:"blueGreenStatus,omitempty"`
Phase is the rollout phase.
type: string
required:
- currentStepIndex
- currentStepState
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
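The nextStepIndex fields added to canaryStatus and blueGreenStatus are patchable by design, which makes batch jumps scriptable. A hedged sketch of a status merge patch that jumps from step 2 to step 3 (patching the value to 1 instead would re-run step 1; the Rollout name and step numbers are illustrative):

# jump-patch.yaml -- apply with something like:
#   kubectl patch rollout rollout-demo --type=merge --subresource=status --patch-file jump-patch.yaml
# (kubectl's --subresource flag requires v1.24 or newer)
status:
  canaryStatus:
    nextStepIndex: 3   # non-positive values are corrected on the next reconciliation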


@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.7.0
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
name: trafficroutings.rollouts.kruise.io
spec:
@ -82,6 +81,7 @@ spec:
type: string
type: object
gracePeriodSeconds:
default: 3
description: Optional duration in seconds for the traffic provider
(e.g. nginx ingress controller) to gracefully consume the service
and ingress configuration changes.
@ -155,12 +155,12 @@ spec:
default: Exact
description: "Type specifies how to match against
the value of the header. \n Support: Core (Exact)
\n Support: Custom (RegularExpression) \n Since
RegularExpression HeaderMatchType has custom conformance,
implementations can support POSIX, PCRE or any other
dialects of regular expressions. Please read the
implementation's documentation to determine the
supported dialect."
\n Support: Implementation-specific (RegularExpression)
\n Since RegularExpression HeaderMatchType has implementation-specific
conformance, implementations can support POSIX,
PCRE or any other dialects of regular expressions.
Please read the implementation's documentation to
determine the supported dialect."
enum:
- Exact
- RegularExpression
@ -181,18 +181,17 @@ spec:
type: array
requestHeaderModifier:
description: "Set overwrites the request with the given header
(name, value) before the action. \n Input: GET /foo HTTP/1.1
\ my-header: foo \n requestHeaderModifier: set: - name:
\"my-header\" value: \"bar\" \n Output: GET /foo HTTP/1.1
\ my-header: bar"
(name, value) before the action. \n Input: GET /foo HTTP/1.1
my-header: foo \n requestHeaderModifier: set: - name: \"my-header\"
value: \"bar\" \n Output: GET /foo HTTP/1.1 my-header: bar"
properties:
add:
description: "Add adds the given header(s) (name, value) to
the request before the action. It appends to any existing
values associated with the header name. \n Input: GET
/foo HTTP/1.1 my-header: foo \n Config: add: - name:
\"my-header\" value: \"bar\" \n Output: GET /foo HTTP/1.1
\ my-header: foo my-header: bar"
values associated with the header name. \n Input: GET /foo
HTTP/1.1 my-header: foo \n Config: add: - name: \"my-header\"
value: \"bar,baz\" \n Output: GET /foo HTTP/1.1 my-header:
foo,bar,baz"
items:
description: HTTPHeader represents an HTTP Header name and
value as defined by RFC 7230.
@ -231,9 +230,9 @@ spec:
before the action. The value of Remove is a list of HTTP
header names. Note that the header names are case-insensitive
(see https://datatracker.ietf.org/doc/html/rfc2616#section-4.2).
\n Input: GET /foo HTTP/1.1 my-header1: foo my-header2:
bar my-header3: baz \n Config: remove: [\"my-header1\",
\"my-header3\"] \n Output: GET /foo HTTP/1.1 my-header2:
\n Input: GET /foo HTTP/1.1 my-header1: foo my-header2:
bar my-header3: baz \n Config: remove: [\"my-header1\",
\"my-header3\"] \n Output: GET /foo HTTP/1.1 my-header2:
bar"
items:
type: string
@ -241,10 +240,9 @@ spec:
type: array
set:
description: "Set overwrites the request with the given header
(name, value) before the action. \n Input: GET /foo HTTP/1.1
\ my-header: foo \n Config: set: - name: \"my-header\"
\ value: \"bar\" \n Output: GET /foo HTTP/1.1 my-header:
bar"
(name, value) before the action. \n Input: GET /foo HTTP/1.1
my-header: foo \n Config: set: - name: \"my-header\" value:
\"bar\" \n Output: GET /foo HTTP/1.1 my-header: bar"
items:
description: HTTPHeader represents an HTTP Header name and
value as defined by RFC 7230.
@ -309,9 +307,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
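For the standalone TrafficRouting resource above, a header-match configuration might look roughly like this, assuming the usual objectRef/strategy layout of this CRD (the service, ingress, and header names are illustrative assumptions):

# Hedged sketch: treat requests whose user-agent header is exactly "pc"
# as matched, and stamp a gray header on them before forwarding.
apiVersion: rollouts.kruise.io/v1alpha1
kind: TrafficRouting
metadata:
  name: trafficrouting-demo
spec:
  objectRef:
  - service: service-demo
    ingress:
      classType: nginx
      name: ingress-demo
    gracePeriodSeconds: 3   # now defaulted to 3 by the CRD
  strategy:
    matches:
    - headers:
      - type: Exact
        name: user-agent
        value: pc
    requestHeaderModifier:
      set:
      - name: x-gray
        value: "true"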


@ -1,4 +1,3 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@ -162,6 +161,16 @@ rules:
- get
- patch
- update
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
@ -196,18 +205,6 @@ rules:
- get
- patch
- update
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
@ -376,3 +373,23 @@ rules:
- get
- patch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: manager-role
namespace: system
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch


@ -10,3 +10,17 @@ subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: manager-rolebinding
namespace: system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: manager-role
subjects:
- kind: ServiceAccount
name: controller-manager
namespace: system


@ -1,4 +1,3 @@
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
@ -66,48 +65,6 @@ webhooks:
resources:
- deployments
sideEffects: None
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-apps-v1-statefulset
failurePolicy: Fail
name: mstatefulset.kb.io
rules:
- apiGroups:
- apps
apiVersions:
- v1
operations:
- UPDATE
resources:
- statefulsets
sideEffects: None
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-apps-kruise-io-statefulset
failurePolicy: Fail
name: madvancedstatefulset.kb.io
rules:
- apiGroups:
- apps.kruise.io
apiVersions:
- v1alpha1
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- statefulsets
sideEffects: None
- admissionReviewVersions:
- v1
- v1beta1
@ -129,7 +86,6 @@ webhooks:
resources:
- '*'
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration


@ -18,16 +18,16 @@ webhooks:
matchExpressions:
- key: rollouts.kruise.io/workload-type
operator: Exists
- name: mstatefulset.kb.io
objectSelector:
matchExpressions:
- key: rollouts.kruise.io/workload-type
operator: Exists
- name: madvancedstatefulset.kb.io
objectSelector:
matchExpressions:
- key: rollouts.kruise.io/workload-type
operator: Exists
# - name: mstatefulset.kb.io
# objectSelector:
# matchExpressions:
# - key: rollouts.kruise.io/workload-type
# operator: Exists
# - name: madvancedstatefulset.kb.io
# objectSelector:
# matchExpressions:
# - key: rollouts.kruise.io/workload-type
# operator: Exists
- name: mdeployment.kb.io
objectSelector:
matchExpressions:

go.mod

@ -1,91 +1,81 @@
module github.com/openkruise/rollouts
go 1.18
go 1.19
require (
github.com/davecgh/go-spew v1.1.1
github.com/evanphx/json-patch v4.12.0+incompatible
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.18.1
github.com/onsi/gomega v1.24.1
github.com/openkruise/kruise-api v1.3.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.0
github.com/stretchr/testify v1.8.2
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8
golang.org/x/time v0.3.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.24.1
k8s.io/apiextensions-apiserver v0.24.1
k8s.io/apimachinery v0.24.1
k8s.io/apiserver v0.24.1
k8s.io/client-go v0.24.1
k8s.io/component-base v0.24.1
k8s.io/klog/v2 v2.60.1
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9
k8s.io/api v0.26.3
k8s.io/apiextensions-apiserver v0.26.3
k8s.io/apimachinery v0.26.3
k8s.io/apiserver v0.26.3
k8s.io/client-go v0.26.3
k8s.io/component-base v0.26.3
k8s.io/klog/v2 v2.100.1
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448
layeh.com/gopher-json v0.0.0-20201124131017-552bb3c4c3bf
sigs.k8s.io/controller-runtime v0.12.1
sigs.k8s.io/gateway-api v0.5.1
sigs.k8s.io/controller-runtime v0.14.6
sigs.k8s.io/gateway-api v0.7.1
sigs.k8s.io/yaml v1.3.0
)
require (
cloud.google.com/go v0.81.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.18 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.13 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/emicklei/go-restful v2.9.5+incompatible // indirect
github.com/form3tech-oss/jwt-go v3.2.3+incompatible // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/go-logr/logr v1.2.0 // indirect
github.com/go-logr/zapr v1.2.0 // indirect
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/zapr v1.2.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.12.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/prometheus/client_golang v1.14.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.19.1 // indirect
golang.org/x/crypto v0.0.0-20220214200702-86341886e292 // indirect
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/sys v0.0.0-20220209214540-3681064d5158 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
go.uber.org/zap v1.24.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/term v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.27.1 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
)

go.sum

@ -13,12 +13,6 @@ cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKV
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0 h1:at8Tk2zUz63cLPR0JPWm5vp77pEZmzxEQBEfRKn1VV8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@ -27,7 +21,6 @@ cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4g
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
@ -38,57 +31,22 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.18 h1:90Y4srNYrwOtAgVo3ndrQkTYn6kf1Eg/AjTFJ8Is2aM=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
github.com/Azure/go-autorest/autorest/adal v0.9.13 h1:Mp5hbtOePIzM8pJVRa3YLrWWmZtoxRXqUEzCfJt3+/Q=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@ -97,92 +55,57 @@ github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5P
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible h1:spTtZBk5DYEvbxMVutUuTyh1Ao2r4iyvLdACqsl/Ljk=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible h1:7ZaBxOI7TMoYBfyA3cQHErNNyAWIKUMIwqxEtgHOs5c=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJCLunww=
github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v1.2.0 h1:QK40JKJyMdUDz+h+xvCsru/bJhvG0UxvePV0ufL/AcE=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/zapr v1.2.0 h1:n4JnPI1T3Qq1SFEi/F8rwLrZERp2bso19PJZDB9dayk=
github.com/go-logr/zapr v1.2.0/go.mod h1:Qa4Bsj2Vb+FAVeAKsLD8RLQ+YRJB8YDmOAKxaBQf7Ro=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/zapr v1.2.3 h1:a9vnzlIBPQBBkeaR9IuMUfmVOrQlkoC4YfPoFkX3T7A=
github.com/go-logr/zapr v1.2.3/go.mod h1:eIauM6P8qSvTw5o2ez6UEAfGjQKrxQTl5EoK+Qa2oG4=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5 h1:1WJP/wi4OjB4iV8KVbH73rQaoialJrqv8gitZLxGLtM=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14 h1:gm3vOOXfiuw5i9p5N9xJvfjvuofpyvLA9Wr6QfK5Fng=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -195,7 +118,6 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -211,14 +133,10 @@ github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QD
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/cel-go v0.10.1/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
github.com/google/cel-spec v0.6.0/go.mod h1:Nwjgxy5CbjlPrtCWjeDjUyKMl8w41YBYGjsyDdqk0xA=
github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54=
github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@@ -228,18 +146,15 @@ github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
@@ -247,55 +162,18 @@ github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
@@ -306,16 +184,12 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
@@ -323,28 +197,13 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.6 h1:8yTIVnZgCoiM1TgqoeTl+LfU5Jg6/xL3QhGQnimLYnA=
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/matttproud/golang_protobuf_extensions v1.0.2 h1:hAHbPm5IJGijwng3PWk09JkG9WeqChjprR5s9bBZ+OM=
github.com/matttproud/golang_protobuf_extensions v1.0.2/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -352,183 +211,105 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.0.0 h1:CcuG/HvWNkkaqCUpJifQY8z7qEMBJya6aLPx6ftGyjQ=
github.com/onsi/ginkgo/v2 v2.0.0/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/ginkgo/v2 v2.6.0 h1:9t9b9vRUbFq3C4qKFCGkVuq/fIHji802N1nrtkh1mNc=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.18.1 h1:M1GfJqGRrBrrGGsbxzV5dqM2U2ApXefZCQpkukxYRLE=
github.com/onsi/gomega v1.18.1/go.mod h1:0q+aL8jAiMXy9hbwj2mr5GziHiwhAIQpFmmtT5hitRs=
github.com/onsi/gomega v1.24.1 h1:KORJXNNTzJXzu4ScJWssJfJMnJ+2QJqhoQSRwNlze9E=
github.com/onsi/gomega v1.24.1/go.mod h1:3AOiACssS3/MajrniINInwbfOOtfZvplPzuRSmvt1jM=
github.com/openkruise/kruise-api v1.3.0 h1:yfEy64uXgSuX/5RwePLbwUK/uX8RRM8fHJkccel5ZIQ=
github.com/openkruise/kruise-api v1.3.0/go.mod h1:9ZX+ycdHKNzcA5ezAf35xOa2Mwfa2BYagWr0lKgi5dU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1 h1:ZiaPsmm9uiBeaSMRznKsCDNtPCS0T3JVDGF+06gjBzk=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1 h1:hWIdL3N2HoUx3B8j3YN9mWor0qhY/NlEKZEaXxuIRh4=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64 h1:5mLPGnFdSsevFRFc9q3yYbBkB6tsm4aCwwQV/j1JQAQ=
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/api/v3 v3.5.1/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/pkg/v3 v3.5.1/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
go.etcd.io/etcd/client/v3 v3.5.1/go.mod h1:OnjH4M8OnAotwaB2l9bVgZzRFKru7/ZMoS46OtKyd3Q=
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0/go.mod h1:oVGt1LRbBOBq1A5BQLlUg9UaU/54aiHw8cgjV3aWZ/E=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0/go.mod h1:2AboqHi0CiIZU0qwhtUfCYD1GeUzvvIXWNkhDt7ZMG4=
go.opentelemetry.io/otel v0.20.0/go.mod h1:Y3ugLH2oa81t5QO+Lty+zXf8zC9L26ax4Nzoxm/dooo=
go.opentelemetry.io/otel/exporters/otlp v0.20.0/go.mod h1:YIieizyaN77rtLJra0buKiNBOm9XQfkPEKBeuhoMwAM=
go.opentelemetry.io/otel/metric v0.20.0/go.mod h1:598I5tYlH1vzBjn+BTuhzTCSb/9debfNp6R3s7Pr1eU=
go.opentelemetry.io/otel/oteltest v0.20.0/go.mod h1:L7bgKf9ZB7qCwT9Up7i9/pn0PWIa9FqQ2IQ8LoxiGnw=
go.opentelemetry.io/otel/sdk v0.20.0/go.mod h1:g/IcepuwNsoiX5Byy2nNV0ySUF1em498m7hBWC279Yc=
go.opentelemetry.io/otel/sdk/export/metric v0.20.0/go.mod h1:h7RBNMsDJ5pmI1zExLi+bJK+Dr8NQCh0qGhm1KDnNlE=
go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4/0TjTXukfxjzSTpHE=
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.19.1 h1:ue41HOKd1vGURxrmeKIgELGb3jPW9DMUDGtsinblHwI=
go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220214200702-86341886e292 h1:f+lwQ+GtmgoY+A2YaQxlSOnDjXcQ7ZRLWOHbC6HtRqE=
golang.org/x/crypto v0.0.0-20220214200702-86341886e292/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -551,8 +332,6 @@ golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
@@ -561,17 +340,10 @@ golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -583,7 +355,6 @@ golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -600,35 +371,19 @@ golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd h1:O7DYs+zxREGLKzKoMQrtrEacpb0ZVXA5rIwylE2Xchk=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 h1:RerP+noqYHUQ8CMRcPlC2nvTa4dcBIjegkuWdcUDuqg=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -639,13 +394,9 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -659,7 +410,6 @@ golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -675,78 +425,56 @@ golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158 h1:rm+CHSpPEEW2IsXUib1ThaHIjuBVZjxNgSKmBLFfD4c=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 h1:vVKdlvoWBphwdxWKrFZEuM0kGgGLxUOYcY4U/2Vjg44=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -766,7 +494,6 @@ golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjs
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
@ -774,21 +501,11 @@ golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roY
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.10-0.20220218145154-897bd77cd717/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.2.0 h1:4pT439QV83L+G9FkcCriY6EkpcK6r6bK+A5FBUMI7qY=
gomodules.xyz/jsonpatch/v2 v2.2.0/go.mod h1:WXp+iVDkoLQqPudfQ9GBlwB2eZ5DKOnjQZCYdOS8GPY=
@ -808,11 +525,6 @@ google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0M
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@ -842,31 +554,15 @@ google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfG
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201102152239-715cce707fb0/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@ -879,16 +575,6 @@ google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKa
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -901,8 +587,8 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@ -913,16 +599,10 @@ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@ -931,10 +611,9 @@ gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@ -942,45 +621,36 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.24.1 h1:BjCMRDcyEYz03joa3K1+rbshwh1Ay6oB53+iUx2H8UY=
k8s.io/api v0.24.1/go.mod h1:JhoOvNiLXKTPQ60zh2g0ewpA+bnEYf5q44Flhquh4vQ=
k8s.io/apiextensions-apiserver v0.24.1 h1:5yBh9+ueTq/kfnHQZa0MAo6uNcPrtxPMpNQgorBaKS0=
k8s.io/apiextensions-apiserver v0.24.1/go.mod h1:A6MHfaLDGfjOc/We2nM7uewD5Oa/FnEbZ6cD7g2ca4Q=
k8s.io/apimachinery v0.24.1 h1:ShD4aDxTQKN5zNf8K1RQ2u98ELLdIW7jEnlO9uAMX/I=
k8s.io/apimachinery v0.24.1/go.mod h1:82Bi4sCzVBdpYjyI4jY6aHX+YCUchUIrZrXKedjd2UM=
k8s.io/apiserver v0.24.1 h1:LAA5UpPOeaREEtFAQRUQOI3eE5So/j5J3zeQJjeLdz4=
k8s.io/apiserver v0.24.1/go.mod h1:dQWNMx15S8NqJMp0gpYfssyvhYnkilc1LpExd/dkLh0=
k8s.io/client-go v0.24.1 h1:w1hNdI9PFrzu3OlovVeTnf4oHDt+FJLd9Ndluvnb42E=
k8s.io/client-go v0.24.1/go.mod h1:f1kIDqcEYmwXS/vTbbhopMUbhKp2JhOeVTfxgaCIlF8=
k8s.io/code-generator v0.24.1/go.mod h1:dpVhs00hTuTdTY6jvVxvTFCk6gSMrtfRydbhZwHI15w=
k8s.io/component-base v0.24.1 h1:APv6W/YmfOWZfo+XJ1mZwep/f7g7Tpwvdbo9CQLDuts=
k8s.io/component-base v0.24.1/go.mod h1:DW5vQGYVCog8WYpNob3PMmmsY8A3L9QZNg4j/dV3s38=
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.60.1 h1:VW25q3bZx9uE3vvdL6M8ezOX79vA2Aq1nEWLqNQclHc=
k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 h1:Gii5eqf+GmIEwGNKQYQClCayuJCe2/4fZUvF7VG99sU=
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42/go.mod h1:Z/45zLw8lUo4wdiUkI+v/ImEGAvu3WatcZl3lPMR4Rk=
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19Vz2GdbOCyI4qqhc=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/api v0.26.3 h1:emf74GIQMTik01Aum9dPP0gAypL8JTLl/lHa4V9RFSU=
k8s.io/api v0.26.3/go.mod h1:PXsqwPMXBSBcL1lJ9CYDKy7kIReUydukS5JiRlxC3qE=
k8s.io/apiextensions-apiserver v0.26.3 h1:5PGMm3oEzdB1W/FTMgGIDmm100vn7IaUP5er36dB+YE=
k8s.io/apiextensions-apiserver v0.26.3/go.mod h1:jdA5MdjNWGP+njw1EKMZc64xAT5fIhN6VJrElV3sfpQ=
k8s.io/apimachinery v0.26.3 h1:dQx6PNETJ7nODU3XPtrwkfuubs6w7sX0M8n61zHIV/k=
k8s.io/apimachinery v0.26.3/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
k8s.io/apiserver v0.26.3 h1:blBpv+yOiozkPH2aqClhJmJY+rp53Tgfac4SKPDJnU4=
k8s.io/apiserver v0.26.3/go.mod h1:CJe/VoQNcXdhm67EvaVjYXxR3QyfwpceKPuPaeLibTA=
k8s.io/client-go v0.26.3 h1:k1UY+KXfkxV2ScEL3gilKcF7761xkYsSD6BC9szIu8s=
k8s.io/client-go v0.26.3/go.mod h1:ZPNu9lm8/dbRIPAgteN30RSXea6vrCpFvq+MateTUuQ=
k8s.io/component-base v0.26.3 h1:oC0WMK/ggcbGDTkdcqefI4wIZRYdK3JySx9/HADpV0g=
k8s.io/component-base v0.26.3/go.mod h1:5kj1kZYwSC6ZstHJN7oHBqcJC6yyn41eR+Sqa/mQc8E=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 h1:+70TFaan3hfJzs+7VK2o+OGxg8HsuBr/5f6tVAjDu6E=
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448 h1:KTgPnR10d5zhztWptI952TNtt/4u5h3IzDXkdIMuo2Y=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
layeh.com/gopher-json v0.0.0-20201124131017-552bb3c4c3bf h1:rRz0YsF7VXj9fXRF6yQgFI7DzST+hsI3TeFSGupntu0=
layeh.com/gopher-json v0.0.0-20201124131017-552bb3c4c3bf/go.mod h1:ivKkcY8Zxw5ba0jldhZCYYQfGdb2K6u9tbYK1AwMIBc=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30/go.mod h1:fEO7lRTdivWO2qYVCVG7dEADOMo/MLDCVr8So2g88Uw=
sigs.k8s.io/controller-runtime v0.12.1 h1:4BJY01xe9zKQti8oRjj/NeHKRXthf1YkYJAgLONFFoI=
sigs.k8s.io/controller-runtime v0.12.1/go.mod h1:BKhxlA4l7FPK4AQcsuL4X6vZeWnKDXez/vp1Y8dxTU0=
sigs.k8s.io/gateway-api v0.5.1 h1:EqzgOKhChzyve9rmeXXbceBYB6xiM50vDfq0kK5qpdw=
sigs.k8s.io/gateway-api v0.5.1/go.mod h1:x0AP6gugkFV8fC/oTlnOMU0pnmuzIR8LfIPRVUjxSqA=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz4CVE26eOSDAeYCpfDnC2kdKMY=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/controller-runtime v0.14.6 h1:oxstGVvXGNnMvY7TAESYk+lzr6S3V5VFxQ6d92KcwQA=
sigs.k8s.io/controller-runtime v0.14.6/go.mod h1:WqIdsAY6JBsjfc/CqO0CORmNtoCtE4S6qbPc9s68h+0=
sigs.k8s.io/gateway-api v0.7.1 h1:Tts2jeepVkPA5rVG/iO+S43s9n7Vp7jCDhZDQYtPigQ=
sigs.k8s.io/gateway-api v0.7.1/go.mod h1:Xv0+ZMxX0lu1nSSDIIPEfbVztgNZ+3cfiYrJsa2Ooso=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=

View File

@ -81,7 +81,7 @@ func objectToTable(path string) error {
rollout := testCase.Rollout
trafficRouting := testCase.TrafficRouting
if rollout != nil {
steps := rollout.Spec.Strategy.Canary.Steps
steps := rollout.Spec.Strategy.GetSteps()
for i, step := range steps {
var weight *int32
if step.TrafficRoutingStrategy.Traffic != nil {
@ -92,7 +92,7 @@ func objectToTable(path string) error {
weight = utilpointer.Int32(-1)
}
var canaryService string
stableService := rollout.Spec.Strategy.Canary.TrafficRoutings[0].Service
stableService := rollout.Spec.Strategy.GetTrafficRouting()[0].Service
canaryService = fmt.Sprintf("%s-canary", stableService)
data := &custom.LuaData{
Data: custom.Data{
@ -100,11 +100,12 @@ func objectToTable(path string) error {
Annotations: testCase.Original.GetAnnotations(),
Spec: testCase.Original.Object["spec"],
},
Matches: step.TrafficRoutingStrategy.Matches,
CanaryWeight: *weight,
StableWeight: 100 - *weight,
CanaryService: canaryService,
StableService: stableService,
Matches: step.TrafficRoutingStrategy.Matches,
CanaryWeight: *weight,
StableWeight: 100 - *weight,
CanaryService: canaryService,
StableService: stableService,
RequestHeaderModifier: step.TrafficRoutingStrategy.RequestHeaderModifier,
}
uList[fmt.Sprintf("step_%d", i)] = data
}
@ -128,11 +129,12 @@ func objectToTable(path string) error {
Annotations: testCase.Original.GetAnnotations(),
Spec: testCase.Original.Object["spec"],
},
Matches: matches,
CanaryWeight: *weight,
StableWeight: 100 - *weight,
CanaryService: canaryService,
StableService: stableService,
Matches: matches,
CanaryWeight: *weight,
StableWeight: 100 - *weight,
CanaryService: canaryService,
StableService: stableService,
RequestHeaderModifier: trafficRouting.Spec.Strategy.RequestHeaderModifier,
}
uList["steps_0"] = data
} else {

View File

@ -19,6 +19,10 @@ rollout:
- type: RegularExpression
name: name
value: ".*demo"
requestHeaderModifier:
set:
- name: "header-foo"
value: "bar"
- matches:
- headers:
- type: Exact
@ -66,6 +70,10 @@ expected:
exact: pc
name:
regex: .*demo
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo-canary

View File

@ -13,6 +13,10 @@ trafficRouting:
- type: RegularExpression
name: name
value: ".*demo"
requestHeaderModifier:
set:
- name: "header-foo"
value: "bar"
objectRef:
- service: svc-demo
customNetworkRefs:
@ -51,6 +55,10 @@ expected:
exact: pc
name:
regex: .*demo
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo

View File

@ -14,6 +14,10 @@ trafficRouting:
- type: RegularExpression
name: name
value: ".*demo"
requestHeaderModifier:
set:
- name: "header-foo"
value: "bar"
objectRef:
- service: svc-demo
customNetworkRefs:
@ -50,6 +54,10 @@ expected:
- headers:
name:
regex: .*demo
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo
@ -58,6 +66,10 @@ expected:
- headers:
user-agent:
exact: pc
headers:
request:
set:
header-foo: bar
route:
- destination:
host: svc-demo

View File

@ -43,29 +43,60 @@ function CalculateWeight(route, stableWeight, n)
end
-- generate routes with matches, insert a rule before other rules; only supports http headers, cookies, etc.
function GenerateRoutesWithMatches(spec, matches, stableService, canaryService)
function GenerateRoutesWithMatches(spec, matches, stableService, canaryService, requestHeaderModifier)
for _, match in ipairs(matches) do
local route = {}
route["match"] = {}
local vsMatch = {}
for key, value in pairs(match) do
local vsMatch = {}
vsMatch[key] = {}
for _, rule in ipairs(value) do
if key == "path" then
vsMatch["uri"] = {}
local rule = value
if rule["type"] == "RegularExpression" then
matchType = "regex"
elseif rule["type"] == "Exact" then
matchType = "exact"
elseif rule["type"] == "Prefix" then
elseif rule["type"] == "PathPrefix" then
matchType = "prefix"
end
if key == "headers" then
vsMatch[key][rule["name"]] = {}
vsMatch[key][rule["name"]][matchType] = rule.value
else
vsMatch[key][matchType] = rule.value
vsMatch["uri"][matchType] = rule.value
else
vsMatch[key] = {}
for _, rule in ipairs(value) do
if rule["type"] == "RegularExpression" then
matchType = "regex"
elseif rule["type"] == "Exact" then
matchType = "exact"
elseif rule["type"] == "Prefix" then
matchType = "prefix"
end
if key == "headers" or key == "queryParams" then
vsMatch[key][rule["name"]] = {}
vsMatch[key][rule["name"]][matchType] = rule.value
else
vsMatch[key][matchType] = rule.value
end
end
end
end
table.insert(route["match"], vsMatch)
if requestHeaderModifier then
route["headers"] = {}
route["headers"]["request"] = {}
for action, headers in pairs(requestHeaderModifier) do
if action == "set" or action == "add" then
route["headers"]["request"][action] = {}
for _, header in ipairs(headers) do
route["headers"]["request"][action][header["name"]] = header["value"]
end
elseif action == "remove" then
route["headers"]["request"]["remove"] = {}
for _, rHeader in ipairs(headers) do
table.insert(route["headers"]["request"]["remove"], rHeader)
end
end
end
table.insert(route["match"], vsMatch)
end
route.route = {
{
@ -116,7 +147,7 @@ end
if (obj.matches and next(obj.matches) ~= nil)
then
GenerateRoutesWithMatches(spec, obj.matches, obj.stableService, obj.canaryService)
GenerateRoutesWithMatches(spec, obj.matches, obj.stableService, obj.canaryService, obj.requestHeaderModifier)
else
GenerateRoutes(spec, obj.stableService, obj.canaryService, obj.stableWeight, obj.canaryWeight, "http")
GenerateRoutes(spec, obj.stableService, obj.canaryService, obj.stableWeight, obj.canaryWeight, "tcp")

View File

@ -10,6 +10,10 @@ annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = nil
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = nil
-- MSE extended annotations
annotations["mse.ingress.kubernetes.io/canary-by-query"] = nil
annotations["mse.ingress.kubernetes.io/canary-by-query-pattern"] = nil
annotations["mse.ingress.kubernetes.io/canary-by-query-value"] = nil
annotations["nginx.ingress.kubernetes.io/canary-weight"] = nil
if ( obj.weight ~= "-1" )
then
@ -33,18 +37,30 @@ then
return annotations
end
for _,match in ipairs(obj.matches) do
header = match.headers[1]
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
if ( header.type == "RegularExpression" )
if match.headers and next(match.headers) ~= nil then
header = match.headers[1]
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
if ( header.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
end
end
end
if match.queryParams and next(match.queryParams) ~= nil then
queryParam = match.queryParams[1]
annotations["nginx.ingress.kubernetes.io/canary-by-query"] = queryParam.name
if ( queryParam.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-query-pattern"] = queryParam.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-query-value"] = queryParam.value
end
end
end
return annotations
return annotations
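
For readers tracing the new queryParams branch above: only the first query-parameter match is consulted, and its type selects between the pattern and value annotations. The following self-contained Go sketch (illustrative only, not repository code; the function name is hypothetical) mirrors that Lua mapping:

package main

import "fmt"

// queryParamToAnnotations mirrors the Lua queryParams branch above:
// the first queryParams entry selects the canary-by-query annotations.
func queryParamToAnnotations(name, matchType, value string) map[string]string {
	a := map[string]string{"nginx.ingress.kubernetes.io/canary-by-query": name}
	if matchType == "RegularExpression" {
		a["nginx.ingress.kubernetes.io/canary-by-query-pattern"] = value
	} else {
		a["nginx.ingress.kubernetes.io/canary-by-query-value"] = value
	}
	return a
}

func main() {
	// An Exact match on ?uid=123 becomes the value-style annotation pair:
	// map[nginx.ingress.kubernetes.io/canary-by-query:uid nginx.ingress.kubernetes.io/canary-by-query-value:123]
	fmt.Println(queryParamToAnnotations("uid", "Exact", "123"))
}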

View File

@ -26,19 +26,21 @@ end
-- headers & cookie apis
-- traverse matches
for _,match in ipairs(obj.matches) do
local header = match.headers[1]
-- cookie
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
-- if regular expression
if ( header.type == "RegularExpression" )
if match.headers and next(match.headers) ~= nil then
local header = match.headers[1]
-- cookie
if ( header.name == "canary-by-cookie" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
annotations["nginx.ingress.kubernetes.io/canary-by-cookie"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
annotations["nginx.ingress.kubernetes.io/canary-by-header"] = header.name
-- if regular expression
if ( header.type == "RegularExpression" )
then
annotations["nginx.ingress.kubernetes.io/canary-by-header-pattern"] = header.value
else
annotations["nginx.ingress.kubernetes.io/canary-by-header-value"] = header.value
end
end
end
end

View File

@ -148,6 +148,7 @@ type BatchReleaseReconciler struct {
// +kubebuilder:rbac:groups=apps.kruise.io,resources=statefulsets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps.kruise.io,resources=daemonsets,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups=apps.kruise.io,resources=daemonsets/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=autoscaling,resources=horizontalpodautoscalers,verbs=get;list;watch;update;patch
// Reconcile reads the state of the cluster for a Rollout object and makes changes based on the state read
// and what is in the Rollout.Spec

View File

@ -67,8 +67,8 @@ var (
Name: "sample",
},
ReleasePlan: v1beta1.ReleasePlan{
EnableExtraWorkloadForCanary: true,
BatchPartition: pointer.Int32(0),
RollingStyle: v1beta1.CanaryRollingStyle,
BatchPartition: pointer.Int32(0),
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("10%"),
@ -100,7 +100,7 @@ var (
},
},
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32Ptr(100),
Replicas: pointer.Int32(100),
Strategy: apps.DeploymentStrategy{
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: &apps.RollingUpdateDeployment{
@ -146,7 +146,8 @@ var (
Name: "sample",
},
ReleasePlan: v1beta1.ReleasePlan{
BatchPartition: pointer.Int32Ptr(0),
BatchPartition: pointer.Int32(0),
RollingStyle: v1beta1.PartitionRollingStyle,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("10%"),
@ -177,7 +178,7 @@ var (
},
},
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32Ptr(100),
Replicas: pointer.Int32(100),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
Partition: &intstr.IntOrString{Type: intstr.Int, IntVal: int32(1)},
MaxSurge: &intstr.IntOrString{Type: intstr.Int, IntVal: int32(2)},
@ -393,7 +394,7 @@ func TestReconcile_CloneSet(t *testing.T) {
},
GetCloneSet: func() []client.Object {
stable := getStableWithReady(stableClone, "v2").(*kruiseappsv1alpha1.CloneSet)
stable.Spec.Replicas = pointer.Int32Ptr(200)
stable.Spec.Replicas = pointer.Int32(200)
canary := getCanaryWithStage(stable, "v2", 0, true)
return []client.Object{
canary,
@ -474,7 +475,7 @@ func TestReconcile_CloneSet(t *testing.T) {
release.Status.UpdateRevision = util.ComputeHash(canaryTemplate, nil)
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release.Spec.ReleasePlan.BatchPartition = pointer.Int32Ptr(1)
release.Spec.ReleasePlan.BatchPartition = pointer.Int32(1)
release.Status.ObservedReleasePlanHash = util.HashReleasePlanBatches(&release.Spec.ReleasePlan)
return release
},
@ -650,7 +651,7 @@ func TestReconcile_Deployment(t *testing.T) {
release := releaseDeploy.DeepCopy()
release.Status.CanaryStatus.UpdatedReplicas = 10
release.Status.CanaryStatus.UpdatedReadyReplicas = 10
release.Spec.ReleasePlan.BatchPartition = pointer.Int32Ptr(1)
release.Spec.ReleasePlan.BatchPartition = pointer.Int32(1)
return setState(release, v1beta1.ReadyBatchState)
},
GetDeployments: func() []client.Object {
@ -693,7 +694,7 @@ func TestReconcile_Deployment(t *testing.T) {
},
GetDeployments: func() []client.Object {
stable := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
stable.Spec.Replicas = pointer.Int32Ptr(200)
stable.Spec.Replicas = pointer.Int32(200)
canary := getCanaryWithStage(stable, "v2", 0, true)
return []client.Object{
stable, canary,
@ -890,7 +891,7 @@ func getCanaryWithStage(workload client.Object, version string, stage int, ready
d.ResourceVersion = strconv.Itoa(rand.Intn(100000000000))
d.Labels[util.CanaryDeploymentLabel] = "87076677"
d.Finalizers = []string{util.CanaryDeploymentFinalizer}
d.Spec.Replicas = pointer.Int32Ptr(int32(stageReplicas))
d.Spec.Replicas = pointer.Int32(int32(stageReplicas))
d.Spec.Template.Spec.Containers = containers(version)
d.Status.Replicas = int32(stageReplicas)
d.Status.ReadyReplicas = int32(stageReplicas)

View File

@ -145,8 +145,8 @@ func (w workloadEventHandler) Update(evt event.UpdateEvent, q workqueue.RateLimi
return
}
oldObject := evt.ObjectNew
newObject := evt.ObjectOld
newObject := evt.ObjectNew
oldObject := evt.ObjectOld
expectationObserved(newObject)
if newObject.GetResourceVersion() == oldObject.GetResourceVersion() {
return

View File

@ -91,14 +91,14 @@ func TestWorkloadEventHandler_Update(t *testing.T) {
oldObject := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
oldObject.SetGeneration(2)
oldObject.Status.ObservedGeneration = 2
oldObject.Spec.Replicas = pointer.Int32Ptr(1000)
oldObject.Spec.Replicas = pointer.Int32(1000)
return oldObject
},
GetNewWorkload: func() client.Object {
newObject := getStableWithReady(stableDeploy, "v2").(*apps.Deployment)
newObject.SetGeneration(2)
newObject.Status.ObservedGeneration = 2
newObject.Spec.Replicas = pointer.Int32Ptr(1000)
newObject.Spec.Replicas = pointer.Int32(1000)
newObject.Status.Replicas = 1000
return newObject
},

View File

@ -24,6 +24,9 @@ import (
appsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle"
bgcloneset "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle/cloneset"
bgdeplopyment "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle/deployment"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/canarystyle"
canarydeployment "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/canarystyle/deployment"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle"
@ -32,6 +35,7 @@ import (
partitiondeployment "github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle/deployment"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle/statefulset"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/errors"
apps "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -145,14 +149,18 @@ func (r *Executor) progressBatches(release *v1beta1.BatchRelease, newStatus *v1b
switch {
case err == nil:
result = reconcile.Result{RequeueAfter: DefaultDuration}
removeProgressingCondition(newStatus)
newStatus.CanaryStatus.CurrentBatchState = v1beta1.VerifyingBatchState
case errors.IsBadRequest(err):
progressingStateTransition(newStatus, v1.ConditionTrue, v1beta1.ProgressingReasonInRolling, err.Error())
fallthrough
default:
klog.Warningf("Failed to upgrade %v, err %v", klog.KObj(release), err)
}
case v1beta1.VerifyingBatchState:
// replicas/partition has been modified; we should wait for pods to become ready in this state.
err = workloadController.CheckBatchReady()
err = workloadController.EnsureBatchPodsReadyAndLabeled()
switch {
case err != nil:
// should go back to the upgrade state and retry, to avoid waiting forever.
@ -167,7 +175,7 @@ func (r *Executor) progressBatches(release *v1beta1.BatchRelease, newStatus *v1b
case v1beta1.ReadyBatchState:
// replicas/partition may be modified even when ready; we should recheck in this state.
err = workloadController.CheckBatchReady()
err = workloadController.EnsureBatchPodsReadyAndLabeled()
switch {
case err != nil:
// if the batch ready condition changed for some reason, just recalculate the current batch.
@ -197,28 +205,43 @@ func (r *Executor) getReleaseController(release *v1beta1.BatchRelease, newStatus
Namespace: release.Namespace,
Name: targetRef.Name,
}
rollingStyle := release.Spec.ReleasePlan.RollingStyle
if len(rollingStyle) == 0 && release.Spec.ReleasePlan.EnableExtraWorkloadForCanary {
rollingStyle = v1beta1.CanaryRollingStyle
}
klog.Infof("BatchRelease(%v) using %s-style release controller for this batch release", klog.KObj(release), rollingStyle)
switch rollingStyle {
case v1beta1.BlueGreenRollingStyle:
if targetRef.APIVersion == appsv1alpha1.GroupVersion.String() && targetRef.Kind == reflect.TypeOf(appsv1alpha1.CloneSet{}).Name() {
klog.InfoS("Using CloneSet bluegreen-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return bluegreenstyle.NewControlPlane(bgcloneset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
klog.InfoS("Using Deployment bluegreen-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return bluegreenstyle.NewControlPlane(bgdeplopyment.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
switch targetRef.APIVersion {
case appsv1alpha1.GroupVersion.String():
if targetRef.Kind == reflect.TypeOf(appsv1alpha1.CloneSet{}).Name() {
case v1beta1.CanaryRollingStyle:
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
klog.InfoS("Using Deployment canary-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return canarystyle.NewControlPlane(canarydeployment.NewController, r.client, r.recorder, release, newStatus, targetKey), nil
}
fallthrough
case v1beta1.PartitionRollingStyle, "":
if targetRef.APIVersion == appsv1alpha1.GroupVersion.String() && targetRef.Kind == reflect.TypeOf(appsv1alpha1.CloneSet{}).Name() {
klog.InfoS("Using CloneSet partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(cloneset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
if targetRef.Kind == reflect.TypeOf(appsv1alpha1.DaemonSet{}).Name() {
if targetRef.APIVersion == appsv1alpha1.GroupVersion.String() && targetRef.Kind == reflect.TypeOf(appsv1alpha1.DaemonSet{}).Name() {
klog.InfoS("Using DaemonSet partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(daemonset.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
case apps.SchemeGroupVersion.String():
if targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
if !release.Spec.ReleasePlan.EnableExtraWorkloadForCanary {
klog.InfoS("Using Deployment partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(partitiondeployment.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
} else {
klog.InfoS("Using Deployment canary-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return canarystyle.NewControlPlane(canarydeployment.NewController, r.client, r.recorder, release, newStatus, targetKey), nil
}
if targetRef.APIVersion == apps.SchemeGroupVersion.String() && targetRef.Kind == reflect.TypeOf(apps.Deployment{}).Name() {
klog.InfoS("Using Deployment partition-style release controller for this batch release", "workload name", targetKey.Name, "namespace", targetKey.Namespace)
return partitionstyle.NewControlPlane(partitiondeployment.NewController, r.client, r.recorder, release, newStatus, targetKey, gvk), nil
}
klog.Info("Partition, but use StatefulSet-Like partition-style release controller for this batch release")
}
// try to use StatefulSet-like rollout controller by default
@ -242,3 +265,23 @@ func isPartitioned(release *v1beta1.BatchRelease) bool {
return release.Spec.ReleasePlan.BatchPartition != nil &&
*release.Spec.ReleasePlan.BatchPartition <= release.Status.CanaryStatus.CurrentBatch
}
func progressingStateTransition(status *v1beta1.BatchReleaseStatus, condStatus v1.ConditionStatus, reason, message string) {
cond := util.GetBatchReleaseCondition(*status, v1beta1.RolloutConditionProgressing)
if cond == nil {
cond = util.NewRolloutCondition(v1beta1.RolloutConditionProgressing, condStatus, reason, message)
} else {
cond.Status = condStatus
cond.Reason = reason
if message != "" {
cond.Message = message
}
}
util.SetBatchReleaseCondition(status, *cond)
status.Message = cond.Message
}
func removeProgressingCondition(status *v1beta1.BatchReleaseStatus) {
util.RemoveBatchReleaseCondition(status, v1beta1.RolloutConditionProgressing)
status.Message = ""
}

View File

@ -155,6 +155,7 @@ func refreshStatus(release *v1beta1.BatchRelease, newStatus *v1beta1.BatchReleas
if len(newStatus.ObservedReleasePlanHash) == 0 {
newStatus.ObservedReleasePlanHash = util.HashReleasePlanBatches(&release.Spec.ReleasePlan)
}
newStatus.ObservedRolloutID = release.Spec.ReleasePlan.RolloutID
}
func isPlanFinalizing(release *v1beta1.BatchRelease) bool {

View File

@ -61,6 +61,9 @@ type BatchContext struct {
Pods []*corev1.Pod `json:"-"`
// filter or sort pods before patch label
FilterFunc FilterFuncType `json:"-"`
// the next two fields are only used for bluegreen style
CurrentSurge intstr.IntOrString `json:"currentSurge,omitempty"`
DesiredSurge intstr.IntOrString `json:"desiredSurge,omitempty"`
}
type FilterFuncType func(pods []*corev1.Pod, ctx *BatchContext) []*corev1.Pod
@ -73,20 +76,20 @@ func (bc *BatchContext) Log() string {
// IsBatchReady returns nil if the batch is ready
func (bc *BatchContext) IsBatchReady() error {
if bc.UpdatedReplicas < bc.DesiredUpdatedReplicas {
return fmt.Errorf("current batch not ready: updated replicas not satified")
return fmt.Errorf("current batch not ready: updated replicas not satisfied, UpdatedReplicas %d < DesiredUpdatedReplicas %d", bc.UpdatedReplicas, bc.DesiredUpdatedReplicas)
}
unavailableToleration := allowedUnavailable(bc.FailureThreshold, bc.UpdatedReplicas)
if unavailableToleration+bc.UpdatedReadyReplicas < bc.DesiredUpdatedReplicas {
return fmt.Errorf("current batch not ready: updated ready replicas not satified")
return fmt.Errorf("current batch not ready: updated ready replicas not satisfied, allowedUnavailable + UpdatedReadyReplicas %d < DesiredUpdatedReplicas %d", unavailableToleration+bc.UpdatedReadyReplicas, bc.DesiredUpdatedReplicas)
}
if bc.DesiredUpdatedReplicas > 0 && bc.UpdatedReadyReplicas == 0 {
return fmt.Errorf("current batch not ready: no updated ready replicas")
return fmt.Errorf("current batch not ready: no updated ready replicas, DesiredUpdatedReplicas %d > 0 and UpdatedReadyReplicas %d = 0", bc.DesiredUpdatedReplicas, bc.UpdatedReadyReplicas)
}
if !batchLabelSatisfied(bc.Pods, bc.RolloutID, bc.PlannedUpdatedReplicas) {
return fmt.Errorf("current batch not ready: pods with batch label not satified")
return fmt.Errorf("current batch not ready: pods with batch label not satisfied, RolloutID %s, PlannedUpdatedReplicas %d", bc.RolloutID, bc.PlannedUpdatedReplicas)
}
return nil
}
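
To make the readiness conditions above concrete, here is a minimal, self-contained sketch (not the repository's code) that replays the first three checks with fixed numbers. It assumes the failureThreshold tolerance resolves via intstr percent scaling, rounded down, which is how an allowedUnavailable helper is commonly implemented:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

// batchReady replays the first three checks of IsBatchReady above with
// plain integers. Assumption: the failureThreshold tolerance resolves
// via intstr percent scaling, rounded down.
func batchReady(updated, updatedReady, desired int32, failureThreshold *intstr.IntOrString) error {
	if updated < desired {
		return fmt.Errorf("updated replicas not satisfied: %d < %d", updated, desired)
	}
	tolerated := 0
	if failureThreshold != nil {
		tolerated, _ = intstr.GetScaledValueFromIntOrPercent(failureThreshold, int(updated), false)
	}
	if int32(tolerated)+updatedReady < desired {
		return fmt.Errorf("updated ready replicas not satisfied: %d+%d < %d", tolerated, updatedReady, desired)
	}
	if desired > 0 && updatedReady == 0 {
		return fmt.Errorf("no updated ready replicas")
	}
	return nil
}

func main() {
	threshold := intstr.FromString("20%")
	// 5 updated, 4 ready, batch wants 5: 20% of 5 tolerates 1 unready pod -> ready.
	fmt.Println(batchReady(5, 4, 5, &threshold)) // <nil>
	// 5 updated, 3 ready: 2 unready pods exceed the tolerated 1 -> not ready.
	fmt.Println(batchReady(5, 3, 5, &threshold)) // error
}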

View File

@ -0,0 +1,42 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package control
import "k8s.io/apimachinery/pkg/util/intstr"
// OriginalDeploymentStrategy stores part of the fields of a workload,
// so that they can be restored when finalizing.
// It is only used for blue-green release.
// Like DeploymentStrategy, it is stored as an annotation on the workload;
// unlike DeploymentStrategy, it is only used to store and restore the user's original strategy.
type OriginalDeploymentStrategy struct {
// +optional
MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty" protobuf:"bytes,1,opt,name=maxUnavailable"`
// +optional
MaxSurge *intstr.IntOrString `json:"maxSurge,omitempty" protobuf:"bytes,2,opt,name=maxSurge"`
// Minimum number of seconds for which a newly created pod should be ready
// without any of its container crashing, for it to be considered available.
// Defaults to 0 (pod will be considered available as soon as it is ready)
// +optional
MinReadySeconds int32 `json:"minReadySeconds,omitempty" protobuf:"varint,5,opt,name=minReadySeconds"`
// The maximum time in seconds for a deployment to make progress before it
// is considered to be failed. The deployment controller will continue to
// process failed deployments and a condition with a ProgressDeadlineExceeded
// reason will be surfaced in the deployment status. Note that progress will
// not be estimated during the time a deployment is paused. Defaults to 600s.
ProgressDeadlineSeconds *int32 `json:"progressDeadlineSeconds,omitempty" protobuf:"varint,9,opt,name=progressDeadlineSeconds"`
}
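
As an illustration of what this annotation may carry (a sketch under the assumption that the exported type above is used as-is; the concrete values are made up), marshaling one instance shows the JSON stored under the OriginalDeploymentStrategyAnnotation key:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/utils/pointer"
)

func main() {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromInt(2)
	// Made-up values standing in for a user's original Deployment strategy.
	s := control.OriginalDeploymentStrategy{
		MaxUnavailable:          &maxUnavailable,
		MaxSurge:                &maxSurge,
		MinReadySeconds:         10,
		ProgressDeadlineSeconds: pointer.Int32(600),
	}
	b, _ := json.Marshal(&s)
	// {"maxUnavailable":"25%","maxSurge":2,"minReadySeconds":10,"progressDeadlineSeconds":600}
	fmt.Println(string(b))
}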

View File

@ -0,0 +1,225 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloneset
import (
"context"
"fmt"
kruiseappsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle/hpa"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/errors"
"github.com/openkruise/rollouts/pkg/util/patch"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type realController struct {
*util.WorkloadInfo
client client.Client
pods []*corev1.Pod
key types.NamespacedName
object *kruiseappsv1alpha1.CloneSet
}
func NewController(cli client.Client, key types.NamespacedName, _ schema.GroupVersionKind) bluegreenstyle.Interface {
return &realController{
key: key,
client: cli,
}
}
func (rc *realController) GetWorkloadInfo() *util.WorkloadInfo {
return rc.WorkloadInfo
}
func (rc *realController) BuildController() (bluegreenstyle.Interface, error) {
if rc.object != nil {
return rc, nil
}
object := &kruiseappsv1alpha1.CloneSet{}
if err := rc.client.Get(context.TODO(), rc.key, object); err != nil {
return rc, err
}
rc.object = object
rc.WorkloadInfo = util.ParseWorkload(object)
return rc, nil
}
func (rc *realController) ListOwnedPods() ([]*corev1.Pod, error) {
if rc.pods != nil {
return rc.pods, nil
}
var err error
rc.pods, err = util.ListOwnedPods(rc.client, rc.object)
return rc.pods, err
}
func (rc *realController) Initialize(release *v1beta1.BatchRelease) error {
if rc.object == nil || control.IsControlledByBatchRelease(release, rc.object) {
return nil
}
// disable the hpa
if err := hpa.DisableHPA(rc.client, rc.object); err != nil {
return err
}
klog.InfoS("Initialize: disable hpa for cloneset successfully", "cloneset", klog.KObj(rc.object))
// patch the cloneset
setting, err := control.GetOriginalSetting(rc.object)
if err != nil {
return errors.NewBadRequestError(fmt.Errorf("cannot get original setting for cloneset %v: %s from annotation", klog.KObj(rc.object), err.Error()))
}
control.InitOriginalSetting(&setting, rc.object)
patchData := patch.NewClonesetPatch()
patchData.InsertAnnotation(v1beta1.OriginalDeploymentStrategyAnnotation, util.DumpJSON(&setting))
patchData.InsertAnnotation(util.BatchReleaseControlAnnotation, util.DumpJSON(metav1.NewControllerRef(
release, release.GetObjectKind().GroupVersionKind())))
// we use partition = 100% to function as "paused" instead of setting the paused field to true;
// this is mainly to keep consistency with partition style (partition is already set to 100% in the webhook)
patchData.UpdatePaused(false)
maxSurge := intstr.FromInt(1) // select the minimum positive number as initial value
maxUnavailable := intstr.FromInt(0)
patchData.UpdateMaxSurge(&maxSurge)
patchData.UpdateMaxUnavailable(&maxUnavailable)
patchData.UpdateMinReadySeconds(v1beta1.MaxReadySeconds)
klog.InfoS("Initialize: try to update cloneset", "cloneset", klog.KObj(rc.object), "patchData", patchData.String())
return rc.client.Patch(context.TODO(), util.GetEmptyObjectWithKey(rc.object), patchData)
}
func (rc *realController) UpgradeBatch(ctx *batchcontext.BatchContext) error {
if err := control.ValidateReadyForBlueGreenRelease(rc.object); err != nil {
return errors.NewBadRequestError(fmt.Errorf("cannot upgrade batch, because cloneset %v doesn't satisfy conditions: %s", klog.KObj(rc.object), err.Error()))
}
desired, _ := intstr.GetScaledValueFromIntOrPercent(&ctx.DesiredSurge, int(ctx.Replicas), true)
current, _ := intstr.GetScaledValueFromIntOrPercent(&ctx.CurrentSurge, int(ctx.Replicas), true)
if current >= desired {
klog.InfoS("No need to upgrade batch, because current >= desired", "cloneset", klog.KObj(rc.object), "current", current, "desired", desired)
return nil
} else {
klog.InfoS("Will update batch for cloneset, because current < desired", "cloneset", klog.KObj(rc.object), "current", current, "desired", desired)
}
patchData := patch.NewClonesetPatch()
// avoid interference from partition
patchData.UpdatePartiton(nil)
patchData.UpdateMaxSurge(&ctx.DesiredSurge)
return rc.client.Patch(context.TODO(), util.GetEmptyObjectWithKey(rc.object), patchData)
}
func (rc *realController) Finalize(release *v1beta1.BatchRelease) error {
if release.Spec.ReleasePlan.BatchPartition != nil {
// continuous release (not supported yet)
/*
patchData := patch.NewClonesetPatch()
patchData.DeleteAnnotation(util.BatchReleaseControlAnnotation)
return rc.client.Patch(context.TODO(), util.GetEmptyObjectWithKey(rc.object), patchData)
*/
klog.Warningf("continuous release is not supported yet for bluegreen style release")
return nil
}
// restore the original setting and remove annotation
if !rc.restored() {
c := util.GetEmptyObjectWithKey(rc.object)
setting, err := control.GetOriginalSetting(rc.object)
if err != nil {
return err
}
patchData := patch.NewClonesetPatch()
patchData.UpdateMinReadySeconds(setting.MinReadySeconds)
patchData.UpdateMaxSurge(setting.MaxSurge)
patchData.UpdateMaxUnavailable(setting.MaxUnavailable)
patchData.DeleteAnnotation(v1beta1.OriginalDeploymentStrategyAnnotation)
patchData.DeleteAnnotation(util.BatchReleaseControlAnnotation)
if err := rc.client.Patch(context.TODO(), c, patchData); err != nil {
return err
}
klog.InfoS("Finalize: cloneset bluegreen release: wait all pods updated and ready", "cloneset", klog.KObj(rc.object))
}
// wait all pods updated and ready
if rc.object.Status.ReadyReplicas != rc.object.Status.UpdatedReadyReplicas {
return errors.NewRetryError(fmt.Errorf("cloneset %v finalize not done, readyReplicas %d != updatedReadyReplicas %d, current policy %s",
klog.KObj(rc.object), rc.object.Status.ReadyReplicas, rc.object.Status.UpdatedReadyReplicas, release.Spec.ReleasePlan.FinalizingPolicy))
}
klog.InfoS("Finalize: cloneset bluegreen release: all pods updated and ready", "cloneset", klog.KObj(rc.object))
// restore the hpa
return hpa.RestoreHPA(rc.client, rc.object)
}
func (rc *realController) restored() bool {
if rc.object == nil || rc.object.DeletionTimestamp != nil {
return true
}
if rc.object.Annotations == nil || len(rc.object.Annotations[v1beta1.OriginalDeploymentStrategyAnnotation]) == 0 {
return true
}
return false
}
// bluegreen doesn't support rollback in batch, because:
// - bluegreen supports traffic rollback instead, so rollback in batch is not necessary
// - it's difficult for both Deployment and CloneSet to support rollback in batch with the "minReadySeconds" implementation
func (rc *realController) CalculateBatchContext(release *v1beta1.BatchRelease) (*batchcontext.BatchContext, error) {
// current batch index
currentBatch := release.Status.CanaryStatus.CurrentBatch
// the number of expected updated pods
desiredSurge := release.Spec.ReleasePlan.Batches[currentBatch].CanaryReplicas
// the number of current updated pods
currentSurge := intstr.FromInt(0)
if rc.object.Spec.UpdateStrategy.MaxSurge != nil {
currentSurge = *rc.object.Spec.UpdateStrategy.MaxSurge
if currentSurge == intstr.FromInt(1) {
// currentSurge == intstr.FromInt(1) means that currentSurge still holds the initial value;
// even if the value was indeed set by the user, resetting it to 0 does no harm
currentSurge = intstr.FromInt(0)
}
}
desired, _ := intstr.GetScaledValueFromIntOrPercent(&desiredSurge, int(rc.Replicas), true)
batchContext := &batchcontext.BatchContext{
Pods: rc.pods,
RolloutID: release.Spec.ReleasePlan.RolloutID,
CurrentBatch: currentBatch,
UpdateRevision: release.Status.UpdateRevision,
DesiredSurge: desiredSurge,
CurrentSurge: currentSurge,
// the following fields are used to check whether the batch is ready
Replicas: rc.Replicas,
UpdatedReplicas: rc.Status.UpdatedReplicas,
UpdatedReadyReplicas: rc.Status.UpdatedReadyReplicas,
DesiredUpdatedReplicas: int32(desired),
PlannedUpdatedReplicas: int32(desired),
}
// the number of pods marked before the rollout as not needing an update
// if noNeedUpdate := release.Status.CanaryStatus.NoNeedUpdateReplicas; noNeedUpdate != nil {
// batchContext.FilterFunc = labelpatch.FilterPodsForUnorderedUpdate
// }
return batchContext, nil
}
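
To see how DesiredSurge resolves per batch, here is a short runnable sketch reusing the same intstr helper as CalculateBatchContext above, with the 10% / 50% / 100% plan and 10 replicas that also appear in the test fixtures below:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := 10
	plan := []intstr.IntOrString{
		intstr.FromString("10%"),
		intstr.FromString("50%"),
		intstr.FromString("100%"),
	}
	for i := range plan {
		// round up, as CalculateBatchContext does above
		desired, _ := intstr.GetScaledValueFromIntOrPercent(&plan[i], replicas, true)
		fmt.Printf("batch %d: CanaryReplicas=%s -> DesiredSurge resolves to %d of %d\n",
			i, plan[i].String(), desired, replicas)
	}
}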

View File

@ -0,0 +1,548 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cloneset
import (
"context"
"encoding/json"
"reflect"
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
kruiseappsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
rolloutapi "github.com/openkruise/rollouts/api"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
control "github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/util"
apps "k8s.io/api/apps/v1"
autoscalingv1 "k8s.io/api/autoscaling/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
var (
scheme = runtime.NewScheme()
cloneKey = types.NamespacedName{
Namespace: "default",
Name: "cloneset",
}
cloneDemo = &kruiseappsv1alpha1.CloneSet{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps.kruise.io/v1alpha1",
Kind: "CloneSet",
},
ObjectMeta: metav1.ObjectMeta{
Name: cloneKey.Name,
Namespace: cloneKey.Namespace,
Generation: 1,
Labels: map[string]string{
"app": "busybox",
},
Annotations: map[string]string{
"type": "unit-test",
},
},
Spec: kruiseappsv1alpha1.CloneSetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "busybox",
},
},
Replicas: pointer.Int32(10),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
Paused: true,
Partition: &intstr.IntOrString{Type: intstr.String, StrVal: "0%"},
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": "busybox",
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "busybox",
Image: "busybox:latest",
},
},
},
},
},
Status: kruiseappsv1alpha1.CloneSetStatus{
Replicas: 10,
UpdatedReplicas: 0,
ReadyReplicas: 10,
AvailableReplicas: 10,
UpdatedReadyReplicas: 0,
UpdateRevision: "version-2",
CurrentRevision: "version-1",
ObservedGeneration: 1,
CollisionCount: pointer.Int32(1),
},
}
releaseDemo = &v1beta1.BatchRelease{
TypeMeta: metav1.TypeMeta{
APIVersion: "rollouts.kruise.io/v1alpha1",
Kind: "BatchRelease",
},
ObjectMeta: metav1.ObjectMeta{
Name: "release",
Namespace: cloneKey.Namespace,
UID: uuid.NewUUID(),
},
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("10%"),
},
{
CanaryReplicas: intstr.FromString("50%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
},
},
WorkloadRef: v1beta1.ObjectRef{
APIVersion: cloneDemo.APIVersion,
Kind: cloneDemo.Kind,
Name: cloneDemo.Name,
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 0,
},
},
}
hpaDemo = &autoscalingv1.HorizontalPodAutoscaler{
TypeMeta: metav1.TypeMeta{
APIVersion: "autoscaling/v1",
Kind: "HorizontalPodAutoscaler",
},
ObjectMeta: metav1.ObjectMeta{
Name: "hpa",
Namespace: cloneKey.Namespace,
},
Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
APIVersion: "apps.kruise.io/v1alpha1",
Kind: "CloneSet",
Name: cloneDemo.Name,
},
MinReplicas: pointer.Int32(1),
MaxReplicas: 10,
},
}
)
func init() {
apps.AddToScheme(scheme)
rolloutapi.AddToScheme(scheme)
kruiseappsv1alpha1.AddToScheme(scheme)
autoscalingv1.AddToScheme(scheme)
}
func TestControlPackage(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "CloneSet Control Package Suite")
}
var _ = Describe("CloneSet Control", func() {
var (
c client.Client
rc *realController
cloneset *kruiseappsv1alpha1.CloneSet
release *v1beta1.BatchRelease
hpa *autoscalingv1.HorizontalPodAutoscaler
)
BeforeEach(func() {
cloneset = cloneDemo.DeepCopy()
release = releaseDemo.DeepCopy()
hpa = hpaDemo.DeepCopy()
c = fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(cloneset, release, hpa).
Build()
rc = &realController{
key: types.NamespacedName{Namespace: cloneset.Namespace, Name: cloneset.Name},
client: c,
}
})
It("should initialize cloneset successfully", func() {
// build controller
_, err := rc.BuildController()
Expect(err).NotTo(HaveOccurred())
// call Initialize method
err = retryFunction(3, func() error {
return rc.Initialize(release)
})
Expect(err).NotTo(HaveOccurred())
// inspect if HPA is disabled
disabledHPA := &autoscalingv1.HorizontalPodAutoscaler{}
err = c.Get(context.TODO(), types.NamespacedName{Namespace: hpa.Namespace, Name: hpa.Name}, disabledHPA)
Expect(err).NotTo(HaveOccurred())
Expect(disabledHPA.Spec.ScaleTargetRef.Name).To(Equal(cloneset.Name + "-DisableByRollout"))
// inspect if Cloneset is patched properly
updatedCloneset := &kruiseappsv1alpha1.CloneSet{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(cloneset), updatedCloneset)
Expect(err).NotTo(HaveOccurred())
// inspect if annotations are added
Expect(updatedCloneset.Annotations).To(HaveKey(v1beta1.OriginalDeploymentStrategyAnnotation))
Expect(updatedCloneset.Annotations).To(HaveKey(util.BatchReleaseControlAnnotation))
Expect(updatedCloneset.Annotations[util.BatchReleaseControlAnnotation]).To(Equal(getControlInfo(release)))
// inspect if strategy is updated
Expect(updatedCloneset.Spec.UpdateStrategy.Paused).To(BeFalse())
Expect(updatedCloneset.Spec.UpdateStrategy.MaxSurge.IntVal).To(Equal(int32(1)))
Expect(updatedCloneset.Spec.UpdateStrategy.MaxUnavailable.IntVal).To(Equal(int32(0)))
Expect(updatedCloneset.Spec.MinReadySeconds).To(Equal(int32(v1beta1.MaxReadySeconds)))
})
It("should finalize CloneSet successfully", func() {
// hack to patch cloneset status
cloneset.Status.UpdatedReadyReplicas = 10
err := c.Status().Update(context.TODO(), cloneset)
Expect(err).NotTo(HaveOccurred())
// build controller
rc.object = nil
_, err = rc.BuildController()
Expect(err).NotTo(HaveOccurred())
// call Finalize method
err = retryFunction(3, func() error {
return rc.Finalize(release)
})
Expect(err).NotTo(HaveOccurred())
// inspect if CloneSet is patched properly
updatedCloneset := &kruiseappsv1alpha1.CloneSet{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(cloneset), updatedCloneset)
Expect(err).NotTo(HaveOccurred())
// inspect if annotations are removed
Expect(updatedCloneset.Annotations).NotTo(HaveKey(v1beta1.OriginalDeploymentStrategyAnnotation))
Expect(updatedCloneset.Annotations).NotTo(HaveKey(util.BatchReleaseControlAnnotation))
// inspect if strategy is restored
Expect(updatedCloneset.Spec.UpdateStrategy.MaxSurge).To(BeNil())
Expect(*updatedCloneset.Spec.UpdateStrategy.MaxUnavailable).To(Equal(intstr.IntOrString{Type: intstr.Int, IntVal: 1}))
Expect(updatedCloneset.Spec.MinReadySeconds).To(Equal(int32(0)))
// inspect if HPA is restored
restoredHPA := &autoscalingv1.HorizontalPodAutoscaler{}
err = c.Get(context.TODO(), types.NamespacedName{Namespace: hpa.Namespace, Name: hpa.Name}, restoredHPA)
Expect(err).NotTo(HaveOccurred())
Expect(restoredHPA.Spec.ScaleTargetRef.Name).To(Equal(cloneset.Name))
})
It("should upgradBatch for CloneSet successfully", func() {
// call Initialize method
_, err := rc.BuildController()
Expect(err).NotTo(HaveOccurred())
err = retryFunction(3, func() error {
return rc.Initialize(release)
})
Expect(err).NotTo(HaveOccurred())
// call UpgradeBatch method
rc.object = nil
_, err = rc.BuildController()
Expect(err).NotTo(HaveOccurred())
batchContext, err := rc.CalculateBatchContext(release)
Expect(err).NotTo(HaveOccurred())
err = rc.UpgradeBatch(batchContext)
Expect(err).NotTo(HaveOccurred())
// inspect if CloneSet is patched properly
updatedCloneset := &kruiseappsv1alpha1.CloneSet{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(cloneset), updatedCloneset)
Expect(err).NotTo(HaveOccurred())
Expect(*updatedCloneset.Spec.UpdateStrategy.MaxSurge).To(Equal(intstr.IntOrString{Type: intstr.String, StrVal: "10%"}))
Expect(*updatedCloneset.Spec.UpdateStrategy.MaxUnavailable).To(Equal(intstr.IntOrString{Type: intstr.Int, IntVal: 0}))
})
})
func TestCalculateBatchContext(t *testing.T) {
RegisterFailHandler(Fail)
cases := map[string]struct {
workload func() *kruiseappsv1alpha1.CloneSet
release func() *v1beta1.BatchRelease
result *batchcontext.BatchContext
}{
"normal case batch0": {
workload: func() *kruiseappsv1alpha1.CloneSet {
return &kruiseappsv1alpha1.CloneSet{
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32(10),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
MaxSurge: func() *intstr.IntOrString { p := intstr.FromInt(1); return &p }(),
},
},
Status: kruiseappsv1alpha1.CloneSetStatus{
Replicas: 10,
UpdatedReplicas: 0,
UpdatedReadyReplicas: 0,
AvailableReplicas: 10,
},
}
},
release: func() *v1beta1.BatchRelease {
r := &v1beta1.BatchRelease{
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "50%"}},
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"}},
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"}},
},
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 0,
},
UpdateRevision: "update-version",
},
}
return r
},
result: &batchcontext.BatchContext{
CurrentBatch: 0,
DesiredSurge: intstr.FromString("50%"),
CurrentSurge: intstr.FromInt(0),
Replicas: 10,
UpdatedReplicas: 0,
UpdatedReadyReplicas: 0,
UpdateRevision: "update-version",
PlannedUpdatedReplicas: 5,
DesiredUpdatedReplicas: 5,
},
},
"normal case batch1": {
workload: func() *kruiseappsv1alpha1.CloneSet {
return &kruiseappsv1alpha1.CloneSet{
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32(10),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
MaxSurge: func() *intstr.IntOrString { p := intstr.FromString("50%"); return &p }(),
},
},
Status: kruiseappsv1alpha1.CloneSetStatus{
Replicas: 15,
UpdatedReplicas: 5,
UpdatedReadyReplicas: 5,
AvailableReplicas: 10,
},
}
},
release: func() *v1beta1.BatchRelease {
r := &v1beta1.BatchRelease{
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "50%"}},
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"}},
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"}},
},
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 1,
},
UpdateRevision: "update-version",
},
}
return r
},
result: &batchcontext.BatchContext{
CurrentBatch: 1,
DesiredSurge: intstr.FromString("100%"),
CurrentSurge: intstr.FromString("50%"),
Replicas: 10,
UpdatedReplicas: 5,
UpdatedReadyReplicas: 5,
UpdateRevision: "update-version",
PlannedUpdatedReplicas: 10,
DesiredUpdatedReplicas: 10,
},
},
"normal case batch2": {
workload: func() *kruiseappsv1alpha1.CloneSet {
return &kruiseappsv1alpha1.CloneSet{
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32(10),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
MaxSurge: func() *intstr.IntOrString { p := intstr.FromString("100%"); return &p }(),
},
},
Status: kruiseappsv1alpha1.CloneSetStatus{
Replicas: 20,
UpdatedReplicas: 10,
UpdatedReadyReplicas: 10,
AvailableReplicas: 10,
ReadyReplicas: 20,
},
}
},
release: func() *v1beta1.BatchRelease {
r := &v1beta1.BatchRelease{
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "50%"}},
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"}},
{CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"}},
},
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 2,
},
UpdateRevision: "update-version",
},
}
return r
},
result: &batchcontext.BatchContext{
CurrentBatch: 2,
UpdateRevision: "update-version",
DesiredSurge: intstr.FromString("100%"),
CurrentSurge: intstr.FromString("100%"),
Replicas: 10,
UpdatedReplicas: 10,
UpdatedReadyReplicas: 10,
PlannedUpdatedReplicas: 10,
DesiredUpdatedReplicas: 10,
},
},
}
for name, cs := range cases {
t.Run(name, func(t *testing.T) {
control := realController{
object: cs.workload(),
WorkloadInfo: util.ParseWorkload(cs.workload()),
}
got, err := control.CalculateBatchContext(cs.release())
Expect(err).NotTo(HaveOccurred())
Expect(got.Log()).Should(Equal(cs.result.Log()))
})
}
}
func TestRealController(t *testing.T) {
RegisterFailHandler(Fail)
release := releaseDemo.DeepCopy()
clone := cloneDemo.DeepCopy()
// for unit tests we must set some default values ourselves, since no webhook or controller is running
clone.Spec.UpdateStrategy.Type = kruiseappsv1alpha1.RecreateCloneSetUpdateStrategyType
cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(release, clone).Build()
// build new controller
c := NewController(cli, cloneKey, clone.GroupVersionKind()).(*realController)
controller, err := c.BuildController()
Expect(err).NotTo(HaveOccurred())
// call Initialize
err = controller.Initialize(release)
Expect(err).NotTo(HaveOccurred())
fetch := &kruiseappsv1alpha1.CloneSet{}
Expect(cli.Get(context.TODO(), cloneKey, fetch)).NotTo(HaveOccurred())
// check strategy
Expect(fetch.Spec.UpdateStrategy.Type).Should(Equal(kruiseappsv1alpha1.RecreateCloneSetUpdateStrategyType))
// partition is set to 100% by the webhook, so we cannot observe it in unit tests
// Expect(reflect.DeepEqual(fetch.Spec.UpdateStrategy.Partition, &intstr.IntOrString{Type: intstr.String, StrVal: "100%"})).Should(BeTrue())
Expect(reflect.DeepEqual(fetch.Spec.UpdateStrategy.MaxSurge, &intstr.IntOrString{Type: intstr.Int, IntVal: 1})).Should(BeTrue())
Expect(reflect.DeepEqual(fetch.Spec.UpdateStrategy.MaxUnavailable, &intstr.IntOrString{Type: intstr.Int, IntVal: 0})).Should(BeTrue())
Expect(fetch.Spec.MinReadySeconds).Should(Equal(int32(v1beta1.MaxReadySeconds)))
// check annotations
Expect(fetch.Annotations[util.BatchReleaseControlAnnotation]).Should(Equal(getControlInfo(release)))
Expect(fetch.Annotations[v1beta1.OriginalDeploymentStrategyAnnotation]).Should(Equal(util.DumpJSON(&control.OriginalDeploymentStrategy{
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
MaxSurge: &intstr.IntOrString{Type: intstr.String, StrVal: "0%"},
MinReadySeconds: 0,
})))
c.object = fetch // mock
for {
batchContext, err := controller.CalculateBatchContext(release)
Expect(err).NotTo(HaveOccurred())
err = controller.UpgradeBatch(batchContext)
fetch = &kruiseappsv1alpha1.CloneSet{}
// mock
Expect(cli.Get(context.TODO(), cloneKey, fetch)).NotTo(HaveOccurred())
c.object = fetch
if err == nil {
break
}
}
fetch = &kruiseappsv1alpha1.CloneSet{}
Expect(cli.Get(context.TODO(), cloneKey, fetch)).NotTo(HaveOccurred())
Expect(fetch.Spec.UpdateStrategy.MaxSurge.StrVal).Should(Equal("10%"))
Expect(fetch.Spec.UpdateStrategy.MaxUnavailable.IntVal).Should(Equal(int32(0)))
Expect(fetch.Spec.UpdateStrategy.Paused).Should(Equal(false))
Expect(fetch.Spec.MinReadySeconds).Should(Equal(int32(v1beta1.MaxReadySeconds)))
Expect(fetch.Annotations[v1beta1.OriginalDeploymentStrategyAnnotation]).Should(Equal(util.DumpJSON(&control.OriginalDeploymentStrategy{
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
MaxSurge: &intstr.IntOrString{Type: intstr.String, StrVal: "0%"},
MinReadySeconds: 0,
})))
controller.Finalize(release)
fetch = &kruiseappsv1alpha1.CloneSet{}
Expect(cli.Get(context.TODO(), cloneKey, fetch)).NotTo(HaveOccurred())
Expect(fetch.Spec.UpdateStrategy.MaxSurge.StrVal).Should(Equal("0%"))
Expect(fetch.Spec.UpdateStrategy.MaxUnavailable.IntVal).Should(Equal(int32(1)))
Expect(fetch.Spec.UpdateStrategy.Paused).Should(Equal(false))
Expect(fetch.Spec.MinReadySeconds).Should(Equal(int32(0)))
}
func getControlInfo(release *v1beta1.BatchRelease) string {
owner, _ := json.Marshal(metav1.NewControllerRef(release, release.GetObjectKind().GroupVersionKind()))
return string(owner)
}
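// retryFunction calls f up to limit+1 times and returns the first nil error.
// The controller methods under test are written to be re-entrant across
// reconciles, so a few retries are (presumably) enough for them to converge
// against the fake client.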
func retryFunction(limit int, f func() error) (err error) {
for i := limit; i >= 0; i-- {
if err = f(); err == nil {
return nil
}
}
return err
}

View File

@ -0,0 +1,190 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package bluegreenstyle
import (
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/labelpatch"
"github.com/openkruise/rollouts/pkg/util"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type realBatchControlPlane struct {
Interface
client.Client
record.EventRecorder
patcher labelpatch.LabelPatcher
release *v1beta1.BatchRelease
newStatus *v1beta1.BatchReleaseStatus
}
type NewInterfaceFunc func(cli client.Client, key types.NamespacedName, gvk schema.GroupVersionKind) Interface
// NewControlPlane creates a new release controller with blue-green style to drive batch release state machine
func NewControlPlane(f NewInterfaceFunc, cli client.Client, recorder record.EventRecorder, release *v1beta1.BatchRelease, newStatus *v1beta1.BatchReleaseStatus, key types.NamespacedName, gvk schema.GroupVersionKind) *realBatchControlPlane {
return &realBatchControlPlane{
Client: cli,
EventRecorder: recorder,
newStatus: newStatus,
Interface: f(cli, key, gvk),
release: release.DeepCopy(),
patcher: labelpatch.NewLabelPatcher(cli, klog.KObj(release), release.Spec.ReleasePlan.Batches),
}
}
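// Initialize claims the workload under this BatchRelease's control, then
// snapshots the stable/update revisions and the replica count into newStatus;
// SyncWorkloadInformation later compares against this baseline to detect
// scaling, rollback, and pod-template changes.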
func (rc *realBatchControlPlane) Initialize() error {
controller, err := rc.BuildController()
if err != nil {
return err
}
// claim workload under our control
err = controller.Initialize(rc.release)
if err != nil {
return err
}
// record revision and replicas
workloadInfo := controller.GetWorkloadInfo()
rc.newStatus.StableRevision = workloadInfo.Status.StableRevision
rc.newStatus.UpdateRevision = workloadInfo.Status.UpdateRevision
rc.newStatus.ObservedWorkloadReplicas = workloadInfo.Replicas
return err
}
func (rc *realBatchControlPlane) patchPodLabels(batchContext *context.BatchContext) error {
pods, err := rc.ListOwnedPods() // add pods to rc for patching pod batch labels
if err != nil {
return err
}
batchContext.Pods = pods
return rc.patcher.PatchPodBatchLabel(batchContext)
}
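// UpgradeBatch recomputes the batch context from the release plan and asks the
// workload-specific controller to enlarge the current batch; it is a no-op for
// workloads with zero replicas.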
func (rc *realBatchControlPlane) UpgradeBatch() error {
controller, err := rc.BuildController()
if err != nil {
return err
}
if controller.GetWorkloadInfo().Replicas == 0 {
return nil
}
batchContext, err := controller.CalculateBatchContext(rc.release)
if err != nil {
return err
}
klog.Infof("BatchRelease %v calculated context when upgrade batch: %s",
klog.KObj(rc.release), batchContext.Log())
err = controller.UpgradeBatch(batchContext)
if err != nil {
return err
}
return nil
}
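// EnsureBatchPodsReadyAndLabeled recomputes the same batch context used by
// UpgradeBatch, patches batch labels onto the owned Pods (a patch failure is
// only logged, not fatal), and then reports whether the batch is ready.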
func (rc *realBatchControlPlane) EnsureBatchPodsReadyAndLabeled() error {
controller, err := rc.BuildController()
if err != nil {
return err
}
if controller.GetWorkloadInfo().Replicas == 0 {
return nil
}
// do not countAndUpdateNoNeedUpdateReplicas when checking;
// the calculated target should stay consistent with UpgradeBatch.
batchContext, err := controller.CalculateBatchContext(rc.release)
if err != nil {
return err
}
klog.Infof("BatchRelease %v calculated context when check batch ready: %s",
klog.KObj(rc.release), batchContext.Log())
if err = rc.patchPodLabels(batchContext); err != nil {
klog.ErrorS(err, "failed to patch pod labels", "release", klog.KObj(rc.release))
}
return batchContext.IsBatchReady()
}
func (rc *realBatchControlPlane) Finalize() error {
controller, err := rc.BuildController()
if err != nil {
return client.IgnoreNotFound(err)
}
// release the workload control info and clean up resources if needed
return controller.Finalize(rc.release)
}
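// SyncWorkloadInformation classifies the current workload event for the batch
// state machine. The checks are ordered by priority: still reconciling >
// already promoted > replicas scaled > rollback detected > pod template
// changed > normal.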
func (rc *realBatchControlPlane) SyncWorkloadInformation() (control.WorkloadEventType, *util.WorkloadInfo, error) {
// ignore the sync if the release plan is deleted
if rc.release.DeletionTimestamp != nil {
return control.WorkloadNormalState, nil, nil
}
controller, err := rc.BuildController()
if err != nil {
if errors.IsNotFound(err) {
return control.WorkloadHasGone, nil, err
}
return control.WorkloadUnknownState, nil, err
}
workloadInfo := controller.GetWorkloadInfo()
if !workloadInfo.IsStable() {
klog.Infof("Workload(%v) still reconciling, waiting for it to complete, generation: %v, observed: %v",
workloadInfo.LogKey, workloadInfo.Generation, workloadInfo.Status.ObservedGeneration)
return control.WorkloadStillReconciling, workloadInfo, nil
}
if workloadInfo.IsPromoted() {
klog.Infof("Workload(%v) has been promoted, no need to rollout again actually, replicas: %v, updated: %v",
workloadInfo.LogKey, workloadInfo.Replicas, workloadInfo.Status.UpdatedReadyReplicas)
return control.WorkloadNormalState, workloadInfo, nil
}
if workloadInfo.IsScaling(rc.newStatus.ObservedWorkloadReplicas) {
klog.Warningf("Workload(%v) replicas is modified, replicas from: %v to -> %v",
workloadInfo.LogKey, rc.newStatus.ObservedWorkloadReplicas, workloadInfo.Replicas)
return control.WorkloadReplicasChanged, workloadInfo, nil
}
if workloadInfo.IsRollback(rc.newStatus.StableRevision, rc.newStatus.UpdateRevision) {
klog.Warningf("Workload(%v) is rolling back", workloadInfo.LogKey)
return control.WorkloadRollbackInBatch, workloadInfo, nil
}
if workloadInfo.IsRevisionNotEqual(rc.newStatus.UpdateRevision) {
klog.Warningf("Workload(%v) updateRevision is modified, updateRevision from: %v to -> %v",
workloadInfo.LogKey, rc.newStatus.UpdateRevision, workloadInfo.Status.UpdateRevision)
return control.WorkloadPodTemplateChanged, workloadInfo, nil
}
return control.WorkloadNormalState, workloadInfo, nil
}

View File

@ -0,0 +1,336 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package deployment
import (
"context"
"fmt"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/bluegreenstyle/hpa"
deploymentutil "github.com/openkruise/rollouts/pkg/controller/deployment/util"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/errors"
"github.com/openkruise/rollouts/pkg/util/patch"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
utilpointer "k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type realController struct {
*util.WorkloadInfo
client client.Client
pods []*corev1.Pod
key types.NamespacedName
object *apps.Deployment
finder *util.ControllerFinder
}
func NewController(cli client.Client, key types.NamespacedName, _ schema.GroupVersionKind) bluegreenstyle.Interface {
return &realController{
key: key,
client: cli,
finder: util.NewControllerFinder(cli),
}
}
func (rc *realController) GetWorkloadInfo() *util.WorkloadInfo {
return rc.WorkloadInfo
}
func (rc *realController) BuildController() (bluegreenstyle.Interface, error) {
if rc.object != nil {
return rc, nil
}
object := &apps.Deployment{}
if err := rc.client.Get(context.TODO(), rc.key, object); err != nil {
return rc, err
}
rc.object = object
rc.WorkloadInfo = rc.getWorkloadInfo(object)
return rc, nil
}
func (rc *realController) ListOwnedPods() ([]*corev1.Pod, error) {
if rc.pods != nil {
return rc.pods, nil
}
var err error
rc.pods, err = util.ListOwnedPods(rc.client, rc.object)
return rc.pods, err
}
// Initialize prepares the Deployment for the BatchRelease process
func (rc *realController) Initialize(release *v1beta1.BatchRelease) error {
if rc.object == nil || control.IsControlledByBatchRelease(release, rc.object) {
return nil
}
// Disable the HPA
if err := hpa.DisableHPA(rc.client, rc.object); err != nil {
return err
}
klog.InfoS("Initialize: disabled HPA for deployment successfully", "deployment", klog.KObj(rc.object))
// Patch minReadySeconds for stable ReplicaSet
if err := rc.patchStableRSMinReadySeconds(v1beta1.MaxReadySeconds); err != nil {
return err
}
klog.InfoS("Initialize: patched minReadySeconds for stable replicaset successfully", "deployment", klog.KObj(rc.object))
// Patch the Deployment
if err := rc.patchDeployment(release); err != nil {
return err
}
klog.InfoS("Initialize: patched deployment successfully", "deployment", klog.KObj(rc.object))
return nil
}
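// UpgradeBatch raises maxSurge toward the batch target. A worked example
// (sketch): with Replicas=10, CurrentSurge="10%" and DesiredSurge="50%",
// intstr.GetScaledValueFromIntOrPercent(..., roundUp=true) yields current=1
// and desired=5, so the strategy below is patched to maxSurge=50% /
// maxUnavailable=0 and the Deployment controller surges roughly four more
// updated Pods.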
func (rc *realController) UpgradeBatch(ctx *batchcontext.BatchContext) error {
if err := control.ValidateReadyForBlueGreenRelease(rc.object); err != nil {
return errors.NewBadRequestError(fmt.Errorf("cannot upgrade batch, because deployment %v doesn't satisfy conditions: %s", klog.KObj(rc.object), err.Error()))
}
desired, _ := intstr.GetScaledValueFromIntOrPercent(&ctx.DesiredSurge, int(ctx.Replicas), true)
current, _ := intstr.GetScaledValueFromIntOrPercent(&ctx.CurrentSurge, int(ctx.Replicas), true)
if current >= desired {
klog.Infof("No need to upgrade batch for deployment %v: because current %d >= desired %d", klog.KObj(rc.object), current, desired)
return nil
}
klog.Infof("Ready to upgrade batch for deployment %v: current %d < desired %d", klog.KObj(rc.object), current, desired)
patchData := patch.NewDeploymentPatch()
// unlike canary release, blue-green does not need to pause the Deployment during the rollout;
// because our webhook may pause the Deployment in some situations, we explicitly ensure it is not paused
patchData.UpdatePaused(false)
patchData.UpdateStrategy(apps.DeploymentStrategy{
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: &apps.RollingUpdateDeployment{
MaxSurge: &ctx.DesiredSurge,
MaxUnavailable: &intstr.IntOrString{},
},
})
return rc.client.Patch(context.TODO(), util.GetEmptyObjectWithKey(rc.object), patchData)
}
// set pause to false, restore the original setting, delete annotation
func (rc *realController) Finalize(release *v1beta1.BatchRelease) error {
if release.Spec.ReleasePlan.BatchPartition != nil {
// continuous release (not supported yet)
/*
patchData := patch.NewDeploymentPatch()
patchData.DeleteAnnotation(util.BatchReleaseControlAnnotation)
if err := rc.client.Patch(context.TODO(), d, patchData); err != nil {
return err
}
*/
klog.Warningf("continuous release is not supported yet for bluegreen style release")
return nil
}
// restore the original setting and remove annotation
d := util.GetEmptyObjectWithKey(rc.object)
if !rc.restored() {
setting, err := control.GetOriginalSetting(rc.object)
if err != nil {
return err
}
patchData := patch.NewDeploymentPatch()
// restore the original setting
patchData.UpdatePaused(false)
patchData.UpdateMinReadySeconds(setting.MinReadySeconds)
patchData.UpdateProgressDeadlineSeconds(setting.ProgressDeadlineSeconds)
patchData.UpdateMaxSurge(setting.MaxSurge)
patchData.UpdateMaxUnavailable(setting.MaxUnavailable)
// restore label and annotation
patchData.DeleteAnnotation(v1beta1.OriginalDeploymentStrategyAnnotation)
patchData.DeleteLabel(v1alpha1.DeploymentStableRevisionLabel)
patchData.DeleteAnnotation(util.BatchReleaseControlAnnotation)
if err := rc.client.Patch(context.TODO(), d, patchData); err != nil {
return err
}
klog.InfoS("Finalize: deployment bluegreen release: wait all pods updated and ready", "Deployment", klog.KObj(rc.object))
}
// wait all pods updated and ready
if err := waitAllUpdatedAndReady(d.(*apps.Deployment)); err != nil {
return errors.NewRetryError(err)
}
klog.InfoS("Finalize: All pods updated and ready, then restore hpa", "Deployment", klog.KObj(rc.object))
// restore hpa
return hpa.RestoreHPA(rc.client, rc.object)
}
func (rc *realController) restored() bool {
if rc.object == nil || rc.object.DeletionTimestamp != nil {
return true
}
if rc.object.Annotations == nil || len(rc.object.Annotations[v1beta1.OriginalDeploymentStrategyAnnotation]) == 0 {
return true
}
return false
}
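// CalculateBatchContext derives the context for the current batch. In this
// blue-green flavor, maxSurge plays the role of the partition: batches
// monotonically raise maxSurge (with maxUnavailable pinned to 0), so updated
// Pods are created alongside the stable ones instead of replacing them.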
func (rc *realController) CalculateBatchContext(release *v1beta1.BatchRelease) (*batchcontext.BatchContext, error) {
currentBatch := release.Status.CanaryStatus.CurrentBatch
desiredSurge := release.Spec.ReleasePlan.Batches[currentBatch].CanaryReplicas
PlannedUpdatedReplicas := deploymentutil.NewRSReplicasLimit(desiredSurge, rc.object)
currentSurge := intstr.FromInt(0)
if rc.object.Spec.Strategy.RollingUpdate != nil && rc.object.Spec.Strategy.RollingUpdate.MaxSurge != nil {
currentSurge = *rc.object.Spec.Strategy.RollingUpdate.MaxSurge
if currentSurge == intstr.FromInt(1) {
// maxSurge of 1 is the initial value patched in Initialize, so treat it as "no surge yet";
// even if the user really did set it to 1, resetting it to 0 here does no harm
currentSurge = intstr.FromInt(0)
}
}
return &batchcontext.BatchContext{
Pods: rc.pods,
RolloutID: release.Spec.ReleasePlan.RolloutID,
CurrentBatch: currentBatch,
CurrentSurge: currentSurge,
DesiredSurge: desiredSurge,
UpdateRevision: release.Status.UpdateRevision,
Replicas: rc.Replicas,
UpdatedReplicas: rc.Status.UpdatedReplicas,
UpdatedReadyReplicas: rc.Status.UpdatedReadyReplicas,
PlannedUpdatedReplicas: PlannedUpdatedReplicas,
DesiredUpdatedReplicas: PlannedUpdatedReplicas,
}, nil
}
func (rc *realController) getWorkloadInfo(d *apps.Deployment) *util.WorkloadInfo {
workloadInfo := util.ParseWorkload(d)
workloadInfo.Status.UpdatedReadyReplicas = 0
if res, err := rc.getUpdatedReadyReplicas(d); err == nil {
workloadInfo.Status.UpdatedReadyReplicas = res
}
workloadInfo.Status.StableRevision = d.Labels[v1alpha1.DeploymentStableRevisionLabel]
return workloadInfo
}
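// getUpdatedReadyReplicas computes the ready replicas of the updated version.
// apps/v1 DeploymentStatus has no updatedReadyReplicas field, so we list the
// ReplicaSets owned by the Deployment, pick the newest one, and read its
// status.readyReplicas instead.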
func (rc *realController) getUpdatedReadyReplicas(d *apps.Deployment) (int32, error) {
rss := &apps.ReplicaSetList{}
listOpts := []client.ListOption{
client.InNamespace(d.Namespace),
client.MatchingLabels(d.Spec.Selector.MatchLabels),
client.UnsafeDisableDeepCopy,
}
if err := rc.client.List(context.TODO(), rss, listOpts...); err != nil {
klog.Warningf("getWorkloadInfo failed, because"+"%s", err.Error())
return -1, err
}
allRSs := rss.Items
// select ReplicaSets owned by the current Deployment
ownedRSs := make([]*apps.ReplicaSet, 0)
for i := range allRSs {
rs := &allRSs[i]
if !rs.DeletionTimestamp.IsZero() {
continue
}
if metav1.IsControlledBy(rs, d) {
ownedRSs = append(ownedRSs, rs)
}
}
newRS := deploymentutil.FindNewReplicaSet(d, ownedRSs)
updatedReadyReplicas := int32(0)
// if newRS is nil, the updated ReplicaSet hasn't been created yet (e.g. because the deployment is paused),
// therefore we can return 0 directly
if newRS != nil {
updatedReadyReplicas = newRS.Status.ReadyReplicas
}
return updatedReadyReplicas, nil
}
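// waitAllUpdatedAndReady checks that every Pod is updated and enough of them
// are available. A numeric sketch, assuming util.DeploymentMaxUnavailable
// rounds the percentage down like the upstream deployment util: with
// status.replicas=10 and maxUnavailable=25%, allowedUnavailable=2, so the
// check below requires availableReplicas >= 8.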
func waitAllUpdatedAndReady(deployment *apps.Deployment) error {
if deployment.Spec.Paused {
return fmt.Errorf("deployment should not be paused")
}
// ALL pods updated AND ready
if deployment.Status.ReadyReplicas != deployment.Status.UpdatedReplicas {
return fmt.Errorf("all ready replicas should be updated, and all updated replicas should be ready")
}
availableReplicas := deployment.Status.AvailableReplicas
allowedUnavailable := util.DeploymentMaxUnavailable(deployment)
if allowedUnavailable+availableReplicas < deployment.Status.Replicas {
return fmt.Errorf("ready replicas should satisfy maxUnavailable")
}
return nil
}
// Patch minReadySeconds for stable ReplicaSet
/*
Here is why: for the rollback scenario, we set the stable ReplicaSet's minReadySeconds to a huge value
to make its Pods count as unavailable; otherwise, Pods of the new version would be terminated immediately
when a rollback happens. We want to keep them until traffic has been switched back to the stable version.
*/
func (rc *realController) patchStableRSMinReadySeconds(seconds int32) error {
if stableRS, err := rc.finder.GetDeploymentStableRs(rc.object); err != nil {
return fmt.Errorf("failed to get stable ReplicaSet: %v", err)
} else if stableRS == nil {
klog.Warningf("No stable ReplicaSet found for deployment %s/%s", rc.object.Namespace, rc.object.Name)
} else {
body := fmt.Sprintf(`{"spec":{"minReadySeconds":%v}}`, seconds)
if err = rc.client.Patch(context.TODO(), stableRS, client.RawPatch(types.MergePatchType, []byte(body))); err != nil {
return fmt.Errorf("failed to patch ReplicaSet %s/%s minReadySeconds to %v: %v", stableRS.Namespace, stableRS.Name, v1beta1.MaxReadySeconds, err)
}
}
return nil
}
// Update deployment strategy: MinReadySeconds, ProgressDeadlineSeconds, MaxSurge, MaxUnavailable
func (rc *realController) patchDeployment(release *v1beta1.BatchRelease) error {
setting, err := control.GetOriginalSetting(rc.object)
if err != nil {
return errors.NewBadRequestError(fmt.Errorf("cannot get original setting for deployment %v: %s", klog.KObj(rc.object), err.Error()))
}
control.InitOriginalSetting(&setting, rc.object)
patchData := patch.NewDeploymentPatch()
patchData.InsertAnnotation(v1beta1.OriginalDeploymentStrategyAnnotation, util.DumpJSON(&setting))
patchData.InsertAnnotation(util.BatchReleaseControlAnnotation, util.DumpJSON(metav1.NewControllerRef(
release, release.GetObjectKind().GroupVersionKind())))
patchData.UpdateStrategy(apps.DeploymentStrategy{
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: &apps.RollingUpdateDeployment{
MaxSurge: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 0},
},
})
patchData.UpdateMinReadySeconds(v1beta1.MaxReadySeconds)
patchData.UpdateProgressDeadlineSeconds(utilpointer.Int32(v1beta1.MaxProgressSeconds))
// Apply the patch to the Deployment
if err := rc.client.Patch(context.TODO(), util.GetEmptyObjectWithKey(rc.object), patchData); err != nil {
return fmt.Errorf("failed to patch deployment %v: %v", klog.KObj(rc.object), err)
}
return nil
}

View File

@ -0,0 +1,728 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package deployment
import (
"context"
"encoding/json"
"fmt"
"reflect"
"strconv"
"testing"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
kruiseappsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
rolloutapi "github.com/openkruise/rollouts/api"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
control "github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/errors"
apps "k8s.io/api/apps/v1"
autoscalingv1 "k8s.io/api/autoscaling/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/util/rand"
"k8s.io/apimachinery/pkg/util/uuid"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
var (
scheme = runtime.NewScheme()
deploymentKey = types.NamespacedName{
Name: "deployment",
Namespace: "default",
}
deploymentDemo = &apps.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: deploymentKey.Name,
Namespace: deploymentKey.Namespace,
Generation: 1,
Labels: map[string]string{
"app": "busybox",
},
Annotations: map[string]string{
"type": "unit-test",
},
},
Spec: apps.DeploymentSpec{
Paused: true,
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "busybox",
},
},
Replicas: pointer.Int32(10),
Strategy: apps.DeploymentStrategy{
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: &apps.RollingUpdateDeployment{
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
MaxSurge: &intstr.IntOrString{Type: intstr.String, StrVal: "20%"},
},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": "busybox",
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "busybox",
Image: "busybox:latest",
},
},
},
},
},
Status: apps.DeploymentStatus{
Replicas: 10,
UpdatedReplicas: 0,
ReadyReplicas: 10,
AvailableReplicas: 10,
CollisionCount: pointer.Int32(1),
ObservedGeneration: 1,
},
}
deploymentDemo2 = &apps.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: apps.SchemeGroupVersion.String(),
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: "deployment",
Namespace: "default",
UID: types.UID("87076677"),
Generation: 2,
Labels: map[string]string{
"app": "busybox",
apps.DefaultDeploymentUniqueLabelKey: "update-pod-hash",
},
},
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32(10),
Strategy: apps.DeploymentStrategy{
Type: apps.RollingUpdateDeploymentStrategyType,
RollingUpdate: &apps.RollingUpdateDeployment{
MaxSurge: &intstr.IntOrString{Type: intstr.Int, IntVal: int32(1)},
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: int32(0)},
},
},
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "busybox",
},
},
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: containers("v2"),
},
},
},
Status: apps.DeploymentStatus{
Replicas: 10,
ReadyReplicas: 10,
UpdatedReplicas: 0,
AvailableReplicas: 10,
},
}
releaseDemo = &v1beta1.BatchRelease{
TypeMeta: metav1.TypeMeta{
APIVersion: "rollouts.kruise.io/v1alpha1",
Kind: "BatchRelease",
},
ObjectMeta: metav1.ObjectMeta{
Name: "release",
Namespace: deploymentKey.Namespace,
UID: uuid.NewUUID(),
},
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
FinalizingPolicy: v1beta1.WaitResumeFinalizingPolicyType,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("10%"),
},
{
CanaryReplicas: intstr.FromString("50%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
},
},
WorkloadRef: v1beta1.ObjectRef{
APIVersion: deploymentDemo.APIVersion,
Kind: deploymentDemo.Kind,
Name: deploymentDemo.Name,
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 1,
},
},
}
hpaDemo = &autoscalingv1.HorizontalPodAutoscaler{
TypeMeta: metav1.TypeMeta{
APIVersion: "autoscaling/v1",
Kind: "HorizontalPodAutoscaler",
},
ObjectMeta: metav1.ObjectMeta{
Name: "hpa",
Namespace: deploymentKey.Namespace,
},
Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
APIVersion: apps.SchemeGroupVersion.String(),
Kind: "Deployment",
Name: deploymentDemo.Name,
},
MinReplicas: pointer.Int32(1),
MaxReplicas: 10,
},
}
)
func init() {
apps.AddToScheme(scheme)
rolloutapi.AddToScheme(scheme)
kruiseappsv1alpha1.AddToScheme(scheme)
autoscalingv1.AddToScheme(scheme)
}
func TestControlPackage(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Deployment Control Package Suite")
}
var _ = Describe("Deployment Control", func() {
var (
c client.Client
rc *realController
deployment *apps.Deployment
release *v1beta1.BatchRelease
hpa *autoscalingv1.HorizontalPodAutoscaler
stableRS *apps.ReplicaSet
canaryRS *apps.ReplicaSet
)
BeforeEach(func() {
deployment = deploymentDemo.DeepCopy()
release = releaseDemo.DeepCopy()
hpa = hpaDemo.DeepCopy()
deployment = getStableWithReady(deployment, "v1").(*apps.Deployment)
stableRS = makeStableReplicaSets(deployment).(*apps.ReplicaSet)
stableRS.Spec.MinReadySeconds = 0
stableRS.Status.ReadyReplicas = *deployment.Spec.Replicas
stableRS.Status.AvailableReplicas = *deployment.Spec.Replicas
canaryRS = makeCanaryReplicaSets(deployment).(*apps.ReplicaSet)
canaryRS.Status.ReadyReplicas = 0
canaryRS.Status.AvailableReplicas = 0
c = fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(deployment, release, hpa, stableRS, canaryRS).
Build()
rc = &realController{
key: types.NamespacedName{Namespace: deployment.Namespace, Name: deployment.Name},
client: c,
finder: util.NewControllerFinder(c),
}
})
It("should initialize Deployment successfully", func() {
// build controller
_, err := rc.BuildController()
Expect(err).NotTo(HaveOccurred())
// call Initialize method
err = retryFunction(3, func() error {
return rc.Initialize(release)
})
Expect(err).NotTo(HaveOccurred())
// inspect if HPA is disabled
disabledHPA := &autoscalingv1.HorizontalPodAutoscaler{}
err = c.Get(context.TODO(), types.NamespacedName{Namespace: hpa.Namespace, Name: hpa.Name}, disabledHPA)
Expect(err).NotTo(HaveOccurred())
Expect(disabledHPA.Spec.ScaleTargetRef.Name).To(Equal(deployment.Name + "-DisableByRollout"))
// inspect if MinReadySeconds of stable ReplicaSet is updated
stableRSAfter := &apps.ReplicaSet{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(stableRS), stableRSAfter)
Expect(err).NotTo(HaveOccurred())
Expect(stableRSAfter.Spec.MinReadySeconds).To(Equal(int32(v1beta1.MaxReadySeconds)))
// inspect if Deployment is patched properly
updatedDeployment := &apps.Deployment{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(deployment), updatedDeployment)
Expect(err).NotTo(HaveOccurred())
// inspect if annotations are added
Expect(updatedDeployment.Annotations).To(HaveKey(v1beta1.OriginalDeploymentStrategyAnnotation))
Expect(updatedDeployment.Annotations).To(HaveKey(util.BatchReleaseControlAnnotation))
Expect(updatedDeployment.Annotations[util.BatchReleaseControlAnnotation]).To(Equal(getControlInfo(release)))
// inspect if strategy is updated
Expect(updatedDeployment.Spec.Strategy.RollingUpdate).NotTo(BeNil())
Expect(updatedDeployment.Spec.Strategy.RollingUpdate.MaxSurge.IntVal).To(Equal(int32(1)))
Expect(updatedDeployment.Spec.Strategy.RollingUpdate.MaxUnavailable.IntVal).To(Equal(int32(0)))
Expect(updatedDeployment.Spec.MinReadySeconds).To(Equal(int32(v1beta1.MaxReadySeconds)))
Expect(*updatedDeployment.Spec.ProgressDeadlineSeconds).To(Equal(int32(v1beta1.MaxProgressSeconds)))
})
It("should finalize Deployment successfully", func() {
// build controller
rc.object = nil
_, err := rc.BuildController()
Expect(err).NotTo(HaveOccurred())
// call Finalize method
err = retryFunction(3, func() error {
return rc.Finalize(release)
})
Expect(err).NotTo(HaveOccurred())
// inspect if Deployment is patched properly
updatedDeployment := &apps.Deployment{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(deployment), updatedDeployment)
Expect(err).NotTo(HaveOccurred())
// inspect if annotations are removed
Expect(updatedDeployment.Annotations).NotTo(HaveKey(v1beta1.OriginalDeploymentStrategyAnnotation))
Expect(updatedDeployment.Annotations).NotTo(HaveKey(util.BatchReleaseControlAnnotation))
// inspect if strategy is restored
Expect(updatedDeployment.Spec.Strategy.RollingUpdate).NotTo(BeNil())
Expect(*updatedDeployment.Spec.Strategy.RollingUpdate.MaxSurge).To(Equal(intstr.IntOrString{Type: intstr.String, StrVal: "20%"}))
Expect(*updatedDeployment.Spec.Strategy.RollingUpdate.MaxUnavailable).To(Equal(intstr.IntOrString{Type: intstr.Int, IntVal: 1}))
Expect(updatedDeployment.Spec.MinReadySeconds).To(Equal(int32(0)))
Expect(updatedDeployment.Spec.ProgressDeadlineSeconds).To(BeNil())
// inspect if HPA is restored
restoredHPA := &autoscalingv1.HorizontalPodAutoscaler{}
err = c.Get(context.TODO(), types.NamespacedName{Namespace: hpa.Namespace, Name: hpa.Name}, restoredHPA)
Expect(err).NotTo(HaveOccurred())
Expect(restoredHPA.Spec.ScaleTargetRef.Name).To(Equal(deployment.Name))
// inspect if MinReadySeconds of stable ReplicaSet is restored
stableRSAfter := &apps.ReplicaSet{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(stableRS), stableRSAfter)
Expect(err).NotTo(HaveOccurred())
Expect(stableRSAfter.Spec.MinReadySeconds).To(Equal(int32(0)))
})
It("should upgradBatch for Deployment successfully", func() {
// call Initialize method
_, err := rc.BuildController()
Expect(err).NotTo(HaveOccurred())
err = retryFunction(3, func() error {
return rc.Initialize(release)
})
Expect(err).NotTo(HaveOccurred())
// call UpgradeBatch method
rc.object = nil
_, err = rc.BuildController()
Expect(err).NotTo(HaveOccurred())
batchContext, err := rc.CalculateBatchContext(release)
Expect(err).NotTo(HaveOccurred())
err = rc.UpgradeBatch(batchContext)
Expect(err).NotTo(HaveOccurred())
// inspect if Deployment is patched properly
updatedDeployment := &apps.Deployment{}
err = c.Get(context.TODO(), client.ObjectKeyFromObject(deployment), updatedDeployment)
Expect(err).NotTo(HaveOccurred())
Expect(updatedDeployment.Spec.Paused).To(BeFalse())
Expect(*updatedDeployment.Spec.Strategy.RollingUpdate.MaxSurge).To(Equal(intstr.IntOrString{Type: intstr.String, StrVal: "50%"}))
Expect(*updatedDeployment.Spec.Strategy.RollingUpdate.MaxUnavailable).To(Equal(intstr.IntOrString{Type: intstr.Int, IntVal: 0}))
})
})
func TestCalculateBatchContext(t *testing.T) {
RegisterFailHandler(Fail)
cases := map[string]struct {
workload func() []client.Object
release func() *v1beta1.BatchRelease
result *batchcontext.BatchContext
}{
"noraml case": {
workload: func() []client.Object {
deployment := getStableWithReady(deploymentDemo2, "v2").(*apps.Deployment)
deployment.Status = apps.DeploymentStatus{
Replicas: 15,
UpdatedReplicas: 5,
AvailableReplicas: 12,
ReadyReplicas: 12,
}
// current partition, i.e. maxSurge
deployment.Spec.Strategy.RollingUpdate.MaxSurge = &intstr.IntOrString{Type: intstr.String, StrVal: "50%"}
deployment.Spec.Replicas = pointer.Int32(10)
newRss := makeCanaryReplicaSets(deployment).(*apps.ReplicaSet)
newRss.Status.ReadyReplicas = 2
return []client.Object{deployment, newRss, makeStableReplicaSets(deployment)}
},
release: func() *v1beta1.BatchRelease {
r := &v1beta1.BatchRelease{
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
FinalizingPolicy: v1beta1.WaitResumeFinalizingPolicyType,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "50%"},
},
{
CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "100%"},
},
},
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 1,
},
UpdateRevision: "version-2",
},
}
return r
},
result: &batchcontext.BatchContext{
CurrentBatch: 1,
UpdateRevision: "version-2",
DesiredSurge: intstr.IntOrString{Type: intstr.String, StrVal: "100%"},
CurrentSurge: intstr.IntOrString{Type: intstr.String, StrVal: "50%"},
Replicas: 10,
UpdatedReplicas: 5,
UpdatedReadyReplicas: 2,
PlannedUpdatedReplicas: 10,
DesiredUpdatedReplicas: 10,
},
},
"maxSurge=99%, replicas=5": {
workload: func() []client.Object {
deployment := getStableWithReady(deploymentDemo2, "v2").(*apps.Deployment)
deployment.Status = apps.DeploymentStatus{
Replicas: 9,
UpdatedReplicas: 4,
AvailableReplicas: 9,
ReadyReplicas: 9,
}
deployment.Spec.Replicas = pointer.Int32(5)
// current partition, i.e. maxSurge
deployment.Spec.Strategy.RollingUpdate.MaxSurge = &intstr.IntOrString{Type: intstr.String, StrVal: "90%"}
newRss := makeCanaryReplicaSets(deployment).(*apps.ReplicaSet)
newRss.Status.ReadyReplicas = 4
return []client.Object{deployment, newRss, makeStableReplicaSets(deployment)}
},
release: func() *v1beta1.BatchRelease {
r := &v1beta1.BatchRelease{
Spec: v1beta1.BatchReleaseSpec{
ReleasePlan: v1beta1.ReleasePlan{
FinalizingPolicy: v1beta1.WaitResumeFinalizingPolicyType,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "90%"},
},
{
CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "99%"},
},
},
},
},
Status: v1beta1.BatchReleaseStatus{
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatch: 1,
},
UpdateRevision: "version-2",
},
}
return r
},
result: &batchcontext.BatchContext{
CurrentBatch: 1,
UpdateRevision: "version-2",
DesiredSurge: intstr.FromString("99%"),
CurrentSurge: intstr.FromString("90%"),
Replicas: 5,
UpdatedReplicas: 4,
UpdatedReadyReplicas: 4,
PlannedUpdatedReplicas: 4,
DesiredUpdatedReplicas: 4,
},
},
// test case for continuous release
// "maxSurge=100%, but it is initialized value": {
// workload: func() []client.Object {
// deployment := getStableWithReady(deploymentDemo2, "v2").(*apps.Deployment)
// deployment.Status = apps.DeploymentStatus{
// Replicas: 10,
// UpdatedReplicas: 0,
// AvailableReplicas: 10,
// ReadyReplicas: 10,
// }
// // current partition, ie. maxSurge
// deployment.Spec.Strategy.RollingUpdate.MaxSurge = &intstr.IntOrString{Type: intstr.String, StrVal: "100%"}
// newRss := makeCanaryReplicaSets(deployment).(*apps.ReplicaSet)
// newRss.Status.ReadyReplicas = 0
// return []client.Object{deployment, newRss, makeStableReplicaSets(deployment)}
// },
// release: func() *v1beta1.BatchRelease {
// r := &v1beta1.BatchRelease{
// Spec: v1beta1.BatchReleaseSpec{
// ReleasePlan: v1beta1.ReleasePlan{
// FailureThreshold: &percent,
// FinalizingPolicy: v1beta1.WaitResumeFinalizingPolicyType,
// Batches: []v1beta1.ReleaseBatch{
// {
// CanaryReplicas: intstr.IntOrString{Type: intstr.String, StrVal: "50%"},
// },
// },
// },
// },
// Status: v1beta1.BatchReleaseStatus{
// CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
// CurrentBatch: 0,
// },
// UpdateRevision: "version-2",
// },
// }
// return r
// },
// result: &batchcontext.BatchContext{
// CurrentBatch: 0,
// UpdateRevision: "version-2",
// DesiredPartition: intstr.FromString("50%"),
// FailureThreshold: &percent,
// CurrentPartition: intstr.FromString("0%"), // mainly check this
// Replicas: 10,
// UpdatedReplicas: 0,
// UpdatedReadyReplicas: 0,
// PlannedUpdatedReplicas: 5,
// DesiredUpdatedReplicas: 5,
// },
// },
}
for name, cs := range cases {
t.Run(name, func(t *testing.T) {
cliBuilder := fake.NewClientBuilder().WithScheme(scheme).WithObjects(cs.workload()...)
cli := cliBuilder.Build()
control := realController{
client: cli,
key: deploymentKey,
}
_, err := control.BuildController()
Expect(err).NotTo(HaveOccurred())
got, err := control.CalculateBatchContext(cs.release())
Expect(err).NotTo(HaveOccurred())
fmt.Printf("expect %s, but got %s", cs.result.Log(), got.Log())
Expect(got.Log()).Should(Equal(cs.result.Log()))
})
}
}
func TestRealController(t *testing.T) {
RegisterFailHandler(Fail)
release := releaseDemo.DeepCopy()
clone := deploymentDemo.DeepCopy()
stableRs, canaryRs := makeStableReplicaSets(clone), makeCanaryReplicaSets(clone)
cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(release, clone, stableRs, canaryRs).Build()
// build new controller
c := NewController(cli, deploymentKey, clone.GroupVersionKind()).(*realController)
controller, err := c.BuildController()
Expect(err).NotTo(HaveOccurred())
// call Initialize
err = controller.Initialize(release)
Expect(err).NotTo(HaveOccurred())
fetch := &apps.Deployment{}
Expect(cli.Get(context.TODO(), deploymentKey, fetch)).NotTo(HaveOccurred())
// check strategy
Expect(fetch.Spec.Paused).Should(BeTrue())
Expect(fetch.Spec.Strategy.Type).Should(Equal(apps.RollingUpdateDeploymentStrategyType))
Expect(reflect.DeepEqual(fetch.Spec.Strategy.RollingUpdate.MaxSurge, &intstr.IntOrString{Type: intstr.Int, IntVal: 1})).Should(BeTrue())
Expect(reflect.DeepEqual(fetch.Spec.Strategy.RollingUpdate.MaxUnavailable, &intstr.IntOrString{Type: intstr.Int, IntVal: 0})).Should(BeTrue())
Expect(fetch.Spec.MinReadySeconds).Should(Equal(int32(v1beta1.MaxReadySeconds)))
Expect(*fetch.Spec.ProgressDeadlineSeconds).Should(Equal(int32(v1beta1.MaxProgressSeconds)))
// check annotations
Expect(fetch.Annotations[util.BatchReleaseControlAnnotation]).Should(Equal(getControlInfo(release)))
fmt.Println(fetch.Annotations[v1beta1.OriginalDeploymentStrategyAnnotation])
Expect(fetch.Annotations[v1beta1.OriginalDeploymentStrategyAnnotation]).Should(Equal(util.DumpJSON(&control.OriginalDeploymentStrategy{
MaxUnavailable: &intstr.IntOrString{Type: intstr.Int, IntVal: 1},
MaxSurge: &intstr.IntOrString{Type: intstr.String, StrVal: "20%"},
MinReadySeconds: 0,
ProgressDeadlineSeconds: pointer.Int32(600),
})))
// check the minReadySeconds field of the stable ReplicaSet
fetchRS := &apps.ReplicaSet{}
Expect(cli.Get(context.TODO(), types.NamespacedName{Name: stableRs.GetName(), Namespace: stableRs.GetNamespace()}, fetchRS)).NotTo(HaveOccurred())
Expect(fetchRS.Spec.MinReadySeconds).Should(Equal(int32(v1beta1.MaxReadySeconds)))
c.object = fetch // mock
for {
batchContext, err := controller.CalculateBatchContext(release)
Expect(err).NotTo(HaveOccurred())
err = controller.UpgradeBatch(batchContext)
fetch := &apps.Deployment{}
// mock
Expect(cli.Get(context.TODO(), deploymentKey, fetch)).NotTo(HaveOccurred())
c.object = fetch
if err == nil {
break
}
}
fetch = &apps.Deployment{}
Expect(cli.Get(context.TODO(), deploymentKey, fetch)).NotTo(HaveOccurred())
// currentBatch is 1, i.e. the BatchRelease is in the second batch, so maxSurge is 50%
Expect(reflect.DeepEqual(fetch.Spec.Strategy.RollingUpdate.MaxSurge, &intstr.IntOrString{Type: intstr.String, StrVal: "50%"})).Should(BeTrue())
release.Spec.ReleasePlan.BatchPartition = nil
err = controller.Finalize(release)
Expect(errors.IsRetryError(err)).Should(BeTrue())
fetch = &apps.Deployment{}
Expect(cli.Get(context.TODO(), deploymentKey, fetch)).NotTo(HaveOccurred())
// check workload strategy
Expect(fetch.Spec.Paused).Should(BeFalse())
Expect(fetch.Spec.Strategy.Type).Should(Equal(apps.RollingUpdateDeploymentStrategyType))
Expect(reflect.DeepEqual(fetch.Spec.Strategy.RollingUpdate.MaxSurge, &intstr.IntOrString{Type: intstr.String, StrVal: "20%"})).Should(BeTrue())
Expect(reflect.DeepEqual(fetch.Spec.Strategy.RollingUpdate.MaxUnavailable, &intstr.IntOrString{Type: intstr.Int, IntVal: 1})).Should(BeTrue())
Expect(fetch.Spec.MinReadySeconds).Should(Equal(int32(0)))
Expect(*fetch.Spec.ProgressDeadlineSeconds).Should(Equal(int32(600)))
}
func getControlInfo(release *v1beta1.BatchRelease) string {
owner, _ := json.Marshal(metav1.NewControllerRef(release, release.GetObjectKind().GroupVersionKind()))
return string(owner)
}
func makeCanaryReplicaSets(d client.Object) client.Object {
deploy := d.(*apps.Deployment)
labels := deploy.Spec.Selector.DeepCopy().MatchLabels
labels[apps.DefaultDeploymentUniqueLabelKey] = util.ComputeHash(&deploy.Spec.Template, nil)
return &apps.ReplicaSet{
TypeMeta: metav1.TypeMeta{
APIVersion: apps.SchemeGroupVersion.String(),
Kind: "ReplicaSet",
},
ObjectMeta: metav1.ObjectMeta{
Name: deploy.Name + rand.String(5),
Namespace: deploy.Namespace,
UID: uuid.NewUUID(),
Labels: labels,
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(deploy, deploy.GroupVersionKind()),
},
CreationTimestamp: metav1.Now(),
},
Spec: apps.ReplicaSetSpec{
Replicas: deploy.Spec.Replicas,
Selector: deploy.Spec.Selector.DeepCopy(),
Template: *deploy.Spec.Template.DeepCopy(),
},
}
}
func makeStableReplicaSets(d client.Object) client.Object {
deploy := d.(*apps.Deployment)
stableTemplate := deploy.Spec.Template.DeepCopy()
stableTemplate.Spec.Containers = containers("v1")
labels := deploy.Spec.Selector.DeepCopy().MatchLabels
labels[apps.DefaultDeploymentUniqueLabelKey] = util.ComputeHash(stableTemplate, nil)
return &apps.ReplicaSet{
TypeMeta: metav1.TypeMeta{
APIVersion: apps.SchemeGroupVersion.String(),
Kind: "ReplicaSet",
},
ObjectMeta: metav1.ObjectMeta{
Name: deploy.Name + rand.String(5),
Namespace: deploy.Namespace,
UID: uuid.NewUUID(),
Labels: labels,
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(deploy, deploy.GroupVersionKind()),
},
CreationTimestamp: metav1.NewTime(time.Now().Add(-time.Hour)),
},
Spec: apps.ReplicaSetSpec{
Replicas: deploy.Spec.Replicas,
Selector: deploy.Spec.Selector.DeepCopy(),
Template: *stableTemplate,
},
}
}
func containers(version string) []corev1.Container {
return []corev1.Container{
{
Name: "busybox",
Image: fmt.Sprintf("busybox:%v", version),
},
}
}
func getStableWithReady(workload client.Object, version string) client.Object {
switch workload.(type) {
case *apps.Deployment:
deploy := workload.(*apps.Deployment)
d := deploy.DeepCopy()
d.Spec.Paused = true
d.ResourceVersion = strconv.Itoa(rand.Intn(100000000000))
d.Spec.Template.Spec.Containers = containers(version)
d.Status.ObservedGeneration = deploy.Generation
return d
case *kruiseappsv1alpha1.CloneSet:
clone := workload.(*kruiseappsv1alpha1.CloneSet)
c := clone.DeepCopy()
c.ResourceVersion = strconv.Itoa(rand.Intn(100000000000))
c.Spec.UpdateStrategy.Paused = true
c.Spec.UpdateStrategy.Partition = &intstr.IntOrString{Type: intstr.String, StrVal: "100%"}
c.Spec.Template.Spec.Containers = containers(version)
c.Status.ObservedGeneration = clone.Generation
return c
}
return nil
}
func retryFunction(limit int, f func() error) (err error) {
for i := limit; i >= 0; i-- {
if err = f(); err == nil {
return nil
}
}
return err
}

View File

@ -0,0 +1,106 @@
package hpa
import (
"context"
"fmt"
"strings"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
HPADisableSuffix = "-DisableByRollout"
)
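// DisableHPA neutralizes any HPA bound to the workload by rewriting its
// spec.scaleTargetRef.name to "<name>-DisableByRollout". The HPA then points
// at a nonexistent target and effectively stops scaling the workload;
// RestoreHPA reverses the rename once the release is finished.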
func DisableHPA(cli client.Client, object client.Object) error {
hpa := findHPAForWorkload(cli, object)
if hpa == nil {
return nil
}
targetRef, found, err := unstructured.NestedFieldCopy(hpa.Object, "spec", "scaleTargetRef")
if err != nil || !found {
return fmt.Errorf("get HPA targetRef for workload %v failed, because %s", klog.KObj(object), err.Error())
}
ref := targetRef.(map[string]interface{})
name, version, kind := ref["name"].(string), ref["apiVersion"].(string), ref["kind"].(string)
if !strings.HasSuffix(name, HPADisableSuffix) {
body := fmt.Sprintf(`{"spec":{"scaleTargetRef":{"apiVersion": "%s", "kind": "%s", "name": "%s"}}}`, version, kind, addSuffix(name))
if err = cli.Patch(context.TODO(), hpa, client.RawPatch(types.MergePatchType, []byte(body))); err != nil {
return fmt.Errorf("failed to disable HPA %v for workload %v, because %s", klog.KObj(hpa), klog.KObj(object), err.Error())
}
}
return nil
}
func RestoreHPA(cli client.Client, object client.Object) error {
hpa := findHPAForWorkload(cli, object)
if hpa == nil {
return nil
}
targetRef, found, err := unstructured.NestedFieldCopy(hpa.Object, "spec", "scaleTargetRef")
if err != nil || !found {
return fmt.Errorf("get HPA targetRef for workload %v failed, because %s", klog.KObj(object), err.Error())
}
ref := targetRef.(map[string]interface{})
name, version, kind := ref["name"].(string), ref["apiVersion"].(string), ref["kind"].(string)
if strings.HasSuffix(name, HPADisableSuffix) {
body := fmt.Sprintf(`{"spec":{"scaleTargetRef":{"apiVersion": "%s", "kind": "%s", "name": "%s"}}}`, version, kind, removeSuffix(name))
if err = cli.Patch(context.TODO(), hpa, client.RawPatch(types.MergePatchType, []byte(body))); err != nil {
return fmt.Errorf("failed to restore HPA %v for workload %v, because %s", klog.KObj(hpa), klog.KObj(object), err.Error())
}
}
return nil
}
func findHPAForWorkload(cli client.Client, object client.Object) *unstructured.Unstructured {
hpa := findHPA(cli, object, "v2")
if hpa != nil {
return hpa
}
return findHPA(cli, object, "v1")
}
func findHPA(cli client.Client, object client.Object, version string) *unstructured.Unstructured {
unstructuredList := &unstructured.UnstructuredList{}
hpaGvk := schema.GroupVersionKind{Group: "autoscaling", Kind: "HorizontalPodAutoscaler", Version: version}
unstructuredList.SetGroupVersionKind(hpaGvk)
if err := cli.List(context.TODO(), unstructuredList, &client.ListOptions{Namespace: object.GetNamespace()}); err != nil {
klog.Warningf("Get HPA for workload %v failed, because %s", klog.KObj(object), err.Error())
return nil
}
klog.Infof("Get %d HPA with %s in namespace %s in total", len(unstructuredList.Items), version, object.GetNamespace())
for _, item := range unstructuredList.Items {
scaleTargetRef, found, err := unstructured.NestedFieldCopy(item.Object, "spec", "scaleTargetRef")
if err != nil || !found {
continue
}
ref := scaleTargetRef.(map[string]interface{})
// use distinct names so the function's version parameter (the HPA API version) isn't shadowed
refName, refAPIVersion, refKind := ref["name"].(string), ref["apiVersion"].(string), ref["kind"].(string)
if refAPIVersion == object.GetObjectKind().GroupVersionKind().GroupVersion().String() &&
refKind == object.GetObjectKind().GroupVersionKind().Kind &&
removeSuffix(refName) == object.GetName() {
return &item
}
}
klog.Infof("No HPA found for workload %v", klog.KObj(object))
return nil
}
func addSuffix(HPARefName string) string {
if strings.HasSuffix(HPARefName, HPADisableSuffix) {
return HPARefName
}
return HPARefName + HPADisableSuffix
}
func removeSuffix(HPARefName string) string {
refName := HPARefName
for strings.HasSuffix(refName, HPADisableSuffix) {
refName = refName[:len(refName)-len(HPADisableSuffix)]
}
return refName
}
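A minimal usage sketch of the two helpers above, assuming a controller-runtime client `cli`, a workload object `workload`, and a release body `run` supplied by the caller (`withHPADisabled` and all three names are hypothetical, not part of this patch):
func withHPADisabled(cli client.Client, workload client.Object, run func() error) error {
// point the HPA at a non-existent target name so it stops scaling the workload mid-release
if err := DisableHPA(cli, workload); err != nil {
return err
}
// best-effort restore; a real reconcile loop would retry this on failure
defer func() {
if err := RestoreHPA(cli, workload); err != nil {
klog.Warningf("restore HPA for %v failed: %v", klog.KObj(workload), err)
}
}()
return run()
}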

View File

@ -0,0 +1,149 @@
package hpa
import (
"context"
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
var (
scheme = runtime.NewScheme()
)
func TestHPAPackage(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "HPA Package Suite")
}
var _ = Describe("HPA Operations", func() {
var (
cli client.Client
object *unstructured.Unstructured
)
BeforeEach(func() {
object = &unstructured.Unstructured{}
object.SetGroupVersionKind(schema.GroupVersionKind{
Group: "apps",
Version: "v1",
Kind: "Deployment",
})
object.SetNamespace("default")
object.SetName("my-deployment")
cli = fake.NewClientBuilder().WithScheme(scheme).WithObjects(object).Build()
})
Context("when disabling and restoring HPA", func() {
It("should disable and restore HPA successfully", func() {
// Create a fake HPA
hpa := &unstructured.Unstructured{}
hpa.SetGroupVersionKind(schema.GroupVersionKind{
Group: "autoscaling",
Version: "v2",
Kind: "HorizontalPodAutoscaler",
})
hpa.SetNamespace("default")
hpa.SetName("my-hpa")
unstructured.SetNestedField(hpa.Object, map[string]interface{}{
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "my-deployment",
}, "spec", "scaleTargetRef")
Expect(cli.Create(context.TODO(), hpa)).To(Succeed())
// Disable HPA
Expect(DisableHPA(cli, object)).To(Succeed())
fetchedHPA := &unstructured.Unstructured{}
fetchedHPA.SetGroupVersionKind(schema.GroupVersionKind{
Group: "autoscaling",
Version: "v2",
Kind: "HorizontalPodAutoscaler",
})
Expect(cli.Get(context.TODO(), types.NamespacedName{
Namespace: "default",
Name: "my-hpa",
}, fetchedHPA)).To(Succeed())
targetRef, found, err := unstructured.NestedFieldCopy(fetchedHPA.Object, "spec", "scaleTargetRef")
Expect(err).NotTo(HaveOccurred())
Expect(found).To(BeTrue())
ref := targetRef.(map[string]interface{})
Expect(ref["name"]).To(Equal("my-deployment" + HPADisableSuffix))
// Restore HPA
Expect(RestoreHPA(cli, object)).To(Succeed())
Expect(cli.Get(context.TODO(), types.NamespacedName{
Namespace: "default",
Name: "my-hpa",
}, fetchedHPA)).To(Succeed())
targetRef, found, err = unstructured.NestedFieldCopy(fetchedHPA.Object, "spec", "scaleTargetRef")
Expect(err).NotTo(HaveOccurred())
Expect(found).To(BeTrue())
ref = targetRef.(map[string]interface{})
Expect(ref["name"]).To(Equal("my-deployment"))
})
})
Context("when finding HPA for workload", func() {
It("should find the correct HPA", func() {
// Create a fake HPA v2
hpaV2 := &unstructured.Unstructured{}
hpaV2.SetGroupVersionKind(schema.GroupVersionKind{
Group: "autoscaling",
Version: "v2",
Kind: "HorizontalPodAutoscaler",
})
hpaV2.SetNamespace("default")
hpaV2.SetName("my-hpa-v2")
unstructured.SetNestedField(hpaV2.Object, map[string]interface{}{
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "my-deployment",
}, "spec", "scaleTargetRef")
// Create a fake HPA v1
hpaV1 := &unstructured.Unstructured{}
hpaV1.SetGroupVersionKind(schema.GroupVersionKind{
Group: "autoscaling",
Version: "v1",
Kind: "HorizontalPodAutoscaler",
})
hpaV1.SetNamespace("default")
hpaV1.SetName("my-hpa-v1")
unstructured.SetNestedField(hpaV1.Object, map[string]interface{}{
"apiVersion": "apps/v1",
"kind": "Deployment",
"name": "my-deployment",
}, "spec", "scaleTargetRef")
Expect(cli.Create(context.TODO(), hpaV2)).To(Succeed())
Expect(cli.Create(context.TODO(), hpaV1)).To(Succeed())
// Test finding HPA for workload
foundHPA := findHPAForWorkload(cli, object)
Expect(foundHPA).NotTo(BeNil())
Expect(foundHPA.GetName()).To(Equal("my-hpa-v2"))
// Delete v2 HPA and check if v1 is found
Expect(cli.Delete(context.TODO(), hpaV2)).To(Succeed())
foundHPA = findHPAForWorkload(cli, object)
Expect(foundHPA).NotTo(BeNil())
Expect(foundHPA.GetName()).To(Equal("my-hpa-v1"))
})
})
})

View File

@ -0,0 +1,48 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package bluegreenstyle
import (
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/util"
corev1 "k8s.io/api/core/v1"
)
type Interface interface {
// BuildController fetches the workload object, parses the workload info,
// and returns an initialized controller for the workload.
BuildController() (Interface, error)
// GetWorkloadInfo returns the workload information.
GetWorkloadInfo() *util.WorkloadInfo
// ListOwnedPods fetches the pods owned by the workload.
// Note that we should list pods only if we really need them.
// reserved for future use
ListOwnedPods() ([]*corev1.Pod, error)
// CalculateBatchContext calculates the current batch context
// according to the release plan and the current status of the workload.
CalculateBatchContext(release *v1beta1.BatchRelease) (*batchcontext.BatchContext, error)
// Initialize does preparatory work before rolling out, for example:
// - pause the workload
// - update MinReadySeconds, ProgressDeadlineSeconds, Strategy
Initialize(release *v1beta1.BatchRelease) error
// UpgradeBatch upgrades the workload according to the current batch context.
UpgradeBatch(ctx *batchcontext.BatchContext) error
// Finalize does cleanup after rolling out, for example:
// - set paused to false, restore the original settings, delete annotations
Finalize(release *v1beta1.BatchRelease) error
}
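For orientation, a minimal sketch of the order in which a batch-release control plane might drive this interface (compressed into one function here; in reality each step spans several reconcile passes, and `runRelease` is a hypothetical name):
func runRelease(c Interface, release *v1beta1.BatchRelease) error {
ctrl, err := c.BuildController() // fetch the workload and parse its info
if err != nil {
return err
}
if err := ctrl.Initialize(release); err != nil { // pause the workload, patch settings
return err
}
ctx, err := ctrl.CalculateBatchContext(release) // plan the current batch
if err != nil {
return err
}
if err := ctrl.UpgradeBatch(ctx); err != nil { // roll the current batch
return err
}
return ctrl.Finalize(release) // restore original settings once all batches are done
}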

View File

@ -50,7 +50,7 @@ func NewControlPlane(f NewInterfaceFunc, cli client.Client, recorder record.Even
newStatus: newStatus,
Interface: f(cli, key),
release: release.DeepCopy(),
patcher: labelpatch.NewLabelPatcher(cli, klog.KObj(release)),
patcher: labelpatch.NewLabelPatcher(cli, klog.KObj(release), release.Spec.ReleasePlan.Batches),
}
}
@ -103,14 +103,22 @@ func (rc *realCanaryController) UpgradeBatch() error {
return fmt.Errorf("wait canary workload %v reconcile", canary.GetCanaryInfo().LogKey)
}
batchContext := rc.CalculateBatchContext(rc.release)
batchContext, err := rc.CalculateBatchContext(rc.release)
if err != nil {
return err
}
klog.Infof("BatchRelease %v calculated context when upgrade batch: %s",
klog.KObj(rc.release), batchContext.Log())
return canary.UpgradeBatch(batchContext)
err = canary.UpgradeBatch(batchContext)
if err != nil {
return err
}
return nil
}
func (rc *realCanaryController) CheckBatchReady() error {
func (rc *realCanaryController) EnsureBatchPodsReadyAndLabeled() error {
stable, err := rc.BuildStableController()
if err != nil {
return err
@ -129,10 +137,15 @@ func (rc *realCanaryController) CheckBatchReady() error {
return fmt.Errorf("wait canary workload %v reconcile", canary.GetCanaryInfo().LogKey)
}
batchContext := rc.CalculateBatchContext(rc.release)
batchContext, err := rc.CalculateBatchContext(rc.release)
if err != nil {
return err
}
klog.Infof("BatchRelease %v calculated context when check batch ready: %s",
klog.KObj(rc.release), batchContext.Log())
if err = rc.patcher.PatchPodBatchLabel(batchContext); err != nil {
klog.ErrorS(err, "failed to patch pod labels", "release", klog.KObj(rc.release))
}
return batchContext.IsBatchReady()
}

View File

@ -43,6 +43,7 @@ type realCanaryController struct {
canaryObject *apps.Deployment
canaryClient client.Client
objectKey types.NamespacedName
canaryPods []*corev1.Pod
}
func newCanary(cli client.Client, key types.NamespacedName) realCanaryController {
@ -145,7 +146,7 @@ func (r *realCanaryController) create(release *v1beta1.BatchRelease, template *a
canary.Spec.Template.Annotations[k] = v
}
}
canary.Spec.Replicas = pointer.Int32Ptr(0)
canary.Spec.Replicas = pointer.Int32(0)
canary.Spec.Paused = false
if err := r.canaryClient.Create(context.TODO(), canary); err != nil {

View File

@ -27,6 +27,7 @@ import (
"github.com/openkruise/rollouts/pkg/util"
utilclient "github.com/openkruise/rollouts/pkg/util/client"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
@ -82,19 +83,30 @@ func (rc *realController) BuildCanaryController(release *v1beta1.BatchRelease) (
return rc, nil
}
func (rc *realController) CalculateBatchContext(release *v1beta1.BatchRelease) *batchcontext.BatchContext {
func (rc *realController) CalculateBatchContext(release *v1beta1.BatchRelease) (*batchcontext.BatchContext, error) {
rolloutID := release.Spec.ReleasePlan.RolloutID
if rolloutID != "" {
// if rollout-id is set, the pod will be patched batch label,
// so we have to list pod here.
if _, err := rc.ListOwnedPods(); err != nil {
return nil, err
}
}
replicas := *rc.stableObject.Spec.Replicas
currentBatch := release.Status.CanaryStatus.CurrentBatch
desiredUpdate := int32(control.CalculateBatchReplicas(release, int(replicas), int(currentBatch)))
return &batchcontext.BatchContext{
Pods: rc.canaryPods,
RolloutID: rolloutID,
Replicas: replicas,
UpdateRevision: release.Status.UpdateRevision,
CurrentBatch: currentBatch,
DesiredUpdatedReplicas: desiredUpdate,
FailureThreshold: release.Spec.ReleasePlan.FailureThreshold,
UpdatedReplicas: rc.canaryObject.Status.Replicas,
UpdatedReadyReplicas: rc.canaryObject.Status.AvailableReplicas,
}
}, nil
}
func (rc *realController) getLatestTemplate() (*v1.PodTemplateSpec, error) {
@ -104,3 +116,12 @@ func (rc *realController) getLatestTemplate() (*v1.PodTemplateSpec, error) {
}
return &rc.stableObject.Spec.Template, nil
}
func (rc *realController) ListOwnedPods() ([]*corev1.Pod, error) {
if rc.canaryPods != nil {
return rc.canaryPods, nil
}
var err error
rc.canaryPods, err = util.ListOwnedPods(rc.canaryClient, rc.canaryObject)
return rc.canaryPods, err
}

View File

@ -101,7 +101,7 @@ var (
UpdatedReplicas: 10,
ReadyReplicas: 10,
AvailableReplicas: 10,
CollisionCount: pointer.Int32Ptr(1),
CollisionCount: pointer.Int32(1),
ObservedGeneration: 1,
},
}
@ -163,7 +163,7 @@ func TestCalculateBatchContext(t *testing.T) {
workload: func() (*apps.Deployment, *apps.Deployment) {
stable := &apps.Deployment{
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32Ptr(10),
Replicas: pointer.Int32(10),
},
Status: apps.DeploymentStatus{
Replicas: 10,
@ -173,7 +173,7 @@ func TestCalculateBatchContext(t *testing.T) {
}
canary := &apps.Deployment{
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32Ptr(5),
Replicas: pointer.Int32(5),
},
Status: apps.DeploymentStatus{
Replicas: 5,
@ -226,7 +226,9 @@ func TestCalculateBatchContext(t *testing.T) {
canaryObject: canary,
},
}
got := control.CalculateBatchContext(cs.release())
got, err := control.CalculateBatchContext(cs.release())
got.FilterFunc = nil
Expect(err).NotTo(HaveOccurred())
Expect(reflect.DeepEqual(got, cs.result)).Should(BeTrue())
})
}
@ -290,7 +292,8 @@ func TestRealCanaryController(t *testing.T) {
Expect(util.EqualIgnoreHash(&c.canaryObject.Spec.Template, &deployment.Spec.Template)).Should(BeTrue())
// check rolling
batchContext := c.CalculateBatchContext(release)
batchContext, err := c.CalculateBatchContext(release)
Expect(err).NotTo(HaveOccurred())
err = controller.UpgradeBatch(batchContext)
Expect(err).NotTo(HaveOccurred())
canary := getCanaryDeployment(release, deployment, c)

View File

@ -33,7 +33,7 @@ type Interface interface {
BuildCanaryController(release *v1beta1.BatchRelease) (CanaryInterface, error)
// CalculateBatchContext calculate the current batch context according to
// our release plan and the statues of stable workload and canary workload.
CalculateBatchContext(release *v1beta1.BatchRelease) *batchcontext.BatchContext
CalculateBatchContext(release *v1beta1.BatchRelease) (*batchcontext.BatchContext, error)
}
// CanaryInterface contains the methods about canary workload

View File

@ -52,10 +52,10 @@ type Interface interface {
// it returns nil if the preparation succeeded; otherwise it should be retried.
UpgradeBatch() error
// CheckBatchReady checks how many replicas are ready to serve requests in the current batch.
// EnsureBatchPodsReadyAndLabeled checks how many replicas are ready to serve requests in the current batch,
// and makes sure the updated pods in the batch carry the batch labels.
// it returns nil if the preparation succeeded; otherwise it should be retried.
CheckBatchReady() error
EnsureBatchPodsReadyAndLabeled() error
// Finalize makes sure the resources are in a good final state.
// this function is tasked to do any initialization work on the resources.

View File

@ -101,7 +101,7 @@ var (
UpdateRevision: "version-2",
CurrentRevision: "version-1",
ObservedGeneration: 1,
CollisionCount: pointer.Int32Ptr(1),
CollisionCount: pointer.Int32(1),
},
}
@ -162,7 +162,7 @@ func TestCalculateBatchContext(t *testing.T) {
workload: func() *kruiseappsv1alpha1.CloneSet {
return &kruiseappsv1alpha1.CloneSet{
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32Ptr(10),
Replicas: pointer.Int32(10),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
Partition: func() *intstr.IntOrString { p := intstr.FromString("100%"); return &p }(),
},
@ -211,7 +211,7 @@ func TestCalculateBatchContext(t *testing.T) {
workload: func() *kruiseappsv1alpha1.CloneSet {
return &kruiseappsv1alpha1.CloneSet{
Spec: kruiseappsv1alpha1.CloneSetSpec{
Replicas: pointer.Int32Ptr(20),
Replicas: pointer.Int32(20),
UpdateStrategy: kruiseappsv1alpha1.CloneSetUpdateStrategy{
Partition: func() *intstr.IntOrString { p := intstr.FromString("100%"); return &p }(),
},
@ -253,7 +253,7 @@ func TestCalculateBatchContext(t *testing.T) {
Replicas: 20,
UpdatedReplicas: 10,
UpdatedReadyReplicas: 10,
NoNeedUpdatedReplicas: pointer.Int32Ptr(10),
NoNeedUpdatedReplicas: pointer.Int32(10),
PlannedUpdatedReplicas: 4,
DesiredUpdatedReplicas: 12,
CurrentPartition: intstr.FromString("100%"),

View File

@ -54,7 +54,7 @@ func NewControlPlane(f NewInterfaceFunc, cli client.Client, recorder record.Even
newStatus: newStatus,
Interface: f(cli, key, gvk),
release: release.DeepCopy(),
patcher: labelpatch.NewLabelPatcher(cli, klog.KObj(release)),
patcher: labelpatch.NewLabelPatcher(cli, klog.KObj(release), release.Spec.ReleasePlan.Batches),
}
}
@ -114,7 +114,7 @@ func (rc *realBatchControlPlane) UpgradeBatch() error {
return rc.patcher.PatchPodBatchLabel(batchContext)
}
func (rc *realBatchControlPlane) CheckBatchReady() error {
func (rc *realBatchControlPlane) EnsureBatchPodsReadyAndLabeled() error {
controller, err := rc.BuildController()
if err != nil {
return err
@ -226,7 +226,7 @@ func (rc *realBatchControlPlane) markNoNeedUpdatePodsIfNeeds() (*int32, error) {
if !pods[i].DeletionTimestamp.IsZero() {
continue
}
if !util.IsConsistentWithRevision(pods[i], rc.newStatus.UpdateRevision) {
if !util.IsConsistentWithRevision(pods[i].GetLabels(), rc.newStatus.UpdateRevision) {
continue
}
if pods[i].Labels[util.NoNeedUpdatePodLabel] == rolloutID {
@ -273,7 +273,7 @@ func (rc *realBatchControlPlane) countAndUpdateNoNeedUpdateReplicas() error {
if !pod.DeletionTimestamp.IsZero() {
continue
}
if !util.IsConsistentWithRevision(pod, rc.release.Status.UpdateRevision) {
if !util.IsConsistentWithRevision(pod.GetLabels(), rc.release.Status.UpdateRevision) {
continue
}
id, ok := pod.Labels[util.NoNeedUpdatePodLabel]

View File

@ -67,7 +67,7 @@ func (rc *realController) BuildController() (partitionstyle.Interface, error) {
if !pod.DeletionTimestamp.IsZero() {
return false
}
if !util.IsConsistentWithRevision(pod, rc.WorkloadInfo.Status.UpdateRevision) {
if !util.IsConsistentWithRevision(pod.GetLabels(), rc.WorkloadInfo.Status.UpdateRevision) {
return false
}
return util.IsPodReady(pod)

View File

@ -160,7 +160,7 @@ func TestCalculateBatchContext(t *testing.T) {
Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}},
UpdateStrategy: kruiseappsv1alpha1.DaemonSetUpdateStrategy{
RollingUpdate: &kruiseappsv1alpha1.RollingUpdateDaemonSet{
Partition: pointer.Int32Ptr(10),
Partition: pointer.Int32(10),
},
},
},
@ -233,7 +233,7 @@ func TestCalculateBatchContext(t *testing.T) {
Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}},
UpdateStrategy: kruiseappsv1alpha1.DaemonSetUpdateStrategy{
RollingUpdate: &kruiseappsv1alpha1.RollingUpdateDaemonSet{
Partition: pointer.Int32Ptr(10),
Partition: pointer.Int32(10),
},
},
},
@ -288,7 +288,7 @@ func TestCalculateBatchContext(t *testing.T) {
Replicas: 10,
UpdatedReplicas: 5,
UpdatedReadyReplicas: 5,
NoNeedUpdatedReplicas: pointer.Int32Ptr(5),
NoNeedUpdatedReplicas: pointer.Int32(5),
PlannedUpdatedReplicas: 2,
DesiredUpdatedReplicas: 6,
CurrentPartition: intstr.FromInt(10),

View File

@ -19,14 +19,6 @@ package deployment
import (
"context"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle"
deploymentutil "github.com/openkruise/rollouts/pkg/controller/deployment/util"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/patch"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -35,6 +27,15 @@ import (
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control"
"github.com/openkruise/rollouts/pkg/controller/batchrelease/control/partitionstyle"
deploymentutil "github.com/openkruise/rollouts/pkg/controller/deployment/util"
"github.com/openkruise/rollouts/pkg/util"
"github.com/openkruise/rollouts/pkg/util/patch"
)
type realController struct {
@ -130,7 +131,11 @@ func (rc *realController) UpgradeBatch(ctx *batchcontext.BatchContext) error {
}
func (rc *realController) Finalize(release *v1beta1.BatchRelease) error {
if rc.object == nil || !deploymentutil.IsUnderRolloutControl(rc.object) {
if rc.object == nil {
return nil // nothing to finalize
}
isUnderRolloutControl := rc.object.Annotations[util.BatchReleaseControlAnnotation] != "" && rc.object.Spec.Paused
if !isUnderRolloutControl {
return nil // No need to finalize again.
}
@ -138,7 +143,9 @@ func (rc *realController) Finalize(release *v1beta1.BatchRelease) error {
if release.Spec.ReleasePlan.BatchPartition == nil {
strategy := util.GetDeploymentStrategy(rc.object)
patchData.UpdatePaused(false)
patchData.UpdateStrategy(apps.DeploymentStrategy{Type: apps.RollingUpdateDeploymentStrategyType, RollingUpdate: strategy.RollingUpdate})
if rc.object.Spec.Strategy.Type == apps.RecreateDeploymentStrategyType {
patchData.UpdateStrategy(apps.DeploymentStrategy{Type: apps.RollingUpdateDeploymentStrategyType, RollingUpdate: strategy.RollingUpdate})
}
patchData.DeleteAnnotation(v1alpha1.DeploymentStrategyAnnotation)
patchData.DeleteAnnotation(v1alpha1.DeploymentExtraStatusAnnotation)
patchData.DeleteLabel(v1alpha1.DeploymentStableRevisionLabel)

View File

@ -101,7 +101,7 @@ var (
UpdatedReplicas: 10,
ReadyReplicas: 10,
AvailableReplicas: 10,
CollisionCount: pointer.Int32Ptr(1),
CollisionCount: pointer.Int32(1),
ObservedGeneration: 1,
},
}
@ -178,7 +178,7 @@ func TestCalculateBatchContext(t *testing.T) {
},
},
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32Ptr(10),
Replicas: pointer.Int32(10),
},
Status: apps.DeploymentStatus{
Replicas: 10,
@ -242,7 +242,7 @@ func TestCalculateBatchContext(t *testing.T) {
},
},
Spec: apps.DeploymentSpec{
Replicas: pointer.Int32Ptr(5),
Replicas: pointer.Int32(5),
},
Status: apps.DeploymentStatus{
Replicas: 5,

View File

@ -77,7 +77,7 @@ func (rc *realController) BuildController() (partitionstyle.Interface, error) {
if !pod.DeletionTimestamp.IsZero() {
return false
}
if !util.IsConsistentWithRevision(pod, rc.WorkloadInfo.Status.UpdateRevision) {
if !util.IsConsistentWithRevision(pod.GetLabels(), rc.WorkloadInfo.Status.UpdateRevision) {
return false
}
return util.IsPodReady(pod)

View File

@ -108,7 +108,7 @@ var (
CurrentRevision: "version-1",
ObservedGeneration: 1,
UpdatedReadyReplicas: 0,
CollisionCount: pointer.Int32Ptr(1),
CollisionCount: pointer.Int32(1),
},
}
@ -183,11 +183,11 @@ func TestCalculateBatchContextForNativeStatefulSet(t *testing.T) {
},
Spec: apps.StatefulSetSpec{
Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}},
Replicas: pointer.Int32Ptr(10),
Replicas: pointer.Int32(10),
UpdateStrategy: apps.StatefulSetUpdateStrategy{
Type: apps.RollingUpdateStatefulSetStrategyType,
RollingUpdate: &apps.RollingUpdateStatefulSetStrategy{
Partition: pointer.Int32Ptr(100),
Partition: pointer.Int32(100),
},
},
},
@ -258,11 +258,11 @@ func TestCalculateBatchContextForNativeStatefulSet(t *testing.T) {
},
Spec: apps.StatefulSetSpec{
Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}},
Replicas: pointer.Int32Ptr(20),
Replicas: pointer.Int32(20),
UpdateStrategy: apps.StatefulSetUpdateStrategy{
Type: apps.RollingUpdateStatefulSetStrategyType,
RollingUpdate: &apps.RollingUpdateStatefulSetStrategy{
Partition: pointer.Int32Ptr(100),
Partition: pointer.Int32(100),
},
},
},
@ -312,7 +312,7 @@ func TestCalculateBatchContextForNativeStatefulSet(t *testing.T) {
Replicas: 20,
UpdatedReplicas: 10,
UpdatedReadyReplicas: 10,
NoNeedUpdatedReplicas: pointer.Int32Ptr(10),
NoNeedUpdatedReplicas: pointer.Int32(10),
PlannedUpdatedReplicas: 4,
DesiredUpdatedReplicas: 12,
CurrentPartition: intstr.FromInt(100),
@ -377,11 +377,11 @@ func TestCalculateBatchContextForAdvancedStatefulSet(t *testing.T) {
},
Spec: kruiseappsv1beta1.StatefulSetSpec{
Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}},
Replicas: pointer.Int32Ptr(10),
Replicas: pointer.Int32(10),
UpdateStrategy: kruiseappsv1beta1.StatefulSetUpdateStrategy{
Type: apps.RollingUpdateStatefulSetStrategyType,
RollingUpdate: &kruiseappsv1beta1.RollingUpdateStatefulSetStrategy{
Partition: pointer.Int32Ptr(100),
Partition: pointer.Int32(100),
UnorderedUpdate: &kruiseappsv1beta1.UnorderedUpdateStrategy{
PriorityStrategy: &appsv1pub.UpdatePriorityStrategy{
OrderPriority: []appsv1pub.UpdatePriorityOrderTerm{
@ -461,11 +461,11 @@ func TestCalculateBatchContextForAdvancedStatefulSet(t *testing.T) {
},
Spec: kruiseappsv1beta1.StatefulSetSpec{
Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}},
Replicas: pointer.Int32Ptr(20),
Replicas: pointer.Int32(20),
UpdateStrategy: kruiseappsv1beta1.StatefulSetUpdateStrategy{
Type: apps.RollingUpdateStatefulSetStrategyType,
RollingUpdate: &kruiseappsv1beta1.RollingUpdateStatefulSetStrategy{
Partition: pointer.Int32Ptr(100),
Partition: pointer.Int32(100),
UnorderedUpdate: &kruiseappsv1beta1.UnorderedUpdateStrategy{
PriorityStrategy: &appsv1pub.UpdatePriorityStrategy{
OrderPriority: []appsv1pub.UpdatePriorityOrderTerm{
@ -524,7 +524,7 @@ func TestCalculateBatchContextForAdvancedStatefulSet(t *testing.T) {
Replicas: 20,
UpdatedReplicas: 10,
UpdatedReadyReplicas: 10,
NoNeedUpdatedReplicas: pointer.Int32Ptr(10),
NoNeedUpdatedReplicas: pointer.Int32(10),
PlannedUpdatedReplicas: 4,
DesiredUpdatedReplicas: 12,
CurrentPartition: intstr.FromInt(100),

View File

@ -21,8 +21,10 @@ import (
"fmt"
"strings"
appsv1alpha1 "github.com/openkruise/kruise-api/apps/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/util"
apps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
@ -63,6 +65,42 @@ func IsControlledByBatchRelease(release *v1beta1.BatchRelease, object client.Obj
return false
}
// only when ValidateReadyForBlueGreenRelease returns nil can we go on to the next batch
func ValidateReadyForBlueGreenRelease(object client.Object) error {
// check the annotation
if object.GetAnnotations()[util.BatchReleaseControlAnnotation] == "" {
return fmt.Errorf("workload has no control info annotation")
}
switch o := object.(type) {
case *apps.Deployment:
// must be RollingUpdate
if len(o.Spec.Strategy.Type) > 0 && o.Spec.Strategy.Type != apps.RollingUpdateDeploymentStrategyType {
return fmt.Errorf("deployment strategy type is not RollingUpdate")
}
if o.Spec.Strategy.RollingUpdate == nil {
return fmt.Errorf("deployment strategy rollingUpdate is nil")
}
// MinReadySeconds and ProgressDeadlineSeconds must have been set to the expected blue-green values
if o.Spec.MinReadySeconds != v1beta1.MaxReadySeconds || o.Spec.ProgressDeadlineSeconds == nil || *o.Spec.ProgressDeadlineSeconds != v1beta1.MaxProgressSeconds {
return fmt.Errorf("deployment minReadySeconds/progressDeadlineSeconds have not been set to MaxReadySeconds/MaxProgressSeconds")
}
case *appsv1alpha1.CloneSet:
// must be Recreate
if len(o.Spec.UpdateStrategy.Type) > 0 && o.Spec.UpdateStrategy.Type != appsv1alpha1.RecreateCloneSetUpdateStrategyType {
return fmt.Errorf("cloneSet update strategy type is not Recreate")
}
// MinReadySeconds must have been set to the expected blue-green value
if o.Spec.MinReadySeconds != v1beta1.MaxReadySeconds {
return fmt.Errorf("cloneSet strategy minReadySeconds is not MaxReadySeconds")
}
default:
panic("unsupported workload type to ValidateReadyForBlueGreenRelease function")
}
return nil
}
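As a usage note (a sketch under assumed names, not code from this patch; `workload` and the controller-runtime `reconcile` package are assumptions here), this check is what gates progression to the next batch:
if err := ValidateReadyForBlueGreenRelease(workload); err != nil {
// the workload is not yet prepared for blue-green batches; requeue and retry
klog.Warningf("workload not ready for blue-green release: %v", err)
return reconcile.Result{Requeue: true}, nil
}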
// BuildReleaseControlInfo return a NewControllerRef of release with escaped `"`.
func BuildReleaseControlInfo(release *v1beta1.BatchRelease) string {
owner, _ := json.Marshal(metav1.NewControllerRef(release, release.GetObjectKind().GroupVersionKind()))
@ -112,3 +150,101 @@ func IsCurrentMoreThanOrEqualToDesired(current, desired intstr.IntOrString) bool
desiredNum, _ := intstr.GetScaledValueFromIntOrPercent(&desired, 10000000, true)
return currentNum >= desiredNum
}
// GetOriginalSetting decodes the original deployment strategy
// from the annotation "rollouts.kruise.io/original-deployment-strategy"
func GetOriginalSetting(object client.Object) (OriginalDeploymentStrategy, error) {
setting := OriginalDeploymentStrategy{}
settingStr := object.GetAnnotations()[v1beta1.OriginalDeploymentStrategyAnnotation]
if settingStr == "" {
return setting, nil
}
err := json.Unmarshal([]byte(settingStr), &setting)
return setting, err
}
// InitOriginalSetting updates the original setting based on the workload object.
// note: maxSurge and maxUnavailable are updated only when MaxSurge and MaxUnavailable are nil,
// which means they are kept unchanged across a continuous release (though continuous release isn't supported for now)
func InitOriginalSetting(setting *OriginalDeploymentStrategy, object client.Object) {
var changeLogs []string
switch o := object.(type) {
case *apps.Deployment:
if setting.MaxSurge == nil {
setting.MaxSurge = getMaxSurgeFromDeployment(o.Spec.Strategy.RollingUpdate)
changeLogs = append(changeLogs, fmt.Sprintf("maxSurge changed from nil to %s", setting.MaxSurge.String()))
}
if setting.MaxUnavailable == nil {
setting.MaxUnavailable = getMaxUnavailableFromDeployment(o.Spec.Strategy.RollingUpdate)
changeLogs = append(changeLogs, fmt.Sprintf("maxUnavailable changed from nil to %s", setting.MaxUnavailable.String()))
}
if setting.ProgressDeadlineSeconds == nil {
setting.ProgressDeadlineSeconds = getIntPtrOrDefault(o.Spec.ProgressDeadlineSeconds, 600)
changeLogs = append(changeLogs, fmt.Sprintf("progressDeadlineSeconds changed from nil to %d", *setting.ProgressDeadlineSeconds))
}
if setting.MinReadySeconds == 0 {
setting.MinReadySeconds = o.Spec.MinReadySeconds
changeLogs = append(changeLogs, fmt.Sprintf("minReadySeconds changed from 0 to %d", setting.MinReadySeconds))
}
case *appsv1alpha1.CloneSet:
if setting.MaxSurge == nil {
setting.MaxSurge = getMaxSurgeFromCloneset(o.Spec.UpdateStrategy)
changeLogs = append(changeLogs, fmt.Sprintf("maxSurge changed from nil to %s", setting.MaxSurge.String()))
}
if setting.MaxUnavailable == nil {
setting.MaxUnavailable = getMaxUnavailableFromCloneset(o.Spec.UpdateStrategy)
changeLogs = append(changeLogs, fmt.Sprintf("maxUnavailable changed from nil to %s", setting.MaxUnavailable.String()))
}
if setting.ProgressDeadlineSeconds == nil {
// cloneset is planned to support progressDeadlineSeconds field
}
if setting.MinReadySeconds == 0 {
setting.MinReadySeconds = o.Spec.MinReadySeconds
changeLogs = append(changeLogs, fmt.Sprintf("minReadySeconds changed from 0 to %d", setting.MinReadySeconds))
}
default:
panic(fmt.Errorf("unsupported object type %T", o))
}
if len(changeLogs) == 0 {
klog.InfoS("InitOriginalSetting: original setting unchanged", "object", object.GetName())
return
}
klog.InfoS("InitOriginalSetting: original setting updated", "object", object.GetName(), "changes", strings.Join(changeLogs, ";"))
}
func getMaxSurgeFromDeployment(ru *apps.RollingUpdateDeployment) *intstr.IntOrString {
defaultMaxSurge := intstr.FromString("25%")
if ru == nil || ru.MaxSurge == nil {
return &defaultMaxSurge
}
return ru.MaxSurge
}
func getMaxUnavailableFromDeployment(ru *apps.RollingUpdateDeployment) *intstr.IntOrString {
defaultMaxAnavailale := intstr.FromString("25%")
if ru == nil || ru.MaxUnavailable == nil {
return &defaultMaxAnavailale
}
return ru.MaxUnavailable
}
func getMaxSurgeFromCloneset(us appsv1alpha1.CloneSetUpdateStrategy) *intstr.IntOrString {
defaultMaxSurge := intstr.FromString("0%")
if us.MaxSurge == nil {
return &defaultMaxSurge
}
return us.MaxSurge
}
func getMaxUnavailableFromCloneset(us appsv1alpha1.CloneSetUpdateStrategy) *intstr.IntOrString {
defaultMaxUnavailable := intstr.FromString("20%")
if us.MaxUnavailable == nil {
return &defaultMaxUnavailable
}
return us.MaxUnavailable
}
func getIntPtrOrDefault(ptr *int32, defaultVal int32) *int32 {
if ptr == nil {
return &defaultVal
}
return ptr
}
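To illustrate the round-trip of GetOriginalSetting/InitOriginalSetting, a hedged sketch (assuming a *apps.Deployment named `deploy` that still has the default rolling-update settings; `captureOriginalSetting` is a hypothetical helper, not part of this patch):
func captureOriginalSetting(deploy *apps.Deployment) (OriginalDeploymentStrategy, error) {
setting, err := GetOriginalSetting(deploy) // empty strategy when the annotation is absent
if err != nil {
return setting, err
}
InitOriginalSetting(&setting, deploy)
// for an untouched Deployment this captures roughly:
// maxSurge "25%", maxUnavailable "25%", progressDeadlineSeconds 600, minReadySeconds 0
// the caller is expected to serialize it back into the
// "rollouts.kruise.io/original-deployment-strategy" annotation
return setting, nil
}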

View File

@ -39,8 +39,8 @@ import (
// the pods that are really need to be rolled back according to release plan, but patch batch label according
// original release plan, and will patch the pods that are really rolled back in priority.
// - in batch 0: really roll back (20 - 10) * 20% = 2 pods, but 20 * 20% = 4 pod will be patched batch label;
// - in batch 0: really roll back (20 - 10) * 50% = 5 pods, but 20 * 50% = 10 pod will be patched batch label;
// - in batch 0: really roll back (20 - 10) * 100% = 10 pods, but 20 * 100% = 20 pod will be patched batch label;
// - in batch 1: really roll back (20 - 10) * 50% = 5 pods, but 20 * 50% = 10 pod will be patched batch label;
// - in batch 2: really roll back (20 - 10) * 100% = 10 pods, but 20 * 100% = 20 pod will be patched batch label;
//
// Mainly so that PaaS platforms can display the pod list conveniently.
//
@ -57,7 +57,7 @@ func FilterPodsForUnorderedUpdate(pods []*corev1.Pod, ctx *batchcontext.BatchCon
terminatingPods = append(terminatingPods, pod)
continue
}
if !util.IsConsistentWithRevision(pod, ctx.UpdateRevision) {
if !util.IsConsistentWithRevision(pod.GetLabels(), ctx.UpdateRevision) {
continue
}
if pod.Labels[util.NoNeedUpdatePodLabel] == ctx.RolloutID && pod.Labels[v1beta1.RolloutIDLabel] != ctx.RolloutID {
@ -92,8 +92,8 @@ func FilterPodsForUnorderedUpdate(pods []*corev1.Pod, ctx *batchcontext.BatchCon
// the pods that are really need to be rolled back according to release plan, but patch batch label according
// original release plan, and will patch the pods that are really rolled back in priority.
// - in batch 0: really roll back (20 - 10) * 20% = 2 pods, but 20 * 20% = 4 pod will be patched batch label;
// - in batch 0: really roll back (20 - 10) * 50% = 5 pods, but 20 * 50% = 10 pod will be patched batch label;
// - in batch 0: really roll back (20 - 10) * 100% = 10 pods, but 20 * 100% = 20 pod will be patched batch label;
// - in batch 1: really roll back (20 - 10) * 50% = 5 pods, but 20 * 50% = 10 pod will be patched batch label;
// - in batch 2: really roll back (20 - 10) * 100% = 10 pods, but 20 * 100% = 20 pod will be patched batch label;
//
// Mainly so that PaaS platforms can display the pod list conveniently.
//
@ -113,7 +113,7 @@ func FilterPodsForOrderedUpdate(pods []*corev1.Pod, ctx *batchcontext.BatchConte
terminatingPods = append(terminatingPods, pod)
continue
}
if !util.IsConsistentWithRevision(pod, ctx.UpdateRevision) {
if !util.IsConsistentWithRevision(pod.GetLabels(), ctx.UpdateRevision) {
continue
}
if getPodOrdinal(pod) >= partition {

View File

@ -19,12 +19,16 @@ package labelpatch
import (
"context"
"fmt"
"strconv"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
"github.com/openkruise/rollouts/pkg/util"
v1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/klog/v2"
"sigs.k8s.io/controller-runtime/pkg/client"
)
@ -33,13 +37,14 @@ type LabelPatcher interface {
PatchPodBatchLabel(ctx *batchcontext.BatchContext) error
}
func NewLabelPatcher(cli client.Client, logKey klog.ObjectRef) *realPatcher {
return &realPatcher{Client: cli, logKey: logKey}
func NewLabelPatcher(cli client.Client, logKey klog.ObjectRef, batches []v1beta1.ReleaseBatch) *realPatcher {
return &realPatcher{Client: cli, logKey: logKey, batches: batches}
}
type realPatcher struct {
client.Client
logKey klog.ObjectRef
logKey klog.ObjectRef
batches []v1beta1.ReleaseBatch
}
func (r *realPatcher) PatchPodBatchLabel(ctx *batchcontext.BatchContext) error {
@ -55,59 +60,145 @@ func (r *realPatcher) PatchPodBatchLabel(ctx *batchcontext.BatchContext) error {
// PatchPodBatchLabel will patch rollout-id && batch-id to pods
func (r *realPatcher) patchPodBatchLabel(pods []*corev1.Pod, ctx *batchcontext.BatchContext) error {
// the number of active pods that has been patched successfully.
patchedUpdatedReplicas := int32(0)
// the number of target active pods that should be patched batch label.
plannedUpdatedReplicas := ctx.PlannedUpdatedReplicas
plannedUpdatedReplicasForBatches := r.calculatePlannedStepIncrements(r.batches, int(ctx.Replicas), int(ctx.CurrentBatch))
var updatedButUnpatchedPods []*corev1.Pod
revisionHashCache := map[types.UID]string{} // to prevent duplicate computing for revision hash
podsToPatchControllerRevision := map[*corev1.Pod]string{} // to record pods to patch controller-revision-hash
for _, pod := range pods {
if !util.IsConsistentWithRevision(pod, ctx.UpdateRevision) {
if !pod.DeletionTimestamp.IsZero() {
klog.InfoS("Pod is being deleted, skip patching", "pod", klog.KObj(pod), "rollout", r.logKey)
continue
}
podRolloutID := pod.Labels[v1beta1.RolloutIDLabel]
if pod.DeletionTimestamp.IsZero() && podRolloutID == ctx.RolloutID {
patchedUpdatedReplicas++
labels := make(map[string]string, len(pod.Labels))
for k, v := range pod.Labels {
labels[k] = v
}
}
// all pods that should be patched have been patched
if patchedUpdatedReplicas >= plannedUpdatedReplicas {
return nil // return fast
}
for _, pod := range pods {
if pod.DeletionTimestamp.IsZero() {
// we don't patch label for the active old revision pod
if !util.IsConsistentWithRevision(pod, ctx.UpdateRevision) {
continue
}
// we don't continue to patch if the goal is met
if patchedUpdatedReplicas >= ctx.PlannedUpdatedReplicas {
continue
if labels[v1.ControllerRevisionHashLabelKey] == "" {
// For a native Deployment, we need to derive the revision hash from the ReplicaSet, which is exactly consistent with the update revision.
// The reason is that the Deployment that KCM sees may differ from the Deployment seen by
// the Rollouts controller due to some default values not being assigned yet. Therefore, even if both use
// exactly the same algorithm, they cannot compute the same pod-template-hash. The fact that K8S does not
// expose the method for computing the pod-template-hash also confirms that third-party components relying
// on the pod-template-hash is not recommended. Thus, we use the CloneSet algorithm to compute the
// controller-revision-hash: this method is fully defined by OpenKruise and ensures that the same
// ReplicaSet always produces the same value.
owner := metav1.GetControllerOf(pod)
if owner != nil && owner.Kind == "ReplicaSet" {
var hash string
if cache, ok := revisionHashCache[owner.UID]; ok {
hash = cache
} else {
rs := &v1.ReplicaSet{}
if err := r.Get(context.Background(), types.NamespacedName{Namespace: pod.Namespace, Name: owner.Name}, rs); err != nil {
klog.ErrorS(err, "Failed to get ReplicaSet", "pod", klog.KObj(pod), "rollout", r.logKey, "owner", owner.Name, "namespace", pod.Namespace)
return err
}
delete(rs.Spec.Template.ObjectMeta.Labels, v1.DefaultDeploymentUniqueLabelKey)
hash = util.ComputeHash(&rs.Spec.Template, nil)
revisionHashCache[owner.UID] = hash
}
labels[v1.ControllerRevisionHashLabelKey] = hash
podsToPatchControllerRevision[pod] = hash
klog.InfoS("Pod controller-revision-hash updated", "pod", klog.KObj(pod), "rollout", r.logKey, "hash", hash)
}
}
// if it has been patched, just ignore
if pod.Labels[v1beta1.RolloutIDLabel] == ctx.RolloutID {
// we don't patch label for the active old revision pod
if !util.IsConsistentWithRevision(labels, ctx.UpdateRevision) {
klog.InfoS("Pod is not consistent with revision, skip patching", "pod", klog.KObj(pod),
"revision", ctx.UpdateRevision, "pod-template-hash", labels[v1.DefaultDeploymentUniqueLabelKey],
"controller-revision-hash", labels[v1.ControllerRevisionHashLabelKey], "rollout", r.logKey)
continue
}
if labels[v1beta1.RolloutIDLabel] != ctx.RolloutID {
// for example: new/recreated pods
updatedButUnpatchedPods = append(updatedButUnpatchedPods, pod)
klog.InfoS("Find a pod to add updatedButUnpatchedPods", "pod", klog.KObj(pod), "rollout", r.logKey)
continue
}
podBatchID, err := strconv.Atoi(labels[v1beta1.RolloutBatchIDLabel])
if err != nil {
klog.InfoS("Pod batchID is not a number, skip patching", "pod", klog.KObj(pod), "rollout", r.logKey)
continue
}
plannedUpdatedReplicasForBatches[podBatchID-1]--
}
klog.InfoS("updatedButUnpatchedPods amount calculated", "amount", len(updatedButUnpatchedPods),
"rollout", r.logKey, "plan", plannedUpdatedReplicasForBatches)
// patch the pods
for i := len(plannedUpdatedReplicasForBatches) - 1; i >= 0; i-- {
for ; plannedUpdatedReplicasForBatches[i] > 0; plannedUpdatedReplicasForBatches[i]-- {
if len(updatedButUnpatchedPods) == 0 {
klog.Warningf("no pods to patch for %v, batch %d", r.logKey, i+1)
i = -1
break
}
// patch the updated but unpatched pod
pod := updatedButUnpatchedPods[len(updatedButUnpatchedPods)-1]
clone := util.GetEmptyObjectWithKey(pod)
var patchStr string
if hash, ok := podsToPatchControllerRevision[pod]; ok {
patchStr = fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s","%s":"%d","%s":"%s"}}}`,
v1beta1.RolloutIDLabel, ctx.RolloutID, v1beta1.RolloutBatchIDLabel, i+1, v1.ControllerRevisionHashLabelKey, hash)
delete(podsToPatchControllerRevision, pod)
} else {
patchStr = fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s","%s":"%d"}}}`,
v1beta1.RolloutIDLabel, ctx.RolloutID, v1beta1.RolloutBatchIDLabel, i+1)
}
if err := r.Patch(context.TODO(), clone, client.RawPatch(types.StrategicMergePatchType, []byte(patchStr))); err != nil {
return err
}
klog.InfoS("Successfully patched Pod batchID", "batchID", i+1, "pod", klog.KObj(pod), "rollout", r.logKey)
// update the counter
updatedButUnpatchedPods = updatedButUnpatchedPods[:len(updatedButUnpatchedPods)-1]
}
klog.InfoS("All pods has been patched batchID", "batchID", i+1, "rollout", r.logKey)
}
// for rollback in batch, it is possible that some updated pods remain unpatched; we won't report an error
if len(updatedButUnpatchedPods) != 0 {
klog.Warningf("still %d pods left to patch for %v", len(updatedButUnpatchedPods), r.logKey)
}
// pods whose controller-revision-hash was computed above but that fall outside the rollout release need to be patched too.
// We must promptly patch the computed controller-revision-hash label onto the Pod so that it can be read directly
// during frequent Reconcile runs, avoiding a large amount of redundant computation.
for pod, hash := range podsToPatchControllerRevision {
clone := util.GetEmptyObjectWithKey(pod)
by := fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s","%s":"%d"}}}`,
v1beta1.RolloutIDLabel, ctx.RolloutID, v1beta1.RolloutBatchIDLabel, ctx.CurrentBatch+1)
if err := r.Patch(context.TODO(), clone, client.RawPatch(types.StrategicMergePatchType, []byte(by))); err != nil {
patchStr := fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s"}}}`, v1.ControllerRevisionHashLabelKey, hash)
if err := r.Patch(context.TODO(), clone, client.RawPatch(types.StrategicMergePatchType, []byte(patchStr))); err != nil {
return err
}
if pod.DeletionTimestamp.IsZero() {
patchedUpdatedReplicas++
}
klog.Infof("Successfully patch Pod(%v) batchID %d label", klog.KObj(pod), ctx.CurrentBatch+1)
klog.InfoS("Successfully patch Pod controller-revision-hash", "pod", klog.KObj(pod), "rollout", r.logKey, "hash", hash)
}
if patchedUpdatedReplicas >= plannedUpdatedReplicas {
return nil
}
return fmt.Errorf("patched %v pods for %v, however the goal is %d", patchedUpdatedReplicas, r.logKey, plannedUpdatedReplicas)
return nil
}
func (r *realPatcher) calculatePlannedStepIncrements(batches []v1beta1.ReleaseBatch, workloadReplicas, currentBatch int) (res []int) {
// batch indices greater than currentBatch are left as zero
res = make([]int, len(batches))
for i := 0; i <= currentBatch; i++ {
res[i] = calculateBatchReplicas(batches, workloadReplicas, i)
}
for i := currentBatch; i > 0; i-- {
res[i] -= res[i-1]
if res[i] < 0 {
klog.Warningf("Rollout %v batch replicas increment is less than 0", r.logKey)
}
}
return
}
func calculateBatchReplicas(batches []v1beta1.ReleaseBatch, workloadReplicas, currentBatch int) int {
batchSize, _ := intstr.GetScaledValueFromIntOrPercent(&batches[currentBatch].CanaryReplicas, workloadReplicas, true)
if batchSize > workloadReplicas {
klog.Warningf("releasePlan has wrong batch replicas, batches[%d].replicas %v is more than workload.replicas %v", currentBatch, batchSize, workloadReplicas)
batchSize = workloadReplicas
} else if batchSize < 0 {
klog.Warningf("releasePlan has wrong batch replicas, batches[%d].replicas %v is less than 0 %v", currentBatch, batchSize)
batchSize = 0
}
return batchSize
}
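A quick worked example of the increment math above (a sketch, not code from this patch): for a 20%/50%/100% plan over 10 replicas, the cumulative per-batch targets are [2 5 10], and the backward subtraction turns them into per-batch increments.
// per-batch increments for a 20%/50%/100% plan over 10 replicas
batches := []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromString("20%")},
{CanaryReplicas: intstr.FromString("50%")},
{CanaryReplicas: intstr.FromString("100%")},
}
p := &realPatcher{}
fmt.Println(p.calculatePlannedStepIncrements(batches, 10, 2)) // [2 3 5]
// with currentBatch = 1 the tail stays zero: [2 3 0]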

View File

@ -18,6 +18,7 @@ package labelpatch
import (
"context"
"reflect"
"strconv"
"testing"
@ -25,10 +26,15 @@ import (
. "github.com/onsi/gomega"
"github.com/openkruise/rollouts/api/v1beta1"
batchcontext "github.com/openkruise/rollouts/pkg/controller/batchrelease/context"
apps "k8s.io/api/apps/v1"
"github.com/openkruise/rollouts/pkg/util"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/util/rand"
"k8s.io/klog/v2"
"k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
@ -38,7 +44,8 @@ var (
)
func init() {
corev1.AddToScheme(scheme)
_ = corev1.AddToScheme(scheme)
_ = appsv1.AddToScheme(scheme)
}
func TestLabelPatcher(t *testing.T) {
@ -46,7 +53,8 @@ func TestLabelPatcher(t *testing.T) {
cases := map[string]struct {
batchContext func() *batchcontext.BatchContext
expectedPatched int
Batches []v1beta1.ReleaseBatch
expectedPatched []int
}{
"10 pods, 0 patched, 5 new patched": {
batchContext: func() *batchcontext.BatchContext {
@ -54,14 +62,18 @@ func TestLabelPatcher(t *testing.T) {
RolloutID: "rollout-1",
UpdateRevision: "version-1",
PlannedUpdatedReplicas: 5,
CurrentBatch: 0,
Replicas: 10,
}
pods := generatePods(1, ctx.Replicas, 0, "", "", ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
expectedPatched: 5,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 2 patched, 3 new patched": {
batchContext: func() *batchcontext.BatchContext {
@ -72,11 +84,16 @@ func TestLabelPatcher(t *testing.T) {
Replicas: 10,
}
pods := generatePods(1, ctx.Replicas, 2,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch)), ctx.UpdateRevision)
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
expectedPatched: 5,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 5 patched, 0 new patched": {
batchContext: func() *batchcontext.BatchContext {
@ -87,11 +104,16 @@ func TestLabelPatcher(t *testing.T) {
Replicas: 10,
}
pods := generatePods(1, ctx.Replicas, 5,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch)), ctx.UpdateRevision)
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
expectedPatched: 5,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 7 patched, 0 new patched": {
batchContext: func() *batchcontext.BatchContext {
@ -102,11 +124,16 @@ func TestLabelPatcher(t *testing.T) {
Replicas: 10,
}
pods := generatePods(1, ctx.Replicas, 7,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch)), ctx.UpdateRevision)
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
expectedPatched: 7,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{7},
},
"2 pods, 0 patched, 2 new patched": {
batchContext: func() *batchcontext.BatchContext {
@ -117,11 +144,16 @@ func TestLabelPatcher(t *testing.T) {
Replicas: 10,
}
pods := generatePods(1, 2, 0,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch)), ctx.UpdateRevision)
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
expectedPatched: 2,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{2},
},
"10 pods, 3 patched with old rollout-id, 5 new patched": {
batchContext: func() *batchcontext.BatchContext {
@ -132,11 +164,76 @@ func TestLabelPatcher(t *testing.T) {
Replicas: 10,
}
pods := generatePods(1, ctx.Replicas, 3,
"previous-rollout-id", strconv.Itoa(int(ctx.CurrentBatch)), ctx.UpdateRevision)
"previous-rollout-id", strconv.Itoa(int(ctx.CurrentBatch+1)), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
expectedPatched: 5,
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 2 patched with batch-id:1, 3 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: "version-1",
PlannedUpdatedReplicas: 5,
CurrentBatch: 1,
Replicas: 10,
}
pods := generatePods(1, 5, 2,
"rollout-1", strconv.Itoa(1), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromInt(2)},
{CanaryReplicas: intstr.FromInt(5)},
},
expectedPatched: []int{2, 3},
},
"10 pods, 0 patched with batch-id:1, 5 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: "version-1",
PlannedUpdatedReplicas: 5,
CurrentBatch: 1,
Replicas: 10,
}
pods := generatePods(1, 5, 0,
"rollout-1", strconv.Itoa(1), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromInt(2)},
{CanaryReplicas: intstr.FromInt(5)},
},
expectedPatched: []int{2, 3},
},
"10 pods, 3 patched with batch-id:1, 2 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: "version-1",
PlannedUpdatedReplicas: 5,
CurrentBatch: 1,
Replicas: 10,
}
pods := generatePods(1, 5, 3,
"rollout-1", strconv.Itoa(1), ctx.UpdateRevision)
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromInt(2)},
{CanaryReplicas: intstr.FromInt(5)},
},
expectedPatched: []int{3, 2},
},
}
@ -148,36 +245,327 @@ func TestLabelPatcher(t *testing.T) {
objects = append(objects, pod)
}
cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(objects...).Build()
patcher := NewLabelPatcher(cli, klog.ObjectRef{Name: "test"})
patchErr := patcher.patchPodBatchLabel(ctx.Pods, ctx)
patcher := NewLabelPatcher(cli, klog.ObjectRef{Name: "test"}, cs.Batches)
_ = patcher.patchPodBatchLabel(ctx.Pods, ctx)
podList := &corev1.PodList{}
err := cli.List(context.TODO(), podList)
Expect(err).NotTo(HaveOccurred())
patched := 0
patched := make([]int, ctx.CurrentBatch+1)
for _, pod := range podList.Items {
if pod.Labels[v1beta1.RolloutIDLabel] == ctx.RolloutID {
patched++
if batchID, err := strconv.Atoi(pod.Labels[v1beta1.RolloutBatchIDLabel]); err == nil {
patched[batchID-1]++
}
}
}
Expect(patched).Should(BeNumerically("==", cs.expectedPatched))
if patched < int(ctx.PlannedUpdatedReplicas) {
Expect(patchErr).To(HaveOccurred())
Expect(patched).To(Equal(cs.expectedPatched))
})
}
}
func TestDeploymentPatch(t *testing.T) {
rs := &appsv1.ReplicaSet{
ObjectMeta: metav1.ObjectMeta{
Name: "rs-1",
},
Spec: appsv1.ReplicaSetSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "nginx",
Image: "nginx:1.14.2",
Ports: []corev1.ContainerPort{
{
ContainerPort: 80,
},
},
},
},
},
},
},
}
revision := util.ComputeHash(&rs.Spec.Template, nil)
// randomly inserted to test cases, pods with this revision should be skipped and
// the result should not be influenced
skippedRevision := "should-skip"
cases := map[string]struct {
batchContext func() *batchcontext.BatchContext
Batches []v1beta1.ReleaseBatch
expectedPatched []int
}{
"10 pods, 0 patched, 5 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
Replicas: 10,
}
pods := generateDeploymentPods(1, ctx.Replicas, 0, "", "")
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 2 patched, 3 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
Replicas: 10,
}
pods := generateDeploymentPods(1, ctx.Replicas, 2,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 5 patched, 0 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
Replicas: 10,
}
pods := generateDeploymentPods(1, ctx.Replicas, 5,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 7 patched, 0 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
Replicas: 10,
}
pods := generateDeploymentPods(1, ctx.Replicas, 7,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{7},
},
"2 pods, 0 patched, 2 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
Replicas: 10,
}
pods := generateDeploymentPods(1, 2, 0,
ctx.RolloutID, strconv.Itoa(int(ctx.CurrentBatch+1)))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{2},
},
"10 pods, 3 patched with old rollout-id, 5 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
Replicas: 10,
}
pods := generateDeploymentPods(1, ctx.Replicas, 3,
"previous-rollout-id", strconv.Itoa(int(ctx.CurrentBatch+1)))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromInt(5),
},
},
expectedPatched: []int{5},
},
"10 pods, 2 patched with batch-id:1, 3 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
CurrentBatch: 1,
Replicas: 10,
}
pods := generateDeploymentPods(1, 5, 2,
"rollout-1", strconv.Itoa(1))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromInt(2)},
{CanaryReplicas: intstr.FromInt(5)},
},
expectedPatched: []int{2, 3},
},
"10 pods, 0 patched with batch-id:1, 5 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
CurrentBatch: 1,
Replicas: 10,
}
pods := generateDeploymentPods(1, 5, 0,
"rollout-1", strconv.Itoa(1))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromInt(2)},
{CanaryReplicas: intstr.FromInt(5)},
},
expectedPatched: []int{2, 3},
},
"10 pods, 3 patched with batch-id:1, 2 new patched": {
batchContext: func() *batchcontext.BatchContext {
ctx := &batchcontext.BatchContext{
RolloutID: "rollout-1",
UpdateRevision: revision,
PlannedUpdatedReplicas: 5,
CurrentBatch: 1,
Replicas: 10,
}
pods := generateDeploymentPods(1, 5, 3,
"rollout-1", strconv.Itoa(1))
ctx.Pods = pods
return ctx
},
Batches: []v1beta1.ReleaseBatch{
{CanaryReplicas: intstr.FromInt(2)},
{CanaryReplicas: intstr.FromInt(5)},
},
expectedPatched: []int{3, 2},
},
}
for name, cs := range cases {
t.Run(name, func(t *testing.T) {
ctx := cs.batchContext()
insertedSkippedPodNum := int32(rand.Intn(3))
if insertedSkippedPodNum > 0 {
ctx.Pods = append(ctx.Pods, generatePods(
100, 99+insertedSkippedPodNum, 0, "doesn't matter", "1", skippedRevision)...)
}
t.Logf("%d should-skip pods inserted", insertedSkippedPodNum)
if rand.Intn(2) > 0 {
now := metav1.Now()
ctx.Pods = append(ctx.Pods, &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &now,
Labels: map[string]string{
appsv1.ControllerRevisionHashLabelKey: skippedRevision,
},
},
})
t.Logf("deleted pod inserted")
}
var objects []client.Object
for _, pod := range ctx.Pods {
objects = append(objects, pod)
}
objects = append(objects, rs)
cli := fake.NewClientBuilder().WithScheme(scheme).WithObjects(objects...).Build()
patcher := NewLabelPatcher(cli, klog.ObjectRef{Name: "test"}, cs.Batches)
if err := patcher.patchPodBatchLabel(ctx.Pods, ctx); err != nil {
t.Fatalf("failed to patch pods: %v", err)
}
podList := &corev1.PodList{}
if err := cli.List(context.TODO(), podList); err != nil {
t.Fatalf("failed to list pods: %v", err)
}
patched := make([]int, ctx.CurrentBatch+1)
for _, pod := range podList.Items {
if pod.Labels[v1beta1.RolloutIDLabel] == ctx.RolloutID {
if batchID, err := strconv.Atoi(pod.Labels[v1beta1.RolloutBatchIDLabel]); err == nil {
patched[batchID-1]++
}
}
}
if !reflect.DeepEqual(patched, cs.expectedPatched) {
t.Fatalf("expected patched: %v, got: %v", cs.expectedPatched, patched)
}
for _, pod := range podList.Items {
if pod.Labels[appsv1.ControllerRevisionHashLabelKey] != revision &&
pod.Labels[appsv1.ControllerRevisionHashLabelKey] != skippedRevision {
t.Fatalf("expected pod %s/%s to have revision %s, got %s", pod.Namespace, pod.Name, revision, pod.Labels[appsv1.ControllerRevisionHashLabelKey])
}
}
})
}
}
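// generateDeploymentPods builds pods with ordinals [ordinalBegin, ordinalEnd]; the
// first `labeled` of them already carry the rollout-id and batch-id labels. Every
// pod gets a controller OwnerReference to the ReplicaSet rs-1 used by these tests.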
func generateDeploymentPods(ordinalBegin, ordinalEnd, labeled int32, rolloutID, batchID string) []*corev1.Pod {
podsWithLabel := generateLabeledPods(map[string]string{
v1beta1.RolloutIDLabel: rolloutID,
v1beta1.RolloutBatchIDLabel: batchID,
}, int(labeled), int(ordinalBegin))
total := ordinalEnd - ordinalBegin + 1
podsWithoutLabel := generateLabeledPods(map[string]string{}, int(total-labeled), int(labeled+ordinalBegin))
pods := append(podsWithoutLabel, podsWithLabel...)
for _, pod := range pods {
pod.OwnerReferences = []metav1.OwnerReference{
{
APIVersion: "apps/v1",
Kind: "ReplicaSet",
Name: "rs-1",
UID: "123",
Controller: pointer.Bool(true),
},
}
}
return pods
}
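// generatePods is like generateDeploymentPods, but additionally stamps every pod
// with the given controller-revision-hash, so callers can simulate pods that
// belong to a different revision.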
func generatePods(ordinalBegin, ordinalEnd, labeled int32, rolloutID, batchID, version string) []*corev1.Pod {
podsWithLabel := generateLabeledPods(map[string]string{
v1beta1.RolloutIDLabel: rolloutID,
v1beta1.RolloutBatchIDLabel: batchID,
appsv1.ControllerRevisionHashLabelKey: version,
}, int(labeled), int(ordinalBegin))
total := ordinalEnd - ordinalBegin + 1
podsWithoutLabel := generateLabeledPods(map[string]string{
appsv1.ControllerRevisionHashLabelKey: version,
}, int(total-labeled), int(labeled+ordinalBegin))
return append(podsWithoutLabel, podsWithLabel...)
}

View File

@@ -1,6 +1,5 @@
/*
Copyright 2022 The Kruise Authors.
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -81,7 +80,22 @@ func (dc *DeploymentController) getReplicaSetsForDeployment(ctx context.Context,
}
// List all ReplicaSets to find those we own but that no longer match our
// selector. They will be orphaned by ClaimReplicaSets().
allRSs, err := dc.rsLister.ReplicaSets(d.Namespace).List(deploymentSelector)
if err != nil {
return nil, fmt.Errorf("list %s/%s rs failed:%v", d.Namespace, d.Name, err)
}
// select the ReplicaSets owned by the current Deployment
ownedRSs := make([]*apps.ReplicaSet, 0)
for _, rs := range allRSs {
if !rs.DeletionTimestamp.IsZero() {
continue
}
if metav1.IsControlledBy(rs, d) {
ownedRSs = append(ownedRSs, rs)
}
}
return ownedRSs, nil
}
// syncDeployment will sync the deployment with the given key.

View File

@@ -0,0 +1,475 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package rollout
import (
"context"
"fmt"
"reflect"
"time"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/trafficrouting"
"github.com/openkruise/rollouts/pkg/util"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/tools/record"
"k8s.io/klog/v2"
utilpointer "k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type blueGreenReleaseManager struct {
client.Client
trafficRoutingManager *trafficrouting.Manager
recorder record.EventRecorder
}
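// runCanary drives one blue-green step through its state machine:
// Init -> Upgrade -> TrafficRouting -> MetricsAnalysis -> Paused -> Ready,
// and finally Completed once every step has run; each transition is stamped
// into blueGreenStatus together with its LastUpdateTime.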
func (m *blueGreenReleaseManager) runCanary(c *RolloutContext) error {
blueGreenStatus := c.NewStatus.BlueGreenStatus
if br, err := fetchBatchRelease(m.Client, c.Rollout.Namespace, c.Rollout.Name); err != nil && !errors.IsNotFound(err) {
klog.Errorf("rollout(%s/%s) fetch batchRelease failed: %s", c.Rollout.Namespace, c.Rollout.Name, err.Error())
return err
} else if err == nil {
// This call does two important things:
// - sync status from br to Rollout, for better observability;
// - sync the rollout-id from Rollout to br, so that BatchRelease
// relabels pods in the scene where only the rollout-id is changed.
if err = m.syncBatchRelease(br, blueGreenStatus); err != nil {
klog.Errorf("rollout(%s/%s) sync batchRelease failed: %s", c.Rollout.Namespace, c.Rollout.Name, err.Error())
return err
}
}
if blueGreenStatus.PodTemplateHash == "" {
blueGreenStatus.PodTemplateHash = c.Workload.PodTemplateHash
}
if m.doCanaryJump(c) {
return nil
}
// When the first batch used trafficRouting and the following steps are a plain rolling release,
// we need to clean up the canary-related resources first and then roll out the rest of the batches.
currentStep := c.Rollout.Spec.Strategy.BlueGreen.Steps[blueGreenStatus.CurrentStepIndex-1]
if currentStep.Traffic == nil && len(currentStep.Matches) == 0 {
tr := newTrafficRoutingContext(c)
done, err := m.trafficRoutingManager.FinalisingTrafficRouting(tr)
blueGreenStatus.LastUpdateTime = tr.LastUpdateTime
if err != nil {
return err
} else if !done {
klog.Infof("rollout(%s/%s) cleaning up canary-related resources", c.Rollout.Namespace, c.Rollout.Name)
expectedTime := time.Now().Add(tr.RecheckDuration)
c.RecheckTime = &expectedTime
return nil
}
}
switch blueGreenStatus.CurrentStepState {
// before CanaryStepStateUpgrade, handle some special cases, to prevent traffic loss
case v1beta1.CanaryStepStateInit:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateInit)
tr := newTrafficRoutingContext(c)
if currentStep.Traffic != nil || len(currentStep.Matches) > 0 {
//TODO - consider istio subsets
if blueGreenStatus.CurrentStepIndex == 1 {
klog.Infof("Before the first batch, rollout(%s/%s) patch stable Service", c.Rollout.Namespace, c.Rollout.Name)
retry, err := m.trafficRoutingManager.PatchStableService(tr)
if err != nil {
return err
} else if retry {
expectedTime := time.Now().Add(tr.RecheckDuration)
c.RecheckTime = &expectedTime
return nil
}
}
}
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateUpgrade
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
blueGreenStatus.CurrentStepIndex, v1beta1.CanaryStepStateInit, blueGreenStatus.CurrentStepState)
fallthrough
case v1beta1.CanaryStepStateUpgrade:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateUpgrade)
done, err := m.doCanaryUpgrade(c)
if err != nil {
return err
} else if done {
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateTrafficRouting
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
blueGreenStatus.CurrentStepIndex, v1beta1.CanaryStepStateUpgrade, blueGreenStatus.CurrentStepState)
}
case v1beta1.CanaryStepStateTrafficRouting:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateTrafficRouting)
tr := newTrafficRoutingContext(c)
done, err := m.trafficRoutingManager.DoTrafficRouting(tr)
blueGreenStatus.LastUpdateTime = tr.LastUpdateTime
if err != nil {
return err
} else if done {
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateMetricsAnalysis
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
blueGreenStatus.CurrentStepIndex, v1beta1.CanaryStepStateTrafficRouting, blueGreenStatus.CurrentStepState)
}
expectedTime := time.Now().Add(time.Duration(defaultGracePeriodSeconds) * time.Second)
c.RecheckTime = &expectedTime
case v1beta1.CanaryStepStateMetricsAnalysis:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateMetricsAnalysis)
done, err := m.doCanaryMetricsAnalysis(c)
if err != nil {
return err
} else if done {
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStatePaused
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
blueGreenStatus.CurrentStepIndex, v1beta1.CanaryStepStateMetricsAnalysis, blueGreenStatus.CurrentStepState)
}
case v1beta1.CanaryStepStatePaused:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStatePaused)
done, err := m.doCanaryPaused(c)
if err != nil {
return err
} else if done {
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateReady
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
blueGreenStatus.CurrentStepIndex, v1beta1.CanaryStepStatePaused, blueGreenStatus.CurrentStepState)
}
case v1beta1.CanaryStepStateReady:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateReady)
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
// run next step
if len(c.Rollout.Spec.Strategy.BlueGreen.Steps) > int(blueGreenStatus.CurrentStepIndex) {
blueGreenStatus.CurrentStepIndex++
blueGreenStatus.NextStepIndex = util.NextBatchIndex(c.Rollout, blueGreenStatus.CurrentStepIndex)
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateInit
klog.Infof("rollout(%s/%s) bluegreen step from(%d) -> to(%d)", c.Rollout.Namespace, c.Rollout.Name, blueGreenStatus.CurrentStepIndex-1, blueGreenStatus.CurrentStepIndex)
return nil
}
// completed
blueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateCompleted
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s), run all steps", c.Rollout.Namespace, c.Rollout.Name,
blueGreenStatus.CurrentStepIndex, v1beta1.CanaryStepStateReady, blueGreenStatus.CurrentStepState)
fallthrough
// canary completed
case v1beta1.CanaryStepStateCompleted:
klog.Infof("rollout(%s/%s) run bluegreen strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateCompleted)
}
return nil
}
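// doCanaryUpgrade ensures the BatchRelease for the current step exists and matches
// the desired release plan, then reports done only once the BatchRelease controller
// has observed that plan and the current batch of new pods is ready.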
func (m *blueGreenReleaseManager) doCanaryUpgrade(c *RolloutContext) (bool, error) {
// verify whether batchRelease configuration is the latest
steps := len(c.Rollout.Spec.Strategy.BlueGreen.Steps)
blueGreenStatus := c.NewStatus.BlueGreenStatus
cond := util.GetRolloutCondition(*c.NewStatus, v1beta1.RolloutConditionProgressing)
cond.Message = fmt.Sprintf("Rollout is in step(%d/%d), and upgrade workload to new version", blueGreenStatus.CurrentStepIndex, steps)
c.NewStatus.Message = cond.Message
// run batch release to upgrade the workloads
done, br, err := runBatchRelease(m, c.Rollout, getRolloutID(c.Workload), blueGreenStatus.CurrentStepIndex, c.Workload.IsInRollback)
if err != nil {
return false, err
} else if !done {
return false, nil
}
if br.Status.ObservedReleasePlanHash != util.HashReleasePlanBatches(&br.Spec.ReleasePlan) ||
br.Generation != br.Status.ObservedGeneration {
klog.Infof("rollout(%s/%s) batchRelease status is inconsistent, and wait a moment", c.Rollout.Namespace, c.Rollout.Name)
return false, nil
}
// check whether the batchRelease is ready (i.e. whether the new pods are ready)
if br.Status.CanaryStatus.CurrentBatchState != v1beta1.ReadyBatchState ||
br.Status.CanaryStatus.CurrentBatch+1 < blueGreenStatus.CurrentStepIndex {
klog.Infof("rollout(%s/%s) batchRelease status(%s) is not ready, and wait a moment", c.Rollout.Namespace, c.Rollout.Name, util.DumpJSON(br.Status))
return false, nil
}
m.recorder.Eventf(c.Rollout, corev1.EventTypeNormal, "Progressing", fmt.Sprintf("upgrade step(%d) canary pods with new versions done", blueGreenStatus.CurrentStepIndex))
klog.Infof("rollout(%s/%s) batch(%s) state(%s), and success",
c.Rollout.Namespace, c.Rollout.Name, util.DumpJSON(br.Status), br.Status.CanaryStatus.CurrentBatchState)
// set the latest PodTemplateHash to select the latest pods.
blueGreenStatus.PodTemplateHash = c.Workload.PodTemplateHash
return true, nil
}
func (m *blueGreenReleaseManager) doCanaryMetricsAnalysis(c *RolloutContext) (bool, error) {
// todo
return true, nil
}
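// doCanaryPaused waits out the current step's pause: without a Duration it blocks
// until the user manually confirms; otherwise it requeues until
// LastUpdateTime + Duration has elapsed.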
func (m *blueGreenReleaseManager) doCanaryPaused(c *RolloutContext) (bool, error) {
blueGreenStatus := c.NewStatus.BlueGreenStatus
currentStep := c.Rollout.Spec.Strategy.BlueGreen.Steps[blueGreenStatus.CurrentStepIndex-1]
steps := len(c.Rollout.Spec.Strategy.BlueGreen.Steps)
cond := util.GetRolloutCondition(*c.NewStatus, v1beta1.RolloutConditionProgressing)
// need manual confirmation
if currentStep.Pause.Duration == nil {
klog.Infof("rollout(%s/%s) don't set pause duration, and need manual confirmation", c.Rollout.Namespace, c.Rollout.Name)
cond.Message = fmt.Sprintf("Rollout is in step(%d/%d), and you need manually confirm to enter the next step", blueGreenStatus.CurrentStepIndex, steps)
c.NewStatus.Message = cond.Message
return false, nil
}
cond.Message = fmt.Sprintf("Rollout is in step(%d/%d), and wait duration(%d seconds) to enter the next step", blueGreenStatus.CurrentStepIndex, steps, *currentStep.Pause.Duration)
c.NewStatus.Message = cond.Message
// wait duration time, then go to next step
duration := time.Second * time.Duration(*currentStep.Pause.Duration)
expectedTime := blueGreenStatus.LastUpdateTime.Add(duration)
if expectedTime.Before(time.Now()) {
klog.Infof("rollout(%s/%s) canary step(%d) paused duration(%d seconds), and go to the next step",
c.Rollout.Namespace, c.Rollout.Name, blueGreenStatus.CurrentStepIndex, *currentStep.Pause.Duration)
return true, nil
}
c.RecheckTime = &expectedTime
return false, nil
}
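// doCanaryJump handles a user-driven jump: when NextStepIndex no longer equals the
// natural next batch, it moves CurrentStepIndex there directly, re-entering Init
// unless the target step keeps the same Replicas, in which case traffic routing
// alone is sufficient.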
func (m *blueGreenReleaseManager) doCanaryJump(c *RolloutContext) (jumped bool) {
bluegreenStatus := c.NewStatus.BlueGreenStatus
// since we forbid adding or removing steps, currentStepIndex should always be valid
currentStep := c.Rollout.Spec.Strategy.BlueGreen.Steps[bluegreenStatus.CurrentStepIndex-1]
// nextIndex=-1 means the release is done, nextIndex=0 is not used
if nextIndex := bluegreenStatus.NextStepIndex; nextIndex != util.NextBatchIndex(c.Rollout, bluegreenStatus.CurrentStepIndex) && nextIndex > 0 {
currentIndexBackup := bluegreenStatus.CurrentStepIndex
currentStepStateBackup := bluegreenStatus.CurrentStepState
// update the current and next stepIndex
bluegreenStatus.CurrentStepIndex = nextIndex
bluegreenStatus.NextStepIndex = util.NextBatchIndex(c.Rollout, nextIndex)
nextStep := c.Rollout.Spec.Strategy.BlueGreen.Steps[nextIndex-1]
// compare next step and current step to decide the state we should go
if reflect.DeepEqual(nextStep.Replicas, currentStep.Replicas) {
bluegreenStatus.CurrentStepState = v1beta1.CanaryStepStateTrafficRouting
} else {
bluegreenStatus.CurrentStepState = v1beta1.CanaryStepStateInit
}
bluegreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
klog.Infof("rollout(%s/%s) step(%d->%d) state from(%s -> %s)",
c.Rollout.Namespace, c.Rollout.Name,
currentIndexBackup, bluegreenStatus.CurrentStepIndex,
currentStepStateBackup, bluegreenStatus.CurrentStepState)
return true
}
return false
}
// cleanup after rollout is completed or finished
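// Finalising runs one step per reconcile: the switch below executes the current
// FinalisingStep, and only when that step neither errors nor requests a retry does
// the status advance to the next step, until FinalisingStepTypeEnd is reached.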
func (m *blueGreenReleaseManager) doCanaryFinalising(c *RolloutContext) (bool, error) {
blueGreenStatus := c.NewStatus.BlueGreenStatus
if blueGreenStatus == nil {
return true, nil
}
// rollout progressing complete, remove rollout progressing annotation in workload
err := removeRolloutProgressingAnnotation(m.Client, c)
if err != nil {
return false, err
}
tr := newTrafficRoutingContext(c)
// execute steps based on the predefined order for each reason
nextStep := nextBlueGreenTask(c.FinalizeReason, blueGreenStatus.FinalisingStep)
// if the current step is empty, set it to the first step
// if the current step is End, just return
if len(blueGreenStatus.FinalisingStep) == 0 {
blueGreenStatus.FinalisingStep = nextStep
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
} else if blueGreenStatus.FinalisingStep == v1beta1.FinalisingStepTypeEnd {
klog.Infof("rollout(%s/%s) finalising process is already completed", c.Rollout.Namespace, c.Rollout.Name)
return true, nil
}
klog.Infof("rollout(%s/%s) Finalising Step is %s", c.Rollout.Namespace, c.Rollout.Name, blueGreenStatus.FinalisingStep)
var retry bool
// the order of steps is maintained by calculating the nextStep
switch blueGreenStatus.FinalisingStep {
// set workload.pause=false; set workload.partition=0
case v1beta1.FinalisingStepResumeWorkload:
retry, err = finalizingBatchRelease(m.Client, c)
// delete batchRelease
case v1beta1.FinalisingStepReleaseWorkloadControl:
retry, err = removeBatchRelease(m.Client, c)
// restore the gateway resources (ingress/gatewayAPI/Istio), that means
// only stable Service will accept the traffic
case v1beta1.FinalisingStepRouteTrafficToStable:
retry, err = m.trafficRoutingManager.RestoreGateway(tr)
// restore the stable service
case v1beta1.FinalisingStepRestoreStableService:
retry, err = m.trafficRoutingManager.RestoreStableService(tr)
// remove canary service
case v1beta1.FinalisingStepRemoveCanaryService:
retry, err = m.trafficRoutingManager.RemoveCanaryService(tr)
// route all traffic to new version
case v1beta1.FinalisingStepRouteTrafficToNew:
retry, err = m.trafficRoutingManager.RouteAllTrafficToNewVersion(tr)
// dangerous, wait endlessly, only for debugging use
case v1beta1.FinalisingStepWaitEndless:
retry, err = true, fmt.Errorf("only for debugging, just wait endlessly")
default:
nextStep = nextBlueGreenTask(c.FinalizeReason, "")
klog.Warningf("unexpected finalising step, current step(%s), start from the first step(%s)", blueGreenStatus.FinalisingStep, nextStep)
blueGreenStatus.FinalisingStep = nextStep
return false, nil
}
if err != nil || retry {
return false, err
}
// current step is done, run the next step
blueGreenStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
blueGreenStatus.FinalisingStep = nextStep
if blueGreenStatus.FinalisingStep == v1beta1.FinalisingStepTypeEnd {
return true, nil
}
return false, nil
}
func (m *blueGreenReleaseManager) fetchClient() client.Client {
return m.Client
}
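// createBatchRelease maps each blue-green step's Replicas to a release batch and
// pins BatchPartition to the current batch, so the BatchRelease controller does
// not run past the step the rollout has confirmed.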
func (m *blueGreenReleaseManager) createBatchRelease(rollout *v1beta1.Rollout, rolloutID string, batch int32, isRollback bool) *v1beta1.BatchRelease {
var batches []v1beta1.ReleaseBatch
for _, step := range rollout.Spec.Strategy.BlueGreen.Steps {
batches = append(batches, v1beta1.ReleaseBatch{CanaryReplicas: *step.Replicas})
}
br := &v1beta1.BatchRelease{
ObjectMeta: metav1.ObjectMeta{
Namespace: rollout.Namespace,
Name: rollout.Name,
OwnerReferences: []metav1.OwnerReference{*metav1.NewControllerRef(rollout, rolloutControllerKind)},
},
Spec: v1beta1.BatchReleaseSpec{
WorkloadRef: v1beta1.ObjectRef{
APIVersion: rollout.Spec.WorkloadRef.APIVersion,
Kind: rollout.Spec.WorkloadRef.Kind,
Name: rollout.Spec.WorkloadRef.Name,
},
ReleasePlan: v1beta1.ReleasePlan{
Batches: batches,
RolloutID: rolloutID,
BatchPartition: utilpointer.Int32(batch),
FailureThreshold: rollout.Spec.Strategy.BlueGreen.FailureThreshold,
RollingStyle: v1beta1.BlueGreenRollingStyle,
},
},
}
annotations := map[string]string{}
if isRollback {
annotations[v1alpha1.RollbackInBatchAnnotation] = rollout.Annotations[v1alpha1.RollbackInBatchAnnotation]
}
if len(annotations) > 0 {
br.Annotations = annotations
}
return br
}
// syncBatchRelease syncs the status of br to blueGreenStatus, and syncs the rollout-id of blueGreenStatus to br.
func (m *blueGreenReleaseManager) syncBatchRelease(br *v1beta1.BatchRelease, blueGreenStatus *v1beta1.BlueGreenStatus) error {
// sync from BatchRelease status to Rollout blueGreenStatus
blueGreenStatus.UpdatedReplicas = br.Status.CanaryStatus.UpdatedReplicas
blueGreenStatus.UpdatedReadyReplicas = br.Status.CanaryStatus.UpdatedReadyReplicas
// Do not remove this line for now; otherwise users will not be able to judge whether the BatchRelease works
// in the scene where only the rollout-id changed.
// TODO: optimize the logic to make it easier to understand
blueGreenStatus.Message = fmt.Sprintf("BatchRelease is at state %s, rollout-id %s, step %d",
br.Status.CanaryStatus.CurrentBatchState, br.Status.ObservedRolloutID, br.Status.CanaryStatus.CurrentBatch+1)
// br.Status.Message records messages that help users to understand what is going wrong
if len(br.Status.Message) > 0 {
blueGreenStatus.Message += fmt.Sprintf(", %s", br.Status.Message)
}
// sync rolloutId from blueGreenStatus to BatchRelease
if blueGreenStatus.ObservedRolloutID != br.Spec.ReleasePlan.RolloutID {
body := fmt.Sprintf(`{"spec":{"releasePlan":{"rolloutID":"%s"}}}`, blueGreenStatus.ObservedRolloutID)
return m.Patch(context.TODO(), br, client.RawPatch(types.MergePatchType, []byte(body)))
}
return nil
}
/*
- Rollback Scenario:
why is the first step to restore the gateway? (i.e. route all traffic to the stable version)
We cannot remove the selector of the stable service first, as canary does, because users are allowed to configure
"0%" traffic in the blue-green strategy. Consider the following example:
- replicas: 50% # step 1
  traffic: 0%
If the user is at step 1 and then attempts to roll back directly, Rollout should route all traffic to the stable
service (which actually keeps it unchanged). However, if we removed the selector of the stable service instead, we
would inadvertently route some traffic to the new version for a period, which is undesirable.
- Rollout Deletion and Disabling Scenario:
If the Rollout is being deleted or disabled, it suggests users want to release the new version using the workload's
built-in strategy, such as rollingUpdate for Deployment, instead of blue-green or canary. Thus, we can simply remove
the label selector of the stable service, letting traffic reach both stable and updated pods.
- Rollout Success Scenario:
This indicates the rollout has completed its final batch and the user has confirmed the transition fully to the
new version. We can simply route all traffic to the new version. Additionally, given that all traffic is routed to
the canary Service, it is safe to remove the selector of the stable Service, which also works as a workaround for
a bug in the ingress-nginx controller (see https://github.com/kubernetes/ingress-nginx/issues/9635).
A short usage sketch of the resulting task order follows the function below.
*/
func nextBlueGreenTask(reason string, currentTask v1beta1.FinalisingStepType) v1beta1.FinalisingStepType {
var taskSequence []v1beta1.FinalisingStepType
switch reason {
case v1beta1.FinaliseReasonSuccess: // success
taskSequence = []v1beta1.FinalisingStepType{
v1beta1.FinalisingStepRouteTrafficToNew,
v1beta1.FinalisingStepRestoreStableService,
v1beta1.FinalisingStepResumeWorkload,
v1beta1.FinalisingStepRouteTrafficToStable,
v1beta1.FinalisingStepRemoveCanaryService,
v1beta1.FinalisingStepReleaseWorkloadControl,
}
case v1beta1.FinaliseReasonRollback: // rollback
taskSequence = []v1beta1.FinalisingStepType{
v1beta1.FinalisingStepRouteTrafficToStable, // route all traffic to stable version
v1beta1.FinalisingStepResumeWorkload,
v1beta1.FinalisingStepRestoreStableService,
v1beta1.FinalisingStepRemoveCanaryService,
v1beta1.FinalisingStepReleaseWorkloadControl,
}
default: // others: disabled/deleting rollout
taskSequence = []v1beta1.FinalisingStepType{
v1beta1.FinalisingStepRestoreStableService,
v1beta1.FinalisingStepRouteTrafficToStable,
v1beta1.FinalisingStepRemoveCanaryService,
v1beta1.FinalisingStepResumeWorkload, // scale up new, scale down old
v1beta1.FinalisingStepReleaseWorkloadControl,
}
}
// if currentTask is empty, return first task
if len(currentTask) == 0 {
return taskSequence[0]
}
// find next task
for i := range taskSequence {
if currentTask == taskSequence[i] && i < len(taskSequence)-1 {
return taskSequence[i+1]
}
}
return v1beta1.FinalisingStepTypeEnd
}
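// A minimal usage sketch (an illustration, not part of this diff): repeatedly
// feeding the current step back into nextBlueGreenTask enumerates the whole
// finalising order for a reason; finalisingOrder is a hypothetical helper name.
func finalisingOrder(reason string) []v1beta1.FinalisingStepType {
	var order []v1beta1.FinalisingStepType
	// an empty current task yields the first task of the sequence
	for step := nextBlueGreenTask(reason, ""); step != v1beta1.FinalisingStepTypeEnd; step = nextBlueGreenTask(reason, step) {
		order = append(order, step)
	}
	return order
}
// For v1beta1.FinaliseReasonSuccess this walks: RouteTrafficToNew ->
// RestoreStableService -> ResumeWorkload -> RouteTrafficToStable ->
// RemoveCanaryService -> ReleaseWorkloadControl, then ends.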

View File

@@ -0,0 +1,338 @@
/*
Copyright 2022 The Kruise Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package rollout
import (
"context"
"reflect"
"testing"
"github.com/openkruise/rollouts/api/v1alpha1"
"github.com/openkruise/rollouts/api/v1beta1"
"github.com/openkruise/rollouts/pkg/trafficrouting"
"github.com/openkruise/rollouts/pkg/util"
apps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
netv1 "k8s.io/api/networking/v1"
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/tools/record"
utilpointer "k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
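// TestBlueGreenRunCanary seeds a fake client with each case's Deployment,
// ReplicaSets, Service/Ingress and (optionally) an existing BatchRelease, runs a
// single runCanary pass, then compares the resulting BatchRelease spec and
// rollout status against the expectations.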
func TestBlueGreenRunCanary(t *testing.T) {
cases := []struct {
name string
getObj func() ([]*apps.Deployment, []*apps.ReplicaSet)
getNetwork func() ([]*corev1.Service, []*netv1.Ingress)
getRollout func() (*v1beta1.Rollout, *v1beta1.BatchRelease)
expectStatus func() *v1beta1.RolloutStatus
expectBr func() *v1beta1.BatchRelease
}{
{
name: "run bluegreen upgrade1",
getObj: func() ([]*apps.Deployment, []*apps.ReplicaSet) {
dep1 := deploymentDemo.DeepCopy()
rs1 := rsDemo.DeepCopy()
return []*apps.Deployment{dep1}, []*apps.ReplicaSet{rs1}
},
getNetwork: func() ([]*corev1.Service, []*netv1.Ingress) {
return []*corev1.Service{demoService.DeepCopy()}, []*netv1.Ingress{demoIngress.DeepCopy()}
},
getRollout: func() (*v1beta1.Rollout, *v1beta1.BatchRelease) {
obj := rolloutDemoBlueGreen.DeepCopy()
obj.Status.BlueGreenStatus.ObservedWorkloadGeneration = 2
obj.Status.BlueGreenStatus.RolloutHash = "f55bvd874d5f2fzvw46bv966x4bwbdv4wx6bd9f7b46ww788954b8z8w29b7wxfd"
obj.Status.BlueGreenStatus.StableRevision = "pod-template-hash-v1"
obj.Status.BlueGreenStatus.UpdatedRevision = "6f8cc56547"
obj.Status.BlueGreenStatus.CurrentStepIndex = 1
obj.Status.BlueGreenStatus.NextStepIndex = 2
obj.Status.BlueGreenStatus.ObservedRolloutID = "88bd5dbfd"
obj.Status.BlueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateUpgrade
cond := util.GetRolloutCondition(obj.Status, v1beta1.RolloutConditionProgressing)
cond.Reason = v1alpha1.ProgressingReasonInRolling
util.SetRolloutCondition(&obj.Status, *cond)
return obj, nil
},
expectStatus: func() *v1beta1.RolloutStatus {
s := rolloutDemoBlueGreen.Status.DeepCopy()
s.BlueGreenStatus.ObservedWorkloadGeneration = 2
s.BlueGreenStatus.RolloutHash = "f55bvd874d5f2fzvw46bv966x4bwbdv4wx6bd9f7b46ww788954b8z8w29b7wxfd"
s.BlueGreenStatus.StableRevision = "pod-template-hash-v1"
s.BlueGreenStatus.UpdatedRevision = "6f8cc56547"
s.BlueGreenStatus.CurrentStepIndex = 1
s.BlueGreenStatus.NextStepIndex = 2
s.BlueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateUpgrade
s.BlueGreenStatus.ObservedRolloutID = "88bd5dbfd"
cond := util.GetRolloutCondition(*s, v1beta1.RolloutConditionProgressing)
cond.Reason = v1alpha1.ProgressingReasonInRolling
util.SetRolloutCondition(s, *cond)
return s
},
expectBr: func() *v1beta1.BatchRelease {
br := batchDemo.DeepCopy()
br.Spec.ReleasePlan.Batches = []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("50%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
}
br.Spec.ReleasePlan.BatchPartition = utilpointer.Int32(0)
br.Spec.ReleasePlan.RollingStyle = v1beta1.BlueGreenRollingStyle
br.Spec.ReleasePlan.RolloutID = "88bd5dbfd"
return br
},
},
{
name: "run bluegreen traffic routing",
getObj: func() ([]*apps.Deployment, []*apps.ReplicaSet) {
dep1 := deploymentDemo.DeepCopy()
rs1 := rsDemo.DeepCopy()
rs2 := rsDemo.DeepCopy()
rs2.Name = "echoserver-canary"
rs2.Labels["pod-template-hash"] = "pod-template-hash-v2"
rs2.Spec.Template.Spec.Containers[0].Image = "echoserver:v2"
return []*apps.Deployment{dep1}, []*apps.ReplicaSet{rs1, rs2}
},
getNetwork: func() ([]*corev1.Service, []*netv1.Ingress) {
return []*corev1.Service{demoService.DeepCopy()}, []*netv1.Ingress{demoIngress.DeepCopy()}
},
getRollout: func() (*v1beta1.Rollout, *v1beta1.BatchRelease) {
obj := rolloutDemoBlueGreen.DeepCopy()
obj.Status.BlueGreenStatus.ObservedWorkloadGeneration = 2
obj.Status.BlueGreenStatus.RolloutHash = "f55bvd874d5f2fzvw46bv966x4bwbdv4wx6bd9f7b46ww788954b8z8w29b7wxfd"
obj.Status.BlueGreenStatus.StableRevision = "pod-template-hash-v1"
obj.Status.BlueGreenStatus.UpdatedRevision = "6f8cc56547"
obj.Status.BlueGreenStatus.CurrentStepIndex = 1
obj.Status.BlueGreenStatus.NextStepIndex = 2
obj.Status.BlueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateUpgrade
obj.Status.BlueGreenStatus.ObservedRolloutID = "88bd5dbfd"
cond := util.GetRolloutCondition(obj.Status, v1beta1.RolloutConditionProgressing)
cond.Reason = v1alpha1.ProgressingReasonInRolling
util.SetRolloutCondition(&obj.Status, *cond)
br := batchDemo.DeepCopy()
br.Spec.ReleasePlan.Batches = []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("50%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
}
br.Spec.ReleasePlan.BatchPartition = utilpointer.Int32(0)
br.Spec.ReleasePlan.RollingStyle = v1beta1.BlueGreenRollingStyle
br.Spec.ReleasePlan.RolloutID = "88bd5dbfd"
br.Status = v1beta1.BatchReleaseStatus{
ObservedGeneration: 1,
ObservedReleasePlanHash: util.HashReleasePlanBatches(&br.Spec.ReleasePlan),
CanaryStatus: v1beta1.BatchReleaseCanaryStatus{
CurrentBatchState: v1beta1.ReadyBatchState,
CurrentBatch: 0,
UpdatedReplicas: 1,
UpdatedReadyReplicas: 1,
},
}
return obj, br
},
expectStatus: func() *v1beta1.RolloutStatus {
s := rolloutDemoBlueGreen.Status.DeepCopy()
s.BlueGreenStatus.ObservedWorkloadGeneration = 2
s.BlueGreenStatus.RolloutHash = "f55bvd874d5f2fzvw46bv966x4bwbdv4wx6bd9f7b46ww788954b8z8w29b7wxfd"
s.BlueGreenStatus.StableRevision = "pod-template-hash-v1"
s.BlueGreenStatus.UpdatedRevision = "6f8cc56547"
s.BlueGreenStatus.PodTemplateHash = "pod-template-hash-v2"
s.BlueGreenStatus.UpdatedReplicas = 1
s.BlueGreenStatus.UpdatedReadyReplicas = 1
s.BlueGreenStatus.CurrentStepIndex = 1
s.BlueGreenStatus.NextStepIndex = 2
s.BlueGreenStatus.ObservedRolloutID = "88bd5dbfd"
s.BlueGreenStatus.CurrentStepState = v1beta1.CanaryStepStateTrafficRouting
cond := util.GetRolloutCondition(*s, v1beta1.RolloutConditionProgressing)
cond.Reason = v1alpha1.ProgressingReasonInRolling
util.SetRolloutCondition(s, *cond)
return s
},
expectBr: func() *v1beta1.BatchRelease {
br := batchDemo.DeepCopy()
br.Spec.ReleasePlan.Batches = []v1beta1.ReleaseBatch{
{
CanaryReplicas: intstr.FromString("50%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
{
CanaryReplicas: intstr.FromString("100%"),
},
}
br.Spec.ReleasePlan.BatchPartition = utilpointer.Int32(0)
br.Spec.ReleasePlan.RollingStyle = v1beta1.BlueGreenRollingStyle
br.Spec.ReleasePlan.RolloutID = "88bd5dbfd"
return br
},
},
}
for _, cs := range cases {
t.Run(cs.name, func(t *testing.T) {
deps, rss := cs.getObj()
rollout, br := cs.getRollout()
fc := fake.NewClientBuilder().WithScheme(scheme).WithObjects(rollout).Build()
for _, rs := range rss {
_ = fc.Create(context.TODO(), rs)
}
for _, dep := range deps {
_ = fc.Create(context.TODO(), dep)
}
if br != nil {
_ = fc.Create(context.TODO(), br)
}
ss, in := cs.getNetwork()
_ = fc.Create(context.TODO(), ss[0])
_ = fc.Create(context.TODO(), in[0])
r := &RolloutReconciler{
Client: fc,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
finder: util.NewControllerFinder(fc),
trafficRoutingManager: trafficrouting.NewTrafficRoutingManager(fc),
}
r.blueGreenManager = &blueGreenReleaseManager{
Client: fc,
trafficRoutingManager: r.trafficRoutingManager,
recorder: r.Recorder,
}
workload, err := r.finder.GetWorkloadForRef(rollout)
if err != nil {
t.Fatalf("GetWorkloadForRef failed: %s", err.Error())
}
c := &RolloutContext{
Rollout: rollout,
NewStatus: rollout.Status.DeepCopy(),
Workload: workload,
}
err = r.blueGreenManager.runCanary(c)
if err != nil {
t.Fatalf("reconcileRolloutProgressing failed: %s", err.Error())
}
checkBatchReleaseEqual(fc, t, client.ObjectKey{Name: rollout.Name}, cs.expectBr())
cStatus := c.NewStatus.DeepCopy()
cStatus.Message = ""
if cStatus.BlueGreenStatus != nil {
cStatus.BlueGreenStatus.LastUpdateTime = nil
cStatus.BlueGreenStatus.Message = ""
}
cond := util.GetRolloutCondition(*cStatus, v1beta1.RolloutConditionProgressing)
cond.Message = ""
util.SetRolloutCondition(cStatus, *cond)
expectStatus := cs.expectStatus()
if !reflect.DeepEqual(expectStatus, cStatus) {
t.Fatalf("expect(%s), but get(%s)", util.DumpJSON(cs.expectStatus()), util.DumpJSON(cStatus))
}
})
}
}
func TestBlueGreenRunCanaryPaused(t *testing.T) {
cases := []struct {
name string
getRollout func() *v1beta1.Rollout
expectStatus func() *v1beta1.RolloutStatus
}{
{
name: "paused, last step, 60% weight",
getRollout: func() *v1beta1.Rollout {
obj := rolloutDemoBlueGreen.DeepCopy()
obj.Status.BlueGreenStatus.ObservedWorkloadGeneration = 2
obj.Status.BlueGreenStatus.RolloutHash = "f55bvd874d5f2fzvw46bv966x4bwbdv4wx6bd9f7b46ww788954b8z8w29b7wxfd"
obj.Status.BlueGreenStatus.StableRevision = "pod-template-hash-v1"
obj.Status.BlueGreenStatus.UpdatedRevision = "6f8cc56547"
obj.Status.BlueGreenStatus.CurrentStepIndex = 3
obj.Status.BlueGreenStatus.NextStepIndex = 4
obj.Status.BlueGreenStatus.PodTemplateHash = "pod-template-hash-v2"
obj.Status.BlueGreenStatus.CurrentStepState = v1beta1.CanaryStepStatePaused
return obj
},
expectStatus: func() *v1beta1.RolloutStatus {
obj := rolloutDemoBlueGreen.Status.DeepCopy()
obj.BlueGreenStatus.ObservedWorkloadGeneration = 2
obj.BlueGreenStatus.RolloutHash = "f55bvd874d5f2fzvw46bv966x4bwbdv4wx6bd9f7b46ww788954b8z8w29b7wxfd"
obj.BlueGreenStatus.StableRevision = "pod-template-hash-v1"
obj.BlueGreenStatus.UpdatedRevision = "6f8cc56547"
obj.BlueGreenStatus.CurrentStepIndex = 3
obj.BlueGreenStatus.NextStepIndex = 4
obj.BlueGreenStatus.PodTemplateHash = "pod-template-hash-v2"
obj.BlueGreenStatus.CurrentStepState = v1beta1.CanaryStepStatePaused
return obj
},
},
}
for _, cs := range cases {
t.Run(cs.name, func(t *testing.T) {
rollout := cs.getRollout()
fc := fake.NewClientBuilder().WithScheme(scheme).WithObjects(rollout).Build()
r := &RolloutReconciler{
Client: fc,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
finder: util.NewControllerFinder(fc),
trafficRoutingManager: trafficrouting.NewTrafficRoutingManager(fc),
}
r.blueGreenManager = &blueGreenReleaseManager{
Client: fc,
trafficRoutingManager: r.trafficRoutingManager,
recorder: r.Recorder,
}
c := &RolloutContext{
Rollout: rollout,
NewStatus: rollout.Status.DeepCopy(),
}
err := r.blueGreenManager.runCanary(c)
if err != nil {
t.Fatalf("reconcileRolloutProgressing failed: %s", err.Error())
}
cStatus := c.NewStatus.DeepCopy()
cStatus.BlueGreenStatus.LastUpdateTime = nil
cStatus.BlueGreenStatus.Message = ""
cStatus.Message = ""
if !reflect.DeepEqual(cs.expectStatus(), cStatus) {
t.Fatalf("expect(%s), but get(%s)", util.DumpJSON(cs.expectStatus()), util.DumpJSON(cStatus))
}
})
}
}

View File

@@ -29,10 +29,9 @@ import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
"k8s.io/klog/v2"
utilpointer "k8s.io/utils/pointer"
"sigs.k8s.io/controller-runtime/pkg/client"
@@ -46,7 +45,7 @@ type canaryReleaseManager struct {
func (m *canaryReleaseManager) runCanary(c *RolloutContext) error {
canaryStatus := c.NewStatus.CanaryStatus
if br, err := fetchBatchRelease(m.Client, c.Rollout.Namespace, c.Rollout.Name); err != nil && !errors.IsNotFound(err) {
klog.Errorf("rollout(%s/%s) fetch batchRelease failed: %s", c.Rollout.Namespace, c.Rollout.Name, err.Error())
return err
} else if err == nil {
@@ -70,23 +69,96 @@ func (m *canaryReleaseManager) runCanary(c *RolloutContext) error {
if canaryStatus.PodTemplateHash == "" {
canaryStatus.PodTemplateHash = c.Workload.PodTemplateHash
}
if m.doCanaryJump(c) {
return nil
}
// When the first batch used trafficRouting and the following steps are a plain rolling release,
// we need to clean up the canary-related resources first and then roll out the rest of the batches.
currentStep := c.Rollout.Spec.Strategy.Canary.Steps[canaryStatus.CurrentStepIndex-1]
if currentStep.Traffic == nil && len(currentStep.Matches) == 0 {
tr := newTrafficRoutingContext(c)
done, err := m.trafficRoutingManager.FinalisingTrafficRouting(tr)
c.NewStatus.CanaryStatus.LastUpdateTime = tr.LastUpdateTime
if err != nil {
return err
} else if !done {
klog.Infof("rollout(%s/%s) cleaning up canary-related resources", c.Rollout.Namespace, c.Rollout.Name)
expectedTime := time.Now().Add(tr.RecheckDuration)
c.RecheckTime = &expectedTime
return nil
}
}
switch canaryStatus.CurrentStepState {
// before CanaryStepStateUpgrade, handle some special cases, to prevent traffic loss
case v1beta1.CanaryStepStateInit:
klog.Infof("rollout(%s/%s) run canary strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateInit)
tr := newTrafficRoutingContext(c)
if currentStep.Traffic == nil && len(currentStep.Matches) == 0 {
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateUpgrade
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
canaryStatus.CurrentStepIndex, v1beta1.CanaryStepStateInit, canaryStatus.CurrentStepState)
return nil
}
/*
The following check serves to bypass a bug in the ingress-nginx controller (https://github.com/kubernetes/ingress-nginx/issues/9635).
For partition-style: if the expected replicas of the current rollout step are not less than workload.spec.replicas,
this step will release all stable pods to the new version, i.e. there will be no stable pods left, which would
trigger the bug.
To avoid this, we restore the stable Service before scaling the stable pods down to zero.
This ensures that the backends behind the stable ingress remain active, preventing the bug from being triggered.
*/
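// For example (illustrative numbers only): with workload.spec.replicas=10, a step
// of replicas "100%" (or any value >= 10) yields expectedReplicas >= 10, so for a
// real-partition rollout the stable Service is restored here, before the last
// stable pods are scaled down.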
expectedReplicas, _ := intstr.GetScaledValueFromIntOrPercent(currentStep.Replicas, int(c.Workload.Replicas), true)
if expectedReplicas >= int(c.Workload.Replicas) && v1beta1.IsRealPartition(c.Rollout) {
klog.Infof("Bypass the ingress-nginx bug for partition-style, rollout(%s/%s) restore stable Service", c.Rollout.Namespace, c.Rollout.Name)
retry, err := m.trafficRoutingManager.RestoreStableService(tr)
if err != nil {
return err
} else if retry {
expectedTime := time.Now().Add(tr.RecheckDuration)
c.RecheckTime = &expectedTime
return nil
}
}
/*
The following check is used to solve a scenario like this:
steps:
- replicas: 1 # first batch
  matches:
  - headers:
    - name: user-agent
      type: Exact
      value: pc
In the first batch, pods with the new version are created in the CanaryStepStateUpgrade step; once ready,
they serve as active backends behind the stable service, because the stable service hasn't been
modified by the rollout yet (i.e. it selects pods of all versions).
Thus, requests with or without the header (user-agent: pc) would be routed to pods of all versions evenly before
we arrive at the CanaryStepStateTrafficRouting step.
To avoid this issue, we patch the selector to the stable Service before the CanaryStepStateUpgrade step.
*/
if canaryStatus.CurrentStepIndex == 1 {
if !tr.DisableGenerateCanaryService {
klog.Infof("Before the first batch, rollout(%s/%s) patch stable Service", c.Rollout.Namespace, c.Rollout.Name)
retry, err := m.trafficRoutingManager.PatchStableService(tr)
if err != nil {
return err
} else if retry {
expectedTime := time.Now().Add(tr.RecheckDuration)
c.RecheckTime = &expectedTime
return nil
}
}
}
canaryStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateUpgrade
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
canaryStatus.CurrentStepIndex, v1beta1.CanaryStepStateInit, canaryStatus.CurrentStepState)
fallthrough
case v1beta1.CanaryStepStateUpgrade:
klog.Infof("rollout(%s/%s) run canary strategy, and state(%s)", c.Rollout.Namespace, c.Rollout.Name, v1beta1.CanaryStepStateUpgrade)
done, err := m.doCanaryUpgrade(c)
@@ -94,6 +166,13 @@ func (m *canaryReleaseManager) runCanary(c *RolloutContext) error {
return err
} else if done {
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateTrafficRouting
// In line with the explanation above regarding the ingress-nginx bug https://github.com/kubernetes/ingress-nginx/issues/9635,
// we do the same check again here to skip the CanaryStepStateTrafficRouting step, since
// its work has already been done in the CanaryStepStateInit step.
expectedReplicas, _ := intstr.GetScaledValueFromIntOrPercent(currentStep.Replicas, int(c.Workload.Replicas), true)
if expectedReplicas >= int(c.Workload.Replicas) && v1beta1.IsRealPartition(c.Rollout) {
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateMetricsAnalysis
}
canaryStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
klog.Infof("rollout(%s/%s) step(%d) state from(%s) -> to(%s)", c.Rollout.Namespace, c.Rollout.Name,
canaryStatus.CurrentStepIndex, v1beta1.CanaryStepStateUpgrade, canaryStatus.CurrentStepState)
@@ -144,7 +223,8 @@ func (m *canaryReleaseManager) runCanary(c *RolloutContext) error {
if len(c.Rollout.Spec.Strategy.Canary.Steps) > int(canaryStatus.CurrentStepIndex) {
canaryStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
canaryStatus.CurrentStepIndex++
canaryStatus.NextStepIndex = util.NextBatchIndex(c.Rollout, canaryStatus.CurrentStepIndex)
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateInit
klog.Infof("rollout(%s/%s) canary step from(%d) -> to(%d)", c.Rollout.Namespace, c.Rollout.Name, canaryStatus.CurrentStepIndex-1, canaryStatus.CurrentStepIndex)
} else {
klog.Infof("rollout(%s/%s) canary run all steps, and completed", c.Rollout.Namespace, c.Rollout.Name)
@@ -170,7 +250,7 @@ func (m *canaryReleaseManager) doCanaryUpgrade(c *RolloutContext) (bool, error)
cond.Message = fmt.Sprintf("Rollout is in step(%d/%d), and upgrade workload to new version", canaryStatus.CurrentStepIndex, steps)
c.NewStatus.Message = cond.Message
// run batch release to upgrade the workloads
done, br, err := runBatchRelease(m, c.Rollout, getRolloutID(c.Workload), canaryStatus.CurrentStepIndex, c.Workload.IsInRollback)
if err != nil {
return false, err
} else if !done {
@@ -232,116 +312,103 @@ func (m *canaryReleaseManager) doCanaryPaused(c *RolloutContext) (bool, error) {
return false, nil
}
func (m *canaryReleaseManager) doCanaryJump(c *RolloutContext) (jumped bool) {
canaryStatus := c.NewStatus.CanaryStatus
// since we forbid adding or removing steps, currentStepIndex should always be valid
currentStep := c.Rollout.Spec.Strategy.Canary.Steps[canaryStatus.CurrentStepIndex-1]
// nextIndex=-1 means the release is done, nextIndex=0 is not used
if nextIndex := canaryStatus.NextStepIndex; nextIndex != util.NextBatchIndex(c.Rollout, canaryStatus.CurrentStepIndex) && nextIndex > 0 {
currentIndexBackup := canaryStatus.CurrentStepIndex
currentStepStateBackup := canaryStatus.CurrentStepState
// update the current and next stepIndex
canaryStatus.CurrentStepIndex = nextIndex
canaryStatus.NextStepIndex = util.NextBatchIndex(c.Rollout, nextIndex)
nextStep := c.Rollout.Spec.Strategy.Canary.Steps[nextIndex-1]
// compare next step and current step to decide the state we should go
if reflect.DeepEqual(nextStep.Replicas, currentStep.Replicas) {
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateTrafficRouting
} else {
canaryStatus.CurrentStepState = v1beta1.CanaryStepStateInit
}
canaryStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
klog.Infof("rollout(%s/%s) step(%d->%d) state from(%s -> %s)",
c.Rollout.Namespace, c.Rollout.Name,
currentIndexBackup, canaryStatus.CurrentStepIndex,
currentStepStateBackup, canaryStatus.CurrentStepState)
return true
}
return false
}
// cleanup after rollout is completed or finished
func (m *canaryReleaseManager) doCanaryFinalising(c *RolloutContext) (bool, error) {
canaryStatus := c.NewStatus.CanaryStatus
// when CanaryStatus is nil, the canary action hasn't started yet, so no cleanup is needed
if canaryStatus == nil {
return true, nil
}
// rollout progressing complete, remove rollout progressing annotation in workload
err := removeRolloutProgressingAnnotation(m.Client, c)
if err != nil {
return false, err
}
tr := newTrafficRoutingContext(c)
// execute steps based on the predefined order for each reason
nextStep := nextCanaryTask(c.FinalizeReason, canaryStatus.FinalisingStep)
// if the current step is empty, set it to the first step
// if the current step is End, just return
if len(canaryStatus.FinalisingStep) == 0 {
canaryStatus.FinalisingStep = nextStep
canaryStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
} else if canaryStatus.FinalisingStep == v1beta1.FinalisingStepTypeEnd {
klog.Infof("rollout(%s/%s) finalising process is already completed", c.Rollout.Namespace, c.Rollout.Name)
return true, nil
}
klog.Infof("rollout(%s/%s) Finalising Step is %s", c.Rollout.Namespace, c.Rollout.Name, canaryStatus.FinalisingStep)
var retry bool
// the order of steps is maintained by calculating the nextStep
switch canaryStatus.FinalisingStep {
// set workload.pause=false; set workload.partition=0
case v1beta1.FinalisingStepResumeWorkload:
retry, err = finalizingBatchRelease(m.Client, c)
// delete batchRelease
case v1beta1.FinalisingStepReleaseWorkloadControl:
retry, err = removeBatchRelease(m.Client, c)
// restore the gateway resources (ingress/gatewayAPI/Istio), that means
// only stable Service will accept the traffic
case v1beta1.FinalisingStepRouteTrafficToStable:
retry, err = m.trafficRoutingManager.RestoreGateway(tr)
// restore the stable service
case v1beta1.FinalisingStepRestoreStableService:
retry, err = m.trafficRoutingManager.RestoreStableService(tr)
// remove canary service
case v1beta1.FinalisingStepRemoveCanaryService:
retry, err = m.trafficRoutingManager.RemoveCanaryService(tr)
default:
nextStep = nextCanaryTask(c.FinalizeReason, "")
klog.Warningf("unexpected finalising step, current step(%s), start from the first step(%s)", canaryStatus.FinalisingStep, nextStep)
canaryStatus.FinalisingStep = nextStep
return false, nil
}
klog.Infof("rollout(%s/%s) doCanaryFinalising success", c.Rollout.Namespace, c.Rollout.Name)
return true, nil
if err != nil || retry {
return false, err
}
// current step is done, run the next step
canaryStatus.LastUpdateTime = &metav1.Time{Time: time.Now()}
canaryStatus.FinalisingStep = nextStep
if canaryStatus.FinalisingStep == v1beta1.FinalisingStepTypeEnd {
return true, nil
}
return false, nil
}
func (m *canaryReleaseManager) removeRolloutProgressingAnnotation(c *RolloutContext) error {
if c.Workload == nil {
return nil
}
if _, ok := c.Workload.Annotations[util.InRolloutProgressingAnnotation]; !ok {
return nil
}
workloadRef := c.Rollout.Spec.WorkloadRef
workloadGVK := schema.FromAPIVersionAndKind(workloadRef.APIVersion, workloadRef.Kind)
obj := util.GetEmptyWorkloadObject(workloadGVK)
obj.SetNamespace(c.Workload.Namespace)
obj.SetName(c.Workload.Name)
body := fmt.Sprintf(`{"metadata":{"annotations":{"%s":null}}}`, util.InRolloutProgressingAnnotation)
if err := m.Patch(context.TODO(), obj, client.RawPatch(types.MergePatchType, []byte(body))); err != nil {
klog.Errorf("rollout(%s/%s) patch workload(%s) failed: %s", c.Rollout.Namespace, c.Rollout.Name, c.Workload.Name, err.Error())
return err
}
klog.Infof("remove rollout(%s/%s) workload(%s) annotation[%s] success", c.Rollout.Namespace, c.Rollout.Name, c.Workload.Name, util.InRolloutProgressingAnnotation)
return nil
}
func (m *canaryReleaseManager) fetchClient() client.Client {
return m.Client
}
func (m *canaryReleaseManager) runBatchRelease(rollout *v1beta1.Rollout, rolloutId string, batch int32, isRollback bool) (bool, *v1beta1.BatchRelease, error) {
batch = batch - 1
br, err := m.fetchBatchRelease(rollout.Namespace, rollout.Name)
if errors.IsNotFound(err) {
// create new BatchRelease Crd
br = createBatchRelease(rollout, rolloutId, batch, isRollback)
if err = m.Create(context.TODO(), br); err != nil && !errors.IsAlreadyExists(err) {
klog.Errorf("rollout(%s/%s) create BatchRelease failed: %s", rollout.Namespace, rollout.Name, err.Error())
return false, nil, err
}
klog.Infof("rollout(%s/%s) create BatchRelease(%s) success", rollout.Namespace, rollout.Name, util.DumpJSON(br))
return false, br, nil
} else if err != nil {
klog.Errorf("rollout(%s/%s) fetch BatchRelease failed: %s", rollout.Namespace, rollout.Name, err.Error())
return false, nil, err
}
// check whether batchRelease configuration is the latest
newBr := createBatchRelease(rollout, rolloutId, batch, isRollback)
if reflect.DeepEqual(br.Spec, newBr.Spec) && reflect.DeepEqual(br.Annotations, newBr.Annotations) {
klog.Infof("rollout(%s/%s) do batchRelease batch(%d) success", rollout.Namespace, rollout.Name, batch+1)
return true, br, nil
}
// update batchRelease to the latest version
if err = retry.RetryOnConflict(retry.DefaultBackoff, func() error {
if err = m.Get(context.TODO(), client.ObjectKey{Namespace: newBr.Namespace, Name: newBr.Name}, br); err != nil {
klog.Errorf("error getting BatchRelease(%s/%s) from client", newBr.Namespace, newBr.Name)
return err
}
br.Spec = newBr.Spec
br.Annotations = newBr.Annotations
return m.Client.Update(context.TODO(), br)
}); err != nil {
klog.Errorf("rollout(%s/%s) update batchRelease failed: %s", rollout.Namespace, rollout.Name, err.Error())
return false, nil, err
}
klog.Infof("rollout(%s/%s) update batchRelease(%s) configuration to latest", rollout.Namespace, rollout.Name, util.DumpJSON(br))
return false, br, nil
}
func (m *canaryReleaseManager) fetchBatchRelease(ns, name string) (*v1beta1.BatchRelease, error) {
br := &v1beta1.BatchRelease{}
// batchRelease.name is equal to the related rollout.name
err := m.Get(context.TODO(), client.ObjectKey{Namespace: ns, Name: name}, br)
return br, err
}
func (m *canaryReleaseManager) createBatchRelease(rollout *v1beta1.Rollout, rolloutID string, batch int32, isRollback bool) *v1beta1.BatchRelease {
var batches []v1beta1.ReleaseBatch
for _, step := range rollout.Spec.Strategy.Canary.Steps {
batches = append(batches, v1beta1.ReleaseBatch{CanaryReplicas: *step.Replicas})
@@ -361,9 +428,10 @@ func createBatchRelease(rollout *v1beta1.Rollout, rolloutID string, batch int32,
ReleasePlan: v1beta1.ReleasePlan{
Batches: batches,
RolloutID: rolloutID,
BatchPartition: utilpointer.Int32(batch),
FailureThreshold: rollout.Spec.Strategy.Canary.FailureThreshold,
PatchPodTemplateMetadata: rollout.Spec.Strategy.Canary.PatchPodTemplateMetadata,
RollingStyle: rollout.Spec.Strategy.GetRollingStyle(),
EnableExtraWorkloadForCanary: rollout.Spec.Strategy.Canary.EnableExtraWorkloadForCanary,
},
},
@@ -378,81 +446,6 @@ func createBatchRelease(rollout *v1beta1.Rollout, rolloutID string, batch int32,
return br
}
func (m *canaryReleaseManager) removeBatchRelease(c *RolloutContext) (bool, error) {
batch := &v1beta1.BatchRelease{}
err := m.Get(context.TODO(), client.ObjectKey{Namespace: c.Rollout.Namespace, Name: c.Rollout.Name}, batch)
if err != nil && errors.IsNotFound(err) {
return true, nil
} else if err != nil {
klog.Errorf("rollout(%s/%s) fetch BatchRelease failed: %s", c.Rollout.Namespace, c.Rollout.Name)
return false, err
}
if !batch.DeletionTimestamp.IsZero() {
klog.Infof("rollout(%s/%s) BatchRelease is terminating, and wait a moment", c.Rollout.Namespace, c.Rollout.Name)
return false, nil
}
//delete batchRelease
err = m.Delete(context.TODO(), batch)
if err != nil {
klog.Errorf("rollout(%s/%s) delete BatchRelease failed: %s", c.Rollout.Namespace, c.Rollout.Name, err.Error())
return false, err
}
klog.Infof("rollout(%s/%s) deleting BatchRelease, and wait a moment", c.Rollout.Namespace, c.Rollout.Name)
return false, nil
}
func (m *canaryReleaseManager) finalizingBatchRelease(c *RolloutContext) (bool, error) {
br, err := m.fetchBatchRelease(c.Rollout.Namespace, c.Rollout.Name)
if err != nil {
if errors.IsNotFound(err) {
return true, nil
}
return false, err
}
waitReady := c.WaitReady
// The Completed phase means batchRelease controller has processed all it
// should process. If BatchRelease phase is completed, we can do nothing.
if br.Spec.ReleasePlan.BatchPartition == nil &&
br.Status.Phase == v1beta1.RolloutPhaseCompleted {
klog.Infof("rollout(%s/%s) finalizing batchRelease(%s) done", c.Rollout.Namespace, c.Rollout.Name, util.DumpJSON(br.Status))
return true, nil
}
// If BatchPartition is nil, BatchRelease will directly resume workload via:
// - * set workload Paused = false if it needs;
// - * set workload Partition = null if it needs.
if br.Spec.ReleasePlan.BatchPartition == nil {
// - If waitReady is true, the finalizing policy must be "WaitResume";
// - If waitReady is false, the finalizing policy must NOT be "WaitResume";
// Otherwise, we should correct it.
switch br.Spec.ReleasePlan.FinalizingPolicy {
case v1beta1.WaitResumeFinalizingPolicyType:
if waitReady { // no need to patch again
return false, nil
}
default:
if !waitReady { // no need to patch again
return false, nil
}
}
}
// Correct finalizing policy.
policy := v1beta1.ImmediateFinalizingPolicyType
if waitReady {
policy = v1beta1.WaitResumeFinalizingPolicyType
}
// Patch BatchPartition and FinalizingPolicy, BatchPartition always patch null here.
body := fmt.Sprintf(`{"spec":{"releasePlan":{"batchPartition":null,"finalizingPolicy":"%s"}}}`, policy)
if err = m.Patch(context.TODO(), br, client.RawPatch(types.MergePatchType, []byte(body))); err != nil {
return false, err
}
klog.Infof("rollout(%s/%s) patch batchRelease(%s) success", c.Rollout.Namespace, c.Rollout.Name, body)
return false, nil
}
// syncBatchRelease syncs the status of br to canaryStatus, and syncs the rollout-id of canaryStatus to br.
func (m *canaryReleaseManager) syncBatchRelease(br *v1beta1.BatchRelease, canaryStatus *v1beta1.CanaryStatus) error {
// sync from BatchRelease status to Rollout canaryStatus
@@ -463,6 +456,10 @@ func (m *canaryReleaseManager) syncBatchRelease(br *v1beta1.BatchRelease, canary
// TODO: optimize the logic to better understand
canaryStatus.Message = fmt.Sprintf("BatchRelease is at state %s, rollout-id %s, step %d",
br.Status.CanaryStatus.CurrentBatchState, br.Status.ObservedRolloutID, br.Status.CanaryStatus.CurrentBatch+1)
// br.Status.Message records messages that help users to understand what is going wrong
if len(br.Status.Message) > 0 {
canaryStatus.Message += fmt.Sprintf(", %s", br.Status.Message)
}
// sync rolloutId from canaryStatus to BatchRelease
if canaryStatus.ObservedRolloutID != br.Spec.ReleasePlan.RolloutID {
@@ -471,3 +468,42 @@ func (m *canaryReleaseManager) syncBatchRelease(br *v1beta1.BatchRelease, canary
}
return nil
}
// calculate next task
func nextCanaryTask(reason string, currentTask v1beta1.FinalisingStepType) v1beta1.FinalisingStepType {
var taskSequence []v1beta1.FinalisingStepType
//REVIEW - should we consider more complex scenarios?
// like, user rollbacks the workload and disables the Rollout at the same time?
switch reason {
// in a rollback, we cannot remove the selector of the stable service in the first step,
// which might route some traffic to the problematic new version, so we route all traffic
// to the stable service in the first step
case v1beta1.FinaliseReasonRollback: // rollback
taskSequence = []v1beta1.FinalisingStepType{
v1beta1.FinalisingStepRouteTrafficToStable, // route all traffic to stable version
v1beta1.FinalisingStepResumeWorkload, // scale up old, scale down new
v1beta1.FinalisingStepReleaseWorkloadControl,
v1beta1.FinalisingStepRestoreStableService,
v1beta1.FinalisingStepRemoveCanaryService,
}
default: // others: success/disabled/deleting rollout
taskSequence = []v1beta1.FinalisingStepType{
v1beta1.FinalisingStepRestoreStableService,
v1beta1.FinalisingStepRouteTrafficToStable,
v1beta1.FinalisingStepRemoveCanaryService,
v1beta1.FinalisingStepResumeWorkload, // scale up new, scale down old
v1beta1.FinalisingStepReleaseWorkloadControl,
}
}
// if currentTask is empty, return first task
if len(currentTask) == 0 {
return taskSequence[0]
}
// find next task
for i := range taskSequence {
if currentTask == taskSequence[i] && i < len(taskSequence)-1 {
return taskSequence[i+1]
}
}
return v1beta1.FinalisingStepTypeEnd
}

Some files were not shown because too many files have changed in this diff