Compare commits

..

39 Commits

Author SHA1 Message Date
Mohit Nagaraj b1138ed602 refactor(app): improve logging in graceful eviction controllers to use structured logging
Signed-off-by: Mohit Nagaraj <mohitnagaraj20@gmail.com>
2025-08-04 12:16:50 +00:00
karmada-bot b8f6874c58
Merge pull request #6560 from karmada-io/dependabot/docker/cluster/images/alpine-3.22.1
Bump alpine from 3.22.0 to 3.22.1 in /cluster/images
2025-08-01 11:21:34 +08:00
karmada-bot 46495a227c
Merge pull request #6591 from zhzhuang-zju/releasenote-new
publish release v1.15.0-beta.0,v1.14.2,v1.13.5,v1.12.8
2025-07-31 20:50:34 +08:00
karmada-bot 295c69f76a
Merge pull request #6578 from cbaenziger/work_status_controller_structured_logging
Work status controller structured logging
2025-07-31 20:19:34 +08:00
zhzhuang-zju 7c30e24097 publish release v1.15.0-beta.0,v1.14.2,v1.13.5,v1.12.8
Signed-off-by: zhzhuang-zju <m17799853869@163.com>
2025-07-31 18:53:52 +08:00
Clay Baenziger de65bc7097 Use structured logging for work status controller
Signed-off-by: Clay Baenziger <cwb@clayb.net>
2025-07-31 02:48:41 -06:00
karmada-bot f67dcc954e
Merge pull request #6469 from zclyne/yifan/workloadrebalancer-json-logging
workloadrebalancer controller now supports JSON format logging
2025-07-30 15:35:33 +08:00
Yifan 4bf059ffec workloadrebalancer controller now supports JSON format logging
Signed-off-by: Yifan <zyfinori@gmail.com>
2025-07-30 14:38:45 +08:00
Zhuyu Li 2d74aee39d
Use Structured Logging Enhancement for `Cluster Resource Binding Controller` (#6576)
* use structured logging for cluster resource binding controller

Signed-off-by: zhuyulicfc49 <zyliw49@gmail.com>

* Improve logs by using 'name' as key and log name only

Signed-off-by: zhuyulicfc49 <zyliw49@gmail.com>

---------

Signed-off-by: zhuyulicfc49 <zyliw49@gmail.com>
2025-07-29 10:27:32 +08:00
karmada-bot cb0acc0d61
Merge pull request #6575 from jabellard/group-operator-proposals
Group operator design documents under `docs/proposals/karmada-operator` directory
2025-07-29 10:22:31 +08:00
karmada-bot b30c170c96
Merge pull request #6567 from wangbowen1401/master
JSON logging for RB Status Controller
2025-07-29 10:08:31 +08:00
Zhang Zhang a2c2057761
Use structured log for karmada operator (#6564)
* use structured log

Signed-off-by: zhangsquared <hi.zhangzhang@gmail.com>

* addressing comment

Signed-off-by: zhangsquared <hi.zhangzhang@gmail.com>

---------

Signed-off-by: zhangsquared <hi.zhangzhang@gmail.com>
2025-07-28 14:46:30 +08:00
karmada-bot a2980eb1b6
Merge pull request #6434 from XiShanYongYe-Chang/fix-6433
Ensure EndpointSlice informer cache is synced before reporting EndpointSlice
2025-07-26 15:02:29 +08:00
karmada-bot db76ef01e2
Merge pull request #6565 from jennryaz/erya-slog2
Use structured logging for cluster status controller
2025-07-26 14:40:28 +08:00
Eugene Ryazanov a4795803d1 Use structured logging for cluster status controller
Signed-off-by: Eugene Ryazanov <yryazanov@bloomberg.net>
2025-07-26 14:37:26 +08:00
Bowen Wang fd8241598c Update RB status controller to use JSON logging
Signed-off-by: Bowen Wang <43794678+wangbowen1401@users.noreply.github.com>
2025-07-25 20:28:37 +00:00
changzhen b0560d63dd wait for the EndpointSlice informer to sync before reporting EndpointSlice
Signed-off-by: changzhen <changzhen5@huawei.com>
2025-07-25 10:44:16 +08:00
Joe Nathan Abellard 47b5bdcafd Group operator design documents
Signed-off-by: Joe Nathan Abellard <contact@jabellard.com>
2025-07-24 19:48:36 -04:00
karmada-bot 5f4bd5e765
Merge pull request #6546 from Bhaumik10/mcs-crtl-logs
JSON logging for multiclusterservice controllers
2025-07-24 18:40:27 +08:00
karmada-bot 3024d3321e
Merge pull request #6454 from zclyne/yifan/scheduler-unit-test-port
detect the healthz/metrics server ports of the scheduler/descheduler unit tests automatically to avoid flaky test failures
2025-07-23 09:57:26 +08:00
Yifan Zhang a1290871ea dynamically pick free ports for the healthz and metrics servers in the scheduler/descheduler unit tests to fix intermittent test failures
Signed-off-by: Yifan Zhang <zyfinori@gmail.com>
2025-07-22 19:54:39 -04:00
Bhaumik Patel bcb3b08376 JSON logging for multiclusterservice controllers
Signed-off-by: Bhaumik Patel <bhaumikpatel029@gmail.com>

Address comments

Signed-off-by: Bhaumik Patel <bhaumikpatel029@gmail.com>

Address comments

Address comments

Address comments
2025-07-22 09:23:19 -04:00
karmada-bot f4f63c8d25
Merge pull request #6545 from nihar4276/federatedhpajsonlogging
Add Structured Logging for Federated HPA controller
2025-07-22 15:02:26 +08:00
karmada-bot be98c622e0
Merge pull request #6563 from karmada-io/dependabot/github_actions/sigstore/cosign-installer-3.9.2
Bump sigstore/cosign-installer from 3.9.1 to 3.9.2
2025-07-22 09:49:26 +08:00
karmada-bot 84359efa64
Merge pull request #6533 from abhi0324/add-UnitedDeployment
feat: add resource interpreter customization for UnitedDeployment
2025-07-21 21:21:25 +08:00
Abhiswant Chaudhary 7bf8413888
feat: add resource interpreter customization for UnitedDeployment
Signed-off-by: Abhiswant Chaudhary <abhiswant0324@gmail.com>
2025-07-21 14:06:12 +05:30
karmada-bot 95802b0204
Merge pull request #6524 from abhi0324/add-SideCarSet
feat: add resource interpreter customization for SidecarSet
2025-07-21 16:13:25 +08:00
dependabot[bot] 96f4744eb2
Bump sigstore/cosign-installer from 3.9.1 to 3.9.2
Bumps [sigstore/cosign-installer](https://github.com/sigstore/cosign-installer) from 3.9.1 to 3.9.2.
- [Release notes](https://github.com/sigstore/cosign-installer/releases)
- [Commits](https://github.com/sigstore/cosign-installer/compare/v3.9.1...v3.9.2)

---
updated-dependencies:
- dependency-name: sigstore/cosign-installer
  dependency-version: 3.9.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-21 06:53:18 +00:00
dependabot[bot] 1aeed5a32e
Bump alpine from 3.22.0 to 3.22.1 in /cluster/images
Bumps alpine from 3.22.0 to 3.22.1.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.22.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-21 06:40:29 +00:00
karmada-bot 592fa3224d
Merge pull request #6558 from seanlaii/controller-gen-18
Bump controller-gen version to v0.18.0
2025-07-21 11:21:24 +08:00
karmada-bot ff1b8fd429
Merge pull request #6556 from seanlaii/k8s-1.33
Update Kubernetes versions in CI workflows to include v1.33.0
2025-07-21 10:51:24 +08:00
wei-chenglai 001c107025 Bump controller-gen version to v0.18.0
Signed-off-by: wei-chenglai <qazwsx0939059006@gmail.com>
2025-07-20 22:23:19 -04:00
karmada-bot 9824e3d8f8
Merge pull request #6512 from XiShanYongYe-Chang/remove-instruction-label
Delete the propagation.karmada.io/instruction label
2025-07-21 10:00:25 +08:00
karmada-bot 0b5fe5ec82
Merge pull request #6557 from seanlaii/go-1.24.5
Bump go version to 1.24.5
2025-07-21 09:59:24 +08:00
wei-chenglai 292328b23f Update Kubernetes versions in CI workflows to include v1.33.0
Signed-off-by: wei-chenglai <qazwsx0939059006@gmail.com>
2025-07-20 21:41:22 -04:00
wei-chenglai 3c6394a391 Bump go version to 1.24.5
Signed-off-by: wei-chenglai <qazwsx0939059006@gmail.com>
2025-07-20 21:06:56 -04:00
changzhen 027741bd38 delete the propagation.karmada.io/instruction label
Signed-off-by: changzhen <changzhen5@huawei.com>
2025-07-18 11:33:05 +08:00
nrao65 689680162e Add Structured Logging for Federated HPA controller
Signed-off-by: nrao65 <nrao65@bloomberg.net>
2025-07-17 11:20:58 -04:00
Abhiswant Chaudhary 42a772f700
feat: add resource interpreter customization for SidecarSet
Signed-off-by: Abhiswant Chaudhary <abhiswant0324@gmail.com>

feat: add resource interpreter customization for SidecarSet

Signed-off-by: Abhiswant Chaudhary <abhiswant0324@gmail.com>

fix: remove redundant status preservation in SidecarSet retention

Signed-off-by: Abhiswant Chaudhary <abhiswant0324@gmail.com>

fix: remove redundant status preservation in SidecarSet Retain

Signed-off-by: Abhiswant Chaudhary <abhiswant0324@gmail.com>

improve SidecarSet resource interpreter retention and dependency logic

Signed-off-by: Abhiswant Chaudhary <abhiswant0324@gmail.com>
2025-07-12 03:39:17 +05:30
75 changed files with 1116 additions and 604 deletions

View File

@ -19,7 +19,7 @@ jobs:
max-parallel: 5
fail-fast: false
matrix:
kubeapiserver-version: [ v1.23.4, v1.24.2, v1.25.0, v1.26.0, v1.27.3, v1.28.0, v1.29.0, v1.30.0, v1.31.0, v1.32.0 ]
kubeapiserver-version: [ v1.24.2, v1.25.0, v1.26.0, v1.27.3, v1.28.0, v1.29.0, v1.30.0, v1.31.0, v1.32.0, v1.33.0 ]
karmada-version: [ master, release-1.14, release-1.13, release-1.12 ]
env:
KARMADA_APISERVER_VERSION: ${{ matrix.kubeapiserver-version }}

View File

@ -19,7 +19,7 @@ jobs:
max-parallel: 5
fail-fast: false
matrix:
k8s: [ v1.23.4, v1.24.2, v1.25.0, v1.26.0, v1.27.3, v1.28.0, v1.29.0, v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.24.2, v1.25.0, v1.26.0, v1.27.3, v1.28.0, v1.29.0, v1.30.0, v1.31.0, v1.32.0, v1.33.0 ]
steps:
# Free up disk space on Ubuntu
- name: Free Disk Space (Ubuntu)

View File

@ -116,7 +116,7 @@ jobs:
# Here support the latest three minor releases of Kubernetes, this can be considered to be roughly
# the same as the End of Life of the Kubernetes release: https://kubernetes.io/releases/
# Please remember to update the CI Schedule Workflow when we add a new version.
k8s: [ v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.31.0, v1.32.0, v1.33.0 ]
steps:
# Free up disk space on Ubuntu
- name: Free Disk Space (Ubuntu)
@ -172,7 +172,7 @@ jobs:
# Here support the latest three minor releases of Kubernetes, this can be considered to be roughly
# the same as the End of Life of the Kubernetes release: https://kubernetes.io/releases/
# Please remember to update the CI Schedule Workflow when we add a new version.
k8s: [ v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.31.0, v1.32.0, v1.33.0 ]
steps:
# Free up disk space on Ubuntu
- name: Free Disk Space (Ubuntu)

View File

@ -42,7 +42,7 @@ jobs:
with:
go-version-file: go.mod
- name: Install Cosign
uses: sigstore/cosign-installer@v3.9.1
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: 'v2.2.3'
- name: install QEMU

View File

@ -42,7 +42,7 @@ jobs:
with:
go-version-file: go.mod
- name: Install Cosign
uses: sigstore/cosign-installer@v3.9.1
uses: sigstore/cosign-installer@v3.9.2
with:
cosign-release: 'v2.2.3'
- name: install QEMU

View File

@ -26,7 +26,7 @@ jobs:
# Here support the latest three minor releases of Kubernetes, this can be considered to be roughly
# the same as the End of Life of the Kubernetes release: https://kubernetes.io/releases/
# Please remember to update the CI Schedule Workflow when we add a new version.
k8s: [ v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.31.0, v1.32.0, v1.33.0 ]
steps:
- name: Checkout
uses: actions/checkout@v4

View File

@ -24,7 +24,7 @@ jobs:
# Here support the latest three minor releases of Kubernetes, this can be considered to be roughly
# the same as the End of Life of the Kubernetes release: https://kubernetes.io/releases/
# Please remember to update the CI Schedule Workflow when we add a new version.
k8s: [ v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.31.0, v1.32.0, v1.33.0 ]
steps:
- name: checkout code
uses: actions/checkout@v4
@ -70,7 +70,7 @@ jobs:
fail-fast: false
matrix:
# Latest three minor releases of Kubernetes
k8s: [ v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.31.0, v1.32.0, v1.33.0 ]
steps:
- name: checkout code
uses: actions/checkout@v4

View File

@ -24,7 +24,7 @@ jobs:
# Here support the latest three minor releases of Kubernetes, this can be considered to be roughly
# the same as the End of Life of the Kubernetes release: https://kubernetes.io/releases/
# Please remember to update the CI Schedule Workflow when we add a new version.
k8s: [ v1.30.0, v1.31.0, v1.32.0 ]
k8s: [ v1.31.0, v1.32.0, v1.33.0 ]
steps:
# Free up disk space on Ubuntu
- name: Free Disk Space (Ubuntu)

View File

@ -1 +1 @@
1.24.4
1.24.5

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: karmadas.operator.karmada.io
spec:
group: operator.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: workloadrebalancers.apps.karmada.io
spec:
group: apps.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: cronfederatedhpas.autoscaling.karmada.io
spec:
group: autoscaling.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: federatedhpas.autoscaling.karmada.io
spec:
group: autoscaling.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: resourceinterpretercustomizations.config.karmada.io
spec:
group: config.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: resourceinterpreterwebhookconfigurations.config.karmada.io
spec:
group: config.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: multiclusteringresses.networking.karmada.io
spec:
group: networking.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: multiclusterservices.networking.karmada.io
spec:
group: networking.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: clusteroverridepolicies.policy.karmada.io
spec:
group: policy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: clusterpropagationpolicies.policy.karmada.io
spec:
group: policy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: clustertaintpolicies.policy.karmada.io
spec:
group: policy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: federatedresourcequotas.policy.karmada.io
spec:
group: policy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: overridepolicies.policy.karmada.io
spec:
group: policy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: propagationpolicies.policy.karmada.io
spec:
group: policy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: remedies.remedy.karmada.io
spec:
group: remedy.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: clusterresourcebindings.work.karmada.io
spec:
group: work.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: resourcebindings.work.karmada.io
spec:
group: work.karmada.io

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: works.work.karmada.io
spec:
group: work.karmada.io

View File

@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM alpine:3.22.0
FROM alpine:3.22.1
ARG BINARY

View File

@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM alpine:3.22.0
FROM alpine:3.22.1
ARG BINARY
ARG TARGETPLATFORM

View File

@ -348,7 +348,6 @@ func startExecutionController(ctx controllerscontext.Context) (bool, error) {
EventRecorder: ctx.Mgr.GetEventRecorderFor(execution.ControllerName),
RESTMapper: ctx.Mgr.GetRESTMapper(),
ObjectWatcher: ctx.ObjectWatcher,
PredicateFunc: helper.NewExecutionPredicateOnAgent(),
InformerManager: genericmanager.GetInstance(),
RateLimiterOptions: ctx.Opts.RateLimiterOptions,
}
@ -366,7 +365,6 @@ func startWorkStatusController(ctx controllerscontext.Context) (bool, error) {
InformerManager: genericmanager.GetInstance(),
Context: ctx.Context,
ObjectWatcher: ctx.ObjectWatcher,
PredicateFunc: helper.NewExecutionPredicateOnAgent(),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: ctx.Opts.ClusterCacheSyncTimeout,
ConcurrentWorkStatusSyncs: ctx.Opts.ConcurrentWorkSyncs,

View File

@ -454,7 +454,7 @@ func startExecutionController(ctx controllerscontext.Context) (enabled bool, err
EventRecorder: ctx.Mgr.GetEventRecorderFor(execution.ControllerName),
RESTMapper: ctx.Mgr.GetRESTMapper(),
ObjectWatcher: ctx.ObjectWatcher,
PredicateFunc: helper.NewExecutionPredicate(ctx.Mgr),
WorkPredicateFunc: helper.WorkWithinPushClusterPredicate(ctx.Mgr),
InformerManager: genericmanager.GetInstance(),
RateLimiterOptions: ctx.Opts.RateLimiterOptions,
}
@ -473,7 +473,7 @@ func startWorkStatusController(ctx controllerscontext.Context) (enabled bool, er
InformerManager: genericmanager.GetInstance(),
Context: ctx.Context,
ObjectWatcher: ctx.ObjectWatcher,
PredicateFunc: helper.NewExecutionPredicate(ctx.Mgr),
WorkPredicateFunc: helper.WorkWithinPushClusterPredicate(ctx.Mgr),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSet,
ClusterClientOption: ctx.ClusterClientOption,
ClusterCacheSyncTimeout: opts.ClusterCacheSyncTimeout,

View File

@ -18,6 +18,7 @@ package app
import (
"context"
"fmt"
"net/http"
"testing"
"time"
@ -27,6 +28,7 @@ import (
"github.com/karmada-io/karmada/cmd/descheduler/app/options"
"github.com/karmada-io/karmada/pkg/util/names"
testingutil "github.com/karmada-io/karmada/pkg/util/testing"
)
func TestNewDeschedulerCommand(t *testing.T) {
@ -66,8 +68,10 @@ func TestDeschedulerCommandFlagParsing(t *testing.T) {
}
func TestServeHealthzAndMetrics(t *testing.T) {
healthAddress := "127.0.0.1:8082"
metricsAddress := "127.0.0.1:8083"
ports, err := testingutil.GetFreePorts("127.0.0.1", 2)
require.NoError(t, err)
healthAddress := fmt.Sprintf("127.0.0.1:%d", ports[0])
metricsAddress := fmt.Sprintf("127.0.0.1:%d", ports[1])
go serveHealthzAndMetrics(healthAddress, metricsAddress)

View File

@ -18,6 +18,7 @@ package app
import (
"context"
"fmt"
"net/http"
"testing"
"time"
@ -27,6 +28,7 @@ import (
"github.com/karmada-io/karmada/cmd/scheduler/app/options"
"github.com/karmada-io/karmada/pkg/util/names"
testingutil "github.com/karmada-io/karmada/pkg/util/testing"
)
func TestNewSchedulerCommand(t *testing.T) {
@ -66,8 +68,10 @@ func TestSchedulerCommandFlagParsing(t *testing.T) {
}
func TestServeHealthzAndMetrics(t *testing.T) {
healthAddress := "127.0.0.1:8082"
metricsAddress := "127.0.0.1:8083"
ports, err := testingutil.GetFreePorts("127.0.0.1", 2)
require.NoError(t, err)
healthAddress := fmt.Sprintf("127.0.0.1:%d", ports[0])
metricsAddress := fmt.Sprintf("127.0.0.1:%d", ports[1])
go serveHealthzAndMetrics(healthAddress, metricsAddress)
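
Note: the testingutil.GetFreePorts helper used above is not shown in this diff, and its actual implementation in pkg/util/testing may differ. A minimal sketch of how such a helper is commonly written in Go (listen on port 0, read back the kernel-assigned port, then release the listener) could look like the following; the package name and signature are assumptions matching the call sites above.

// Illustrative sketch only: the real testingutil.GetFreePorts may be implemented differently.
package testing

import (
    "fmt"
    "net"
)

// GetFreePorts asks the kernel for n ephemeral ports on host by listening on
// port 0, records each assigned port, and releases the listeners on return.
func GetFreePorts(host string, n int) ([]int, error) {
    ports := make([]int, 0, n)
    listeners := make([]net.Listener, 0, n)
    // Keep every listener open until all ports are collected so the same
    // port cannot be handed out twice.
    defer func() {
        for _, l := range listeners {
            _ = l.Close()
        }
    }()
    for i := 0; i < n; i++ {
        l, err := net.Listen("tcp", fmt.Sprintf("%s:0", host))
        if err != nil {
            return nil, err
        }
        listeners = append(listeners, l)
        ports = append(ports, l.Addr().(*net.TCPAddr).Port)
    }
    return ports, nil
}

Keeping every listener open until all ports are collected prevents the same port from being returned twice; a small race with other processes remains, but it is far narrower than hard-coding ports such as 8082/8083 as the tests did before.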

View File

@ -2,48 +2,54 @@
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [v1.12.7](#v1127)
- [Downloads for v1.12.7](#downloads-for-v1127)
- [Changelog since v1.12.6](#changelog-since-v1126)
- [v1.12.8](#v1128)
- [Downloads for v1.12.8](#downloads-for-v1128)
- [Changelog since v1.12.7](#changelog-since-v1127)
- [Changes by Kind](#changes-by-kind)
- [Bug Fixes](#bug-fixes)
- [Others](#others)
- [v1.12.6](#v1126)
- [Downloads for v1.12.6](#downloads-for-v1126)
- [Changelog since v1.12.5](#changelog-since-v1125)
- [v1.12.7](#v1127)
- [Downloads for v1.12.7](#downloads-for-v1127)
- [Changelog since v1.12.6](#changelog-since-v1126)
- [Changes by Kind](#changes-by-kind-1)
- [Bug Fixes](#bug-fixes-1)
- [Others](#others-1)
- [v1.12.5](#v1125)
- [Downloads for v1.12.5](#downloads-for-v1125)
- [Changelog since v1.12.4](#changelog-since-v1124)
- [v1.12.6](#v1126)
- [Downloads for v1.12.6](#downloads-for-v1126)
- [Changelog since v1.12.5](#changelog-since-v1125)
- [Changes by Kind](#changes-by-kind-2)
- [Bug Fixes](#bug-fixes-2)
- [Others](#others-2)
- [v1.12.4](#v1124)
- [Downloads for v1.12.4](#downloads-for-v1124)
- [Changelog since v1.12.3](#changelog-since-v1123)
- [v1.12.5](#v1125)
- [Downloads for v1.12.5](#downloads-for-v1125)
- [Changelog since v1.12.4](#changelog-since-v1124)
- [Changes by Kind](#changes-by-kind-3)
- [Bug Fixes](#bug-fixes-3)
- [Others](#others-3)
- [v1.12.3](#v1123)
- [Downloads for v1.12.3](#downloads-for-v1123)
- [Changelog since v1.12.2](#changelog-since-v1122)
- [v1.12.4](#v1124)
- [Downloads for v1.12.4](#downloads-for-v1124)
- [Changelog since v1.12.3](#changelog-since-v1123)
- [Changes by Kind](#changes-by-kind-4)
- [Bug Fixes](#bug-fixes-4)
- [Others](#others-4)
- [v1.12.2](#v1122)
- [Downloads for v1.12.2](#downloads-for-v1122)
- [Changelog since v1.12.1](#changelog-since-v1121)
- [v1.12.3](#v1123)
- [Downloads for v1.12.3](#downloads-for-v1123)
- [Changelog since v1.12.2](#changelog-since-v1122)
- [Changes by Kind](#changes-by-kind-5)
- [Bug Fixes](#bug-fixes-5)
- [Others](#others-5)
- [v1.12.1](#v1121)
- [Downloads for v1.12.1](#downloads-for-v1121)
- [Changelog since v1.12.0](#changelog-since-v1120)
- [v1.12.2](#v1122)
- [Downloads for v1.12.2](#downloads-for-v1122)
- [Changelog since v1.12.1](#changelog-since-v1121)
- [Changes by Kind](#changes-by-kind-6)
- [Bug Fixes](#bug-fixes-6)
- [Others](#others-6)
- [v1.12.1](#v1121)
- [Downloads for v1.12.1](#downloads-for-v1121)
- [Changelog since v1.12.0](#changelog-since-v1120)
- [Changes by Kind](#changes-by-kind-7)
- [Bug Fixes](#bug-fixes-7)
- [Others](#others-7)
- [v1.12.0](#v1120)
- [Downloads for v1.12.0](#downloads-for-v1120)
- [What's New](#whats-new)
@ -54,7 +60,7 @@
- [Other Notable Changes](#other-notable-changes)
- [API Changes](#api-changes)
- [Deprecation](#deprecation)
- [Bug Fixes](#bug-fixes-7)
- [Bug Fixes](#bug-fixes-8)
- [Security](#security)
- [Features & Enhancements](#features--enhancements)
- [Other](#other)
@ -66,11 +72,11 @@
- [Downloads for v1.12.0-beta.0](#downloads-for-v1120-beta0)
- [Changelog since v1.12.0-alpha.1](#changelog-since-v1120-alpha1)
- [Urgent Update Notes](#urgent-update-notes)
- [Changes by Kind](#changes-by-kind-7)
- [Changes by Kind](#changes-by-kind-8)
- [API Changes](#api-changes-1)
- [Features & Enhancements](#features--enhancements-1)
- [Deprecation](#deprecation-1)
- [Bug Fixes](#bug-fixes-8)
- [Bug Fixes](#bug-fixes-9)
- [Security](#security-1)
- [Other](#other-1)
- [Dependencies](#dependencies-1)
@ -80,11 +86,11 @@
- [Downloads for v1.12.0-alpha.1](#downloads-for-v1120-alpha1)
- [Changelog since v1.11.0](#changelog-since-v1110)
- [Urgent Update Notes](#urgent-update-notes-1)
- [Changes by Kind](#changes-by-kind-8)
- [Changes by Kind](#changes-by-kind-9)
- [API Changes](#api-changes-2)
- [Features & Enhancements](#features--enhancements-2)
- [Deprecation](#deprecation-2)
- [Bug Fixes](#bug-fixes-9)
- [Bug Fixes](#bug-fixes-10)
- [Security](#security-2)
- [Other](#other-2)
- [Dependencies](#dependencies-2)
@ -93,6 +99,20 @@
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# v1.12.8
## Downloads for v1.12.8
Download v1.12.8 in the [v1.12.8 release page](https://github.com/karmada-io/karmada/releases/tag/v1.12.8).
## Changelog since v1.12.7
### Changes by Kind
#### Bug Fixes
- `karmada-controller-manager`: Fixed the issue that resources will be recreated after being deleted on the cluster when the resource is suspended for dispatching. ([#6538](https://github.com/karmada-io/karmada/pull/6538), @luyb177)
- `karmada-controller-manager`: Fixed the issue that EndpointSlices are deleted unexpectedly due to the EndpointSlice informer cache not being synced. ([#6585](https://github.com/karmada-io/karmada/pull/6585), @XiShanYongYe-Chang)
#### Others
- The base image `alpine` has now been promoted from 3.22.0 to 3.22.1. ([#6562](https://github.com/karmada-io/karmada/pull/6562))
# v1.12.7
## Downloads for v1.12.7

View File

@ -2,30 +2,36 @@
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [v1.13.4](#v1134)
- [Downloads for v1.13.4](#downloads-for-v1134)
- [Changelog since v1.13.3](#changelog-since-v1133)
- [v1.13.5](#v1135)
- [Downloads for v1.13.5](#downloads-for-v1135)
- [Changelog since v1.13.4](#changelog-since-v1134)
- [Changes by Kind](#changes-by-kind)
- [Bug Fixes](#bug-fixes)
- [Others](#others)
- [v1.13.3](#v1133)
- [Downloads for v1.13.3](#downloads-for-v1133)
- [Changelog since v1.13.2](#changelog-since-v1132)
- [v1.13.4](#v1134)
- [Downloads for v1.13.4](#downloads-for-v1134)
- [Changelog since v1.13.3](#changelog-since-v1133)
- [Changes by Kind](#changes-by-kind-1)
- [Bug Fixes](#bug-fixes-1)
- [Others](#others-1)
- [v1.13.2](#v1132)
- [Downloads for v1.13.2](#downloads-for-v1132)
- [Changelog since v1.13.1](#changelog-since-v1131)
- [v1.13.3](#v1133)
- [Downloads for v1.13.3](#downloads-for-v1133)
- [Changelog since v1.13.2](#changelog-since-v1132)
- [Changes by Kind](#changes-by-kind-2)
- [Bug Fixes](#bug-fixes-2)
- [Others](#others-2)
- [v1.13.1](#v1131)
- [Downloads for v1.13.1](#downloads-for-v1131)
- [Changelog since v1.13.0](#changelog-since-v1130)
- [v1.13.2](#v1132)
- [Downloads for v1.13.2](#downloads-for-v1132)
- [Changelog since v1.13.1](#changelog-since-v1131)
- [Changes by Kind](#changes-by-kind-3)
- [Bug Fixes](#bug-fixes-3)
- [Others](#others-3)
- [v1.13.1](#v1131)
- [Downloads for v1.13.1](#downloads-for-v1131)
- [Changelog since v1.13.0](#changelog-since-v1130)
- [Changes by Kind](#changes-by-kind-4)
- [Bug Fixes](#bug-fixes-4)
- [Others](#others-4)
- [v1.13.0](#v1130)
- [Downloads for v1.13.0](#downloads-for-v1130)
- [Urgent Update Notes](#urgent-update-notes)
@ -38,7 +44,7 @@
- [Other Notable Changes](#other-notable-changes)
- [API Changes](#api-changes)
- [Deprecation](#deprecation)
- [Bug Fixes](#bug-fixes-4)
- [Bug Fixes](#bug-fixes-5)
- [Security](#security)
- [Features & Enhancements](#features--enhancements)
- [Other](#other)
@ -50,11 +56,11 @@
- [Downloads for v1.13.0-rc.0](#downloads-for-v1130-rc0)
- [Changelog since v1.13.0-beta.0](#changelog-since-v1130-beta0)
- [Urgent Update Notes](#urgent-update-notes-1)
- [Changes by Kind](#changes-by-kind-4)
- [Changes by Kind](#changes-by-kind-5)
- [API Changes](#api-changes-1)
- [Features & Enhancements](#features--enhancements-1)
- [Deprecation](#deprecation-1)
- [Bug Fixes](#bug-fixes-5)
- [Bug Fixes](#bug-fixes-6)
- [Security](#security-1)
- [Other](#other-1)
- [Dependencies](#dependencies-1)
@ -64,11 +70,11 @@
- [Downloads for v1.13.0-beta.0](#downloads-for-v1130-beta0)
- [Changelog since v1.13.0-alpha.2](#changelog-since-v1130-alpha2)
- [Urgent Update Notes](#urgent-update-notes-2)
- [Changes by Kind](#changes-by-kind-5)
- [Changes by Kind](#changes-by-kind-6)
- [API Changes](#api-changes-2)
- [Features & Enhancements](#features--enhancements-2)
- [Deprecation](#deprecation-2)
- [Bug Fixes](#bug-fixes-6)
- [Bug Fixes](#bug-fixes-7)
- [Security](#security-2)
- [Other](#other-2)
- [Dependencies](#dependencies-2)
@ -78,11 +84,11 @@
- [Downloads for v1.13.0-alpha.2](#downloads-for-v1130-alpha2)
- [Changelog since v1.13.0-alpha.1](#changelog-since-v1130-alpha1)
- [Urgent Update Notes](#urgent-update-notes-3)
- [Changes by Kind](#changes-by-kind-6)
- [Changes by Kind](#changes-by-kind-7)
- [API Changes](#api-changes-3)
- [Features & Enhancements](#features--enhancements-3)
- [Deprecation](#deprecation-3)
- [Bug Fixes](#bug-fixes-7)
- [Bug Fixes](#bug-fixes-8)
- [Security](#security-3)
- [Other](#other-3)
- [Dependencies](#dependencies-3)
@ -92,11 +98,11 @@
- [Downloads for v1.13.0-alpha.1](#downloads-for-v1130-alpha1)
- [Changelog since v1.12.0](#changelog-since-v1120)
- [Urgent Update Notes](#urgent-update-notes-4)
- [Changes by Kind](#changes-by-kind-7)
- [Changes by Kind](#changes-by-kind-8)
- [API Changes](#api-changes-4)
- [Features & Enhancements](#features--enhancements-4)
- [Deprecation](#deprecation-4)
- [Bug Fixes](#bug-fixes-8)
- [Bug Fixes](#bug-fixes-9)
- [Security](#security-4)
- [Other](#other-4)
- [Dependencies](#dependencies-4)
@ -105,6 +111,20 @@
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# v1.13.5
## Downloads for v1.13.5
Download v1.13.5 in the [v1.13.5 release page](https://github.com/karmada-io/karmada/releases/tag/v1.13.5).
## Changelog since v1.13.4
### Changes by Kind
#### Bug Fixes
- `karmada-controller-manager`: Fixed the issue that resources will be recreated after being deleted on the cluster when the resource is suspended for dispatching. ([#6537](https://github.com/karmada-io/karmada/pull/6537), @luyb177)
- `karmada-controller-manager`: Fixed the issue that EndpointSlices are deleted unexpectedly due to the EndpointSlice informer cache not being synced. ([#6584](https://github.com/karmada-io/karmada/pull/6584), @XiShanYongYe-Chang)
#### Others
- The base image `alpine` has now been promoted from 3.22.0 to 3.22.1. ([#6561](https://github.com/karmada-io/karmada/pull/6561))
# v1.13.4
## Downloads for v1.13.4

View File

@ -2,12 +2,18 @@
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [v1.14.1](#v1141)
- [Downloads for v1.14.1](#downloads-for-v1141)
- [Changelog since v1.14.0](#changelog-since-v1140)
- [v1.14.2](#v1142)
- [Downloads for v1.14.2](#downloads-for-v1142)
- [Changelog since v1.14.1](#changelog-since-v1141)
- [Changes by Kind](#changes-by-kind)
- [Bug Fixes](#bug-fixes)
- [Others](#others)
- [v1.14.1](#v1141)
- [Downloads for v1.14.1](#downloads-for-v1141)
- [Changelog since v1.14.0](#changelog-since-v1140)
- [Changes by Kind](#changes-by-kind-1)
- [Bug Fixes](#bug-fixes-1)
- [Others](#others-1)
- [v1.14.0](#v1140)
- [Downloads for v1.14.0](#downloads-for-v1140)
- [Urgent Update Notes](#urgent-update-notes)
@ -19,7 +25,7 @@
- [Other Notable Changes](#other-notable-changes)
- [API Changes](#api-changes)
- [Deprecation](#deprecation)
- [Bug Fixes](#bug-fixes-1)
- [Bug Fixes](#bug-fixes-2)
- [Security](#security)
- [Features & Enhancements](#features--enhancements)
- [Other](#other)
@ -31,11 +37,11 @@
- [Downloads for v1.14.0-rc.0](#downloads-for-v1140-rc0)
- [Changelog since v1.14.0-beta.0](#changelog-since-v1140-beta0)
- [Urgent Update Notes](#urgent-update-notes-1)
- [Changes by Kind](#changes-by-kind-1)
- [Changes by Kind](#changes-by-kind-2)
- [API Changes](#api-changes-1)
- [Features & Enhancements](#features--enhancements-1)
- [Deprecation](#deprecation-1)
- [Bug Fixes](#bug-fixes-2)
- [Bug Fixes](#bug-fixes-3)
- [Security](#security-1)
- [Other](#other-1)
- [Dependencies](#dependencies-1)
@ -46,11 +52,11 @@
- [Downloads for v1.14.0-beta.0](#downloads-for-v1140-beta0)
- [Changelog since v1.14.0-alpha.2](#changelog-since-v1140-alpha2)
- [Urgent Update Notes](#urgent-update-notes-2)
- [Changes by Kind](#changes-by-kind-2)
- [Changes by Kind](#changes-by-kind-3)
- [API Changes](#api-changes-2)
- [Features & Enhancements](#features--enhancements-2)
- [Deprecation](#deprecation-2)
- [Bug Fixes](#bug-fixes-3)
- [Bug Fixes](#bug-fixes-4)
- [Security](#security-2)
- [Other](#other-2)
- [Dependencies](#dependencies-2)
@ -61,11 +67,11 @@
- [Downloads for v1.14.0-alpha.2](#downloads-for-v1140-alpha2)
- [Changelog since v1.14.0-alpha.1](#changelog-since-v1140-alpha1)
- [Urgent Update Notes](#urgent-update-notes-3)
- [Changes by Kind](#changes-by-kind-3)
- [Changes by Kind](#changes-by-kind-4)
- [API Changes](#api-changes-3)
- [Features & Enhancements](#features--enhancements-3)
- [Deprecation](#deprecation-3)
- [Bug Fixes](#bug-fixes-4)
- [Bug Fixes](#bug-fixes-5)
- [Security](#security-3)
- [Other](#other-3)
- [Dependencies](#dependencies-3)
@ -76,11 +82,11 @@
- [Downloads for v1.14.0-alpha.1](#downloads-for-v1140-alpha1)
- [Changelog since v1.13.0](#changelog-since-v1130)
- [Urgent Update Notes](#urgent-update-notes-4)
- [Changes by Kind](#changes-by-kind-4)
- [Changes by Kind](#changes-by-kind-5)
- [API Changes](#api-changes-4)
- [Features & Enhancements](#features--enhancements-4)
- [Deprecation](#deprecation-4)
- [Bug Fixes](#bug-fixes-5)
- [Bug Fixes](#bug-fixes-6)
- [Security](#security-4)
- [Other](#other-4)
- [Dependencies](#dependencies-4)
@ -89,6 +95,20 @@
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# v1.14.2
## Downloads for v1.14.2
Download v1.14.2 in the [v1.14.2 release page](https://github.com/karmada-io/karmada/releases/tag/v1.14.2).
## Changelog since v1.14.1
### Changes by Kind
#### Bug Fixes
- `karmada-controller-manager`: Fixed the issue that resources will be recreated after being deleted on the cluster when the resource is suspended for dispatching. ([#6536](https://github.com/karmada-io/karmada/pull/6536), @luyb177)
- `karmada-controller-manager`: Fixed the issue that EndpointSlices are deleted unexpectedly due to the EndpointSlice informer cache not being synced. ([#6583](https://github.com/karmada-io/karmada/pull/6583), @XiShanYongYe-Chang)
#### Others
- The base image `alpine` has now been promoted from 3.22.0 to 3.22.1. ([#6559](https://github.com/karmada-io/karmada/pull/6559))
# v1.14.1
## Downloads for v1.14.1

View File

@ -2,9 +2,9 @@
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*
- [v1.15.0-alpha.2](#v1150-alpha2)
- [Downloads for v1.15.0-alpha.2](#downloads-for-v1150-alpha2)
- [Changelog since v1.15.0-alpha.1](#changelog-since-v1150-alpha1)
- [v1.15.0-beta.0](#v1150-beta0)
- [Downloads for v1.15.0-beta.0](#downloads-for-v1150-beta0)
- [Changelog since v1.15.0-alpha.2](#changelog-since-v1150-alpha2)
- [Urgent Update Notes](#urgent-update-notes)
- [Changes by Kind](#changes-by-kind)
- [API Changes](#api-changes)
@ -17,9 +17,9 @@
- [Helm Charts](#helm-charts)
- [Instrumentation](#instrumentation)
- [Performance](#performance)
- [v1.15.0-alpha.1](#v1150-alpha1)
- [Downloads for v1.15.0-alpha.1](#downloads-for-v1150-alpha1)
- [Changelog since v1.14.0](#changelog-since-v1140)
- [v1.15.0-alpha.2](#v1150-alpha2)
- [Downloads for v1.15.0-alpha.2](#downloads-for-v1150-alpha2)
- [Changelog since v1.15.0-alpha.1](#changelog-since-v1150-alpha1)
- [Urgent Update Notes](#urgent-update-notes-1)
- [Changes by Kind](#changes-by-kind-1)
- [API Changes](#api-changes-1)
@ -31,9 +31,68 @@
- [Dependencies](#dependencies-1)
- [Helm Charts](#helm-charts-1)
- [Instrumentation](#instrumentation-1)
- [Performance](#performance-1)
- [v1.15.0-alpha.1](#v1150-alpha1)
- [Downloads for v1.15.0-alpha.1](#downloads-for-v1150-alpha1)
- [Changelog since v1.14.0](#changelog-since-v1140)
- [Urgent Update Notes](#urgent-update-notes-2)
- [Changes by Kind](#changes-by-kind-2)
- [API Changes](#api-changes-2)
- [Features & Enhancements](#features--enhancements-2)
- [Deprecation](#deprecation-2)
- [Bug Fixes](#bug-fixes-2)
- [Security](#security-2)
- [Other](#other-2)
- [Dependencies](#dependencies-2)
- [Helm Charts](#helm-charts-2)
- [Instrumentation](#instrumentation-2)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# v1.15.0-beta.0
## Downloads for v1.15.0-beta.0
Download v1.15.0-beta.0 in the [v1.15.0-beta.0 release page](https://github.com/karmada-io/karmada/releases/tag/v1.15.0-beta.0).
## Changelog since v1.15.0-alpha.2
## Urgent Update Notes
None.
## Changes by Kind
### API Changes
None.
### Features & Enhancements
- `karmada-controller-manager`: Added resource interpreter support for OpenKruise SidecarSet. Karmada can now interpret and manage OpenKruise SidecarSet resources across clusters, including multi-cluster status aggregation, health checks, dependency resolution for ConfigMaps and Secrets, and comprehensive test coverage. ([#6524](https://github.com/karmada-io/karmada/pull/6524), @abhi0324)
- `karmada-controller-manager`: Added resource interpreter support for OpenKruise UnitedDeployment. Karmada can now interpret and manage OpenKruise UnitedDeployment resources across clusters, including multi-cluster status aggregation, health checks, dependency resolution for ConfigMaps and Secrets, and comprehensive test coverage. ([#6533](https://github.com/karmada-io/karmada/pull/6533), @abhi0324)
### Deprecation
- The deprecated label `propagation.karmada.io/instruction`, which was designed to suspend Work propagation, has now been removed. ([#6512](https://github.com/karmada-io/karmada/pull/6512), @XiShanYongYe-Chang)
### Bug Fixes
- `karmada-controller-manager`: Fixed the issue that EndpointSlices are deleted unexpectedly due to the EndpointSlice informer cache not being synced. ([#6434](https://github.com/karmada-io/karmada/pull/6434), @XiShanYongYe-Chang)
### Security
- Bump go version to 1.24.5 for addressing CVE-2025-4674 concern. ([#6557](https://github.com/karmada-io/karmada/pull/6557), @seanlaii)
## Other
### Dependencies
- Upgraded sigs.k8s.io/metrics-server to v0.8.0. ([#6548](https://github.com/karmada-io/karmada/pull/6548), @seanlaii)
- Upgraded sigs.k8s.io/kind to v0.29.0. ([#6549](https://github.com/karmada-io/karmada/pull/6549), @seanlaii)
- Upgraded vektra/mockery to v3.5.1, switching to a configuration-driven approach via mockery.yaml and removing deprecated v2 flags like --inpackage and --name. ([#6550](https://github.com/karmada-io/karmada/pull/6550), @liaolecheng)
- Upgraded controller-gen to v0.18.0. ([#6558](https://github.com/karmada-io/karmada/pull/6558), @seanlaii)
### Helm Charts
None.
### Instrumentation
None.
### Performance
None.
# v1.15.0-alpha.2
## Downloads for v1.15.0-alpha.2

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: workloads.workload.example.io
spec:
group: workload.example.io

go.mod (2 changed lines)
View File

@ -1,6 +1,6 @@
module github.com/karmada-io/karmada
go 1.24.4 // keep in sync with .go-version
go 1.24.5 // keep in sync with .go-version
require (
github.com/adhocore/gronx v1.6.3

View File

@ -19,7 +19,7 @@ set -o nounset
set -o pipefail
CONTROLLER_GEN_PKG="sigs.k8s.io/controller-tools/cmd/controller-gen"
CONTROLLER_GEN_VER="v0.17.3"
CONTROLLER_GEN_VER="v0.18.0"
source hack/util.sh

View File

@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.3
controller-gen.kubebuilder.io/version: v0.18.0
name: karmadas.operator.karmada.io
spec:
group: operator.karmada.io

View File

@ -97,7 +97,7 @@ func (ctrl *Controller) Reconcile(ctx context.Context, req controllerruntime.Req
}
if err := ctrl.validateKarmada(ctx, karmada); err != nil {
klog.Errorf("Validation failed for karmada: %+v", err)
klog.ErrorS(err, "Validation failed for karmada", "name", karmada.Name)
return controllerruntime.Result{}, nil
}

View File

@ -80,7 +80,7 @@ func validateETCD(etcd *operatorv1alpha1.Etcd, karmadaName string, fldPath *fiel
replicas := *etcd.Local.CommonSettings.Replicas
if (replicas % 2) == 0 {
klog.Warningf("invalid etcd replicas %d, expected an odd number", replicas)
klog.InfoS("Using an even number of etcd replicas is not recommended", "replicas", replicas)
}
}

View File

@ -76,10 +76,7 @@ func CreateOrUpdateWork(ctx context.Context, c client.Client, workMeta metav1.Ob
// Do the same thing as the mutating webhook does, add the permanent ID to workload if not exist,
// This is to avoid unnecessary updates to the Work object, especially when controller starts.
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
if runtimeObject.Labels[util.PropagationInstruction] != util.PropagationInstructionSuppressed {
util.SetLabelsAndAnnotationsForWorkload(resource, runtimeObject)
}
workloadJSON, err := json.Marshal(resource)
if err != nil {
klog.Errorf("Failed to marshal workload(%s/%s), error: %v", resource.GetNamespace(), resource.GetName(), err)

View File

@ -30,7 +30,6 @@ import (
workv1alpha1 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha1"
workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
"github.com/karmada-io/karmada/pkg/util"
)
func TestCreateOrUpdateWork(t *testing.T) {
@ -70,50 +69,6 @@ func TestCreateOrUpdateWork(t *testing.T) {
assert.Equal(t, 1, len(work.Spec.Workload.Manifests))
},
},
{
name: "create work with PropagationInstruction",
workMeta: metav1.ObjectMeta{
Namespace: "default",
Name: "test-work",
Labels: map[string]string{
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
util.PropagationInstruction: "some-value",
},
Annotations: map[string]string{
workv1alpha2.ResourceConflictResolutionAnnotation: "overwrite",
},
},
resource: &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": map[string]interface{}{
"name": "test-deployment",
"uid": "test-uid",
},
},
},
verify: func(t *testing.T, c client.Client) {
work := &workv1alpha1.Work{}
err := c.Get(context.TODO(), client.ObjectKey{Namespace: "default", Name: "test-work"}, work)
assert.NoError(t, err)
// Get the resource from manifests
manifest := &unstructured.Unstructured{}
err = manifest.UnmarshalJSON(work.Spec.Workload.Manifests[0].Raw)
assert.NoError(t, err)
// Verify labels and annotations were set
labels := manifest.GetLabels()
assert.Equal(t, util.ManagedByKarmadaLabelValue, labels[util.ManagedByKarmadaLabel])
annotations := manifest.GetAnnotations()
assert.Equal(t, "test-uid", annotations[workv1alpha2.ResourceTemplateUIDAnnotation])
assert.Equal(t, "test-work", annotations[workv1alpha2.WorkNameAnnotation])
assert.Equal(t, "default", annotations[workv1alpha2.WorkNamespaceAnnotation])
assert.Equal(t, "overwrite", annotations[workv1alpha2.ResourceConflictResolutionAnnotation])
},
},
{
name: "update existing work",
existingWork: &workv1alpha1.Work{

View File

@ -71,7 +71,7 @@ type Controller struct {
EventRecorder record.EventRecorder
RESTMapper meta.RESTMapper
ObjectWatcher objectwatcher.ObjectWatcher
PredicateFunc predicate.Predicate
WorkPredicateFunc predicate.Predicate
InformerManager genericmanager.MultiClusterInformerManager
RateLimiterOptions ratelimiterflag.Options
}
@ -133,14 +133,19 @@ func (c *Controller) Reconcile(ctx context.Context, req controllerruntime.Reques
// SetupWithManager creates a controller and register to controller manager.
func (c *Controller) SetupWithManager(mgr controllerruntime.Manager) error {
return controllerruntime.NewControllerManagedBy(mgr).
Named(ControllerName).
For(&workv1alpha1.Work{}, builder.WithPredicates(c.PredicateFunc)).
ctrlBuilder := controllerruntime.NewControllerManagedBy(mgr).Named(ControllerName).
WithEventFilter(predicate.GenerationChangedPredicate{}).
WithOptions(controller.Options{
RateLimiter: ratelimiterflag.DefaultControllerRateLimiter[controllerruntime.Request](c.RateLimiterOptions),
}).
Complete(c)
})
if c.WorkPredicateFunc != nil {
ctrlBuilder.For(&workv1alpha1.Work{}, builder.WithPredicates(c.WorkPredicateFunc))
} else {
ctrlBuilder.For(&workv1alpha1.Work{})
}
return ctrlBuilder.Complete(c)
}
func (c *Controller) syncWork(ctx context.Context, clusterName string, work *workv1alpha1.Work) (controllerruntime.Result, error) {
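
For context on the WorkPredicateFunc field introduced above: it accepts any controller-runtime predicate.Predicate, and SetupWithManager now applies it only when it is non-nil (the agent path, which previously passed helper.NewExecutionPredicateOnAgent, now leaves it unset). The sketch below is purely illustrative of how such a predicate is typically built with predicate.Funcs; it is not the actual helper.WorkWithinPushClusterPredicate implementation, and the label it checks is hypothetical.

// Illustrative only: admits events whose object carries a hypothetical
// "work.example.io/push-cluster" label. The real predicates referenced in
// these diffs encode Karmada-specific cluster checks.
package example

import (
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
)

func exampleWorkPredicate() predicate.Funcs {
    hasLabel := func(labels map[string]string) bool {
        _, ok := labels["work.example.io/push-cluster"]
        return ok
    }
    return predicate.Funcs{
        CreateFunc:  func(e event.CreateEvent) bool { return hasLabel(e.Object.GetLabels()) },
        UpdateFunc:  func(e event.UpdateEvent) bool { return hasLabel(e.ObjectNew.GetLabels()) },
        DeleteFunc:  func(e event.DeleteEvent) bool { return hasLabel(e.Object.GetLabels()) },
        GenericFunc: func(e event.GenericEvent) bool { return false },
    }
}

When no predicate is supplied, Work events are still filtered by the GenerationChangedPredicate event filter registered via WithEventFilter above.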

View File

@ -139,13 +139,13 @@ func (c *FHPAController) SetupWithManager(mgr controllerruntime.Manager) error {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *FHPAController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling FederatedHPA %s.", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling FederatedHPA", "namespacedName", req.NamespacedName.String())
hpa := &autoscalingv1alpha1.FederatedHPA{}
key := req.NamespacedName.String()
if err := c.Client.Get(ctx, req.NamespacedName, hpa); err != nil {
if apierrors.IsNotFound(err) {
klog.Infof("FederatedHPA %s has been deleted in %s", req.Name, req.Namespace)
klog.InfoS("FederatedHPA has been deleted in namespace", "hpaName", req.Name, "namespace", req.Namespace)
c.recommendationsLock.Lock()
delete(c.recommendations, key)
c.recommendationsLock.Unlock()
@ -344,7 +344,7 @@ func (c *FHPAController) reconcileAutoscaler(ctx context.Context, hpa *autoscali
retErr = err
}
klog.V(4).Infof("proposing %v desired replicas (based on %s from %s) for %s", metricDesiredReplicas, metricName, metricTimestamp, reference)
klog.V(4).InfoS("proposing desired replicas for resource", "desiredReplicas", metricDesiredReplicas, "metricName", metricName, "metricTimestamp", metricTimestamp, "resource", reference)
rescaleMetric := ""
if metricDesiredReplicas > desiredReplicas {
@ -382,8 +382,8 @@ func (c *FHPAController) reconcileAutoscaler(ctx context.Context, hpa *autoscali
setCondition(hpa, autoscalingv2.AbleToScale, corev1.ConditionTrue, "SucceededRescale", "the HPA controller was able to update the target scale to %d", desiredReplicas)
c.EventRecorder.Eventf(hpa, corev1.EventTypeNormal, "SuccessfulRescale", "New size: %d; reason: %s", desiredReplicas, rescaleReason)
c.storeScaleEvent(hpa.Spec.Behavior, key, currentReplicas, desiredReplicas)
klog.Infof("Successful rescale of %s, old size: %d, new size: %d, reason: %s",
hpa.Name, currentReplicas, desiredReplicas, rescaleReason)
klog.InfoS("Successfully rescaled FederatedHPA",
"hpaName", hpa.Name, "currentReplicas", currentReplicas, "desiredReplicas", desiredReplicas, "rescaleReason", rescaleReason)
if desiredReplicas > currentReplicas {
actionLabel = monitor.ActionLabelScaleUp
@ -391,7 +391,7 @@ func (c *FHPAController) reconcileAutoscaler(ctx context.Context, hpa *autoscali
actionLabel = monitor.ActionLabelScaleDown
}
} else {
klog.V(4).Infof("decided not to scale %s to %v (last scale time was %s)", reference, desiredReplicas, hpa.Status.LastScaleTime)
klog.V(4).InfoS("decided not to scale resource", "resource", reference, "desiredReplicas", desiredReplicas, "hpaLastScaleTime", hpa.Status.LastScaleTime)
desiredReplicas = currentReplicas
}
@ -484,19 +484,19 @@ func (c *FHPAController) scaleForTargetCluster(ctx context.Context, clusters []s
for _, cluster := range clusters {
clusterClient, err := c.ClusterScaleClientSetFunc(cluster, c.Client)
if err != nil {
klog.Errorf("Failed to get cluster client of cluster %s.", cluster)
klog.ErrorS(err, "Failed to get cluster client of cluster", "cluster", cluster)
continue
}
clusterInformerManager, err := c.buildPodInformerForCluster(clusterClient)
if err != nil {
klog.Errorf("Failed to get or create informer for cluster %s. Error: %v.", cluster, err)
klog.ErrorS(err, "Failed to get or create informer for cluster", "cluster", cluster)
continue
}
scale, err := clusterClient.ScaleClient.Scales(hpa.Namespace).Get(ctx, targetGR, hpa.Spec.ScaleTargetRef.Name, metav1.GetOptions{})
if err != nil {
klog.Errorf("Failed to get scale subResource of resource %s in cluster %s.", hpa.Spec.ScaleTargetRef.Name, cluster)
klog.ErrorS(err, "Failed to get scale subResource of resource in cluster", "resource", hpa.Spec.ScaleTargetRef.Name, "cluster", cluster)
continue
}
@ -523,19 +523,19 @@ func (c *FHPAController) scaleForTargetCluster(ctx context.Context, clusters []s
podInterface, err := clusterInformerManager.Lister(podGVR)
if err != nil {
klog.Errorf("Failed to get podInterface for cluster %s.", cluster)
klog.ErrorS(err, "Failed to get podInterface for cluster", "cluster", cluster)
continue
}
podLister, ok := podInterface.(listcorev1.PodLister)
if !ok {
klog.Errorf("Failed to convert interface to PodLister for cluster %s.", cluster)
klog.ErrorS(nil, "Failed to convert interface to PodLister for cluster", "cluster", cluster)
continue
}
podList, err := podLister.Pods(hpa.Namespace).List(selector)
if err != nil {
klog.Errorf("Failed to get podList for cluster %s.", cluster)
klog.ErrorS(err, "Failed to get podList for cluster", "cluster", cluster)
continue
}
@ -561,7 +561,7 @@ func (c *FHPAController) buildPodInformerForCluster(clusterScaleClient *util.Clu
}
if _, err := singleClusterInformerManager.Lister(podGVR); err != nil {
klog.Errorf("Failed to get the lister for pods: %v", err)
klog.ErrorS(err, "Failed to get the lister for pods")
}
c.TypedInformerManager.Start(clusterScaleClient.ClusterName)
@ -576,7 +576,7 @@ func (c *FHPAController) buildPodInformerForCluster(clusterScaleClient *util.Clu
}
return nil
}(); err != nil {
klog.Errorf("Failed to sync cache for cluster: %s, error: %v", clusterScaleClient.ClusterName, err)
klog.ErrorS(err, "Failed to sync cache for cluster", "cluster", clusterScaleClient.ClusterName)
c.TypedInformerManager.Stop(clusterScaleClient.ClusterName)
return nil, err
}
@ -1377,7 +1377,7 @@ func (c *FHPAController) updateStatus(ctx context.Context, hpa *autoscalingv1alp
c.EventRecorder.Event(hpa, corev1.EventTypeWarning, "FailedUpdateStatus", err.Error())
return fmt.Errorf("failed to update status for %s: %v", hpa.Name, err)
}
klog.V(2).Infof("Successfully updated status for %s", hpa.Name)
klog.V(2).InfoS("Successfully updated status for hpa", "hpaName", hpa.Name)
return nil
}
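
All of the logging changes in this file follow the same klog conversion: printf-style Infof/Errorf/Warningf calls become InfoS/ErrorS with a constant message plus alternating key/value pairs. A small self-contained illustration of the pattern, with placeholder names and values:

package main

import (
    "errors"

    "k8s.io/klog/v2"
)

func main() {
    name, replicas := "demo-hpa", 3
    err := errors.New("scale subresource not found")

    // Old style: message and values interpolated into a single string.
    klog.Errorf("Failed to get scale subResource of resource %s: %v", name, err)

    // New style: constant message plus alternating key/value pairs; the error
    // goes first so klog renders it as a structured err field.
    klog.ErrorS(err, "Failed to get scale subResource of resource", "resource", name)
    klog.V(2).InfoS("Successfully updated status for hpa", "hpaName", name, "replicas", replicas)
}

Passing the error as the first argument to ErrorS keeps it out of the message string and renders it as a separate err field, which is what the JSON/structured log output targeted by these commits relies on.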

View File

@ -128,7 +128,7 @@ func getPodMetrics(rawMetrics []metricsapi.PodMetrics, resource corev1.ResourceN
resValue, found := c.Usage[resource]
if !found {
missing = true
klog.V(2).Infof("missing resource metric %v for %s/%s", resource, m.Namespace, m.Name)
klog.V(2).InfoS("missing resource metric", "resource", resource, "namespace", m.Namespace, "name", m.Name)
break
}
podSum += resValue.MilliValue()

View File

@ -52,7 +52,7 @@ type CRBGracefulEvictionController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *CRBGracefulEvictionController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).InfoS("Reconciling ClusterResourceBinding", "clusterResourceBinding", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling ClusterResourceBinding", "name", req.NamespacedName.String())
binding := &workv1alpha2.ClusterResourceBinding{}
if err := c.Client.Get(ctx, req.NamespacedName, binding); err != nil {

View File

@ -52,7 +52,7 @@ type RBGracefulEvictionController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *RBGracefulEvictionController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).InfoS("Reconciling ResourceBinding", "resourceBinding", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling ResourceBinding", "namespace", req.Namespace, "name", req.Name)
binding := &workv1alpha2.ResourceBinding{}
if err := c.Client.Get(ctx, req.NamespacedName, binding); err != nil {

View File

@ -408,6 +408,14 @@ func (c *ServiceExportController) reportEndpointSliceWithServiceExportCreate(ctx
return nil
}
// Before retrieving EndpointSlice objects from the informer, ensure the informer cache is synced.
// This is necessary because the informer for EndpointSlice is created dynamically in the Reconcile() routine
// when a Work resource containing a ServiceExport is detected for the cluster. If the informer is not yet synced,
// return an error and wait for a retry next time.
if !singleClusterManager.IsInformerSynced(endpointSliceGVR) {
return fmt.Errorf("the informer for cluster %s has not been synced, wait a retry at the next time", serviceExportKey.Cluster)
}
endpointSliceLister := singleClusterManager.Lister(endpointSliceGVR)
if endpointSliceObjects, err = endpointSliceLister.ByNamespace(serviceExportKey.Namespace).List(labels.SelectorFromSet(labels.Set{
discoveryv1.LabelServiceName: serviceExportKey.Name,
@ -483,6 +491,14 @@ func (c *ServiceExportController) reportEndpointSliceWithEndpointSliceCreateOrUp
return nil
}
// Before retrieving ServiceExport objects from the informer, ensure the informer cache is synced.
// This is necessary because the informer for ServiceExport is created dynamically in the Reconcile() routine
// when a Work resource containing a ServiceExport is detected for the cluster. If the informer is not yet synced,
// return an error and wait for a retry next time.
if !singleClusterManager.IsInformerSynced(serviceExportGVR) {
return fmt.Errorf("the informer for cluster %s has not been synced, wait a retry at the next time", clusterName)
}
serviceExportLister := singleClusterManager.Lister(serviceExportGVR)
_, err := serviceExportLister.ByNamespace(endpointSlice.GetNamespace()).Get(relatedServiceName)
if err != nil {
@ -614,6 +630,7 @@ func cleanEndpointSliceWork(ctx context.Context, c client.Client, work *workv1al
klog.Errorf("Failed to update work(%s/%s): %v", work.Namespace, work.Name, err)
return err
}
klog.Infof("Successfully updated work(%s/%s)", work.Namespace, work.Name)
return nil
}
@ -621,6 +638,7 @@ func cleanEndpointSliceWork(ctx context.Context, c client.Client, work *workv1al
klog.Errorf("Failed to delete work(%s/%s), Error: %v", work.Namespace, work.Name, err)
return err
}
klog.Infof("Successfully deleted work(%s/%s)", work.Namespace, work.Name)
return nil
}
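
The IsInformerSynced guard added above uses Karmada's generic informer manager. With plain client-go, the same rule (never list from a dynamically created informer before its cache has synced) is usually expressed with HasSynced and cache.WaitForCacheSync. A minimal sketch of that generic pattern follows; the kubeconfig path and timeout are arbitrary choices for the example, and this is not the Karmada manager API.

package main

import (
    "context"
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/labels"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption for the example: load a local kubeconfig; a controller would
    // already have a rest.Config.
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    factory := informers.NewSharedInformerFactory(client, 0)
    eps := factory.Discovery().V1().EndpointSlices()
    informer := eps.Informer() // registers the informer with the factory

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    factory.Start(ctx.Done())

    // Listing from the lister before the cache has synced can return an
    // incomplete or empty result, which is the failure mode the guard above
    // avoids by returning an error and retrying later.
    if !cache.WaitForCacheSync(ctx.Done(), informer.HasSynced) {
        panic("EndpointSlice informer cache not synced")
    }

    slices, err := eps.Lister().List(labels.Everything())
    if err != nil {
        panic(err)
    }
    fmt.Printf("observed %d EndpointSlices\n", len(slices))
}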

View File

@ -18,6 +18,7 @@ package multiclusterservice
import (
"context"
"errors"
"fmt"
"reflect"
"strings"
@ -84,7 +85,7 @@ const EndpointSliceCollectControllerName = "endpointslice-collect-controller"
// Reconcile performs a full reconciliation for the object referred to by the Request.
func (c *EndpointSliceCollectController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling Work %s", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling Work", "namespace", req.Namespace, "name", req.Name)
work := &workv1alpha1.Work{}
if err := c.Client.Get(ctx, req.NamespacedName, work); err != nil {
@ -105,7 +106,7 @@ func (c *EndpointSliceCollectController) Reconcile(ctx context.Context, req cont
clusterName, err := names.GetClusterName(work.Namespace)
if err != nil {
klog.Errorf("Failed to get cluster name for work %s/%s", work.Namespace, work.Name)
klog.ErrorS(err, "Failed to get cluster name for work", "namespace", work.Namespace, "name", work.Name)
return controllerruntime.Result{}, err
}
@ -144,14 +145,15 @@ func (c *EndpointSliceCollectController) collectEndpointSlice(key util.QueueKey)
ctx := context.Background()
fedKey, ok := key.(keys.FederatedKey)
if !ok {
klog.Errorf("Failed to collect endpointslice as invalid key: %v", key)
var ErrInvalidKey = errors.New("invalid key")
klog.ErrorS(ErrInvalidKey, "Failed to collect endpointslice as invalid key", "key", key)
return fmt.Errorf("invalid key")
}
klog.V(4).Infof("Begin to collect %s %s.", fedKey.Kind, fedKey.NamespaceKey())
klog.V(4).InfoS("Begin to collect", "kind", fedKey.Kind, "namespaceKey", fedKey.NamespaceKey())
if err := c.handleEndpointSliceEvent(ctx, fedKey); err != nil {
klog.Errorf("Failed to handle endpointSlice(%s) event, Error: %v",
fedKey.NamespaceKey(), err)
klog.ErrorS(err, "Failed to handle endpointSlice event", "namespaceKey",
fedKey.NamespaceKey())
return err
}
@ -161,17 +163,18 @@ func (c *EndpointSliceCollectController) collectEndpointSlice(key util.QueueKey)
func (c *EndpointSliceCollectController) buildResourceInformers(clusterName string) error {
cluster, err := util.GetCluster(c.Client, clusterName)
if err != nil {
klog.Errorf("Failed to get the given member cluster %s", clusterName)
klog.ErrorS(err, "Failed to get the given member cluster", "cluster", clusterName)
return err
}
if !util.IsClusterReady(&cluster.Status) {
klog.Errorf("Stop collect endpointslice for cluster(%s) as cluster not ready.", cluster.Name)
errClusterNotReady := errors.New("cluster not ready")
klog.ErrorS(errClusterNotReady, "Stop collecting endpointslice for cluster as it is not ready", "cluster", cluster.Name)
return fmt.Errorf("cluster(%s) not ready", cluster.Name)
}
if err := c.registerInformersAndStart(cluster); err != nil {
klog.Errorf("Failed to register informer for Cluster %s. Error: %v.", cluster.Name, err)
klog.ErrorS(err, "Failed to register informer for Cluster", "cluster", cluster.Name)
return err
}
@ -185,7 +188,7 @@ func (c *EndpointSliceCollectController) registerInformersAndStart(cluster *clus
if singleClusterInformerManager == nil {
dynamicClusterClient, err := c.ClusterDynamicClientSetFunc(cluster.Name, c.Client, c.ClusterClientOption)
if err != nil {
klog.Errorf("Failed to build dynamic cluster client for cluster %s.", cluster.Name)
klog.ErrorS(err, "Failed to build dynamic cluster client for cluster", "cluster", cluster.Name)
return err
}
singleClusterInformerManager = c.InformerManager.ForCluster(dynamicClusterClient.ClusterName, dynamicClusterClient.DynamicClientSet, 0)
@ -220,7 +223,7 @@ func (c *EndpointSliceCollectController) registerInformersAndStart(cluster *clus
}
return nil
}(); err != nil {
klog.Errorf("Failed to sync cache for cluster: %s, error: %v", cluster.Name, err)
klog.ErrorS(err, "Failed to sync cache for cluster", "cluster", cluster.Name)
c.InformerManager.Stop(cluster.Name)
return err
}
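
The anonymous function above starts the per-cluster informers and tears them down again if the cache cannot be synced in time. The same start/wait/stop shape with plain client-go, as a rough sketch (the function name, return values, and the choice of a shared informer factory are assumptions):

package sketch

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startAndSync starts the factory's informers, waits for the EndpointSlice cache with a
// timeout, and stops the informers again if the sync fails, mirroring the error path above.
func startAndSync(client kubernetes.Interface, timeout time.Duration) (informers.SharedInformerFactory, chan struct{}, error) {
	factory := informers.NewSharedInformerFactory(client, 0)
	epsInformer := factory.Discovery().V1().EndpointSlices().Informer()

	stopCh := make(chan struct{})
	factory.Start(stopCh)

	syncCtx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	if !cache.WaitForCacheSync(syncCtx.Done(), epsInformer.HasSynced) {
		close(stopCh) // the rough equivalent of c.InformerManager.Stop(cluster.Name) above
		return nil, nil, fmt.Errorf("failed to sync EndpointSlice cache within %s", timeout)
	}
	return factory, stopCh, nil
}
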
@ -245,7 +248,7 @@ func (c *EndpointSliceCollectController) genHandlerAddFunc(clusterName string) f
curObj := obj.(runtime.Object)
key, err := keys.FederatedKeyFunc(clusterName, curObj)
if err != nil {
klog.Warningf("Failed to generate key for obj: %s", curObj.GetObjectKind().GroupVersionKind())
klog.ErrorS(err, "Failed to generate key for obj", "gvk", curObj.GetObjectKind().GroupVersionKind())
return
}
c.worker.Add(key)
@ -258,7 +261,7 @@ func (c *EndpointSliceCollectController) genHandlerUpdateFunc(clusterName string
if !reflect.DeepEqual(oldObj, newObj) {
key, err := keys.FederatedKeyFunc(clusterName, curObj)
if err != nil {
klog.Warningf("Failed to generate key for obj: %s", curObj.GetObjectKind().GroupVersionKind())
klog.ErrorS(err, "Failed to generate key for obj", "gvk", curObj.GetObjectKind().GroupVersionKind())
return
}
c.worker.Add(key)
@ -278,7 +281,7 @@ func (c *EndpointSliceCollectController) genHandlerDeleteFunc(clusterName string
oldObj := obj.(runtime.Object)
key, err := keys.FederatedKeyFunc(clusterName, oldObj)
if err != nil {
klog.Warningf("Failed to generate key for obj: %s", oldObj.GetObjectKind().GroupVersionKind())
klog.ErrorS(err, "Failed to generate key for obj", "gvk", oldObj.GetObjectKind().GroupVersionKind())
return
}
c.worker.Add(key)
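
The three handler generators above all do the same thing: turn an informer event into a cluster-scoped key and hand it to the async worker. keys.FederatedKey and c.worker are karmada internals, so the sketch below approximates them with plain client-go primitives; all names are illustrative.

package sketch

import (
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
	"k8s.io/klog/v2"
)

// clusterScopedKey stands in for keys.FederatedKey: the member cluster name travels with
// the object key so a single worker can serve informers from many clusters.
type clusterScopedKey struct {
	Cluster string
	Key     string
}

// newEnqueueHandler mirrors genHandlerAddFunc/genHandlerUpdateFunc/genHandlerDeleteFunc:
// every event is reduced to a cluster-scoped key and pushed onto the work queue.
func newEnqueueHandler(clusterName string, queue workqueue.Interface) cache.ResourceEventHandler {
	enqueue := func(obj interface{}) {
		key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
		if err != nil {
			klog.ErrorS(err, "Failed to generate key for obj", "cluster", clusterName)
			return
		}
		queue.Add(clusterScopedKey{Cluster: clusterName, Key: key})
	}
	return cache.ResourceEventHandlerFuncs{
		AddFunc: enqueue,
		// the real update handler additionally skips events where old and new objects are deep-equal
		UpdateFunc: func(_, newObj interface{}) { enqueue(newObj) },
		DeleteFunc: enqueue,
	}
}
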
@ -308,7 +311,7 @@ func (c *EndpointSliceCollectController) handleEndpointSliceEvent(ctx context.Co
util.MultiClusterServiceNamespaceLabel: endpointSliceKey.Namespace,
util.MultiClusterServiceNameLabel: util.GetLabelValue(endpointSliceObj.GetLabels(), discoveryv1.LabelServiceName),
})}); err != nil {
klog.Errorf("Failed to list workList reported by endpointSlice(%s/%s), error: %v", endpointSliceKey.Namespace, endpointSliceKey.Name, err)
klog.ErrorS(err, "Failed to list workList reported by endpointSlice", "namespace", endpointSliceKey.Namespace, "name", endpointSliceKey.Name)
return err
}
@ -324,8 +327,8 @@ func (c *EndpointSliceCollectController) handleEndpointSliceEvent(ctx context.Co
}
if err = c.reportEndpointSliceWithEndpointSliceCreateOrUpdate(ctx, endpointSliceKey.Cluster, endpointSliceObj); err != nil {
klog.Errorf("Failed to handle endpointSlice(%s) event, Error: %v",
endpointSliceKey.NamespaceKey(), err)
klog.ErrorS(err, "Failed to handle endpointSlice event", "namespaceKey",
endpointSliceKey.NamespaceKey())
return err
}
@ -336,7 +339,7 @@ func (c *EndpointSliceCollectController) collectTargetEndpointSlice(ctx context.
manager := c.InformerManager.GetSingleClusterManager(clusterName)
if manager == nil {
err := fmt.Errorf("failed to get informer manager for cluster %s", clusterName)
klog.Errorf("%v", err)
klog.ErrorS(err, "Failed to get informer manager for cluster")
return err
}
@ -347,13 +350,13 @@ func (c *EndpointSliceCollectController) collectTargetEndpointSlice(ctx context.
})
epsList, err := manager.Lister(discoveryv1.SchemeGroupVersion.WithResource("endpointslices")).ByNamespace(svcNamespace).List(selector)
if err != nil {
klog.Errorf("Failed to list EndpointSlice for Service(%s/%s) in cluster(%s), Error: %v", svcNamespace, svcName, clusterName, err)
klog.ErrorS(err, "Failed to list EndpointSlice for Service in a cluster", "namespace", svcNamespace, "name", svcName, "cluster", clusterName)
return err
}
for _, epsObj := range epsList {
eps := &discoveryv1.EndpointSlice{}
if err = helper.ConvertToTypedObject(epsObj, eps); err != nil {
klog.Errorf("Failed to convert object to EndpointSlice, error: %v", err)
klog.ErrorS(err, "Failed to convert object to EndpointSlice")
return err
}
if util.GetLabelValue(eps.GetLabels(), discoveryv1.LabelManagedBy) == util.EndpointSliceDispatchControllerLabelValue {
@ -361,7 +364,7 @@ func (c *EndpointSliceCollectController) collectTargetEndpointSlice(ctx context.
}
epsUnstructured, err := helper.ToUnstructured(eps)
if err != nil {
klog.Errorf("Failed to convert EndpointSlice %s/%s to unstructured, error: %v", eps.GetNamespace(), eps.GetName(), err)
klog.ErrorS(err, "Failed to convert EndpointSlice to unstructured", "namespace", eps.GetNamespace(), "name", eps.GetName())
return err
}
if err = c.reportEndpointSliceWithEndpointSliceCreateOrUpdate(ctx, clusterName, epsUnstructured); err != nil {
@ -394,7 +397,7 @@ func reportEndpointSlice(ctx context.Context, c client.Client, endpointSlice *un
// indicate the Work should be not propagated since it's collected resource.
if err := ctrlutil.CreateOrUpdateWork(ctx, c, workMeta, endpointSlice, ctrlutil.WithSuspendDispatching(true)); err != nil {
klog.Errorf("Failed to create or update work(%s/%s), Error: %v", workMeta.Namespace, workMeta.Name, err)
klog.ErrorS(err, "Failed to create or update work", "namespace", workMeta.Namespace, "name", workMeta.Name)
return err
}
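
WithSuspendDispatching(true) above marks the Work as a collected resource that must not be dispatched back to member clusters. CreateOrUpdateWork and WithSuspendDispatching are the karmada helpers visible in this diff; the sketch below only shows the functional-option shape such a helper typically has, and every other name in it is an assumption:

package sketch

// workOptions gathers the optional knobs a CreateOrUpdateWork-style helper can expose.
type workOptions struct {
	suspendDispatching bool
}

// WorkOption mutates workOptions before the Work object is built.
type WorkOption func(*workOptions)

// WithSuspendDispatching marks the Work as collected-only so the dispatcher leaves it alone.
func WithSuspendDispatching(suspend bool) WorkOption {
	return func(o *workOptions) { o.suspendDispatching = suspend }
}

func applyWorkOptions(opts ...WorkOption) workOptions {
	var o workOptions
	for _, opt := range opts {
		opt(&o)
	}
	return o
}
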
@ -408,7 +411,7 @@ func getEndpointSliceWorkMeta(ctx context.Context, c client.Client, ns string, w
Namespace: ns,
Name: workName,
}, existWork); err != nil && !apierrors.IsNotFound(err) {
klog.Errorf("Get EndpointSlice work(%s/%s) error:%v", ns, workName, err)
klog.ErrorS(err, "Get EndpointSlice work", "namespace", ns, "name", workName)
return metav1.ObjectMeta{}, err
}
@ -449,7 +452,7 @@ func cleanupWorkWithEndpointSliceDelete(ctx context.Context, c client.Client, en
if apierrors.IsNotFound(err) {
return nil
}
klog.Errorf("Failed to get work(%s) in executionSpace(%s): %v", workNamespaceKey.String(), executionSpace, err)
klog.ErrorS(err, "Failed to get work in executionSpace", "namespaceKey", workNamespaceKey.String(), "executionSpace", executionSpace)
return err
}
@ -472,14 +475,14 @@ func cleanProviderClustersEndpointSliceWork(ctx context.Context, c client.Client
work.Labels[util.EndpointSliceWorkManagedByLabel] = strings.Join(controllerSet.UnsortedList(), ".")
if err := c.Update(ctx, work); err != nil {
klog.Errorf("Failed to update work(%s/%s): %v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to update work", "namespace", work.Namespace, "name", work.Name)
return err
}
return nil
}
if err := c.Delete(ctx, work); err != nil {
klog.Errorf("Failed to delete work(%s/%s): %v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to delete work", "namespace", work.Namespace, "name", work.Name)
return err
}

View File

@ -66,7 +66,7 @@ type EndpointsliceDispatchController struct {
// Reconcile performs a full reconciliation for the object referred to by the Request.
func (c *EndpointsliceDispatchController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling Work %s", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling Work", "namespacedName", req.NamespacedName.String())
work := &workv1alpha1.Work{}
if err := c.Client.Get(ctx, req.NamespacedName, work); err != nil {
@ -83,7 +83,7 @@ func (c *EndpointsliceDispatchController) Reconcile(ctx context.Context, req con
mcsName := util.GetLabelValue(work.Labels, util.MultiClusterServiceNameLabel)
if !work.DeletionTimestamp.IsZero() || mcsName == "" {
if err := c.cleanupEndpointSliceFromConsumerClusters(ctx, work); err != nil {
klog.Errorf("Failed to cleanup EndpointSlice from consumer clusters for work %s/%s:%v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to cleanup EndpointSlice from consumer clusters for work", "namespace", work.Namespace, "name", work.Name)
return controllerruntime.Result{}, err
}
return controllerruntime.Result{}, nil
@ -93,7 +93,7 @@ func (c *EndpointsliceDispatchController) Reconcile(ctx context.Context, req con
mcs := &networkingv1alpha1.MultiClusterService{}
if err := c.Client.Get(ctx, types.NamespacedName{Namespace: mcsNS, Name: mcsName}, mcs); err != nil {
if apierrors.IsNotFound(err) {
klog.Warningf("MultiClusterService %s/%s is not found", mcsNS, mcsName)
klog.ErrorS(err, "MultiClusterService is not found", "namespace", mcsNS, "name", mcsName)
return controllerruntime.Result{}, nil
}
return controllerruntime.Result{}, err
@ -185,7 +185,7 @@ func (c *EndpointsliceDispatchController) newClusterFunc() handler.MapFunc {
mcsList := &networkingv1alpha1.MultiClusterServiceList{}
if err := c.Client.List(ctx, mcsList, &client.ListOptions{}); err != nil {
klog.Errorf("Failed to list MultiClusterService, error: %v", err)
klog.ErrorS(err, "Failed to list MultiClusterService")
return nil
}
@ -193,7 +193,7 @@ func (c *EndpointsliceDispatchController) newClusterFunc() handler.MapFunc {
for _, mcs := range mcsList.Items {
clusterSet, err := helper.GetConsumerClusters(c.Client, mcs.DeepCopy())
if err != nil {
klog.Errorf("Failed to get provider clusters, error: %v", err)
klog.ErrorS(err, "Failed to get provider clusters")
continue
}
@ -203,7 +203,7 @@ func (c *EndpointsliceDispatchController) newClusterFunc() handler.MapFunc {
workList, err := c.getClusterEndpointSliceWorks(ctx, mcs.Namespace, mcs.Name)
if err != nil {
klog.Errorf("Failed to list work, error: %v", err)
klog.ErrorS(err, "Failed to list work")
continue
}
for _, work := range workList {
@ -229,7 +229,7 @@ func (c *EndpointsliceDispatchController) getClusterEndpointSliceWorks(ctx conte
util.MultiClusterServiceNamespaceLabel: mcsNamespace,
}),
}); err != nil {
klog.Errorf("Failed to list work, error: %v", err)
klog.ErrorS(err, "Failed to list work")
return nil, err
}
@ -249,7 +249,7 @@ func (c *EndpointsliceDispatchController) newMultiClusterServiceFunc() handler.M
workList, err := c.getClusterEndpointSliceWorks(ctx, mcsNamespace, mcsName)
if err != nil {
klog.Errorf("Failed to list work, error: %v", err)
klog.ErrorS(err, "Failed to list work")
return nil
}
@ -273,7 +273,7 @@ func (c *EndpointsliceDispatchController) cleanOrphanDispatchedEndpointSlice(ctx
util.MultiClusterServiceNameLabel: mcs.Name,
util.MultiClusterServiceNamespaceLabel: mcs.Namespace,
})}); err != nil {
klog.Errorf("Failed to list works, error is: %v", err)
klog.ErrorS(err, "Failed to list works")
return err
}
@ -285,13 +285,13 @@ func (c *EndpointsliceDispatchController) cleanOrphanDispatchedEndpointSlice(ctx
consumerClusters, err := helper.GetConsumerClusters(c.Client, mcs)
if err != nil {
klog.Errorf("Failed to get consumer clusters, error is: %v", err)
klog.ErrorS(err, "Failed to get consumer clusters")
return err
}
cluster, err := names.GetClusterName(work.Namespace)
if err != nil {
klog.Errorf("Failed to get cluster name for work %s/%s", work.Namespace, work.Name)
klog.ErrorS(err, "Failed to get cluster name for work", "namespace", work.Namespace, "name", work.Name)
return err
}
@ -300,7 +300,7 @@ func (c *EndpointsliceDispatchController) cleanOrphanDispatchedEndpointSlice(ctx
}
if err = c.Client.Delete(ctx, work.DeepCopy()); err != nil {
klog.Errorf("Failed to delete work %s/%s, error is: %v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to delete work", "namespace", work.Namespace, "name", work.Name)
return err
}
}
@ -311,13 +311,13 @@ func (c *EndpointsliceDispatchController) cleanOrphanDispatchedEndpointSlice(ctx
func (c *EndpointsliceDispatchController) dispatchEndpointSlice(ctx context.Context, work *workv1alpha1.Work, mcs *networkingv1alpha1.MultiClusterService) error {
epsSourceCluster, err := names.GetClusterName(work.Namespace)
if err != nil {
klog.Errorf("Failed to get EndpointSlice source cluster name for work %s/%s", work.Namespace, work.Name)
klog.ErrorS(err, "Failed to get EndpointSlice source cluster name for work", "namespace", work.Namespace, "name", work.Name)
return err
}
consumerClusters, err := helper.GetConsumerClusters(c.Client, mcs)
if err != nil {
klog.Errorf("Failed to get consumer clusters, error is: %v", err)
klog.ErrorS(err, "Failed to get consumer clusters")
return err
}
for clusterName := range consumerClusters {
@ -330,7 +330,7 @@ func (c *EndpointsliceDispatchController) dispatchEndpointSlice(ctx context.Cont
c.EventRecorder.Eventf(mcs, corev1.EventTypeWarning, events.EventReasonClusterNotFound, "Consumer cluster %s is not found", clusterName)
continue
}
klog.Errorf("Failed to get cluster %s, error is: %v", clusterName, err)
klog.ErrorS(err, "Failed to get cluster", "cluster", clusterName)
return err
}
if !util.IsClusterReady(&clusterObj.Status) {
@ -361,13 +361,13 @@ func (c *EndpointsliceDispatchController) ensureEndpointSliceWork(ctx context.Co
manifest := work.Spec.Workload.Manifests[0]
unstructuredObj := &unstructured.Unstructured{}
if err := unstructuredObj.UnmarshalJSON(manifest.Raw); err != nil {
klog.Errorf("Failed to unmarshal work manifest, error is: %v", err)
klog.ErrorS(err, "Failed to unmarshal work manifest")
return err
}
endpointSlice := &discoveryv1.EndpointSlice{}
if err := helper.ConvertToTypedObject(unstructuredObj, endpointSlice); err != nil {
klog.Errorf("Failed to convert unstructured object to typed object, error is: %v", err)
klog.ErrorS(err, "Failed to convert unstructured object to typed object")
return err
}
@ -397,12 +397,12 @@ func (c *EndpointsliceDispatchController) ensureEndpointSliceWork(ctx context.Co
}
unstructuredEPS, err := helper.ToUnstructured(endpointSlice)
if err != nil {
klog.Errorf("Failed to convert typed object to unstructured object, error is: %v", err)
klog.ErrorS(err, "Failed to convert typed object to unstructured object")
return err
}
if err := ctrlutil.CreateOrUpdateWork(ctx, c.Client, workMeta, unstructuredEPS); err != nil {
klog.Errorf("Failed to dispatch EndpointSlice %s/%s from %s to cluster %s:%v",
work.GetNamespace(), work.GetName(), providerCluster, consumerCluster, err)
klog.ErrorS(err, "Failed to dispatch EndpointSlice",
"namespace", work.GetNamespace(), "name", work.GetName(), "providerCluster", providerCluster, "consumerCluster", consumerCluster)
return err
}
@ -414,13 +414,13 @@ func (c *EndpointsliceDispatchController) cleanupEndpointSliceFromConsumerCluste
workList := &workv1alpha1.WorkList{}
err := c.Client.List(ctx, workList)
if err != nil {
klog.Errorf("Failed to list works serror: %v", err)
klog.ErrorS(err, "Failed to list works")
return err
}
epsSourceCluster, err := names.GetClusterName(work.Namespace)
if err != nil {
klog.Errorf("Failed to get EndpointSlice provider cluster name for work %s/%s", work.Namespace, work.Name)
klog.ErrorS(err, "Failed to get EndpointSlice provider cluster name for work", "namespace", work.Namespace, "name", work.Name)
return err
}
for _, item := range workList.Items {
@ -434,7 +434,7 @@ func (c *EndpointsliceDispatchController) cleanupEndpointSliceFromConsumerCluste
if controllerutil.RemoveFinalizer(work, util.MCSEndpointSliceDispatchControllerFinalizer) {
if err := c.Client.Update(ctx, work); err != nil {
klog.Errorf("Failed to remove %s finalizer for work %s/%s:%v", util.MCSEndpointSliceDispatchControllerFinalizer, work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to remove finalizer for work", "finalizer", util.MCSEndpointSliceDispatchControllerFinalizer, "namespace", work.Namespace, "name", work.Name)
return err
}
}

View File

@ -69,7 +69,7 @@ type MCSController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *MCSController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling MultiClusterService(%s/%s)", req.Namespace, req.Name)
klog.V(4).InfoS("Reconciling MultiClusterService", "namespace", req.Namespace, "name", req.Name)
mcs := &networkingv1alpha1.MultiClusterService{}
if err := c.Client.Get(ctx, req.NamespacedName, mcs); err != nil {
@ -77,7 +77,7 @@ func (c *MCSController) Reconcile(ctx context.Context, req controllerruntime.Req
// The mcs no longer exist, in which case we stop processing.
return controllerruntime.Result{}, nil
}
klog.Errorf("Failed to get MultiClusterService object(%s):%v", req.NamespacedName, err)
klog.ErrorS(err, "Failed to get MultiClusterService object", "namespacedName", req.NamespacedName)
return controllerruntime.Result{}, err
}
@ -103,7 +103,7 @@ func (c *MCSController) Reconcile(ctx context.Context, req controllerruntime.Req
}
func (c *MCSController) handleMultiClusterServiceDelete(ctx context.Context, mcs *networkingv1alpha1.MultiClusterService) (controllerruntime.Result, error) {
klog.V(4).Infof("Begin to handle MultiClusterService(%s/%s) delete event", mcs.Namespace, mcs.Name)
klog.V(4).InfoS("Begin to handle MultiClusterService delete event", "namespace", mcs.Namespace, "name", mcs.Name)
if err := c.retrieveService(ctx, mcs); err != nil {
c.EventRecorder.Event(mcs, corev1.EventTypeWarning, events.EventReasonSyncServiceFailed,
@ -120,12 +120,12 @@ func (c *MCSController) handleMultiClusterServiceDelete(ctx context.Context, mcs
if controllerutil.RemoveFinalizer(mcs, util.MCSControllerFinalizer) {
err := c.Client.Update(ctx, mcs)
if err != nil {
klog.Errorf("Failed to update MultiClusterService(%s/%s) with finalizer:%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to update MultiClusterService with finalizer", "namespace", mcs.Namespace, "name", mcs.Name)
return controllerruntime.Result{}, err
}
}
klog.V(4).Infof("Success to delete MultiClusterService(%s/%s)", mcs.Namespace, mcs.Name)
klog.V(4).InfoS("Success to delete MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return controllerruntime.Result{}, nil
}
@ -135,7 +135,7 @@ func (c *MCSController) retrieveMultiClusterService(ctx context.Context, mcs *ne
networkingv1alpha1.MultiClusterServicePermanentIDLabel: mcsID,
})
if err != nil {
klog.Errorf("Failed to list work by MultiClusterService(%s/%s): %v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to list work by MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
@ -145,7 +145,7 @@ func (c *MCSController) retrieveMultiClusterService(ctx context.Context, mcs *ne
}
clusterName, err := names.GetClusterName(work.Namespace)
if err != nil {
klog.Errorf("Failed to get member cluster name for work %s/%s:%v", work.Namespace, work.Name, work)
klog.ErrorS(err, "Failed to get member cluster name for work", "namespace", work.Namespace, "name", work.Name)
continue
}
@ -154,17 +154,17 @@ func (c *MCSController) retrieveMultiClusterService(ctx context.Context, mcs *ne
}
if err = c.cleanProviderEndpointSliceWork(ctx, work.DeepCopy()); err != nil {
klog.Errorf("Failed to clean provider EndpointSlice work(%s/%s):%v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to clean provider EndpointSlice work", "namespace", work.Namespace, "name", work.Name)
return err
}
if err = c.Client.Delete(ctx, work.DeepCopy()); err != nil && !apierrors.IsNotFound(err) {
klog.Errorf("Error while deleting work(%s/%s): %v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Error while deleting work", "namespace", work.Namespace, "name", work.Name)
return err
}
}
klog.V(4).Infof("Success to clean up MultiClusterService(%s/%s) work: %v", mcs.Namespace, mcs.Name, err)
klog.V(4).InfoS("Success to clean up MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return nil
}
@ -177,7 +177,7 @@ func (c *MCSController) cleanProviderEndpointSliceWork(ctx context.Context, work
util.MultiClusterServiceNamespaceLabel: util.GetLabelValue(work.Labels, util.MultiClusterServiceNamespaceLabel),
}),
}); err != nil {
klog.Errorf("Failed to list workList reported by work(MultiClusterService)(%s/%s): %v", work.Namespace, work.Name, err)
klog.ErrorS(err, "Failed to list workList reported by work(MultiClusterService)", "namespace", work.Namespace, "name", work.Name)
return err
}
@ -204,16 +204,16 @@ func (c *MCSController) cleanProviderEndpointSliceWork(ctx context.Context, work
}
func (c *MCSController) handleMultiClusterServiceCreateOrUpdate(ctx context.Context, mcs *networkingv1alpha1.MultiClusterService) error {
klog.V(4).Infof("Begin to handle MultiClusterService(%s/%s) create or update event", mcs.Namespace, mcs.Name)
klog.V(4).InfoS("Begin to handle MultiClusterService create or update event", "namespace", mcs.Namespace, "name", mcs.Name)
providerClusters, err := helper.GetProviderClusters(c.Client, mcs)
if err != nil {
klog.Errorf("Failed to get provider clusters by MultiClusterService(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to get provider clusters by MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
consumerClusters, err := helper.GetConsumerClusters(c.Client, mcs)
if err != nil {
klog.Errorf("Failed to get consumer clusters by MultiClusterService(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to get consumer clusters by MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
@ -228,7 +228,7 @@ func (c *MCSController) handleMultiClusterServiceCreateOrUpdate(ctx context.Cont
if controllerutil.RemoveFinalizer(mcs, util.MCSControllerFinalizer) {
err := c.Client.Update(ctx, mcs)
if err != nil {
klog.Errorf("Failed to remove finalizer(%s) from MultiClusterService(%s/%s):%v", util.MCSControllerFinalizer, mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to remove finalizer from MultiClusterService", "finalizer", util.MCSControllerFinalizer, "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
}
@ -239,7 +239,7 @@ func (c *MCSController) handleMultiClusterServiceCreateOrUpdate(ctx context.Cont
if controllerutil.AddFinalizer(mcs, util.MCSControllerFinalizer) {
err = c.Client.Update(ctx, mcs)
if err != nil {
klog.Errorf("Failed to add finalizer(%s) to MultiClusterService(%s/%s): %v ", util.MCSControllerFinalizer, mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to add finalizer to MultiClusterService", "finalizer", util.MCSControllerFinalizer, "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
}
@ -259,7 +259,7 @@ func (c *MCSController) handleMultiClusterServiceCreateOrUpdate(ctx context.Cont
err = c.Client.Get(ctx, types.NamespacedName{Namespace: mcs.Namespace, Name: mcs.Name}, svc)
// If the Service is deleted, the Service's ResourceBinding will be cleaned by GC
if err != nil {
klog.Errorf("Failed to get service(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to get service", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
@ -268,7 +268,7 @@ func (c *MCSController) handleMultiClusterServiceCreateOrUpdate(ctx context.Cont
return err
}
klog.V(4).Infof("Success to reconcile MultiClusterService(%s/%s)", mcs.Namespace, mcs.Name)
klog.V(4).InfoS("Success to reconcile MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return nil
}
@ -280,7 +280,7 @@ func (c *MCSController) propagateMultiClusterService(ctx context.Context, mcs *n
c.EventRecorder.Eventf(mcs, corev1.EventTypeWarning, events.EventReasonClusterNotFound, "Provider cluster %s is not found", clusterName)
continue
}
klog.Errorf("Failed to get cluster %s, error is: %v", clusterName, err)
klog.ErrorS(err, "Failed to get cluster", "cluster", clusterName)
return err
}
if !util.IsClusterReady(&clusterObj.Status) {
@ -306,12 +306,12 @@ func (c *MCSController) propagateMultiClusterService(ctx context.Context, mcs *n
mcsObj, err := helper.ToUnstructured(mcs)
if err != nil {
klog.Errorf("Failed to convert MultiClusterService(%s/%s) to unstructured object, err is %v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to convert MultiClusterService to unstructured object", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
if err = ctrlutil.CreateOrUpdateWork(ctx, c, workMeta, mcsObj, ctrlutil.WithSuspendDispatching(true)); err != nil {
klog.Errorf("Failed to create or update MultiClusterService(%s/%s) work in the given member cluster %s, err is %v",
mcs.Namespace, mcs.Name, clusterName, err)
klog.ErrorS(err, "Failed to create or update MultiClusterService work in the given member cluster",
"namespace", mcs.Namespace, "name", mcs.Name, "cluster", clusterName)
return err
}
}
@ -323,7 +323,7 @@ func (c *MCSController) retrieveService(ctx context.Context, mcs *networkingv1al
svc := &corev1.Service{}
err := c.Client.Get(ctx, types.NamespacedName{Namespace: mcs.Namespace, Name: mcs.Name}, svc)
if err != nil && !apierrors.IsNotFound(err) {
klog.Errorf("Failed to get service(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to get service", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
@ -338,7 +338,7 @@ func (c *MCSController) retrieveService(ctx context.Context, mcs *networkingv1al
}
if err = c.Client.Update(ctx, svcCopy); err != nil {
klog.Errorf("Failed to update service(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to update service", "namespace", mcs.Namespace, "name", mcs.Name)
return err
}
@ -348,7 +348,7 @@ func (c *MCSController) retrieveService(ctx context.Context, mcs *networkingv1al
if apierrors.IsNotFound(err) {
return nil
}
klog.Errorf("Failed to get ResourceBinding(%s/%s):%v", mcs.Namespace, names.GenerateBindingName(svc.Kind, svc.Name), err)
klog.ErrorS(err, "Failed to get ResourceBinding", "namespace", mcs.Namespace, "name", names.GenerateBindingName(svc.Kind, svc.Name))
return err
}
@ -364,7 +364,7 @@ func (c *MCSController) retrieveService(ctx context.Context, mcs *networkingv1al
delete(rbCopy.Labels, networkingv1alpha1.MultiClusterServicePermanentIDLabel)
}
if err := c.Client.Update(ctx, rbCopy); err != nil {
klog.Errorf("Failed to update ResourceBinding(%s/%s):%v", mcs.Namespace, names.GenerateBindingName(svc.Kind, svc.Name), err)
klog.ErrorS(err, "Failed to update ResourceBinding", "namespace", mcs.Namespace, "name", names.GenerateBindingName(svc.Kind, svc.Name))
return err
}
@ -374,13 +374,13 @@ func (c *MCSController) retrieveService(ctx context.Context, mcs *networkingv1al
func (c *MCSController) propagateService(ctx context.Context, mcs *networkingv1alpha1.MultiClusterService, svc *corev1.Service,
providerClusters, consumerClusters sets.Set[string]) error {
if err := c.claimMultiClusterServiceForService(ctx, svc, mcs); err != nil {
klog.Errorf("Failed to claim for Service(%s/%s), err is %v", svc.Namespace, svc.Name, err)
klog.ErrorS(err, "Failed to claim for Service", "namespace", svc.Namespace, "name", svc.Name)
return err
}
binding, err := c.buildResourceBinding(svc, mcs, providerClusters, consumerClusters)
if err != nil {
klog.Errorf("Failed to build ResourceBinding for Service(%s/%s), err is %v", svc.Namespace, svc.Name, err)
klog.ErrorS(err, "Failed to build ResourceBinding for Service", "namespace", svc.Namespace, "name", svc.Name)
return err
}
@ -417,17 +417,17 @@ func (c *MCSController) propagateService(ctx context.Context, mcs *networkingv1a
return nil
})
if err != nil {
klog.Errorf("Failed to create/update ResourceBinding(%s/%s):%v", bindingCopy.Namespace, bindingCopy.Name, err)
klog.ErrorS(err, "Failed to create/update ResourceBinding", "namespace", bindingCopy.Namespace, "name", bindingCopy.Name)
return err
}
switch operationResult {
case controllerutil.OperationResultCreated:
klog.Infof("Create ResourceBinding(%s/%s) successfully.", binding.GetNamespace(), binding.GetName())
klog.InfoS("Create ResourceBinding successfully.", "namespace", binding.GetNamespace(), "name", binding.GetName())
case controllerutil.OperationResultUpdated:
klog.Infof("Update ResourceBinding(%s/%s) successfully.", binding.GetNamespace(), binding.GetName())
klog.InfoS("Update ResourceBinding successfully.", "namespace", binding.GetNamespace(), "name", binding.GetName())
default:
klog.V(2).Infof("ResourceBinding(%s/%s) is up to date.", binding.GetNamespace(), binding.GetName())
klog.V(2).InfoS("ResourceBinding is up to date.", "namespace", binding.GetNamespace(), "name", binding.GetName())
}
return nil
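
The switch above keys its log message off the result of a create-or-update call. The same pattern with controller-runtime's controllerutil.CreateOrUpdate, using a ConfigMap and a made-up function name purely for illustration:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/klog/v2"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// ensureConfigMap creates or updates the object and picks the log message from the
// reported OperationResult, mirroring the switch in propagateService above.
func ensureConfigMap(ctx context.Context, c client.Client, namespace, name string, data map[string]string) error {
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name}}
	result, err := controllerutil.CreateOrUpdate(ctx, c, cm, func() error {
		cm.Data = data // desired state is applied inside the mutate callback
		return nil
	})
	if err != nil {
		klog.ErrorS(err, "Failed to create/update ConfigMap", "namespace", namespace, "name", name)
		return err
	}
	switch result {
	case controllerutil.OperationResultCreated:
		klog.InfoS("Created ConfigMap", "namespace", namespace, "name", name)
	case controllerutil.OperationResultUpdated:
		klog.InfoS("Updated ConfigMap", "namespace", namespace, "name", name)
	default:
		klog.V(2).InfoS("ConfigMap is up to date", "namespace", namespace, "name", name)
	}
	return nil
}
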
@ -500,7 +500,7 @@ func (c *MCSController) claimMultiClusterServiceForService(ctx context.Context,
svcCopy.Annotations[networkingv1alpha1.MultiClusterServiceNamespaceAnnotation] = mcs.Namespace
if err := c.Client.Update(ctx, svcCopy); err != nil {
klog.Errorf("Failed to update service(%s/%s):%v ", svc.Namespace, svc.Name, err)
klog.ErrorS(err, "Failed to update service", "namespace", svc.Namespace, "name", svc.Name)
return err
}
@ -608,7 +608,7 @@ func (c *MCSController) serviceHasCrossClusterMultiClusterService(svc *corev1.Se
if err := c.Client.Get(context.Background(),
types.NamespacedName{Namespace: svc.Namespace, Name: svc.Name}, mcs); err != nil {
if !apierrors.IsNotFound(err) {
klog.Errorf("Failed to get MultiClusterService(%s/%s):%v", svc.Namespace, svc.Name, err)
klog.ErrorS(err, "Failed to get MultiClusterService", "namespace", svc.Namespace, "name", svc.Name)
}
return false
}
@ -626,10 +626,10 @@ func (c *MCSController) clusterMapFunc() handler.MapFunc {
return nil
}
klog.V(4).Infof("Begin to sync mcs with cluster %s.", clusterName)
klog.V(4).InfoS("Begin to sync mcs with cluster", "cluster", clusterName)
mcsList := &networkingv1alpha1.MultiClusterServiceList{}
if err := c.Client.List(ctx, mcsList, &client.ListOptions{}); err != nil {
klog.Errorf("Failed to list MultiClusterService, error: %v", err)
klog.ErrorS(err, "Failed to list MultiClusterService")
return nil
}
@ -658,7 +658,7 @@ func (c *MCSController) needSyncMultiClusterService(mcs *networkingv1alpha1.Mult
providerClusters, err := helper.GetProviderClusters(c.Client, mcs)
if err != nil {
klog.Errorf("Failed to get provider clusters by MultiClusterService(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to get provider clusters by MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return false, err
}
if providerClusters.Has(clusterName) {
@ -667,7 +667,7 @@ func (c *MCSController) needSyncMultiClusterService(mcs *networkingv1alpha1.Mult
consumerClusters, err := helper.GetConsumerClusters(c.Client, mcs)
if err != nil {
klog.Errorf("Failed to get consumer clusters by MultiClusterService(%s/%s):%v", mcs.Namespace, mcs.Name, err)
klog.ErrorS(err, "Failed to get consumer clusters by MultiClusterService", "namespace", mcs.Namespace, "name", mcs.Name)
return false, err
}
if consumerClusters.Has(clusterName) {

View File

@ -125,7 +125,7 @@ type ClusterStatusController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will requeue the reconcile key after the duration.
func (c *ClusterStatusController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Syncing cluster status: %s", req.NamespacedName.Name)
klog.V(4).InfoS("Syncing cluster status", "cluster", req.NamespacedName.Name)
cluster := &clusterv1alpha1.Cluster{}
if err := c.Client.Get(ctx, req.NamespacedName, cluster); err != nil {
@ -151,9 +151,9 @@ func (c *ClusterStatusController) Reconcile(ctx context.Context, req controllerr
}
// start syncing status only when the finalizer is present on the given Cluster to
// avoid conflict with cluster controller.
// avoid conflict with the cluster controller.
if !controllerutil.ContainsFinalizer(cluster, util.ClusterControllerFinalizer) {
klog.V(2).Infof("Waiting finalizer present for member cluster: %s", cluster.Name)
klog.V(2).InfoS("Waiting finalizer present for member cluster", "cluster", cluster.Name)
return controllerruntime.Result{Requeue: true}, nil
}
@ -190,7 +190,7 @@ func (c *ClusterStatusController) syncClusterStatus(ctx context.Context, cluster
// create a ClusterClient for the given member cluster
clusterClient, err := c.ClusterClientSetFunc(cluster.Name, c.Client, c.ClusterClientOption)
if err != nil {
klog.Errorf("Failed to create a ClusterClient for the given member cluster: %v, err is : %v", cluster.Name, err)
klog.ErrorS(err, "Failed to create a ClusterClient for the given member cluster", "cluster", cluster.Name)
return setStatusCollectionFailedCondition(ctx, c.Client, cluster, fmt.Sprintf("failed to create a ClusterClient: %v", err))
}
@ -200,8 +200,8 @@ func (c *ClusterStatusController) syncClusterStatus(ctx context.Context, cluster
// cluster is offline after retry timeout, update cluster status immediately and return.
if !online && readyCondition.Status != metav1.ConditionTrue {
klog.V(2).Infof("Cluster(%s) still offline after %s, ensuring offline is set.",
cluster.Name, c.ClusterFailureThreshold.Duration)
klog.V(2).InfoS("Cluster still offline after ensuring offline is set",
"cluster", cluster.Name, "duration", c.ClusterFailureThreshold.Duration)
return updateStatusCondition(ctx, c.Client, cluster, *readyCondition)
}
@ -235,7 +235,7 @@ func (c *ClusterStatusController) setCurrentClusterStatus(clusterClient *util.Cl
var conditions []metav1.Condition
clusterVersion, err := getKubernetesVersion(clusterClient)
if err != nil {
klog.Errorf("Failed to get Kubernetes version for Cluster %s. Error: %v.", cluster.GetName(), err)
klog.ErrorS(err, "Failed to get Kubernetes version for Cluster", "cluster", cluster.GetName())
}
currentClusterStatus.KubernetesVersion = clusterVersion
@ -245,11 +245,11 @@ func (c *ClusterStatusController) setCurrentClusterStatus(clusterClient *util.Cl
if len(apiEnables) == 0 {
apiEnablementCondition = util.NewCondition(clusterv1alpha1.ClusterConditionCompleteAPIEnablements,
apiEnablementEmptyAPIEnablements, "collected empty APIEnablements from the cluster", metav1.ConditionFalse)
klog.Errorf("Failed to get any APIs installed in Cluster %s. Error: %v.", cluster.GetName(), err)
klog.ErrorS(err, "Failed to get any APIs installed in Cluster", "cluster", cluster.GetName())
} else if err != nil {
apiEnablementCondition = util.NewCondition(clusterv1alpha1.ClusterConditionCompleteAPIEnablements,
apiEnablementPartialAPIEnablements, fmt.Sprintf("might collect partial APIEnablements(%d) from the cluster", len(apiEnables)), metav1.ConditionFalse)
klog.Warningf("Maybe get partial(%d) APIs installed in Cluster %s. Error: %v.", len(apiEnables), cluster.GetName(), err)
klog.ErrorS(err, "Collected partial number of APIs installed in Cluster", "numApiEnablements", len(apiEnables), "cluster", cluster.GetName())
} else {
apiEnablementCondition = util.NewCondition(clusterv1alpha1.ClusterConditionCompleteAPIEnablements,
apiEnablementsComplete, "collected complete APIEnablements from the cluster", metav1.ConditionTrue)
@ -261,19 +261,19 @@ func (c *ClusterStatusController) setCurrentClusterStatus(clusterClient *util.Cl
// get or create informer for pods and nodes in member cluster
clusterInformerManager, err := c.buildInformerForCluster(clusterClient)
if err != nil {
klog.Errorf("Failed to get or create informer for Cluster %s. Error: %v.", cluster.GetName(), err)
klog.ErrorS(err, "Failed to get or create informer for Cluster", "cluster", cluster.GetName())
// in large-scale clusters, the timeout may occur.
// if clusterInformerManager fails to be built, should be returned, otherwise, it may cause a nil pointer
return nil, err
}
nodes, err := listNodes(clusterInformerManager)
if err != nil {
klog.Errorf("Failed to list nodes for Cluster %s. Error: %v.", cluster.GetName(), err)
klog.ErrorS(err, "Failed to list nodes for Cluster", "cluster", cluster.GetName())
}
pods, err := listPods(clusterInformerManager)
if err != nil {
klog.Errorf("Failed to list pods for Cluster %s. Error: %v.", cluster.GetName(), err)
klog.ErrorS(err, "Failed to list pods for Cluster", "cluster", cluster.GetName())
}
currentClusterStatus.NodeSummary = getNodeSummary(nodes)
currentClusterStatus.ResourceSummary = getResourceSummary(nodes, pods)
@ -296,7 +296,7 @@ func (c *ClusterStatusController) updateStatusIfNeeded(ctx context.Context, clus
meta.SetStatusCondition(&currentClusterStatus.Conditions, condition)
}
if !equality.Semantic.DeepEqual(cluster.Status, currentClusterStatus) {
klog.V(4).Infof("Start to update cluster status: %s", cluster.Name)
klog.V(4).InfoS("Start to update cluster status", "cluster", cluster.Name)
err := retry.RetryOnConflict(retry.DefaultRetry, func() (err error) {
_, err = helper.UpdateStatus(ctx, c.Client, cluster, func() error {
cluster.Status.KubernetesVersion = currentClusterStatus.KubernetesVersion
@ -311,7 +311,7 @@ func (c *ClusterStatusController) updateStatusIfNeeded(ctx context.Context, clus
return err
})
if err != nil {
klog.Errorf("Failed to update health status of the member cluster: %v, err is : %v", cluster.Name, err)
klog.ErrorS(err, "Failed to update health status of the member cluster", "cluster", cluster.Name)
return err
}
}
@ -320,7 +320,7 @@ func (c *ClusterStatusController) updateStatusIfNeeded(ctx context.Context, clus
}
func updateStatusCondition(ctx context.Context, c client.Client, cluster *clusterv1alpha1.Cluster, conditions ...metav1.Condition) error {
klog.V(4).Infof("Start to update cluster(%s) status condition", cluster.Name)
klog.V(4).InfoS("Start to update cluster status condition", "cluster", cluster.Name)
err := retry.RetryOnConflict(retry.DefaultRetry, func() (err error) {
_, err = helper.UpdateStatus(ctx, c, cluster, func() error {
for _, condition := range conditions {
@ -331,7 +331,7 @@ func updateStatusCondition(ctx context.Context, c client.Client, cluster *cluste
return err
})
if err != nil {
klog.Errorf("Failed to update status condition of the member cluster: %v, err is : %v", cluster.Name, err)
klog.ErrorS(err, "Failed to update status condition of the member cluster", "cluster", cluster.Name)
return err
}
return nil
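
updateStatusCondition wraps its status write in retry.RetryOnConflict so resource-version conflicts are retried with fresh data. The same shape in isolation, using a Node update and a made-up function name purely for illustration:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// markNodeSchedulable re-reads the object and re-applies the change whenever the API
// server reports a resource-version conflict, mirroring the retry loop above.
func markNodeSchedulable(ctx context.Context, c client.Client, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node := &corev1.Node{}
		if err := c.Get(ctx, types.NamespacedName{Name: name}, node); err != nil {
			return err
		}
		node.Spec.Unschedulable = false
		return c.Update(ctx, node)
	})
}
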
@ -344,7 +344,7 @@ func (c *ClusterStatusController) initializeGenericInformerManagerForCluster(clu
dynamicClient, err := c.ClusterDynamicClientSetFunc(clusterClient.ClusterName, c.Client, c.ClusterClientOption)
if err != nil {
klog.Errorf("Failed to build dynamic cluster client for cluster %s.", clusterClient.ClusterName)
klog.ErrorS(err, "Failed to build dynamic cluster client", "cluster", clusterClient.ClusterName)
return
}
c.GenericInformerManager.ForCluster(clusterClient.ClusterName, dynamicClient.DynamicClientSet, 0)
@ -366,7 +366,7 @@ func (c *ClusterStatusController) buildInformerForCluster(clusterClient *util.Cl
if !singleClusterInformerManager.IsInformerSynced(gvr) {
allSynced = false
if _, err := singleClusterInformerManager.Lister(gvr); err != nil {
klog.Errorf("Failed to get the lister for gvr %s: %v", gvr.String(), err)
klog.ErrorS(err, "Failed to get the lister for gvr", "gvr", gvr.String())
}
}
}
@ -389,7 +389,7 @@ func (c *ClusterStatusController) buildInformerForCluster(clusterClient *util.Cl
}
return nil
}(); err != nil {
klog.Errorf("Failed to sync cache for cluster: %s, error: %v", clusterClient.ClusterName, err)
klog.ErrorS(err, "Failed to sync cache for cluster", "cluster", clusterClient.ClusterName)
c.TypedInformerManager.Stop(clusterClient.ClusterName)
return nil, err
}
@ -422,12 +422,12 @@ func (c *ClusterStatusController) initLeaseController(cluster *clusterv1alpha1.C
// start syncing lease
go func() {
klog.Infof("Starting syncing lease for cluster: %s", cluster.Name)
klog.InfoS("Starting syncing lease for cluster", "cluster", cluster.Name)
// lease controller will keep running until the stop channel is closed(context is canceled)
clusterLeaseController.Run(ctx)
klog.Infof("Stop syncing lease for cluster: %s", cluster.Name)
klog.InfoS("Stop syncing lease for cluster", "cluster", cluster.Name)
c.ClusterLeaseControllers.Delete(cluster.Name) // ensure the cache is clean
}()
}
@ -440,12 +440,12 @@ func getClusterHealthStatus(clusterClient *util.ClusterClient) (online, healthy
}
if err != nil {
klog.Errorf("Failed to do cluster health check for cluster %v, err is : %v ", clusterClient.ClusterName, err)
klog.ErrorS(err, "Failed to do cluster health check for cluster", "cluster", clusterClient.ClusterName)
return false, false
}
if healthStatus != http.StatusOK {
klog.Infof("Member cluster %v isn't healthy", clusterClient.ClusterName)
klog.InfoS("Member cluster isn't healthy", "cluster", clusterClient.ClusterName)
return true, false
}
@ -627,7 +627,8 @@ func getNodeAvailable(allocatable corev1.ResourceList, podResources *util.Resour
// When too many pods have been created, scheduling will fail so that the allocating pods number may be huge.
// If allowedPodNumber is less than or equal to 0, we don't allow more pods to be created.
if allowedPodNumber <= 0 {
klog.Warningf("The number of schedulable Pods on the node is less than or equal to 0, we won't add the node to cluster resource models.")
klog.InfoS("The number of schedulable Pods on the node is less than or equal to 0, " +
"we won't add the node to cluster resource models.")
return nil
}
@ -647,7 +648,7 @@ func getAllocatableModelings(cluster *clusterv1alpha1.Cluster, nodes []*corev1.N
}
modelingSummary, err := modeling.InitSummary(cluster.Spec.ResourceModels)
if err != nil {
klog.Errorf("Failed to init cluster summary from cluster resource models for Cluster %s. Error: %v.", cluster.GetName(), err)
klog.ErrorS(err, "Failed to init cluster summary from cluster resource models for Cluster", "cluster", cluster.GetName())
return nil
}

View File

@ -58,7 +58,7 @@ type CRBStatusController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *CRBStatusController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling ClusterResourceBinding %s.", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling ClusterResourceBinding", "name", req.NamespacedName.Name)
binding := &workv1alpha2.ClusterResourceBinding{}
if err := c.Client.Get(ctx, req.NamespacedName, binding); err != nil {
@ -112,8 +112,7 @@ func (c *CRBStatusController) SetupWithManager(mgr controllerruntime.Manager) er
func (c *CRBStatusController) syncBindingStatus(ctx context.Context, binding *workv1alpha2.ClusterResourceBinding) error {
err := helper.AggregateClusterResourceBindingWorkStatus(ctx, c.Client, binding, c.EventRecorder)
if err != nil {
klog.Errorf("Failed to aggregate workStatues to clusterResourceBinding(%s), Error: %v",
binding.Name, err)
klog.ErrorS(err, "Failed to aggregate workStatues to clusterResourceBinding", "name", binding.Name)
return err
}

View File

@ -58,7 +58,7 @@ type RBStatusController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *RBStatusController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling ResourceBinding %s.", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling ResourceBinding", "namespace", req.Namespace, "name", req.Name)
binding := &workv1alpha2.ResourceBinding{}
if err := c.Client.Get(ctx, req.NamespacedName, binding); err != nil {
@ -114,8 +114,7 @@ func (c *RBStatusController) SetupWithManager(mgr controllerruntime.Manager) err
func (c *RBStatusController) syncBindingStatus(ctx context.Context, binding *workv1alpha2.ResourceBinding) error {
err := helper.AggregateResourceBindingWorkStatus(ctx, c.Client, binding, c.EventRecorder)
if err != nil {
klog.Errorf("Failed to aggregate workStatus to resourceBinding(%s/%s), Error: %v",
binding.Namespace, binding.Name, err)
klog.ErrorS(err, "Failed to aggregate workStatues to ResourceBinding", "namespace", binding.Namespace, "name", binding.Name)
return err
}

View File

@ -70,7 +70,7 @@ type WorkStatusController struct {
// ConcurrentWorkStatusSyncs is the number of Work status that are allowed to sync concurrently.
ConcurrentWorkStatusSyncs int
ObjectWatcher objectwatcher.ObjectWatcher
PredicateFunc predicate.Predicate
WorkPredicateFunc predicate.Predicate
ClusterDynamicClientSetFunc util.NewClusterDynamicClientSetFunc
ClusterClientOption *util.ClientOption
ClusterCacheSyncTimeout metav1.Duration
@ -82,7 +82,7 @@ type WorkStatusController struct {
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *WorkStatusController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling status of Work %s.", req.NamespacedName.String())
klog.V(4).InfoS("Reconciling status of Work.", "namespace", req.Namespace, "name", req.Name)
work := &workv1alpha1.Work{}
if err := c.Client.Get(ctx, req.NamespacedName, work); err != nil {
@ -104,19 +104,20 @@ func (c *WorkStatusController) Reconcile(ctx context.Context, req controllerrunt
clusterName, err := names.GetClusterName(work.GetNamespace())
if err != nil {
klog.Errorf("Failed to get member cluster name by %s. Error: %v.", work.GetNamespace(), err)
klog.ErrorS(err, "Failed to get member cluster name from Work.", "namespace", work.GetNamespace())
return controllerruntime.Result{}, err
}
cluster, err := util.GetCluster(c.Client, clusterName)
if err != nil {
klog.Errorf("Failed to the get given member cluster %s", clusterName)
klog.ErrorS(err, "Failed to get the given member cluster", "cluster", clusterName)
return controllerruntime.Result{}, err
}
if !util.IsClusterReady(&cluster.Status) {
klog.Errorf("Stop syncing the Work(%s/%s) to the cluster(%s) as not ready.", work.Namespace, work.Name, cluster.Name)
return controllerruntime.Result{}, fmt.Errorf("cluster(%s) not ready", cluster.Name)
err := fmt.Errorf("cluster(%s) not ready", cluster.Name)
klog.ErrorS(err, "Stop syncing the Work to the cluster as not ready.", "namespace", work.Namespace, "name", work.Name, "cluster", cluster.Name)
return controllerruntime.Result{}, err
}
return c.buildResourceInformers(cluster, work)
@ -127,7 +128,7 @@ func (c *WorkStatusController) Reconcile(ctx context.Context, req controllerrunt
func (c *WorkStatusController) buildResourceInformers(cluster *clusterv1alpha1.Cluster, work *workv1alpha1.Work) (controllerruntime.Result, error) {
err := c.registerInformersAndStart(cluster, work)
if err != nil {
klog.Errorf("Failed to register informer for Work %s/%s. Error: %v.", work.GetNamespace(), work.GetName(), err)
klog.ErrorS(err, "Failed to register informer for Work.", "namespace", work.GetNamespace(), "name", work.GetName())
return controllerruntime.Result{}, err
}
return controllerruntime.Result{}, nil
@ -171,13 +172,13 @@ func generateKey(obj interface{}) (util.QueueKey, error) {
func getClusterNameFromAnnotation(resource *unstructured.Unstructured) (string, error) {
workNamespace, exist := resource.GetAnnotations()[workv1alpha2.WorkNamespaceAnnotation]
if !exist {
klog.V(5).Infof("Ignore resource(kind=%s, %s/%s) which is not managed by Karmada.", resource.GetKind(), resource.GetNamespace(), resource.GetName())
klog.V(5).InfoS("Ignore resource which is not managed by Karmada.", "kind", resource.GetKind(), "namespace", resource.GetNamespace(), "name", resource.GetName())
return "", nil
}
cluster, err := names.GetClusterName(workNamespace)
if err != nil {
klog.Errorf("Failed to get cluster name from work namespace: %s, error: %v.", workNamespace, err)
klog.ErrorS(err, "Failed to get cluster name from Work.", "namespace", workNamespace)
return "", err
}
return cluster, nil
@ -188,8 +189,9 @@ func (c *WorkStatusController) syncWorkStatus(key util.QueueKey) error {
ctx := context.Background()
fedKey, ok := key.(keys.FederatedKey)
if !ok {
klog.Errorf("Failed to sync status as invalid key: %v", key)
return fmt.Errorf("invalid key")
err := fmt.Errorf("invalid key")
klog.ErrorS(err, "Failed to sync status", "key", key)
return err
}
observedObj, err := helper.GetObjectFromCache(c.RESTMapper, c.InformerManager, fedKey)
@ -204,7 +206,7 @@ func (c *WorkStatusController) syncWorkStatus(key util.QueueKey) error {
workNamespace, nsExist := observedAnnotations[workv1alpha2.WorkNamespaceAnnotation]
workName, nameExist := observedAnnotations[workv1alpha2.WorkNameAnnotation]
if !nsExist || !nameExist {
klog.Infof("Ignore object(%s) which not managed by Karmada.", fedKey.String())
klog.InfoS("Ignoring object which is not managed by Karmada.", "object", fedKey.String())
return nil
}
@ -215,7 +217,7 @@ func (c *WorkStatusController) syncWorkStatus(key util.QueueKey) error {
return nil
}
klog.Errorf("Failed to get Work(%s/%s) from cache: %v", workNamespace, workName, err)
klog.ErrorS(err, "Failed to get Work from cache", "namespace", workNamespace, "name", workName)
return err
}
@ -228,7 +230,7 @@ func (c *WorkStatusController) syncWorkStatus(key util.QueueKey) error {
return err
}
klog.Infof("Reflecting the resource(kind=%s, %s/%s) status to the Work(%s/%s).", observedObj.GetKind(), observedObj.GetNamespace(), observedObj.GetName(), workNamespace, workName)
klog.InfoS("Reflecting resource status to Work.", "kind", observedObj.GetKind(), "resource", observedObj.GetNamespace()+"/"+observedObj.GetName(), "namespace", workNamespace, "name", workName)
return c.reflectStatus(ctx, workObject, observedObj)
}
@ -244,7 +246,7 @@ func (c *WorkStatusController) updateResource(ctx context.Context, observedObj *
clusterName, err := names.GetClusterName(workObject.Namespace)
if err != nil {
klog.Errorf("Failed to get member cluster name: %v", err)
klog.ErrorS(err, "Failed to get member cluster name", "cluster", workObject.Namespace)
return err
}
@ -255,7 +257,7 @@ func (c *WorkStatusController) updateResource(ctx context.Context, observedObj *
operationResult, updateErr := c.ObjectWatcher.Update(ctx, clusterName, desiredObj, observedObj)
metrics.CountUpdateResourceToCluster(updateErr, desiredObj.GetAPIVersion(), desiredObj.GetKind(), clusterName, string(operationResult))
if updateErr != nil {
klog.Errorf("Updating %s failed: %v", fedKey.String(), updateErr)
klog.ErrorS(updateErr, "Updating resource failed", "resource", fedKey.String())
return updateErr
}
// We can't return even after a success updates, because that might lose the chance to collect status.
@ -282,7 +284,7 @@ func (c *WorkStatusController) handleDeleteEvent(ctx context.Context, key keys.F
return nil
}
klog.Errorf("Failed to get Work from cache: %v", err)
klog.ErrorS(err, "Failed to get Work from cache")
return err
}
@ -296,11 +298,6 @@ func (c *WorkStatusController) handleDeleteEvent(ctx context.Context, key keys.F
return nil
}
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
if util.GetLabelValue(work.Labels, util.PropagationInstruction) == util.PropagationInstructionSuppressed {
return nil
}
reCreateErr := c.recreateResourceIfNeeded(ctx, work, key)
if reCreateErr != nil {
c.updateAppliedCondition(ctx, work, metav1.ConditionFalse, "ReCreateFailed", reCreateErr.Error())
@ -321,7 +318,7 @@ func (c *WorkStatusController) recreateResourceIfNeeded(ctx context.Context, wor
if reflect.DeepEqual(desiredGVK, workloadKey.GroupVersionKind()) &&
manifest.GetNamespace() == workloadKey.Namespace &&
manifest.GetName() == workloadKey.Name {
klog.Infof("Recreating resource(%s).", workloadKey.String())
klog.InfoS("Recreating resource.", "resource", workloadKey.String())
err := c.ObjectWatcher.Create(ctx, workloadKey.Cluster, manifest)
metrics.CountCreateResourceToCluster(err, workloadKey.GroupVersion().String(), workloadKey.Kind, workloadKey.Cluster, true)
if err != nil {
@ -354,7 +351,7 @@ func (c *WorkStatusController) updateAppliedCondition(ctx context.Context, work
})
if err != nil {
klog.Errorf("Failed to update condition of work %s/%s: %s", work.Namespace, work.Name, err.Error())
klog.ErrorS(err, "Failed to update condition of work.", "namespace", work.Namespace, "name", work.Name)
}
}
@ -362,8 +359,7 @@ func (c *WorkStatusController) updateAppliedCondition(ctx context.Context, work
func (c *WorkStatusController) reflectStatus(ctx context.Context, work *workv1alpha1.Work, clusterObj *unstructured.Unstructured) error {
statusRaw, err := c.ResourceInterpreter.ReflectStatus(clusterObj)
if err != nil {
klog.Errorf("Failed to reflect status for object(%s/%s/%s) with resourceInterpreter, err: %v",
clusterObj.GetKind(), clusterObj.GetNamespace(), clusterObj.GetName(), err)
klog.ErrorS(err, "Failed to reflect status for object with resourceInterpreter", "kind", clusterObj.GetKind(), "resource", clusterObj.GetNamespace()+"/"+clusterObj.GetName())
c.EventRecorder.Eventf(work, corev1.EventTypeWarning, events.EventReasonReflectStatusFailed, "Reflect status for object(%s/%s/%s) failed, err: %s.", clusterObj.GetKind(), clusterObj.GetNamespace(), clusterObj.GetName(), err.Error())
return err
}
@ -394,7 +390,7 @@ func (c *WorkStatusController) reflectStatus(ctx context.Context, work *workv1al
func (c *WorkStatusController) interpretHealth(clusterObj *unstructured.Unstructured, work *workv1alpha1.Work) workv1alpha1.ResourceHealth {
// For kind that doesn't have health check, we treat it as healthy.
if !c.ResourceInterpreter.HookEnabled(clusterObj.GroupVersionKind(), configv1alpha1.InterpreterOperationInterpretHealth) {
klog.V(5).Infof("skipping health assessment for object: %v %s/%s as missing customization and will treat it as healthy.", clusterObj.GroupVersionKind(), clusterObj.GetNamespace(), clusterObj.GetName())
klog.V(5).InfoS("skipping health assessment for object as customization missing; will treat it as healthy.", "kind", clusterObj.GroupVersionKind(), "resource", clusterObj.GetNamespace()+"/"+clusterObj.GetName())
return workv1alpha1.ResourceHealthy
}
@ -501,7 +497,7 @@ func (c *WorkStatusController) registerInformersAndStart(cluster *clusterv1alpha
}
return nil
}(); err != nil {
klog.Errorf("Failed to sync cache for cluster: %s, error: %v", cluster.Name, err)
klog.ErrorS(err, "Failed to sync cache for cluster", "cluster", cluster.Name)
c.InformerManager.Stop(cluster.Name)
return err
}
@ -516,12 +512,12 @@ func (c *WorkStatusController) getGVRsFromWork(work *workv1alpha1.Work) (map[sch
workload := &unstructured.Unstructured{}
err := workload.UnmarshalJSON(manifest.Raw)
if err != nil {
klog.Errorf("Failed to unmarshal workload. Error: %v.", err)
klog.ErrorS(err, "Failed to unmarshal workload.")
return nil, err
}
gvr, err := restmapper.GetGroupVersionResource(c.RESTMapper, workload.GroupVersionKind())
if err != nil {
klog.Errorf("Failed to get GVR from GVK for resource %s/%s. Error: %v.", workload.GetNamespace(), workload.GetName(), err)
klog.ErrorS(err, "Failed to get GVR from GVK for resource.", "namespace", workload.GetNamespace(), "name", workload.GetName())
return nil, err
}
gvrTargets[gvr] = true
@ -538,7 +534,7 @@ func (c *WorkStatusController) getSingleClusterManager(cluster *clusterv1alpha1.
if singleClusterInformerManager == nil {
dynamicClusterClient, err := c.ClusterDynamicClientSetFunc(cluster.Name, c.Client, c.ClusterClientOption)
if err != nil {
klog.Errorf("Failed to build dynamic cluster client for cluster %s.", cluster.Name)
klog.ErrorS(err, "Failed to build dynamic cluster client for cluster.", "cluster", cluster.Name)
return nil, err
}
singleClusterInformerManager = c.InformerManager.ForCluster(dynamicClusterClient.ClusterName, dynamicClusterClient.DynamicClientSet, 0)
@ -548,18 +544,24 @@ func (c *WorkStatusController) getSingleClusterManager(cluster *clusterv1alpha1.
// SetupWithManager creates a controller and registers it with the controller manager.
func (c *WorkStatusController) SetupWithManager(mgr controllerruntime.Manager) error {
return controllerruntime.NewControllerManagedBy(mgr).
Named(WorkStatusControllerName).
For(&workv1alpha1.Work{}, builder.WithPredicates(c.PredicateFunc)).
ctrlBuilder := controllerruntime.NewControllerManagedBy(mgr).Named(WorkStatusControllerName).
WithOptions(controller.Options{
RateLimiter: ratelimiterflag.DefaultControllerRateLimiter[controllerruntime.Request](c.RateLimiterOptions),
}).Complete(c)
})
if c.WorkPredicateFunc != nil {
ctrlBuilder.For(&workv1alpha1.Work{}, builder.WithPredicates(c.WorkPredicateFunc))
} else {
ctrlBuilder.For(&workv1alpha1.Work{})
}
return ctrlBuilder.Complete(c)
}
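The builder now attaches the Work predicate only when one is configured, so callers that want every Work event can simply leave WorkPredicateFunc nil. A hedged sketch of how that wiring might look; the import paths, the field set, and the setupWorkStatusController helper are assumptions inferred from the tests in this diff, not code from the repository:

package setup

import (
	controllerruntime "sigs.k8s.io/controller-runtime"

	workstatus "github.com/karmada-io/karmada/pkg/controllers/status"
	"github.com/karmada-io/karmada/pkg/util/fedinformer/genericmanager"
	"github.com/karmada-io/karmada/pkg/util/helper"
)

// setupWorkStatusController shows one plausible wiring of the optional predicate.
// The real setup lives in the karmada-controller-manager and karmada-agent
// entrypoints and configures more fields than shown here.
func setupWorkStatusController(mgr controllerruntime.Manager) error {
	c := &workstatus.WorkStatusController{
		Client:          mgr.GetClient(),
		InformerManager: genericmanager.GetInstance(),
		// Control plane: only react to Works destined for push-mode clusters.
		// An agent would instead scope the predicate to its own cluster (see
		// helper.NewClusterPredicateOnAgent in the tests), or leave it nil.
		WorkPredicateFunc: helper.WorkWithinPushClusterPredicate(mgr),
	}
	return c.SetupWithManager(mgr)
}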
func (c *WorkStatusController) eventf(object *unstructured.Unstructured, eventType, reason, messageFmt string, args ...interface{}) {
ref, err := util.GenEventRef(object)
if err != nil {
klog.Errorf("Ignore event(%s) as failing to build event reference for: kind=%s, %s due to %v", reason, object.GetKind(), klog.KObj(object), err)
klog.ErrorS(err, "Ignoring event. Failed to build event reference.", "reason", reason, "kind", object.GetKind(), "reference", klog.KObj(object))
return
}
c.EventRecorder.Eventf(ref, eventType, reason, messageFmt, args...)


@ -107,7 +107,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
Data: map[string][]byte{clusterv1alpha1.SecretTokenKey: []byte("token"), clusterv1alpha1.SecretCADataKey: testCA},
}).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSet,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -135,7 +135,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
c: WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -163,7 +163,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
c: WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -192,7 +192,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
c: WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -220,7 +220,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
c: WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -248,7 +248,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
c: WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster1", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -276,7 +276,7 @@ func TestWorkStatusController_Reconcile(t *testing.T) {
c: WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionFalse)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -335,7 +335,7 @@ func TestWorkStatusController_getEventHandler(t *testing.T) {
c := WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionFalse)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -351,7 +351,7 @@ func TestWorkStatusController_RunWorkQueue(_ *testing.T) {
c := WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionFalse)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -740,7 +740,7 @@ func newWorkStatusController(cluster *clusterv1alpha1.Cluster, dynamicClientSets
c := WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(cluster).WithStatusSubresource().Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -874,7 +874,7 @@ func TestWorkStatusController_recreateResourceIfNeeded(t *testing.T) {
c := WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -921,7 +921,7 @@ func TestWorkStatusController_buildStatusIdentifier(t *testing.T) {
c := WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},
@ -982,7 +982,7 @@ func TestWorkStatusController_mergeStatus(t *testing.T) {
c := WorkStatusController{
Client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(newCluster("cluster", clusterv1alpha1.ClusterConditionReady, metav1.ConditionTrue)).Build(),
InformerManager: genericmanager.GetInstance(),
PredicateFunc: helper.NewClusterPredicateOnAgent("test"),
WorkPredicateFunc: helper.NewClusterPredicateOnAgent("test"),
ClusterDynamicClientSetFunc: util.NewClusterDynamicClientSetForAgent,
ClusterCacheSyncTimeout: metav1.Duration{},
RateLimiterOptions: ratelimiterflag.Options{},


@ -76,13 +76,13 @@ func (c *RebalancerController) SetupWithManager(mgr controllerruntime.Manager) e
// The Controller will requeue the Request to be processed again if an error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (c *RebalancerController) Reconcile(ctx context.Context, req controllerruntime.Request) (controllerruntime.Result, error) {
klog.V(4).Infof("Reconciling for WorkloadRebalancer %s", req.Name)
klog.V(4).InfoS("Reconciling for WorkloadRebalancer %s", req.Name)
// 1. get latest WorkloadRebalancer
rebalancer := &appsv1alpha1.WorkloadRebalancer{}
if err := c.Client.Get(ctx, req.NamespacedName, rebalancer); err != nil {
if apierrors.IsNotFound(err) {
klog.Infof("no need to reconcile WorkloadRebalancer for it not found")
klog.InfoS("no need to reconcile WorkloadRebalancer for it not found")
return controllerruntime.Result{}, nil
}
return controllerruntime.Result{}, err
@ -203,7 +203,7 @@ func (c *RebalancerController) triggerReschedule(ctx context.Context, metadata m
if resource.Workload.Namespace != "" {
binding := &workv1alpha2.ResourceBinding{}
if err := c.Client.Get(ctx, client.ObjectKey{Namespace: resource.Workload.Namespace, Name: bindingName}, binding); err != nil {
klog.Errorf("get binding for resource %+v failed: %+v", resource.Workload, err)
klog.ErrorS(err, "get binding for resource failed", "resource", resource.Workload)
c.recordAndCountRebalancerFailed(&newStatus.ObservedWorkloads[i], &retryNum, err)
continue
}
@ -212,7 +212,7 @@ func (c *RebalancerController) triggerReschedule(ctx context.Context, metadata m
binding.Spec.RescheduleTriggeredAt = &metadata.CreationTimestamp
if err := c.Client.Update(ctx, binding); err != nil {
klog.Errorf("update binding for resource %+v failed: %+v", resource.Workload, err)
klog.ErrorS(err, "update binding for resource failed", "resource", resource.Workload)
c.recordAndCountRebalancerFailed(&newStatus.ObservedWorkloads[i], &retryNum, err)
continue
}
@ -221,7 +221,7 @@ func (c *RebalancerController) triggerReschedule(ctx context.Context, metadata m
} else {
clusterbinding := &workv1alpha2.ClusterResourceBinding{}
if err := c.Client.Get(ctx, client.ObjectKey{Name: bindingName}, clusterbinding); err != nil {
klog.Errorf("get cluster binding for resource %+v failed: %+v", resource.Workload, err)
klog.ErrorS(err, "get cluster binding for resource failed", "resource", resource.Workload)
c.recordAndCountRebalancerFailed(&newStatus.ObservedWorkloads[i], &retryNum, err)
continue
}
@ -230,7 +230,7 @@ func (c *RebalancerController) triggerReschedule(ctx context.Context, metadata m
clusterbinding.Spec.RescheduleTriggeredAt = &metadata.CreationTimestamp
if err := c.Client.Update(ctx, clusterbinding); err != nil {
klog.Errorf("update cluster binding for resource %+v failed: %+v", resource.Workload, err)
klog.ErrorS(err, "update cluster binding for resource failed", "resource", resource.Workload)
c.recordAndCountRebalancerFailed(&newStatus.ObservedWorkloads[i], &retryNum, err)
continue
}
@ -239,8 +239,8 @@ func (c *RebalancerController) triggerReschedule(ctx context.Context, metadata m
}
}
klog.V(4).Infof("Finish handling WorkloadRebalancer (%s), %d/%d resource success in all, while %d resource need retry",
metadata.Name, successNum, len(newStatus.ObservedWorkloads), retryNum)
klog.V(4).InfoS(fmt.Sprintf("Finish handling WorkloadRebalancer, %d/%d resource success in all, while %d resource need retry",
successNum, len(newStatus.ObservedWorkloads), retryNum), "workloadRebalancer", metadata.Name, "successNum", successNum, "totalNum", len(newStatus.ObservedWorkloads), "retryNum", retryNum)
return newStatus, retryNum
}
@ -269,26 +269,26 @@ func (c *RebalancerController) updateWorkloadRebalancerStatus(ctx context.Contex
modifiedRebalancer.Status = *newStatus
return retry.RetryOnConflict(retry.DefaultRetry, func() (err error) {
klog.V(4).Infof("Start to patch WorkloadRebalancer(%s) status", rebalancer.Name)
klog.V(4).InfoS("Start to patch WorkloadRebalancer status", "workloadRebalancer", rebalancer.Name)
if err = c.Client.Status().Patch(ctx, modifiedRebalancer, client.MergeFrom(rebalancer)); err != nil {
klog.Errorf("Failed to patch WorkloadRebalancer (%s) status, err: %+v", rebalancer.Name, err)
klog.ErrorS(err, "Failed to patch WorkloadRebalancer status", "workloadRebalancer", rebalancer.Name)
return err
}
klog.V(4).Infof("Patch WorkloadRebalancer(%s) successful", rebalancer.Name)
klog.V(4).InfoS("Patch WorkloadRebalancer successful", "workloadRebalancer", rebalancer.Name)
return nil
})
}
func (c *RebalancerController) deleteWorkloadRebalancer(ctx context.Context, rebalancer *appsv1alpha1.WorkloadRebalancer) error {
klog.V(4).Infof("Start to clean up WorkloadRebalancer(%s)", rebalancer.Name)
klog.V(4).InfoS("Start to clean up WorkloadRebalancer", "workloadRebalancer", rebalancer.Name)
options := &client.DeleteOptions{Preconditions: &metav1.Preconditions{ResourceVersion: &rebalancer.ResourceVersion}}
if err := c.Client.Delete(ctx, rebalancer, options); err != nil {
klog.Errorf("Cleaning up WorkloadRebalancer(%s) failed: %+v.", rebalancer.Name, err)
klog.ErrorS(err, "Cleaning up WorkloadRebalancer failed", "workloadRebalancer", rebalancer.Name)
return err
}
klog.V(4).Infof("Cleaning up WorkloadRebalancer(%s) successful", rebalancer.Name)
klog.V(4).InfoS("Cleaning up WorkloadRebalancer successful", "workloadRebalancer", rebalancer.Name)
return nil
}
@ -296,6 +296,6 @@ func timeLeft(r *appsv1alpha1.WorkloadRebalancer) time.Duration {
expireAt := r.Status.FinishTime.Add(time.Duration(*r.Spec.TTLSecondsAfterFinished) * time.Second)
remainingTTL := time.Until(expireAt)
klog.V(4).Infof("Found Rebalancer(%s) finished at: %+v, remainingTTL: %+v", r.Name, r.Status.FinishTime.UTC(), remainingTTL)
klog.V(4).InfoS("Check remaining TTL", "workloadRebalancer", r.Name, "FinishTime", r.Status.FinishTime.UTC(), "remainingTTL", remainingTTL)
return remainingTTL
}
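For context, the remaining TTL computed here is what normally drives a delayed requeue before the finished WorkloadRebalancer is cleaned up. A small, hypothetical helper restating that pattern; the surrounding Reconcile logic is assumed and not part of this diff:

package workloadrebalancer

import (
	"time"

	controllerruntime "sigs.k8s.io/controller-runtime"
)

// resultForTTL turns a remaining TTL into a reconcile result: requeue after the
// remaining duration while the TTL is still running, or return an empty result
// so the caller can delete the finished WorkloadRebalancer right away.
func resultForTTL(remainingTTL time.Duration) controllerruntime.Result {
	if remainingTTL > 0 {
		return controllerruntime.Result{RequeueAfter: remainingTTL}
	}
	return controllerruntime.Result{}
}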


@ -0,0 +1,156 @@
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
name: declarative-configuration-sidecarset
spec:
target:
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
customizations:
statusReflection:
luaScript: |
function ReflectStatus(observedObj)
if observedObj.status == nil then
return {}
end
return {
matchedPods = observedObj.status.matchedPods or 0,
updatedPods = observedObj.status.updatedPods or 0,
readyPods = observedObj.status.readyPods or 0
}
end
replicaResource:
luaScript: |
function GetReplicas(obj)
-- SidecarSet doesn't manage replicas directly, return 0
return 0
end
statusAggregation:
luaScript: |
function AggregateStatus(desiredObj, statusItems)
local matchedPods = 0
local updatedPods = 0
local readyPods = 0
for i = 1, #statusItems do
local status = statusItems[i].status or {}
matchedPods = matchedPods + (status.matchedPods or 0)
updatedPods = updatedPods + (status.updatedPods or 0)
readyPods = readyPods + (status.readyPods or 0)
end
return {
apiVersion = "apps.kruise.io/v1alpha1",
kind = "SidecarSet",
metadata = desiredObj.metadata,
status = {
matchedPods = matchedPods,
updatedPods = updatedPods,
readyPods = readyPods
}
}
end
retention:
luaScript: |
function Retain(desiredObj, observedObj)
-- No specific retention logic needed as Karmada handles status preservation
return desiredObj
end
healthInterpretation:
luaScript: |
function InterpretHealth(observedObj)
if observedObj.status == nil then
return false
end
local matchedPods = observedObj.status.matchedPods or 0
local updatedPods = observedObj.status.updatedPods or 0
-- If no pods are matched, consider it healthy (nothing to update)
if matchedPods == 0 then
return true
end
-- A SidecarSet is healthy if all matched pods have been updated
return updatedPods == matchedPods
end
dependencyInterpretation:
luaScript: |
function GetDependencies(desiredObj)
local dependencies = {}
if not desiredObj.spec then
return dependencies
end
-- Helper function to add a dependency
local function addDependency(kind, name, namespace)
table.insert(dependencies, {
apiVersion = "v1",
kind = kind,
name = name,
namespace = namespace or (desiredObj.metadata and desiredObj.metadata.namespace)
})
end
-- Check for references in containers
if desiredObj.spec.containers then
for i = 1, #desiredObj.spec.containers do
local container = desiredObj.spec.containers[i]
-- Check environment variables
if container.env then
for j = 1, #container.env do
local env = container.env[j]
if env.valueFrom then
if env.valueFrom.configMapKeyRef then
addDependency("ConfigMap", env.valueFrom.configMapKeyRef.name)
end
if env.valueFrom.secretKeyRef then
addDependency("Secret", env.valueFrom.secretKeyRef.name)
end
end
end
end
-- Check envFrom
if container.envFrom then
for j = 1, #container.envFrom do
local envFrom = container.envFrom[j]
if envFrom.configMapRef then
addDependency("ConfigMap", envFrom.configMapRef.name)
end
if envFrom.secretRef then
addDependency("Secret", envFrom.secretRef.name)
end
end
end
end
end
-- Check for volume references
if desiredObj.spec.volumes then
for i = 1, #desiredObj.spec.volumes do
local volume = desiredObj.spec.volumes[i]
-- Standard volume types
if volume.configMap then
addDependency("ConfigMap", volume.configMap.name)
end
if volume.secret then
addDependency("Secret", volume.secret.secretName)
end
-- Projected volumes
if volume.projected and volume.projected.sources then
for j = 1, #volume.projected.sources do
local source = volume.projected.sources[j]
if source.configMap then
addDependency("ConfigMap", source.configMap.name)
end
if source.secret then
addDependency("Secret", source.secret.name)
end
end
end
end
end
return dependencies
end


@ -0,0 +1,12 @@
tests:
- desiredInputPath: testdata/desired-sidecarset-nginx.yaml
statusInputPath: testdata/status-file.yaml
operation: AggregateStatus
- desiredInputPath: testdata/desired-sidecarset-nginx.yaml
operation: InterpretDependency
- observedInputPath: testdata/observed-sidecarset-nginx.yaml
operation: InterpretReplica
- observedInputPath: testdata/observed-sidecarset-nginx.yaml
operation: InterpretHealth
- observedInputPath: testdata/observed-sidecarset-nginx.yaml
operation: InterpretStatus


@ -0,0 +1,35 @@
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
labels:
app: sample
name: sample-sidecarset
namespace: test-sidecarset
generation: 1
spec:
selector:
matchLabels:
app: sample
test: sidecarset
containers:
- name: sidecar
image: busybox:latest
env:
- name: CONFIG_DATA
valueFrom:
configMapKeyRef:
name: sidecar-config
key: config
- name: SECRET_DATA
valueFrom:
secretKeyRef:
name: sidecar-secret
key: secret
injectionStrategy: BeforeAppContainer
volumes:
- name: configmap
configMap:
name: sidecar-config
- name: secret
secret:
secretName: sidecar-secret


@ -0,0 +1,39 @@
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
labels:
app: sample
name: sample-sidecarset
namespace: test-sidecarset
generation: 1
spec:
selector:
matchLabels:
app: sample
test: sidecarset
containers:
- name: sidecar
image: busybox:latest
env:
- name: CONFIG_DATA
valueFrom:
configMapKeyRef:
name: sidecar-config
key: config
- name: SECRET_DATA
valueFrom:
secretKeyRef:
name: sidecar-secret
key: secret
injectionStrategy: BeforeAppContainer
volumes:
- name: configmap
configMap:
name: sidecar-config
- name: secret
secret:
secretName: sidecar-secret
status:
matchedPods: 3
updatedPods: 3
readyPods: 3


@ -0,0 +1,21 @@
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
name: sample-sidecarset
namespace: test-sidecarset
clusterName: member1
status:
matchedPods: 2
updatedPods: 2
readyPods: 2
---
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
metadata:
name: sample-sidecarset
namespace: test-sidecarset
clusterName: member2
status:
matchedPods: 1
updatedPods: 1
readyPods: 1


@ -0,0 +1,171 @@
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
name: declarative-configuration-uniteddeployment
spec:
target:
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
customizations:
replicaResource:
luaScript: >
local kube = require("kube")
function GetReplicas(obj)
local replica = obj.spec.replicas
local requirement = kube.accuratePodRequirements(obj.spec.template)
return replica, requirement
end
replicaRevision:
luaScript: >
function ReviseReplica(obj, desiredReplica)
obj.spec.replicas = desiredReplica
return obj
end
statusAggregation:
luaScript: >
function AggregateStatus(desiredObj, statusItems)
if desiredObj.status == nil then
desiredObj.status = {}
end
if desiredObj.metadata.generation == nil then
desiredObj.metadata.generation = 0
end
if desiredObj.status.observedGeneration == nil then
desiredObj.status.observedGeneration = 0
end
-- Initialize status fields if status does not exist
-- If the UnitedDeployment is not spread to any cluster,
-- its status also should be aggregated
if statusItems == nil then
desiredObj.status.observedGeneration = desiredObj.metadata.generation
desiredObj.status.replicas = 0
desiredObj.status.readyReplicas = 0
desiredObj.status.updatedReplicas = 0
desiredObj.status.availableReplicas = 0
desiredObj.status.unavailableReplicas = 0
return desiredObj
end
local generation = desiredObj.metadata.generation
local observedGeneration = desiredObj.status.observedGeneration
local replicas = 0
local updatedReplicas = 0
local readyReplicas = 0
local availableReplicas = 0
local unavailableReplicas = 0
-- Use a map to merge conditions by type
local conditionsMap = {}
-- Count all members that their status is updated to the latest generation
local observedResourceTemplateGenerationCount = 0
for i = 1, #statusItems do
local itemStatus = statusItems[i].status
if itemStatus ~= nil then
replicas = replicas + (itemStatus.replicas or 0)
updatedReplicas = updatedReplicas + (itemStatus.updatedReplicas or 0)
readyReplicas = readyReplicas + (itemStatus.readyReplicas or 0)
availableReplicas = availableReplicas + (itemStatus.availableReplicas or 0)
unavailableReplicas = unavailableReplicas + (itemStatus.unavailableReplicas or 0)
-- Merge conditions from all clusters using a map
if itemStatus.conditions ~= nil then
for _, condition in ipairs(itemStatus.conditions) do
conditionsMap[condition.type] = condition
end
end
-- Check if the member's status is updated to the latest generation
local resourceTemplateGeneration = itemStatus.resourceTemplateGeneration or 0
local memberGeneration = itemStatus.generation or 0
local memberObservedGeneration = itemStatus.observedGeneration or 0
if resourceTemplateGeneration == generation and memberGeneration == memberObservedGeneration then
observedResourceTemplateGenerationCount = observedResourceTemplateGenerationCount + 1
end
end
end
-- Convert conditionsMap back to a list
local conditions = {}
for _, condition in pairs(conditionsMap) do
table.insert(conditions, condition)
end
-- Update the observed generation based on the observedResourceTemplateGenerationCount
if observedResourceTemplateGenerationCount == #statusItems then
desiredObj.status.observedGeneration = generation
else
desiredObj.status.observedGeneration = observedGeneration
end
desiredObj.status.replicas = replicas
desiredObj.status.updatedReplicas = updatedReplicas
desiredObj.status.readyReplicas = readyReplicas
desiredObj.status.availableReplicas = availableReplicas
desiredObj.status.unavailableReplicas = unavailableReplicas
if #conditions > 0 then
desiredObj.status.conditions = conditions
end
return desiredObj
end
statusReflection:
luaScript: >
function ReflectStatus(observedObj)
local status = {}
if observedObj == nil or observedObj.status == nil then
return status
end
status.replicas = observedObj.status.replicas
status.updatedReplicas = observedObj.status.updatedReplicas
status.readyReplicas = observedObj.status.readyReplicas
status.availableReplicas = observedObj.status.availableReplicas
status.unavailableReplicas = observedObj.status.unavailableReplicas
status.observedGeneration = observedObj.status.observedGeneration
status.conditions = observedObj.status.conditions
-- handle member resource generation report
if observedObj.metadata ~= nil then
status.generation = observedObj.metadata.generation
-- handle resource template generation report
if observedObj.metadata.annotations ~= nil then
local resourceTemplateGeneration = tonumber(observedObj.metadata.annotations["resourcetemplate.karmada.io/generation"])
if resourceTemplateGeneration ~= nil then
status.resourceTemplateGeneration = resourceTemplateGeneration
end
end
end
return status
end
healthInterpretation:
luaScript: >
function InterpretHealth(observedObj)
if observedObj == nil or observedObj.status == nil or observedObj.metadata == nil or observedObj.spec == nil then
return false
end
if observedObj.status.observedGeneration ~= observedObj.metadata.generation then
return false
end
if observedObj.spec.replicas ~= nil then
if observedObj.status.updatedReplicas < observedObj.spec.replicas then
return false
end
end
if observedObj.status.availableReplicas < observedObj.status.updatedReplicas then
return false
end
return true
end
dependencyInterpretation:
luaScript: >
local kube = require("kube")
function GetDependencies(desiredObj)
local refs = kube.getPodDependencies(desiredObj.spec.template, desiredObj.metadata.namespace)
return refs
end


@ -0,0 +1,15 @@
tests:
- desiredInputPath: testdata/desired-uniteddeployment.yaml
statusInputPath: testdata/status-file.yaml
operation: AggregateStatus
- desiredInputPath: testdata/desired-uniteddeployment.yaml
operation: InterpretDependency
- observedInputPath: testdata/observed-uniteddeployment.yaml
operation: InterpretReplica
- observedInputPath: testdata/observed-uniteddeployment.yaml
operation: ReviseReplica
desiredReplicas: 1
- observedInputPath: testdata/observed-uniteddeployment.yaml
operation: InterpretHealth
- observedInputPath: testdata/observed-uniteddeployment.yaml
operation: InterpretStatus


@ -0,0 +1,40 @@
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
name: sample-uniteddeployment
namespace: test-namespace
generation: 1
spec:
replicas: 3
selector:
matchLabels:
app: sample
template:
metadata:
labels:
app: sample
spec:
containers:
- name: nginx
image: nginx:latest
env:
- name: CONFIG_DATA
valueFrom:
configMapKeyRef:
name: app-config
key: config
- name: SECRET_DATA
valueFrom:
secretKeyRef:
name: app-secret
key: token
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: app-config
topologySpread:
- topologyKey: kubernetes.io/hostname
maxSkew: 1


@ -0,0 +1,60 @@
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
name: sample-uniteddeployment
namespace: test-namespace
generation: 1
resourceVersion: "12345"
uid: "a1b2c3d4-5678-90ef-ghij-klmnopqrstuv"
spec:
replicas: 3
selector:
matchLabels:
app: sample
template:
metadata:
labels:
app: sample
spec:
containers:
- name: nginx
image: nginx:latest
env:
- name: CONFIG_DATA
valueFrom:
configMapKeyRef:
name: app-config
key: config
- name: SECRET_DATA
valueFrom:
secretKeyRef:
name: app-secret
key: token
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: app-config
topologySpread:
- topologyKey: kubernetes.io/hostname
maxSkew: 1
status:
replicas: 3
readyReplicas: 3
updatedReplicas: 3
availableReplicas: 3
collisionCount: 0
observedGeneration: 1
conditions:
- type: Available
status: "True"
lastTransitionTime: "2023-01-01T00:00:00Z"
reason: MinimumReplicasAvailable
message: Deployment has minimum availability.
- type: Progressing
status: "True"
lastTransitionTime: "2023-01-01T00:00:00Z"
reason: NewReplicaSetAvailable
message: ReplicaSet has successfully progressed.


@ -0,0 +1,27 @@
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
name: sample-uniteddeployment
namespace: test-namespace
clusterName: member1
status:
replicas: 2
readyReplicas: 2
updatedReplicas: 2
availableReplicas: 2
collisionCount: 0
observedGeneration: 1
---
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
name: sample-uniteddeployment
namespace: test-namespace
clusterName: member2
status:
replicas: 1
readyReplicas: 1
updatedReplicas: 1
availableReplicas: 1
collisionCount: 0
observedGeneration: 1


@ -40,18 +40,6 @@ const (
// This label indicates the name.
MultiClusterServiceNameLabel = "multiclusterservice.karmada.io/name"
// PropagationInstruction is used to mark a resource(like Work) propagation instruction.
// Valid values includes:
// - suppressed: indicates that the resource should not be propagated.
//
// Note: This instruction is intended to set on Work objects to indicate the Work should be ignored by
// execution controller. The instruction maybe deprecated once we extend the Work API and no other scenario want this.
//
// Deprecated: This label has been deprecated since v1.14, and will be replaced by the filed .spec.suspendDispatching of
// Work API. This label should only be used internally by Karmada, but for compatibility, the deletion will be postponed
// to release 1.15.
PropagationInstruction = "propagation.karmada.io/instruction"
// FederatedResourceQuotaNamespaceLabel is added to Work to specify associated FederatedResourceQuota's namespace.
FederatedResourceQuotaNamespaceLabel = "federatedresourcequota.karmada.io/namespace"
@ -90,9 +78,6 @@ const (
// RetainReplicasValue is an optional value of RetainReplicasLabel, indicating retain
RetainReplicasValue = "true"
// PropagationInstructionSuppressed indicates that the resource should not be propagated.
PropagationInstructionSuppressed = "suppressed"
)
// Define annotations used by karmada system.
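The deprecated label's job is taken over by the Work API's .spec.suspendDispatching field, as the removed comment already notes. A minimal sketch of suppressing dispatching under the new scheme; the object name and namespace are illustrative, while the field and the ptr helper appear elsewhere in this diff:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"

	workv1alpha1 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha1"
)

func main() {
	// Instead of labeling the Work with propagation.karmada.io/instruction=suppressed,
	// dispatching is suspended through the API field itself.
	work := &workv1alpha1.Work{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-work", Namespace: "karmada-es-member1"},
		Spec: workv1alpha1.WorkSpec{
			SuspendDispatching: ptr.To(true),
		},
	}
	fmt.Println(*work.Spec.SuspendDispatching)
}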


@ -29,21 +29,14 @@ import (
"github.com/karmada-io/karmada/pkg/util/names"
)
// NewExecutionPredicate generates the event filter function to skip events that the controllers are uninterested.
// WorkWithinPushClusterPredicate generates the event filter function to skip events that the controllers are not interested in.
// Used by controllers:
// - execution controller working in karmada-controller-manager
// - work status controller working in karmada-controller-manager
func NewExecutionPredicate(mgr controllerruntime.Manager) predicate.Funcs {
predFunc := func(eventType string, object client.Object) bool {
func WorkWithinPushClusterPredicate(mgr controllerruntime.Manager) predicate.Funcs {
predFunc := func(object client.Object) bool {
obj := object.(*workv1alpha1.Work)
// Ignore the object that has been suppressed.
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
if util.GetLabelValue(obj.Labels, util.PropagationInstruction) == util.PropagationInstructionSuppressed {
klog.V(5).Infof("Ignored Work(%s/%s) %s event as propagation instruction is suppressed.", obj.Namespace, obj.Name, eventType)
return false
}
clusterName, err := names.GetClusterName(obj.Namespace)
if err != nil {
klog.Errorf("Failed to get member cluster name for work %s/%s", obj.Namespace, obj.Name)
@ -61,13 +54,13 @@ func NewExecutionPredicate(mgr controllerruntime.Manager) predicate.Funcs {
return predicate.Funcs{
CreateFunc: func(createEvent event.CreateEvent) bool {
return predFunc("create", createEvent.Object)
return predFunc(createEvent.Object)
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return predFunc("update", updateEvent.ObjectNew) || predFunc("update", updateEvent.ObjectOld)
return predFunc(updateEvent.ObjectNew) || predFunc(updateEvent.ObjectOld)
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return predFunc("delete", deleteEvent.Object)
return predFunc(deleteEvent.Object)
},
GenericFunc: func(event.GenericEvent) bool {
return false
@ -80,12 +73,6 @@ func NewPredicateForServiceExportController(mgr controllerruntime.Manager) predi
predFunc := func(eventType string, object client.Object) bool {
obj := object.(*workv1alpha1.Work)
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
if util.GetLabelValue(obj.Labels, util.PropagationInstruction) == util.PropagationInstructionSuppressed {
klog.V(5).Infof("Ignored Work(%s/%s) %s event as propagation instruction is suppressed.", obj.Namespace, obj.Name, eventType)
return false
}
if util.IsWorkSuspendDispatching(obj) {
klog.V(5).Infof("Ignored Work(%s/%s) %s event as dispatching is suspended.", obj.Namespace, obj.Name, eventType)
return false
@ -178,12 +165,6 @@ func NewPredicateForServiceExportControllerOnAgent(curClusterName string) predic
predFunc := func(eventType string, object client.Object) bool {
obj := object.(*workv1alpha1.Work)
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
if util.GetLabelValue(obj.Labels, util.PropagationInstruction) == util.PropagationInstructionSuppressed {
klog.V(5).Infof("Ignored Work(%s/%s) %s event as propagation instruction is suppressed.", obj.Namespace, obj.Name, eventType)
return false
}
if util.IsWorkSuspendDispatching(obj) {
klog.V(5).Infof("Ignored Work(%s/%s) %s event as dispatching is suspended.", obj.Namespace, obj.Name, eventType)
return false
@ -240,37 +221,3 @@ func NewPredicateForEndpointSliceCollectControllerOnAgent(curClusterName string)
},
}
}
// NewExecutionPredicateOnAgent generates the event filter function to skip events that the controllers are uninterested.
// Used by controllers:
// - execution controller working in agent
// - work status controller working in agent
func NewExecutionPredicateOnAgent() predicate.Funcs {
predFunc := func(eventType string, object client.Object) bool {
obj := object.(*workv1alpha1.Work)
// Ignore the object that has been suppressed.
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
if util.GetLabelValue(obj.Labels, util.PropagationInstruction) == util.PropagationInstructionSuppressed {
klog.V(5).Infof("Ignored Work(%s/%s) %s event as propagation instruction is suppressed.", obj.Namespace, obj.Name, eventType)
return false
}
return true
}
return predicate.Funcs{
CreateFunc: func(createEvent event.CreateEvent) bool {
return predFunc("create", createEvent.Object)
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return predFunc("update", updateEvent.ObjectNew) || predFunc("update", updateEvent.ObjectOld)
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return predFunc("delete", deleteEvent.Object)
},
GenericFunc: func(event.GenericEvent) bool {
return false
},
}
}
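To make the rename concrete: after the predicate resolves the Work's execution namespace to its Cluster, the filter reduces to the cluster's sync mode, since pull-mode clusters are reconciled by their own karmada-agent. A simplified, hypothetical restatement of that check (the real predicate also handles lookup errors and cluster deletion):

package helper

import (
	clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
)

// isPushModeCluster captures the core decision of WorkWithinPushClusterPredicate:
// only Works destined for push-mode clusters pass the filter.
func isPushModeCluster(cluster *clusterv1alpha1.Cluster) bool {
	return cluster.Spec.SyncMode == clusterv1alpha1.Push
}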


@ -20,6 +20,7 @@ import (
"testing"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/ptr"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
@ -27,7 +28,6 @@ import (
clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
workv1alpha1 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha1"
"github.com/karmada-io/karmada/pkg/util"
"github.com/karmada-io/karmada/pkg/util/gclient"
"github.com/karmada-io/karmada/pkg/util/names"
)
@ -85,7 +85,7 @@ func TestNewClusterPredicateOnAgent(t *testing.T) {
}
}
func TestNewExecutionPredicate(t *testing.T) {
func TestWorkWithinPushClusterPredicate(t *testing.T) {
type args struct {
mgr controllerruntime.Manager
obj client.Object
@ -98,30 +98,6 @@ func TestNewExecutionPredicate(t *testing.T) {
args args
want want
}{
{
name: "object is suppressed",
args: args{
mgr: &fakeManager{client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(
&clusterv1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{Name: "cluster"},
Spec: clusterv1alpha1.ClusterSpec{SyncMode: clusterv1alpha1.Push},
},
).Build()},
obj: &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
},
},
},
want: want{
create: false,
update: false,
delete: false,
generic: false,
},
},
{
name: "get cluster name error",
args: args{
@ -200,7 +176,7 @@ func TestNewExecutionPredicate(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pred := NewExecutionPredicate(tt.args.mgr)
pred := WorkWithinPushClusterPredicate(tt.args.mgr)
if got := pred.Create(event.CreateEvent{Object: tt.args.obj}); got != tt.want.create {
t.Errorf("Create() got = %v, want %v", got, tt.want.create)
return
@ -221,23 +197,25 @@ func TestNewExecutionPredicate(t *testing.T) {
}
}
func TestNewExecutionPredicate_Update(t *testing.T) {
func TestWorkWithinPushClusterPredicate_Update(t *testing.T) {
mgr := &fakeManager{client: fake.NewClientBuilder().WithScheme(gclient.NewSchema()).WithObjects(
&clusterv1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{Name: "cluster"},
ObjectMeta: metav1.ObjectMeta{Name: "cluster1"},
Spec: clusterv1alpha1.ClusterSpec{SyncMode: clusterv1alpha1.Pull},
},
&clusterv1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{Name: "cluster2"},
Spec: clusterv1alpha1.ClusterSpec{SyncMode: clusterv1alpha1.Push},
},
).Build()}
unmatched := &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster1",
},
}
matched := &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster2",
},
}
@ -281,127 +259,7 @@ func TestNewExecutionPredicate_Update(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pred := NewExecutionPredicate(mgr)
if got := pred.Update(tt.args.event); got != tt.want {
t.Errorf("Update() got = %v, want %v", got, tt.want)
return
}
})
}
}
func TestNewExecutionPredicateOnAgent(t *testing.T) {
type want struct {
create, update, delete, generic bool
}
tests := []struct {
name string
obj client.Object
want want
}{
{
name: "object is suppressed",
obj: &workv1alpha1.Work{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
util.PropagationInstruction: util.PropagationInstructionSuppressed,
}}},
want: want{
create: false,
update: false,
delete: false,
generic: false,
},
},
{
name: "matched",
obj: &workv1alpha1.Work{ObjectMeta: metav1.ObjectMeta{}},
want: want{
create: true,
update: true,
delete: true,
generic: false,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pred := NewExecutionPredicateOnAgent()
if got := pred.Create(event.CreateEvent{Object: tt.obj}); got != tt.want.create {
t.Errorf("Create() got = %v, want %v", got, tt.want.create)
return
}
if got := pred.Update(event.UpdateEvent{ObjectNew: tt.obj, ObjectOld: tt.obj}); got != tt.want.update {
t.Errorf("Update() got = %v, want %v", got, tt.want.update)
return
}
if got := pred.Delete(event.DeleteEvent{Object: tt.obj}); got != tt.want.delete {
t.Errorf("Delete() got = %v, want %v", got, tt.want.delete)
return
}
if got := pred.Generic(event.GenericEvent{Object: tt.obj}); got != tt.want.generic {
t.Errorf("Generic() got = %v, want %v", got, tt.want.generic)
return
}
})
}
}
func TestNewExecutionPredicateOnAgent_Update(t *testing.T) {
unmatched := &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
},
}
matched := &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
},
}
type args struct {
event event.UpdateEvent
}
tests := []struct {
name string
args args
want bool
}{
{
name: "both old and new are unmatched",
args: args{
event: event.UpdateEvent{ObjectOld: unmatched, ObjectNew: unmatched},
},
want: false,
},
{
name: "old is unmatched, new is matched",
args: args{
event: event.UpdateEvent{ObjectOld: unmatched, ObjectNew: matched},
},
want: true,
},
{
name: "old is matched, new is unmatched",
args: args{
event: event.UpdateEvent{ObjectOld: matched, ObjectNew: unmatched},
},
want: true,
},
{
name: "both old and new are matched",
args: args{
event: event.UpdateEvent{ObjectOld: matched, ObjectNew: matched},
},
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
pred := NewExecutionPredicateOnAgent()
pred := WorkWithinPushClusterPredicate(mgr)
if got := pred.Update(tt.args.event); got != tt.want {
t.Errorf("Update() got = %v, want %v", got, tt.want)
return
@ -435,8 +293,9 @@ func TestNewPredicateForServiceExportController(t *testing.T) {
obj: &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
},
Spec: workv1alpha1.WorkSpec{
SuspendDispatching: ptr.To(true),
},
},
},
@ -556,8 +415,9 @@ func TestNewPredicateForServiceExportController_Update(t *testing.T) {
unmatched := &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
},
Spec: workv1alpha1.WorkSpec{
SuspendDispatching: ptr.To(true),
},
}
matched := &workv1alpha1.Work{
@ -627,11 +487,14 @@ func TestNewPredicateForServiceExportControllerOnAgent(t *testing.T) {
}{
{
name: "object is suppressed",
obj: &workv1alpha1.Work{ObjectMeta: metav1.ObjectMeta{
obj: &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
}},
},
Spec: workv1alpha1.WorkSpec{
SuspendDispatching: ptr.To(true),
},
},
want: want{
create: false,
update: false,
@ -702,8 +565,9 @@ func TestNewPredicateForServiceExportControllerOnAgent_Update(t *testing.T) {
unmatched := &workv1alpha1.Work{
ObjectMeta: metav1.ObjectMeta{
Name: "work", Namespace: names.ExecutionSpacePrefix + "cluster",
//nolint:staticcheck // SA1019 ignore deprecated util.PropagationInstruction
Labels: map[string]string{util.PropagationInstruction: util.PropagationInstructionSuppressed},
},
Spec: workv1alpha1.WorkSpec{
SuspendDispatching: ptr.To(true),
},
}
matched := &workv1alpha1.Work{


@ -21,7 +21,9 @@ import (
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"fmt"
"math/big"
"net"
"time"
)
@ -69,3 +71,37 @@ func GenerateTestCACertificate() (string, string, error) {
return string(certPEMData), string(privKeyPEMData), nil
}
// GetFreePorts attempts to find n available TCP ports on the specified host. It
// returns a slice of allocated port numbers or an error if it fails to acquire
// them.
func GetFreePorts(host string, n int) ([]int, error) {
ports := make([]int, 0, n)
listeners := make([]net.Listener, 0, n)
// Make sure we close all listeners if there's an error.
defer func() {
for _, l := range listeners {
l.Close()
}
}()
for i := 0; i < n; i++ {
listener, err := net.Listen("tcp", fmt.Sprintf("%s:0", host))
if err != nil {
return nil, err
}
listeners = append(listeners, listener)
tcpAddr, ok := listener.Addr().(*net.TCPAddr)
if !ok {
return nil, fmt.Errorf("listener address is not a *net.TCPAddr")
}
ports = append(ports, tcpAddr.Port)
}
// At this point we have all ports, so we can close the listeners.
for _, l := range listeners {
l.Close()
}
return ports, nil
}
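A hedged usage sketch for the new helper, assuming the caller sits in the same package (the helper's import path is not shown in this diff). Holding every listener open until all ports are allocated prevents duplicates, though another process could still claim a port in the brief window before the caller binds it:

func exampleUseOfGetFreePorts() {
	// Reserve two ephemeral ports on loopback, e.g. for a test server and its
	// metrics endpoint; what the ports are used for is up to the calling test.
	ports, err := GetFreePorts("127.0.0.1", 2)
	if err != nil {
		panic(err) // in a real test, fail with t.Fatalf instead
	}
	fmt.Printf("allocated ports: %d and %d\n", ports[0], ports[1])
}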