Compare commits

...

126 Commits

Author SHA1 Message Date
Peter Jiang 093aef0dad
fix: server-side diff shows refresh/hydrate annotations (#737)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-06-17 13:49:52 -04:00
Alexandre Gaudreault 8007df5f6c
fix(sync): create namespace before dry-run (#731) 2025-06-16 17:23:58 -04:00
Peter Jiang f8f1b61ba3
chore: upgrade k8s to 1.33.1 (#735)
* upgrade k8s to 1.33.1

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix linter issues

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-06-13 11:47:45 -04:00
Michael Crenshaw 8697b44eea
Revert "Add option to skip the dryrun from the sync context (#708)" (#730)
* Revert "Add option to skip the dryrun from the sync context (#708)"

This reverts commit 717b8bfd69.

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* format

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-06-12 14:52:06 -04:00
Peter Jiang 69dfa708a6
feat: auto migrate kubectl-client-side-apply fields for SSA (#727)
* feat: auto migrate kubectl-client-side-apply fields for SSA

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix master version

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* run gofumpt

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Propagate sync error instead of logging

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* allow enable/disable of CSA migration using annotation

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix linting

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Refactor to allow for multiple managers and disable option

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* remove comment

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* refactor

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* fix test

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Add docs for client side apply migration

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Edit comment

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-06-12 10:03:52 -04:00
Michael Crenshaw cebed7e704
chore: wrap errors (#732)
* chore: wrap errors

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* report list result along with error

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fixes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-06-06 10:49:09 -04:00
Peter Jiang 89c110b595
fix: Server-Side diff removed fields missing in diff (#722)
* fix: Server-Side diff removed fields missing in diff

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* add unit test to cover deleted field

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-05-20 14:24:09 -04:00
Michael Crenshaw 90b69e9ae5
chore(deps): bump golangci-lint (#719)
* chore(deps): bump golangci-lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix lint

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-14 17:14:57 -04:00
dependabot[bot] 60c6378a12
chore(deps): bump codecov/codecov-action from a2f73fb6db51fcd2e0aa085dfb36dea90c5e3689 to 5c47607acb93fed5485fdbf7232e8a31425f672a (#649)
* chore(deps): bump codecov/codecov-action

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from a2f73fb6db51fcd2e0aa085dfb36dea90c5e3689 to 5c47607acb93fed5485fdbf7232e8a31425f672a.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](a2f73fb6db...5c47607acb)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update ci.yaml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-05-14 17:14:49 -04:00
dependabot[bot] c7f25189f0
chore(deps): bump actions/setup-go (#720)
Bumps the dependencies group with 1 update in the / directory: [actions/setup-go](https://github.com/actions/setup-go).


Updates `actions/setup-go` from 5.1.0 to 5.5.0
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](41dfa10bad...d35c59abb0)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: 5.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 16:13:50 -04:00
dependabot[bot] 9169c08c91
chore(deps): bump golang.org/x/net from 0.36.0 to 0.38.0 (#713)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-14 16:04:28 -04:00
Pasha Kostohrys d65e9d9227
feat: Enable SkipDryRunOnMissingResource sync option on Application level (#712)
* fix: go mod tidy is not working due to k8s.io/externaljwt dependency

Signed-off-by: pasha <pasha.k@fyxt.com>

* feat: Enable SkipDryRunOnMissingResource sync option on Application level

Signed-off-by: pasha <pasha.k@fyxt.com>

* feat: Enable SkipDryRunOnMissingResource sync option on Application level

* feat: add support for skipping dry run on missing resources in sync context

- Introduced a new option to skip dry run verification for missing resources at the application level.
- Updated the sync context to include a flag for this feature.
- Enhanced tests to cover scenarios where the skip dry run annotation is applied to all resources.

---------

Signed-off-by: pasha <pasha.k@fyxt.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
2025-04-20 09:41:38 +03:00
Pasha Kostohrys 5f90e7b481
fix: go mod tidy is not working due to k8s.io/externaljwt dependency (#710)
Signed-off-by: pasha <pasha.k@fyxt.com>
Co-authored-by: pasha <pasha.k@fyxt.com>
2025-04-12 22:28:44 +03:00
Nick Heijmink 717b8bfd69
Add option to skip the dryrun from the sync context (#708)
* Add option to skip the dryrun from the sync context

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>

* Fix test by mocking the discovery

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>

* Fix linting errors

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>

* Fix skip dryrun const

---------

Signed-off-by: Nick Heijmink <nick.heijmink@alliander.com>
2025-04-10 14:12:36 -07:00
Aaron Hoffman c61756277b
feat: return images from resources when sync occurs (#642)
* Add GetResourceImages

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Use require instead of assert

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Add test for empty images case

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Rename test function to match regex

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Add support for cronjobs, refactor images implementation and add test for cronjobs

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Move missing images tests to single function

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Refactor test

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Add benchmark for sync

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

* Update comment on images

Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>

---------

Signed-off-by: Aaron Hoffman <31711338+Aaron-9900@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-27 09:45:14 -04:00
Andrii Korotkov 370078d070
chore: Switch dry run applies to log with debug level (#705)
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-03-18 09:33:28 -04:00
Peter Jiang 7258614f50
chore: add unit test for ssa with dryRun (#703)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-03-14 12:43:14 -04:00
dependabot[bot] e5ef2e16d8
chore(deps): bump golang.org/x/net from 0.33.0 to 0.36.0 (#700)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.33.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.33.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-13 11:41:39 -04:00
Peter Jiang 762f9b70f3
fix: Fix checking dryRun when using Server Side Apply (#699)
* fix: properly check dryRun flag for server side apply

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Remove debug logging

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-03-13 09:11:58 -04:00
Andrii Korotkov 1265e8382e
chore(deps): Update some dependencies - another run (#22228) (#696)
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-03-10 06:26:31 -07:00
Andrii Korotkov acb47d5407
chore(deps): Update some package versions (#690)
* chore(deps): Update some package versions

Helps with https://github.com/argoproj/argo-cd/issues/22104

Update some versions trying to avoid legacy dependencies. Bump go to 1.23.5.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Fix versions

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2025-03-05 10:26:49 -05:00
Michael Crenshaw e4cacd37c4
chore(ci): run tests on cherry-pick PRs (#694)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-03-04 14:06:13 -05:00
sivchari a16fb84a8c
bump k8s v1.32 (#665)
Signed-off-by: sivchari <shibuuuu5@gmail.com>
2025-03-02 12:29:52 -05:00
Peter Jiang 4fd18478f5
fix: Server-side diff shows incorrect diffs for list related changes (#688)
* fix: Server-side diff shows incorrect diffs for list related changes

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Update docs for removeWebHookMutation

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Update docs

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

* Update docs

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>

---------

Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2025-02-27 16:20:03 -05:00
Dejan Zele Pejchev 65db274b8d
fix: stuck hook issue when a Job resource has a ttlSecondsAfterFinished field set (#646)
Signed-off-by: Dejan Zele Pejchev <pejcev.dejan@gmail.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2025-02-07 17:04:47 -05:00
Matthieu MOREL 11a5e25708
chore(deps): bump github.com/evanphx/json-patch to v5.9.11 (#682)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 15:37:51 -05:00
Matthieu MOREL 04266647b1
chore: enable require-error from testifylint (#681)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 15:01:48 -05:00
Matthieu MOREL cc13a7d417
chore: enable gocritic (#680)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 15:00:49 -05:00
Matthieu MOREL ad846ac0fd
chore: enable increment-decrement and redundant-import-alias from revive (#679)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 14:56:44 -05:00
Matthieu MOREL c323d36706
chore: enable dot-imports, duplicated-imports from revive (#678)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 13:10:59 -05:00
Matthieu MOREL 782fb85b94
chore: enable unparam linter (#677)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:52:34 -05:00
Matthieu MOREL f5aa9e4d10
chore: enable perfsprint linter (#676)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:44:40 -05:00
Matthieu MOREL 367311bd6f
chore: enable thelper linter (#675)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:43:51 -05:00
Matthieu MOREL 70bee6a3a5
chore: enable early-return, indent-error-flow and unnecessary-stmt from revive (#674)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 12:43:30 -05:00
Matthieu MOREL b111e50082
chore: enable errorlint (#673)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:25:43 -05:00
Andrii Korotkov 3ef5ab187e
fix: New kube applier for server side diff dry run with refactoring (#662)
* fix: New kube applier for server side diff dry run with refactoring

Part of a fix for: https://github.com/argoproj/argo-cd/issues/21488

Separate logic to handle server side diff dry run applies.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Break backwards compatibility for a better code

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Don't put applier constructor in the interface

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Address comments

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Address more comments

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable testifylint linter (#657)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable gofumpt, gosimple and whitespace linters (#666)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable use-any from revive (#667)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: bump golangci-lint to v1.63.4 and list argo-cd linters (#670)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: enable unused-parameter and var-declaration from revive (#668)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* chore: remove actions/cache duplicated behavior with actions/setup-go (#658)

Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Add Leo's code

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Make linter happy

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
Co-authored-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:13:38 -05:00
dependabot[bot] 30f4accb42
chore(deps): bump golang.org/x/net from 0.26.0 to 0.33.0 (#671)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.26.0 to 0.33.0.
- [Commits](https://github.com/golang/net/compare/v0.26.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-07 11:06:57 -05:00
Matthieu MOREL 00472077d3
chore: enable goimports linter (#669)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:06:31 -05:00
Matthieu MOREL bfdad63e27
chore: enable misspell linter (#672)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 11:05:44 -05:00
Matthieu MOREL a093a7627f
chore: remove actions/cache duplicated behavior with actions/setup-go (#658)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 09:56:59 -05:00
Matthieu MOREL ccee58366a
chore: enable unused-parameter and var-declaration from revive (#668)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 09:55:39 -05:00
Matthieu MOREL edb9faabbf
chore: bump golangci-lint to v1.63.4 and list argo-cd linters (#670)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-07 09:52:05 -05:00
Matthieu MOREL 7ac688a30f
chore: enable use-any from revive (#667)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-06 18:20:29 -05:00
Matthieu MOREL f948991e78
chore: enable gofumpt, gosimple and whitespace linters (#666)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-06 17:07:19 -05:00
Matthieu MOREL 382663864e
chore: enable testifylint linter (#657)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2025-02-06 10:27:22 -05:00
Siddhesh Ghadi 7e21b91e9d
Merge commit from fork
map[] in error output exposes secret data in last-applied-annotation
& patch error

Invalid secrets with stringData expose the secret values in the diff. Attempt a
normalization to prevent it.

Refactor stringData to data conversion to eliminate code duplication

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2025-01-29 10:51:13 -05:00
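The commit above normalizes Secret `stringData` so raw values don't leak into diff or patch-error output. A minimal sketch of that idea, assuming a plain map representation of the Secret (illustrative only, not the actual gitops-engine normalization code):

```
package main

import (
	"encoding/base64"
	"fmt"
)

// normalizeSecret folds stringData into data as base64 before diffing, so raw
// secret values never appear in patch errors or last-applied output.
// Illustrative only; the real normalizer handles more cases.
func normalizeSecret(secret map[string]any) {
	stringData, ok := secret["stringData"].(map[string]any)
	if !ok {
		return
	}
	data, _ := secret["data"].(map[string]any)
	if data == nil {
		data = map[string]any{}
	}
	for k, v := range stringData {
		if s, ok := v.(string); ok {
			// data values are base64-encoded, matching how the API server stores them
			data[k] = base64.StdEncoding.EncodeToString([]byte(s))
		}
	}
	secret["data"] = data
	delete(secret, "stringData")
}

func main() {
	secret := map[string]any{
		"stringData": map[string]any{"password": "hunter2"},
	}
	normalizeSecret(secret)
	fmt.Println(secret["data"]) // map[password:aHVudGVyMg==]
}
```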
Michael Crenshaw d78929e7f6
fix(cluster): reduce lock contention on cluster initialization (#660)
* fix: move expensive function outside lock

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* add benchmark

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2025-01-24 16:18:12 -05:00
Mykola Pelekh 54992bf424
fix: avoid resources lock contention utilizing channel (#629)
* fix: avoid resources lock contention utilizing channel

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* feat: process events in batch when the mode is enabled (default is `false`)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* test: update unit tests to verify batch events processing flag

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* feat: make eventProcessingInterval option configurable (default is 0.1s)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

* fixup! feat: make eventProcessingInterval option configurable (default is 0.1s)

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>

---------

Signed-off-by: Mykola Pelekh <mpelekh@demonware.net>
2024-12-16 10:52:26 -05:00
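The commit above introduces optional batch processing of watch events (default off) with a configurable `eventProcessingInterval` (default 0.1s). A generic sketch of the pattern with hypothetical names (not the actual cluster-cache code): events are drained from a channel and applied once per interval instead of once per event, so the shared resources lock is taken per batch.

```
package main

import (
	"fmt"
	"time"
)

// processBatches drains events from a channel and applies them in one batch per
// interval, so the resources lock is acquired once per batch rather than per event.
func processBatches(events <-chan string, interval time.Duration, apply func([]string)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	var batch []string
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				if len(batch) > 0 {
					apply(batch)
				}
				return
			}
			batch = append(batch, ev)
		case <-ticker.C:
			if len(batch) > 0 {
				apply(batch)
				batch = nil
			}
		}
	}
}

func main() {
	events := make(chan string, 16)
	go func() {
		for i := 0; i < 5; i++ {
			events <- fmt.Sprintf("event-%d", i)
		}
		close(events)
	}()
	processBatches(events, 100*time.Millisecond, func(batch []string) {
		fmt.Println("applying batch:", batch)
	})
}
```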
JM (Jason Meridth) 8d65e80ecb
chore: update README get involved links (#647)
Change links:

- [x] from kubernetes slack to cncf slack
- [x] from k8s gitops channel to cncf argo-cd-contributors channel

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 16:23:57 -05:00
JM (Jason Meridth) 363a7155a5
chore: add CODEOWNERS (#641)
* chore: add CODEOWNERS and EMERITUS.md

Setting up a CODEOWNERS file so people are automatically notified about new PRs. Can
eventually set up a ruleset that requires at least one review before merging.

I based the current list in CODEOWNERS on people who have recently merged PRs. I'm wondering
whether reviewers and approvers should be a team/group instead of a list of individuals.

Set up EMERITUS.md, which contains the list from OWNERS. Feedback on this PR will be used to update
it.

Signed-off-by: jmeridth <jmeridth@gmail.com>

* chore: match this repo's CODEOWNERS to argoproj/argo-cd CODEOWNERS

Signed-off-by: jmeridth <jmeridth@gmail.com>

---------

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 16:22:31 -05:00
JM (Jason Meridth) 73452f8a58
fix: run go mod tidy in ci workflow (#652)
Fixes issue that showed up in https://github.com/argoproj/gitops-engine/pull/650

[Error](https://github.com/argoproj/gitops-engine/actions/runs/12300709584/job/34329534904?pr=650#step:5:96)

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 11:56:51 -05:00
JM (Jason Meridth) d948e6b41c
fix: github actions versions and warnings (#639)
* fix: github actions versions and warnings

- [x] upgrade github/actions/cache GitHub Action to latest
 - this fixes the following warnings (example list [here](https://github.com/argoproj/gitops-engine/actions/runs/11885468091)):
   - Your workflow is using a version of actions/cache that is scheduled for deprecation, actions/cache@v2.1.6. Please update your workflow to use the latest version of actions/cache to avoid interruptions. Learn more: https://github.blog/changelog/2024-09-16-notice-of-upcoming-deprecations-and-changes-in-github-actions-services/
   - The `save-state` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
   - The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
- [x] update dependabot config
  - prefix PRs with `chore(deps)`
  - group non major version updates into 1 PR

Signed-off-by: jmeridth <jmeridth@gmail.com>

* fix: switch to SHA from tags

more secure, as tags are mutable

Signed-off-by: jmeridth <jmeridth@gmail.com>

---------

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-12-12 11:31:38 -05:00
Andrii Korotkov 8849c3f30c
fix: Server side diff now works correctly with fields removal (#640)
* fix: Server side diff now works correctly with some fields removal

Helps with https://github.com/argoproj/argo-cd/issues/20792

Removed and modified sets may only contain the fields that changed, not including key fields like "name". This can cause merge to fail, since it expects those fields to be present if they are present in the predicted live.
Fortunately, we can inspect the set and derive the key fields necessary. Then they can be added to the set and used during a merge.
Also, added a new test which fails before the fix but passes now.

Failure of the new test before the fix
```
            	Error:      	Received unexpected error:
            	            	error removing non config mutations for resource Deployment/nginx-deployment: error reverting webhook removed fields in predicted live resource: .spec.template.spec.containers: element 0: associative list with keys has an element that omits key field "name" (and doesn't have default value)
            	Test:       	TestServerSideDiff/will_test_removing_some_field_with_undoing_changes_done_by_webhook
```

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Use new version of structured merge diff with a new option

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Add DCO

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* Try to fix sonar exclusions config

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2024-12-11 15:28:47 -05:00
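The commit above explains that removed/modified field sets can omit associative-list key fields such as "name", which breaks the merge, and that the fix derives the key fields and adds them back. A conceptual sketch of that idea at the plain-map level (the real fix operates on structured-merge-diff sets, so names and shape here are illustrative only):

```
package main

import "fmt"

// ensureKeyFields copies missing key fields (e.g. "name" for containers) from the
// corresponding live element into the change set before merging.
func ensureKeyFields(change, live map[string]any, keys []string) {
	for _, k := range keys {
		if _, ok := change[k]; !ok {
			if v, ok := live[k]; ok {
				change[k] = v
			}
		}
	}
}

func main() {
	// live container element, keyed by "name" in .spec.template.spec.containers
	live := map[string]any{"name": "nginx", "image": "nginx:1.25"}
	// change set produced by the diff: the image changed, but "name" was omitted
	change := map[string]any{"image": "nginx:1.26"}

	ensureKeyFields(change, live, []string{"name"})
	fmt.Println(change) // map[image:nginx:1.26 name:nginx]
}
```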
JM (Jason Meridth) 0371401803
chore(deps): upgrade go version in dockerfile (#638)
- [x] fix warnings about case of `as` to `AS` in Dockerfile
  - `FromAsCasing: 'as' and 'FROM' keywords' casing do not match (line 1)`
- [x] shorten go version in go.mod
- [x] update Dockerfile Go version from 1.17 to 1.22 to match go.mod
- [x] upgrade alpine/git image version to latest, current was 4 years old
  - from alpine/git:v2.24.3 (4 years old) to alpine/git:v2.45.2
- [x] fix warning with linting
  - `WARN [config_reader] The configuration option 'run.skip-files' is deprecated, please use 'issues.exclude-files'`
- [x] add .tool-versions (asdf) to .gitignore

Signed-off-by: jmeridth <jmeridth@gmail.com>
2024-11-26 14:45:57 -05:00
Andrii Korotkov 88c35a9acf
chore(deps): Upgrade structured-merge-diff from v4.4.1 to v4.4.3 (#637)
Adding updates from the last year; let's see if that fixes some bugs

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
2024-11-17 21:19:19 -05:00
Pasha Kostohrys 847cfc9f8b
fix: Ability to disable Server Side Apply on individual resource level (#634)
* fix: Ability to disable Server Side Apply on individual resource level

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* fix: Ability to disable Server Side Apply on individual resource level

Signed-off-by: pashakostohrys <pavel@codefresh.io>

---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
2024-11-07 16:58:28 +02:00
Siddhesh Ghadi 9ab0b2ecae
feat: Add ability to hide certain annotations on secret resources (#577)
* Add option to hide annotations on secrets

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Handle err

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Move hide logic to a generic func

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Remove test code

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Address review comments

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Handle lastAppliedConfig special case

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Fix if logic and remove comments

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

---------

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2024-10-29 12:29:52 +02:00
Alexander Matyushentsev 09e5225f84
feat: application resource deletion protection (#630)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-10-23 06:44:23 -07:00
Michael Crenshaw 72bcdda3f0
chore: avoid unnecessary json marshal (#626)
* chore: avoid unnecessary json marshal

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more tests

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* refactor test to satisfy sonarcloud

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-09-17 13:19:20 -04:00
Michael Crenshaw df9b446fd7
chore: avoid unnecessary json unmarshal (#627)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-09-16 16:42:18 -04:00
Michael Crenshaw 3d9aab3cdc
chore: speed up resolveResourceReferences (#625)
* chore: speed up resolveResourceReferences

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* revert unnecessary changes

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-09-16 16:41:37 -04:00
Anand Francis Joseph bd7681ae3f
Added support for impersonation in the kubectl (#534)
Signed-off-by: anandf <anjoseph@redhat.com>
2024-09-04 21:08:10 -04:00
Michael Crenshaw 95e00254f8
chore: bump k8s libraries to 1.31 (#619)
* bump latest kubernetes version

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade Go version to resolve dependencies

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix: ci

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* use Go1.22.3

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update from 1.29.2 to 1.30.1

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update go.sum

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade golangci-lint

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* bump to 0.30.2

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary replace

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* latest patch

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* chore: bump k8s libraries from 1.30 to 1.31

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: sivchari <shibuuuu5@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: sivchari <shibuuuu5@gmail.com>
2024-08-23 17:30:48 -04:00
sivchari 099cba69bd
chore: bump kubernetes version to 0.30.x (#579)
* bump latest kubernetes version

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade Go version to resolve dependencies

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix: ci

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* use Go1.22.3

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update from 1.29.2 to 1.30.1

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* update go.sum

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* upgrade golangci-lint

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* bump to 0.30.2

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary replace

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* latest patch

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary toolchain line

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: sivchari <shibuuuu5@gmail.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-08-23 11:57:20 -04:00
Andrii Korotkov 6b2984ebc4
feat: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600] (#601)
* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]

Closes #600

The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* improvements to graph building

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use old name

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]

Closes #600

The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* finish merge

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* chore: More optimal IterateHierarchyV2 and iterateChildrenV2 [#600]

Closes #600

The existing (effectively v1) implementations are suboptimal since they don't construct a graph before the iteration. They search for children by looking at all namespace resources and checking `isParentOf`, which can give `O(tree_size * namespace_resources_count)` time complexity. The v2 algorithms construct the graph and have `O(namespace_resources_count)` time complexity. See more details in the linked issues.

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>

* discard unneeded copies of child resources as we go

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unnecessary comment

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* make childrenByUID sparse

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* eliminate duplicate map

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix comment

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* add useful comment back

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* use nsNodes instead of dupe map

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* remove unused struct

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* skip invalid APIVersion

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Andrii Korotkov <andrii.korotkov@verkada.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-18 13:53:51 -04:00
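The commit above replaces per-node scans of all namespace resources with a prebuilt graph, taking iteration from O(tree_size * namespace_resources_count) to O(namespace_resources_count). A generic sketch of that approach, using a toy resource type (not the gitops-engine implementation): index children by owner UID once, then walk the index.

```
package main

import "fmt"

type resource struct {
	uid       string
	ownerUIDs []string
}

// buildChildrenIndex builds the parent->children graph in a single pass.
func buildChildrenIndex(resources []resource) map[string][]resource {
	children := make(map[string][]resource)
	for _, r := range resources {
		for _, owner := range r.ownerUIDs {
			children[owner] = append(children[owner], r)
		}
	}
	return children
}

// iterateHierarchy walks the prebuilt index instead of re-scanning every namespace
// resource per node; a production version would also guard against ownership cycles.
func iterateHierarchy(rootUID string, children map[string][]resource, visit func(resource)) {
	for _, child := range children[rootUID] {
		visit(child)
		iterateHierarchy(child.uid, children, visit)
	}
}

func main() {
	resources := []resource{
		{uid: "rs-1", ownerUIDs: []string{"deploy-1"}},
		{uid: "pod-1", ownerUIDs: []string{"rs-1"}},
		{uid: "pod-2", ownerUIDs: []string{"rs-1"}},
	}
	index := buildChildrenIndex(resources)
	iterateHierarchy("deploy-1", index, func(r resource) { fmt.Println("visited", r.uid) })
}
```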
Michael Crenshaw 7d150d0b6b
chore: more docstrings (#606)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-17 23:20:55 -04:00
Andy Goldstein adb68bcaab
fix(clusterCache): don't miss finding live obj if obj is cluster-scoped and namespacedResources is in transition (#597)
* sync.Reconcile: guard against incomplete discovery

When Reconcile performs its logic to compare the desired state (target
objects) against the actual state (live objects), it looks up each live
object based on a key comprised of data from the target object: API
group, API kind, namespace, and name. While group, kind, and name will
always be accurate, there is a chance that the value for namespace is
not. If a cluster-scoped target object has a namespace (because it
incorrectly has a namespace from its source) or the namespace parameter
passed into the Reconcile method has a non-empty value (indicating a
default value to use on namespace-scoped objects that don't have it set
in the source), AND the resInfo ResourceInfoProvider has incomplete or
missing API discovery data, the call to IsNamespacedOrUnknown will
return true when the information is unknown. This leads to the key being
incorrect - it will have a value for namespace when it shouldn't. As a
result, indexing into liveObjByKey will fail. This failure results in
the reconciliation containing incorrect data: there will be a nil entry
appended to targetObjs when there shouldn't be.

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>

* Address code review comments

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>

---------

Signed-off-by: Andy Goldstein <andy.goldstein@gmail.com>
2024-07-14 11:31:47 -04:00
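The commit above explains that Reconcile looks up live objects by a key built from group, kind, namespace, and name, and that incomplete discovery can wrongly leave a namespace in the key for a cluster-scoped object. A simplified sketch of the keying problem (illustrative types, not the actual kube.ResourceKey):

```
package main

import "fmt"

type scope int

const (
	scopeUnknown scope = iota
	scopeCluster
	scopeNamespaced
)

type objKey struct {
	group, kind, namespace, name string
}

// keyFor drops the namespace for cluster-scoped resources. When discovery data is
// missing and the scope is unknown, treating the object as namespaced produces a key
// that never matches the cluster-scoped live object.
func keyFor(group, kind, namespace, name string, s scope) objKey {
	if s == scopeCluster {
		namespace = ""
	}
	return objKey{group: group, kind: kind, namespace: namespace, name: name}
}

func main() {
	liveKey := keyFor("rbac.authorization.k8s.io", "ClusterRole", "", "admin", scopeCluster)
	// target carries a namespace and discovery is incomplete, so its scope is unknown
	targetKey := keyFor("rbac.authorization.k8s.io", "ClusterRole", "default", "admin", scopeUnknown)
	fmt.Println("keys match:", liveKey == targetKey) // false -> live object lookup misses
}
```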
Alexandre Gaudreault a0c23b4210
fix: deadlock on start missing watches (#604)
* fix: deadlock on start missing watches

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* revert error

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* add unit test to validate some deadlock scenarios

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* test name

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

* clarify comment

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>

---------

Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2024-07-12 13:36:05 -04:00
Michael Crenshaw a22b34675f
fix: deduplicate OpenAPI definitions for GVKParser (#587) (#590)
* fix: deduplicate OpenAPI definitions for GVKParser



* do the thing that was the whole point



* more logs



* don't uniquify models



* schema for both



* more logs



* fix logic



* better tainted gvk handling, better docs, update mocks



* add a test



* improvements from comments



---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-07-01 08:36:22 -04:00
Michael Crenshaw fa0e8d60a3
chore: update static scheme (#588)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-28 11:55:02 -04:00
Michael Crenshaw f38075deb3
fix: deduplicate OpenAPI definitions for GVKParser (#587)
* fix: deduplicate OpenAPI definitions for GVKParser

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* do the thing that was the whole point

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* don't uniquify models

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* schema for both

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* more logs

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* fix logic

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* better tainted gvk handling, better docs, update mocks

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* add a test

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* improvements from comments

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-28 10:55:06 -04:00
Paul Gier 4386ff4b8d
chore: remove duplicate scheme import (#580)
Signed-off-by: Paul Gier <paul.gier@datastax.com>
2024-06-25 14:54:38 -04:00
Paul Gier 0be58f261a
fix: printing gvkparser error message (#585)
The log.Info function doesn't understand format directives, so use key/value pairs to print the error message.

Signed-off-by: Paul Gier <paul.gier@datastax.com>
2024-06-25 14:53:55 -04:00
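The fix above relies on the structured-logging contract of logr: `Info` takes a message plus key/value pairs and does not interpret `%s`-style directives. A small sketch of the corrected call style, using the go-logr `stdr` adapter only so the example runs standalone:

```
package main

import (
	"log"
	"os"

	"github.com/go-logr/logr"
	"github.com/go-logr/stdr"
)

// report logs the parser error as a key/value pair; a format directive such as
// "error: %s" would be printed literally by logr.
func report(logger logr.Logger, err error) {
	logger.Info("failed to create gvk parser", "error", err)
}

func main() {
	logger := stdr.New(log.New(os.Stderr, "", log.LstdFlags))
	report(logger, os.ErrNotExist)
}
```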
Michael Crenshaw 83ce6ca8ce
chore(deps): bump k8s libs from 0.29.2 to 0.29.6 (#586)
* chore(deps): bump k8s libs from 0.29.2 to 0.29.6

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* oops

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

---------

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-06-15 14:59:36 -04:00
Alexander Matyushentsev 1f371a01cf
fix: replace k8s.io/endpointslice to v0.29.2 (#583)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-06-10 02:05:20 -07:00
Alexander Matyushentsev a9fd001c11
fix: replace k8s.io/endpointslice version (#582)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2024-06-09 23:18:18 -07:00
hanzala1234 8a3ce6d85c
Update condition to select right pvc as child for statefulset (#550)
* Update if condition to select right pvc as child for statefulset

Signed-off-by: hanzala <muhammad.hanzala@waltlabs.io>

* fix indentation

Signed-off-by: hanzala <muhammad.hanzala@waltlabs.io>

* test(cache): Add tests for isStatefulSetChild function

* test(pkg/cache): Replace JSON unmarshalling with structured approach in tests

---------

Signed-off-by: hanzala <muhammad.hanzala@waltlabs.io>
Co-authored-by: hanzala <muhammad.hanzala@waltlabs.io>
Co-authored-by: Obinna Odirionye <odirionye@gmail.com>
2024-05-14 22:01:00 +03:00
Leonardo Luz Almeida 0aecd43903
fix: handle nil ParseableType from GVKParser (#574)
* fix: handle nil ParseableType from GVKParser

Signed-off-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>

* address review comments

Signed-off-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>

---------

Signed-off-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2024-05-09 13:07:15 -04:00
sivchari 86a368824c
chore: Bump Kubernetes clients to 1.29.2 (#566)
* update k8s libs

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix: golangci-lint

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* fix nil map

Signed-off-by: sivchari <shibuuuu5@gmail.com>

* add deletion field

Signed-off-by: sivchari <shibuuuu5@gmail.com>

---------

Signed-off-by: sivchari <shibuuuu5@gmail.com>
2024-05-07 16:25:58 -04:00
Kota Kimura fbecbb86e4
feat: sync-options annotation with Force=true (#414) (#560)
Signed-off-by: kkk777-7 <kota.kimura0725@gmail.com>
2024-04-16 17:26:47 +03:00
Jonathan West 1ade3a1998
fix: fix temporary files written to '/dev/shm' not cleaned up (#568) (#569)
Signed-off-by: Jonathan West <jonwest@redhat.com>
2024-04-11 08:23:34 -04:00
Michael Crenshaw 3de313666b
chore: more logging for CRD updates (#554)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2024-04-07 21:10:09 -04:00
Siddhesh Ghadi 5fd9f449e7
feat: Prune resources in reverse of sync wave order (#538)
* Prune resources in reverse of sync wave order

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

* Use waveOverride var instead of directly patching live obj

Directly patching live objs results in incorrect wave ordering,
as the new wave value from the live obj is used to perform reordering during the next sync.

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>

---------

Signed-off-by: Siddhesh Ghadi <sghadi1203@gmail.com>
2024-01-24 00:27:10 -05:00
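The commit above makes pruning run in the reverse of sync-wave order and applies that ordering through a wave override rather than by patching the live object's wave. A minimal sketch of the ordering idea with hypothetical names (not the gitops-engine sync task types):

```
package main

import (
	"fmt"
	"sort"
)

// pruneTask stands in for a sync task that only needs to be pruned. waveOverride
// carries the reversed ordering so the wave recorded on the live object stays untouched.
type pruneTask struct {
	name         string
	wave         int
	waveOverride *int
}

func (t pruneTask) effectiveWave() int {
	if t.waveOverride != nil {
		return *t.waveOverride
	}
	return t.wave
}

// reversePruneOrder assigns override waves so resources created last (highest wave)
// are pruned first.
func reversePruneOrder(tasks []pruneTask) {
	maxWave := 0
	for _, t := range tasks {
		if t.wave > maxWave {
			maxWave = t.wave
		}
	}
	for i := range tasks {
		w := maxWave - tasks[i].wave
		tasks[i].waveOverride = &w
	}
	sort.SliceStable(tasks, func(i, j int) bool {
		return tasks[i].effectiveWave() < tasks[j].effectiveWave()
	})
}

func main() {
	tasks := []pruneTask{{name: "db", wave: 0}, {name: "app", wave: 1}, {name: "ingress", wave: 2}}
	reversePruneOrder(tasks)
	for _, t := range tasks {
		fmt.Println("prune", t.name) // ingress, app, db
	}
}
```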
Anand Francis Joseph 792124280f
fix(server): Dry run always in client mode just for yaml manifest validation even with server side apply (#564)
* Revert "feat: retry with client side dry run if server one was failed (#548)"

This reverts commit c0c2dd1f6f.

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Revert "fix(server): use server side dry run in case if it is server side apply (#546)"

This reverts commit 4a5648ee41.

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Fixed the logic to disable server side apply if it is a dry run

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added more values in the log message for better debugging

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Fixed compilation error

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Wrote an inline fn to get the string value of the dry-run strategy

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

* Added comment as requested with reference to the issue number

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>

---------

Signed-off-by: Anand Francis Joseph <anjoseph@redhat.com>
Co-authored-by: Leonardo Luz Almeida <leoluz@users.noreply.github.com>
2024-01-22 16:30:38 -05:00
Leonardo Luz Almeida c1e23597e7
fix: address kubectl auth reconcile during server-side diff (#562)
* fix: address kubectl auth reconcile during server-side diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* server-side diff force conflict

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* do not ssa when ssd rbac

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* debug

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better logs

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove debug

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* refactoring on rbacReconcile

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

---------

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2024-01-22 09:58:03 -05:00
Leonardo Luz Almeida aba38192fb
feat: Implement Server-Side Diffs (#522)
* feat: Implement Server-Side Diffs

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* trigger build

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* chore: remove unused function

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* make HasAnnotationOption more generic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add server-side-diff printer option

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove managedFields during server-side-diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add ignore mutation webhook logic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix configSet

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix comparison

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* merge typedconfig in typedpredictedlive

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* handle webhook diff conflicts

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix webhook normalization logic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments 1/2

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments 2/2

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove kubectl getter from cluster-cache

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix query param verifier instantiation

Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>

* Add server-side-diff unit tests

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

---------

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-12-18 14:45:13 -05:00
pasha-codefresh c0c2dd1f6f
feat: retry with client side dry run if server one was failed (#548)
* feat: retry with client side dry run if server one was failed

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* feat: retry with client side dry run if server one was failed

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* feat: retry with client side dry run if server one was failed

Signed-off-by: pashakostohrys <pavel@codefresh.io>

---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
2023-11-02 11:40:24 -04:00
pasha-codefresh 4a5648ee41
fix(server): use server side dry run in case if it is server side apply (#546)
* fix: use server side dry run in case if it is server side apply

Signed-off-by: pashakostohrys <pavel@codefresh.io>

* fix: use server side dry run in case if it is server side apply

Signed-off-by: pashakostohrys <pavel@codefresh.io>

---------

Signed-off-by: pashakostohrys <pavel@codefresh.io>
2023-10-31 10:22:05 -04:00
fsl f15cf615b8
chore(deps): upgrade k8s version and client-go (#530)
Signed-off-by: fengshunli <1171313930@qq.com>
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
Co-authored-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-10-13 14:38:58 -04:00
Jesse Suen 9a03edb8e7
fix: remove lock acquisition in ClusterCache.GetAPIResources() (#543)
Signed-off-by: Jesse Suen <jesse@akuity.io>
2023-10-12 09:58:44 -07:00
Michael Crenshaw a00ce82f1c
chore: log cluster sync error (#541)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2023-09-29 16:35:05 -04:00
gdsoumya b0fffe419a
fix: resolve deadlock (#539)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2023-09-06 08:24:14 -07:00
gdsoumya 187312fe86
feat: auto respect rbac for discovery/sync (#532)
* feat: respect rbac for resource exclusions

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: use list call to check for permissions

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: updated implementation to handle different levels of rbac check

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: fixed linter error

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

* feat: resolve review comments

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>

---------

Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2023-08-11 13:20:59 -07:00
pasha-codefresh ed7c77a929
feat: Apply out of sync option only (#533)
Signed-off-by: pashakostohrys <pavel@codefresh.io>
2023-08-09 09:45:34 -04:00
fsl 425d65e076
fix: update golangci-lint ci (#529)
Signed-off-by: fengshunli <1171313930@qq.com>
2023-06-07 12:30:28 -04:00
fsl b58645a27c
fix: remove deprecated ioutil (#528)
Signed-off-by: fengshunli <1171313930@qq.com>
2023-06-07 12:20:24 -04:00
ls0f c0ffe8428a
manage clusters via proxy (#466)
Signed-off-by: ls0f <lovedboy.tk@qq.com>
2023-05-31 13:15:21 -07:00
reggie-k e56739ceba
feat: add CreateResource to kubectl (#12174 and #4116) (#516)
* separating kubectl and resource ops mocks

Signed-off-by: reggie <reginakagan@gmail.com>

* separating kubectl and resource ops mocks

Signed-off-by: reggie <reginakagan@gmail.com>

* separating kubectl and resource ops mocks

Signed-off-by: reggie <reginakagan@gmail.com>

* server dry-run for MockKubectlCmd

Signed-off-by: reggie <reginakagan@gmail.com>

* server dry-run for MockKubectlCmd

Signed-off-by: reggie <reginakagan@gmail.com>

* server dry-run for MockKubectlCmd

Signed-off-by: reggie <reginakagan@gmail.com>

* mock create noop

Signed-off-by: reggie <reginakagan@gmail.com>

* ctl create resource with createOptions

Signed-off-by: reggie <reginakagan@gmail.com>

---------

Signed-off-by: reggie <reginakagan@gmail.com>
2023-05-27 13:48:09 -04:00
Blake Pettersson ad9a694fe4
fix: do not replace namespaces (#524)
When doing `kubectl replace`, namespaces should not be affected. Fixes
argoproj/argo-cd#12810 and argoproj/argo-cd#12539.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2023-05-26 16:32:14 -07:00
Alexander Matyushentsev b4dd8b8c39
fix: avoid acquiring lock on mutex and semaphore at the same time to prevent deadlock (#521)
* fix: avoid acquiring lock on mutex and semaphore at the same time to prevent deadlock

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

* apply reviewer notes

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>

---------

Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2023-05-11 19:08:22 -07:00
Soumya Ghosh Dastidar ed70eac8b7
feat: add sync delete option (#507)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2023-02-14 08:53:51 -08:00
asingh 917f5a0f16
fix: add suspended condition (#484)
Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>

fix: add suspended condition

Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>

Signed-off-by: ashutosh16 <11219262+ashutosh16@users.noreply.github.com>
2022-12-08 15:06:15 -08:00
Leonardo Luz Almeida e284fd71cb
fix: managed namespaces should not mutate the live state (#479)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-11-08 16:05:51 -05:00
Blake Pettersson b371e3bfc5
Update namespace v2 (#465)
* Revert "Revert "feat: Ability to create custom labels for namespaces created … (#455)"

This reverts commit ce2fb703a6.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>

* feat: enable namespace to be updated

Rename `WithNamespaceCreation` to `WithNamespaceModifier`, since this
method is also used for modifying existing namespaces. This method
takes a single argument for the actual updating, and unless this method
gets invoked by its caller no updating will take place (fulfilling what
the `createNamespace` argument used to do).

Within `autoCreateNamespace`, everywhere where we previously added tasks
we'll now need to check whether the namespace should be created (or
modified), which is now delegated to the `appendNsTask` and
`appendFailedNsTask` methods.

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>

Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2022-11-03 15:29:13 -04:00
Jingchao Di 9664cf8123
feat: add profile feature for agent, and fix logr's panic (#444)
Signed-off-by: jingchao.djc <jingchao.djc@antfin.com>

Signed-off-by: jingchao.djc <jingchao.djc@antfin.com>
2022-10-06 08:31:10 -07:00
Leonardo Luz Almeida 98ccd3d43f
fix: calculate SSA diffs with smd.merge.Updater (#467)
* fix: refactor ssa diff logic

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix: calculate ssa diff with smd.merge.Updater

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* chore: Add golangci config file

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix: remove wrong param passed to golanci-ghaction

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* doc: Add doc to the wrapper file

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* doc: Add instructions about how to extract the openapiv2 document from
k8s

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better wording

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* better code comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-10-04 09:23:20 -04:00
Leonardo Luz Almeida 3951079de1
fix: remove last-applied-configuration before diff in ssa (#460)
* fix: remove last-applied-configuration before diff in ssa

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix: add tests to validate expected behaviour

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-09-16 10:22:00 -04:00
Yujun Zhang 517c1fff4e
chore: fix typo in log message (#445)
Signed-off-by: Yujun Zhang <yujunz@nvidia.com>

Signed-off-by: Yujun Zhang <yujunz@nvidia.com>
2022-09-01 20:40:44 +02:00
Leonardo Luz Almeida c036d3f6b0
fix: sort fields to correctly calculate diff in server-side apply (#456)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-29 14:50:54 +02:00
pasha-codefresh ce2fb703a6
Revert "feat: Ability to create custom labels for namespaces created … (#455)
* Revert "feat: Ability to create custom labels for namespaces created with syncOptions CreateNamespace (#443)"

This reverts commit a56a803031.

* remove import

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* fix test

Signed-off-by: pashavictorovich <pavel@codefresh.io>

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2022-08-23 14:00:03 -04:00
Leonardo Luz Almeida 9970faba81
Cherry-Pick Retry commit in master (#452)
* fix: retry on unauthorized error when retrieving resources by gvk (#449)

* fix: retry on unauthorized when retrieving resources by gvk

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add test case to validate retry is just invoked if error is Unauthorized

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix merge conflict

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix lint

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-19 15:58:16 -04:00
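The cherry-picked fix above retries resource retrieval by GVK only when the error is Unauthorized. A minimal sketch of that behavior, assuming a hypothetical `listWithRetry` wrapper and using the real `k8s.io/apimachinery` error helpers:

```
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// listWithRetry retries the call only for Unauthorized errors, which usually indicate
// a stale credential that a fresh attempt can recover from; any other error is
// returned immediately.
func listWithRetry(attempts int, list func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = list(); err == nil || !apierrors.IsUnauthorized(err) {
			return err
		}
	}
	return err
}

func main() {
	calls := 0
	err := listWithRetry(3, func() error {
		calls++
		if calls == 1 {
			return apierrors.NewUnauthorized("token expired")
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err) // calls: 2 err: <nil>
}
```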
pasha-codefresh a56a803031
feat: Ability to create custom labels for namespaces created with syncOptions CreateNamespace (#443)
* namespace labels hook

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* add tests

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* fix test

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* rename import

Signed-off-by: pashavictorovich <pavel@codefresh.io>

* remove deep copy

Signed-off-by: pashavictorovich <pavel@codefresh.io>

Signed-off-by: pashavictorovich <pavel@codefresh.io>
2022-08-18 18:48:07 +02:00
jannfis 2bc3fef13e
fix: Fix argument order in resource filter (#436)
Signed-off-by: jannfis <jann@mistrust.net>
2022-08-04 21:09:09 +02:00
dependabot[bot] e03364f7dd
chore(deps): bump actions/setup-go from 3.2.0 to 3.2.1 (#428)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3.2.0 to 3.2.1.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3.2.0...v3.2.1)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:52:40 +02:00
dependabot[bot] 51a33e63f1
chore(deps): bump k8s.io/klog/v2 from 2.30.0 to 2.70.1 (#426)
Bumps [k8s.io/klog/v2](https://github.com/kubernetes/klog) from 2.30.0 to 2.70.1.
- [Release notes](https://github.com/kubernetes/klog/releases)
- [Changelog](https://github.com/kubernetes/klog/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes/klog/compare/v2.30.0...v2.70.1)

---
updated-dependencies:
- dependency-name: k8s.io/klog/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:52:11 +02:00
dependabot[bot] e5e3a1cf5c
chore(deps): bump github.com/spf13/cobra from 1.2.1 to 1.5.0 (#420)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.1 to 1.5.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.2.1...v1.5.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:51:35 +02:00
dependabot[bot] da6623b2e7
chore(deps): bump golangci/golangci-lint-action from 2 to 3.2.0 (#409)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 2 to 3.2.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v2...v3.2.0)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:50:28 +02:00
Rick 112657a1f9
chore(docs): typo fixing in agent README file (#351)
Signed-off-by: rick <1450685+LinuxSuRen@users.noreply.github.com>
2022-08-04 14:29:48 +02:00
dependabot[bot] ab8fdc7dbd
chore(deps): bump sigs.k8s.io/yaml from 1.2.0 to 1.3.0 (#339)
Bumps [sigs.k8s.io/yaml](https://github.com/kubernetes-sigs/yaml) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/kubernetes-sigs/yaml/releases)
- [Changelog](https://github.com/kubernetes-sigs/yaml/blob/master/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/yaml/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/yaml
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:23:51 +02:00
dependabot[bot] 2d495813b7
chore(deps): bump codecov/codecov-action from 1.5.0 to 3.1.0 (#405)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1.5.0 to 3.1.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.5.0...v3.1.0)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-04 14:22:38 +02:00
Josh Soref d8c17c206f
spelling: less than (#434)
Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2022-08-04 14:21:42 +02:00
Leonardo Luz Almeida 6cde7989d5
fix: structured-merge diff apply default values in live resource (#435)
* fix: structured-merge diff apply default values in live resource

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* address review comments

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-03 10:57:58 -04:00
Leonardo Luz Almeida 1c4ef33687
feat: Add server-side apply manager config (#418)
* feat: Add server-side apply manager config

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Force conflicts when SSA

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement strategic-merge patch in diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement structured merge diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement structured merge in diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix parseable type conversion

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Handle structured merge diff for create/delete operations

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Use NormalizeUnionsApply instead of Merge for structured-merge diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* NormalizeUnions

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* merge first, then normalize union

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* calculate diff with fieldsets

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* extract managed fields

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove managed fields then merge

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Just remove fields if manager is found

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* remove config fieldset instead of using managed fields

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Structure merge diff with defaults

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* tests

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Normalize union at the end

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Remove fields after merging

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* test

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* apply defaults when building diff result

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix default func call

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Fix diff default

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix merged object

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* keep diff order

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* apply default with patch

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* handle ssa diffs with resource annotations

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* use managed fields to calculate diff

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* Implement unit tests

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* fix bad merge

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* add test to validate service with multiple ports

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* resolveFromStaticParser optimization

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>

* try without reordering while patching default values

Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2022-08-02 14:48:09 -04:00
Daniel Urgell da6681916f
fix: Change wrong log level in cluster.go openAPISchema, gvkParser (#430)
* Update cluster.go

Fixes argoproj/gitops-engine#423

Signed-off-by: Daniel Urgell <urgell.d@gmail.com>

* Use c.log.Info instead of c.log.Warning

Signed-off-by: Daniel Urgell <urgell.d@gmail.com>

* Changed c.log.Info format to fix type string in argument

Signed-off-by: Daniel Urgell <urgell.d@gmail.com>

Co-authored-by: Daniel Urgell <daniel@bluelabs.eu>
2022-08-02 13:10:19 -04:00
Alexander Matyushentsev 67ddccd3cc
chore: upgrade k8s client to v0.24.2 (#427)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2022-07-12 16:42:57 -07:00
jannfis ed31317b27
fix: Only consider resources which support the appropriate verb for any given operation (#423)
* fix: Only consider resources which support the appropriate verb for any given operation

Signed-off-by: jannfis <jann@mistrust.net>

* Fix unit tests

Signed-off-by: jannfis <jann@mistrust.net>

* Return MethodNotSupported and add some tests

Signed-off-by: jannfis <jann@mistrust.net>
2022-07-06 20:25:44 +02:00
120 changed files with 98020 additions and 2850 deletions

10
.github/CODEOWNERS vendored Normal file

@ -0,0 +1,10 @@
# All
** @argoproj/argocd-approvers
# Docs
/docs/** @argoproj/argocd-approvers @argoproj/argocd-approvers-docs
/README.md @argoproj/argocd-approvers @argoproj/argocd-approvers-docs
# CI
/.codecov.yml @argoproj/argocd-approvers @argoproj/argocd-approvers-ci
/.github/** @argoproj/argocd-approvers @argoproj/argocd-approvers-ci


@ -4,7 +4,23 @@ updates:
directory: "/"
schedule:
interval: "daily"
commit-message:
prefix: "chore(deps)"
groups:
dependencies:
applies-to: version-updates
update-types:
- "minor"
- "patch"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"
commit-message:
prefix: "chore(deps)"
groups:
dependencies:
applies-to: version-updates
update-types:
- "minor"
- "patch"


@ -8,31 +8,23 @@ on:
pull_request:
branches:
- 'master'
env:
# Golang version to use across CI steps
GOLANG_VERSION: '1.17'
- 'release-*'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: actions/cache@v2.1.6
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- uses: actions/setup-go@v3.2.0
with:
go-version: ${{ env.GOLANG_VERSION }}
go-version-file: go.mod
- run: go mod tidy
- run: make test
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@4afd733a84b1f43292c63897423277bb7f4313a9 # v8.0.0
with:
version: v1.38.0
args: --timeout 5m
skip-go-installation: true
- uses: codecov/codecov-action@v1.5.0
version: v2.1.6
args: --verbose
- uses: codecov/codecov-action@ad3126e916f78f00edff4ed0317cf185271ccc2d # v5.4.2
with:
token: ${{ secrets.CODECOV_TOKEN }} #required
file: ./coverage.out
files: ./coverage.out

3
.gitignore vendored

@ -3,4 +3,5 @@
.vscode
.idea
coverage.out
vendor/
vendor/
.tool-versions

129
.golangci.yaml Normal file

@ -0,0 +1,129 @@
version: "2"
linters:
enable:
- errorlint
- gocritic
- gomodguard
- importas
- misspell
- perfsprint
- revive
- testifylint
- thelper
- unparam
- usestdlibvars
- whitespace
- wrapcheck
settings:
gocritic:
disabled-checks:
- appendAssign
- assignOp
- exitAfterDefer
- typeSwitchVar
importas:
alias:
- pkg: k8s.io/api/apps/v1
alias: appsv1
- pkg: k8s.io/api/core/v1
alias: corev1
- pkg: k8s.io/apimachinery/pkg/api/errors
alias: apierrors
- pkg: k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1
alias: apiextensionsv1
- pkg: k8s.io/apimachinery/pkg/apis/meta/v1
alias: metav1
- pkg: github.com/argoproj/gitops-engine/pkg/utils/testing
alias: testingutils
perfsprint:
int-conversion: true
err-error: true
errorf: true
sprintf1: true
strconcat: true
revive:
rules:
- name: bool-literal-in-expr
- name: blank-imports
disabled: true
- name: context-as-argument
arguments:
- allowTypesBefore: '*testing.T,testing.TB'
- name: context-keys-type
disabled: true
- name: dot-imports
- name: duplicated-imports
- name: early-return
arguments:
- preserveScope
- name: empty-block
disabled: true
- name: error-naming
disabled: true
- name: error-return
- name: error-strings
disabled: true
- name: errorf
- name: identical-branches
- name: if-return
- name: increment-decrement
- name: indent-error-flow
arguments:
- preserveScope
- name: modifies-parameter
- name: optimize-operands-order
- name: range
- name: receiver-naming
- name: redefines-builtin-id
disabled: true
- name: redundant-import-alias
- name: superfluous-else
arguments:
- preserveScope
- name: time-equal
- name: time-naming
disabled: true
- name: unexported-return
disabled: true
- name: unnecessary-stmt
- name: unreachable-code
- name: unused-parameter
- name: use-any
- name: useless-break
- name: var-declaration
- name: var-naming
disabled: true
testifylint:
enable-all: true
disable:
- go-require
exclusions:
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- pkg/diff/internal/fieldmanager/borrowed_.*\.go$
- internal/kubernetes_vendor
- third_party$
- builtin$
- examples$
issues:
max-issues-per-linter: 0
max-same-issues: 0
formatters:
enable:
- gofumpt
- goimports
settings:
goimports:
local-prefixes:
- github.com/argoproj/gitops-engine
exclusions:
paths:
- pkg/diff/internal/fieldmanager/borrowed_.*\.go$
- internal/kubernetes_vendor
- third_party$
- builtin$
- examples$


@ -1,4 +1,4 @@
FROM golang:1.17 as builder
FROM golang:1.22 AS builder
WORKDIR /src
@ -12,5 +12,5 @@ COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /dist/gitops ./agent
FROM alpine/git:v2.24.3
FROM alpine/git:v2.45.2
COPY --from=builder /dist/gitops /usr/local/bin/gitops


@ -31,10 +31,10 @@ The GitOps Engine follows the [CNCF Code of Conduct](https://github.com/cncf/fou
If you are as excited about GitOps and one common engine for it as much as we are, please get in touch. If you want to write code that's great, if you want to share feedback, ideas and use-cases, that's great too.
Find us on the [#gitops channel][gitops-slack] on Kubernetes Slack (get an [invite here][kube-slack]).
Find us on the [#argo-cd-contributors][argo-cd-contributors-slack] on CNCF Slack (get an [invite here][cncf-slack]).
[gitops-slack]: https://kubernetes.slack.com/archives/CBT6N1ASG
[kube-slack]: https://slack.k8s.io/
[argo-cd-contributors-slack]: https://cloud-native.slack.com/archives/C020XM04CUW
[cncf-slack]: https://slack.cncf.io/
### Contributing to the effort


@ -7,7 +7,7 @@ The main difference is that the agent is syncing one Git repository into the sam
## Quick Start
By default the agent is configured to use manifests from [guestbook](https://github.com/argoproj/argocd-example-apps/tree/master/guestbook)
By default, the agent is configured to use manifests from [guestbook](https://github.com/argoproj/argocd-example-apps/tree/master/guestbook)
directory in https://github.com/argoproj/argocd-example-apps repository.
The agent supports two modes:
@ -24,7 +24,7 @@ kubectl apply -f https://raw.githubusercontent.com/argoproj/gitops-engine/master
kubectl rollout status deploy/gitops-agent
```
The the agent logs:
The agent logs:
```bash
kubectl logs -f deploy/gitops-agent gitops-agent
@ -56,4 +56,21 @@ Update the container env [variables](https://github.com/kubernetes/git-sync#para
### Demo Recording
[![asciicast](https://asciinema.org/a/FWbvVAiSsiI87wQx2TJbRMlxN.svg)](https://asciinema.org/a/FWbvVAiSsiI87wQx2TJbRMlxN)
[![asciicast](https://asciinema.org/a/FWbvVAiSsiI87wQx2TJbRMlxN.svg)](https://asciinema.org/a/FWbvVAiSsiI87wQx2TJbRMlxN)
### Profiling
Using env variables to enable profiling mode, the agent can be started with the following envs:
```bash
export GITOPS_ENGINE_PROFILE=web
# optional, default pprofile address is 127.0.0.1:6060
export GITOPS_ENGINE_PROFILE_HOST=127.0.0.1
export GITOPS_ENGINE_PROFILE_PORT=6060
```
And then you can open profile in the browser(or using [pprof](https://github.com/google/pprof) cmd to generate diagrams):
- http://127.0.0.1:6060/debug/pprof/goroutine?debug=2
- http://127.0.0.1:6060/debug/pprof/mutex?debug=2
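For readers trying out the new profiling mode documented above, here is a minimal usage sketch. It assumes the agent is running locally with GITOPS_ENGINE_PROFILE=web and the default 127.0.0.1:6060 address; `go tool pprof` and `curl` are standard tooling and are not part of this change.

```bash
# Sample a 30-second CPU profile from the running agent
go tool pprof "http://127.0.0.1:6060/debug/pprof/profile?seconds=30"

# Save the goroutine and mutex dumps listed above to files for later inspection
curl -s "http://127.0.0.1:6060/debug/pprof/goroutine?debug=2" -o goroutines.txt
curl -s "http://127.0.0.1:6060/debug/pprof/mutex?debug=2" -o mutex.txt
```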


@ -5,33 +5,40 @@ import (
"crypto/sha256"
"encoding/base64"
"fmt"
"io/ioutil"
"net/http"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"text/tabwriter"
"time"
"github.com/argoproj/gitops-engine/pkg/utils/text"
"github.com/go-logr/logr"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog/v2/klogr"
"k8s.io/klog/v2/textlogger"
"github.com/argoproj/gitops-engine/pkg/cache"
"github.com/argoproj/gitops-engine/pkg/engine"
"github.com/argoproj/gitops-engine/pkg/sync"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
_ "net/http/pprof"
)
const (
annotationGCMark = "gitops-agent.argoproj.io/gc-mark"
envProfile = "GITOPS_ENGINE_PROFILE"
envProfileHost = "GITOPS_ENGINE_PROFILE_HOST"
envProfilePort = "GITOPS_ENGINE_PROFILE_PORT"
)
func main() {
log := klogr.New() // Delegates to klog
log := textlogger.NewLogger(textlogger.NewConfig())
err := newCmd(log).Execute()
checkError(err, log)
}
@ -47,7 +54,7 @@ type settings struct {
func (s *settings) getGCMark(key kube.ResourceKey) string {
h := sha256.New()
_, _ = h.Write([]byte(fmt.Sprintf("%s/%s", s.repoPath, strings.Join(s.paths, ","))))
_, _ = fmt.Fprintf(h, "%s/%s", s.repoPath, strings.Join(s.paths, ","))
_, _ = h.Write([]byte(strings.Join([]string{key.Group, key.Kind, key.Name}, "/")))
return "sha256." + base64.RawURLEncoding.EncodeToString(h.Sum(nil))
}
@ -57,7 +64,7 @@ func (s *settings) parseManifests() ([]*unstructured.Unstructured, string, error
cmd.Dir = s.repoPath
revision, err := cmd.CombinedOutput()
if err != nil {
return nil, "", err
return nil, "", fmt.Errorf("failed to determine git revision: %w", err)
}
var res []*unstructured.Unstructured
for i := range s.paths {
@ -71,18 +78,18 @@ func (s *settings) parseManifests() ([]*unstructured.Unstructured, string, error
if ext := strings.ToLower(filepath.Ext(info.Name())); ext != ".json" && ext != ".yml" && ext != ".yaml" {
return nil
}
data, err := ioutil.ReadFile(path)
data, err := os.ReadFile(path)
if err != nil {
return err
return fmt.Errorf("failed to read file %s: %w", path, err)
}
items, err := kube.SplitYAML(data)
if err != nil {
return fmt.Errorf("failed to parse %s: %v", path, err)
return fmt.Errorf("failed to parse %s: %w", path, err)
}
res = append(res, items...)
return nil
}); err != nil {
return nil, "", err
return nil, "", fmt.Errorf("failed to parse %s: %w", s.paths[i], err)
}
}
for i := range res {
@ -96,6 +103,19 @@ func (s *settings) parseManifests() ([]*unstructured.Unstructured, string, error
return res, string(revision), nil
}
func StartProfiler(log logr.Logger) {
if os.Getenv(envProfile) == "web" {
go func() {
runtime.SetBlockProfileRate(1)
runtime.SetMutexProfileFraction(1)
profilePort := text.WithDefault(os.Getenv(envProfilePort), "6060")
profileHost := text.WithDefault(os.Getenv(envProfileHost), "127.0.0.1")
log.Info("pprof", "err", http.ListenAndServe(fmt.Sprintf("%s:%s", profileHost, profilePort), nil))
}()
}
}
func newCmd(log logr.Logger) *cobra.Command {
var (
clientConfig clientcmd.ClientConfig
@ -125,10 +145,12 @@ func newCmd(log logr.Logger) *cobra.Command {
if namespaced {
namespaces = []string{namespace}
}
StartProfiler(log)
clusterCache := cache.NewClusterCache(config,
cache.SetNamespaces(namespaces),
cache.SetLogr(log),
cache.SetPopulateResourceInfoHandler(func(un *unstructured.Unstructured, isRoot bool) (info interface{}, cacheManifest bool) {
cache.SetPopulateResourceInfoHandler(func(un *unstructured.Unstructured, _ bool) (info any, cacheManifest bool) {
// store gc mark of every resource
gcMark := un.GetAnnotations()[annotationGCMark]
info = &resourceInfo{gcMark: un.GetAnnotations()[annotationGCMark]}
@ -153,7 +175,7 @@ func newCmd(log logr.Logger) *cobra.Command {
resync <- true
}
}()
http.HandleFunc("/api/v1/sync", func(writer http.ResponseWriter, request *http.Request) {
http.HandleFunc("/api/v1/sync", func(_ http.ResponseWriter, _ *http.Request) {
log.Info("Synchronization triggered by API call")
resync <- true
})

206
go.mod

@ -1,119 +1,135 @@
module github.com/argoproj/gitops-engine
go 1.17
go 1.24.0
require (
github.com/davecgh/go-spew v1.1.1
github.com/evanphx/json-patch v4.12.0+incompatible
github.com/go-logr/logr v1.2.2
github.com/golang/mock v1.5.0
github.com/spf13/cobra v1.2.1
github.com/stretchr/testify v1.7.0
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
k8s.io/api v0.23.1
k8s.io/apiextensions-apiserver v0.23.1
k8s.io/apimachinery v0.23.1
k8s.io/cli-runtime v0.23.1
k8s.io/client-go v0.23.1
k8s.io/klog/v2 v2.30.0
k8s.io/kube-aggregator v0.23.1
k8s.io/kubectl v0.23.1
k8s.io/kubernetes v1.23.1
sigs.k8s.io/yaml v1.2.0
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
github.com/evanphx/json-patch/v5 v5.9.11
github.com/go-logr/logr v1.4.3
github.com/google/gnostic-models v0.6.9
github.com/google/uuid v1.6.0
github.com/spf13/cobra v1.9.1
github.com/stretchr/testify v1.10.0
go.uber.org/mock v0.5.2
golang.org/x/sync v0.15.0
google.golang.org/protobuf v1.36.6
k8s.io/api v0.33.1
k8s.io/apiextensions-apiserver v0.33.1
k8s.io/apimachinery v0.33.1
k8s.io/cli-runtime v0.33.1
k8s.io/client-go v0.33.1
k8s.io/klog/v2 v2.130.1
k8s.io/kube-aggregator v0.33.1
k8s.io/kube-openapi v0.0.0-20250610211856-8b98d1ed966a
k8s.io/kubectl v0.33.1
k8s.io/kubernetes v1.33.1
sigs.k8s.io/structured-merge-diff/v4 v4.7.0
sigs.k8s.io/yaml v1.4.0
)
require (
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 // indirect
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.3 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/camelcase v1.0.0 // indirect
github.com/fvbommel/sortorder v1.0.1 // indirect
github.com/go-errors/errors v1.0.1 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/fxamacker/cbor/v2 v2.8.0 // indirect
github.com/go-errors/errors v1.5.1 // indirect
github.com/go-openapi/jsonpointer v0.21.1 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/go-cmp v0.5.5 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/imdario/mergo v0.3.5 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jonboulle/clockwork v0.2.2 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jonboulle/clockwork v0.5.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/russross/blackfriday v1.5.2 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/objx v0.2.0 // indirect
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca // indirect
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 // indirect
golang.org/x/net v0.0.0-20211209124913-491a49abca63 // indirect
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f // indirect
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e // indirect
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.27.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.64.0 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
go.opentelemetry.io/otel v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.36.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/term v0.32.0 // indirect
golang.org/x/text v0.26.0 // indirect
golang.org/x/time v0.12.0 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/apiserver v0.23.1 // indirect
k8s.io/component-base v0.23.1 // indirect
k8s.io/component-helpers v0.23.1 // indirect
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
k8s.io/utils v0.0.0-20210930125809-cb0fa318a74b // indirect
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 // indirect
sigs.k8s.io/kustomize/api v0.10.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.1.2 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiserver v0.33.1 // indirect
k8s.io/component-base v0.33.1 // indirect
k8s.io/component-helpers v0.33.1 // indirect
k8s.io/controller-manager v0.33.1 // indirect
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/kustomize/api v0.19.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.19.0 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
)
replace (
// https://github.com/kubernetes/kubernetes/issues/79384#issuecomment-505627280
k8s.io/api => k8s.io/api v0.23.1
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.23.1 // indirect
k8s.io/apimachinery => k8s.io/apimachinery v0.23.1 // indirect
k8s.io/apiserver => k8s.io/apiserver v0.23.1
k8s.io/cli-runtime => k8s.io/cli-runtime v0.23.1
k8s.io/client-go => k8s.io/client-go v0.23.1
k8s.io/cloud-provider => k8s.io/cloud-provider v0.23.1
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.23.1
k8s.io/code-generator => k8s.io/code-generator v0.23.1
k8s.io/component-base => k8s.io/component-base v0.23.1
k8s.io/component-helpers => k8s.io/component-helpers v0.23.1
k8s.io/controller-manager => k8s.io/controller-manager v0.23.1
k8s.io/cri-api => k8s.io/cri-api v0.23.1
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.23.1
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.23.1
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.23.1
k8s.io/kube-proxy => k8s.io/kube-proxy v0.23.1
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.23.1
k8s.io/kubectl => k8s.io/kubectl v0.23.1
k8s.io/kubelet => k8s.io/kubelet v0.23.1
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.23.1
k8s.io/metrics => k8s.io/metrics v0.23.1
k8s.io/mount-utils => k8s.io/mount-utils v0.23.1
k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.23.1
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.23.1
// After bumping these versions, run hack/update_static_schema.sh in case the schema has changed.
k8s.io/api => k8s.io/api v0.33.1
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.33.1
k8s.io/apimachinery => k8s.io/apimachinery v0.33.1
k8s.io/apiserver => k8s.io/apiserver v0.33.1
k8s.io/cli-runtime => k8s.io/cli-runtime v0.33.1
k8s.io/client-go => k8s.io/client-go v0.33.1
k8s.io/cloud-provider => k8s.io/cloud-provider v0.33.1
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.33.1
k8s.io/code-generator => k8s.io/code-generator v0.33.1
k8s.io/component-base => k8s.io/component-base v0.33.1
k8s.io/component-helpers => k8s.io/component-helpers v0.33.1
k8s.io/controller-manager => k8s.io/controller-manager v0.33.1
k8s.io/cri-api => k8s.io/cri-api v0.33.1
k8s.io/cri-client => k8s.io/cri-client v0.33.1
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.33.1
k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.33.1
k8s.io/endpointslice => k8s.io/endpointslice v0.33.1
k8s.io/externaljwt => k8s.io/externaljwt v0.33.1
k8s.io/kms => k8s.io/kms v0.33.1
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.33.1
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.33.1
k8s.io/kube-proxy => k8s.io/kube-proxy v0.33.1
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.33.1
k8s.io/kubectl => k8s.io/kubectl v0.33.1
k8s.io/kubelet => k8s.io/kubelet v0.33.1
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.33.1
k8s.io/metrics => k8s.io/metrics v0.33.1
k8s.io/mount-utils => k8s.io/mount-utils v0.33.1
k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.33.1
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.33.1
k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.33.1
k8s.io/sample-controller => k8s.io/sample-controller v0.33.1
)

1281
go.sum

File diff suppressed because it is too large.

18
hack/update_static_schema.sh Executable file

@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -euox pipefail
# Get the k8s library version from go.mod, stripping the trailing newline
k8s_lib_version=$(grep "k8s.io/client-go" go.mod | awk '{print $2}' | head -n 1 | tr -d '\n')
# Download the parser file from the k8s library
curl -sL "https://raw.githubusercontent.com/kubernetes/client-go/$k8s_lib_version/applyconfigurations/internal/internal.go" -o pkg/utils/kube/scheme/parser.go
# Add a line to the beginning of the file saying that this is the script that generated it.
sed -i '' '1s/^/\/\/ Code generated by hack\/update_static_schema.sh; DO NOT EDIT.\n\/\/ Everything below is downloaded from applyconfigurations\/internal\/internal.go in kubernetes\/client-go.\n\n/' pkg/utils/kube/scheme/parser.go
# Replace "package internal" with "package scheme" in the parser file
sed -i '' 's/package internal/package scheme/' pkg/utils/kube/scheme/parser.go
# Replace "func Parser" with "func StaticParser"
sed -i '' 's/func Parser/func StaticParser/' pkg/utils/kube/scheme/parser.go
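The go.mod replace block above notes that this script should be run after bumping the k8s.io/* library versions. A minimal sketch of that workflow, run from the repository root (the final diff command is only a suggested review step, not part of the script):

```bash
# Regenerate the static schema parser after a k8s library bump
./hack/update_static_schema.sh

# Review the regenerated file before committing
git diff --stat pkg/utils/kube/scheme/parser.go
```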

564
pkg/cache/cluster.go vendored

@ -11,8 +11,9 @@ import (
"github.com/go-logr/logr"
"golang.org/x/sync/semaphore"
v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/errors"
authorizationv1 "k8s.io/api/authorization/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@ -22,12 +23,14 @@ import (
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
authType1 "k8s.io/client-go/kubernetes/typed/authorization/v1"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/pager"
watchutil "k8s.io/client-go/tools/watch"
"k8s.io/client-go/util/retry"
"k8s.io/klog/v2/klogr"
"k8s.io/klog/v2/textlogger"
"k8s.io/kubectl/pkg/util/openapi"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
@ -53,13 +56,31 @@ const (
// Limit is required to avoid memory spikes during cache initialization.
// The default limit of 50 is chosen based on experiments.
defaultListSemaphoreWeight = 50
// defaultEventProcessingInterval is the default interval for processing events
defaultEventProcessingInterval = 100 * time.Millisecond
)
const (
// RespectRbacDisabled default value for respectRbac
RespectRbacDisabled = iota
// RespectRbacNormal checks only api response for forbidden/unauthorized errors
RespectRbacNormal
// RespectRbacStrict checks both api response for forbidden/unauthorized errors and SelfSubjectAccessReview
RespectRbacStrict
)
type apiMeta struct {
namespaced bool
namespaced bool
// watchCancel stops the watch of all resources for this API. This gets called when the cache is invalidated or when
// the watched API ceases to exist (e.g. a CRD gets deleted).
watchCancel context.CancelFunc
}
type eventMeta struct {
event watch.EventType
un *unstructured.Unstructured
}
// ClusterInfo holds cluster cache stats
type ClusterInfo struct {
// Server holds cluster API server URL
@ -81,12 +102,17 @@ type ClusterInfo struct {
// OnEventHandler is a function that handles Kubernetes event
type OnEventHandler func(event watch.EventType, un *unstructured.Unstructured)
// OnProcessEventsHandler handles process events event
type OnProcessEventsHandler func(duration time.Duration, processedEventsNumber int)
// OnPopulateResourceInfoHandler returns additional resource metadata that should be stored in cache
type OnPopulateResourceInfoHandler func(un *unstructured.Unstructured, isRoot bool) (info interface{}, cacheManifest bool)
type OnPopulateResourceInfoHandler func(un *unstructured.Unstructured, isRoot bool) (info any, cacheManifest bool)
// OnResourceUpdatedHandler handlers resource update event
type OnResourceUpdatedHandler func(newRes *Resource, oldRes *Resource, namespaceResources map[kube.ResourceKey]*Resource)
type Unsubscribe func()
type (
OnResourceUpdatedHandler func(newRes *Resource, oldRes *Resource, namespaceResources map[kube.ResourceKey]*Resource)
Unsubscribe func()
)
type ClusterCache interface {
// EnsureSynced checks cache state and synchronizes it if necessary
@ -107,6 +133,9 @@ type ClusterCache interface {
// IterateHierarchy iterates resource tree starting from the specified top level resource and executes callback for each resource in the tree.
// The action callback returns true if iteration should continue and false otherwise.
IterateHierarchy(key kube.ResourceKey, action func(resource *Resource, namespaceResources map[kube.ResourceKey]*Resource) bool)
// IterateHierarchyV2 iterates resource tree starting from the specified top level resources and executes callback for each resource in the tree.
// The action callback returns true if iteration should continue and false otherwise.
IterateHierarchyV2(keys []kube.ResourceKey, action func(resource *Resource, namespaceResources map[kube.ResourceKey]*Resource) bool)
// IsNamespaced answers if specified group/kind is a namespaced resource API or not
IsNamespaced(gk schema.GroupKind) (bool, error)
// GetManagedLiveObjs helps finding matching live K8S resources for a given resources list.
@ -119,6 +148,8 @@ type ClusterCache interface {
OnResourceUpdated(handler OnResourceUpdatedHandler) Unsubscribe
// OnEvent register event handler that is executed every time when new K8S event received
OnEvent(handler OnEventHandler) Unsubscribe
// OnProcessEventsHandler register event handler that is executed every time when events were processed
OnProcessEventsHandler(handler OnProcessEventsHandler) Unsubscribe
}
type WeightedSemaphore interface {
@ -131,10 +162,11 @@ type ListRetryFunc func(err error) bool
// NewClusterCache creates new instance of cluster cache
func NewClusterCache(config *rest.Config, opts ...UpdateSettingsFunc) *clusterCache {
log := klogr.New()
log := textlogger.NewLogger(textlogger.NewConfig())
cache := &clusterCache{
settings: Settings{ResourceHealthOverride: &noopSettings{}, ResourcesFilter: &noopSettings{}},
apisMeta: make(map[schema.GroupKind]*apiMeta),
eventMetaCh: nil,
listPageSize: defaultListPageSize,
listPageBufferSize: defaultListPageBufferSize,
listSemaphore: semaphore.NewWeighted(defaultListSemaphoreWeight),
@ -151,8 +183,10 @@ func NewClusterCache(config *rest.Config, opts ...UpdateSettingsFunc) *clusterCa
},
watchResyncTimeout: defaultWatchResyncTimeout,
clusterSyncRetryTimeout: ClusterRetryTimeout,
eventProcessingInterval: defaultEventProcessingInterval,
resourceUpdatedHandlers: map[uint64]OnResourceUpdatedHandler{},
eventHandlers: map[uint64]OnEventHandler{},
processEventsHandlers: map[uint64]OnProcessEventsHandler{},
log: log,
listRetryLimit: 1,
listRetryUseBackoff: false,
@ -167,9 +201,11 @@ func NewClusterCache(config *rest.Config, opts ...UpdateSettingsFunc) *clusterCa
type clusterCache struct {
syncStatus clusterCacheSync
apisMeta map[schema.GroupKind]*apiMeta
serverVersion string
apiResources []kube.APIResourceInfo
apisMeta map[schema.GroupKind]*apiMeta
batchEventsProcessing bool
eventMetaCh chan eventMeta
serverVersion string
apiResources []kube.APIResourceInfo
// namespacedResources is a simple map which indicates a groupKind is namespaced
namespacedResources map[schema.GroupKind]bool
@ -177,6 +213,8 @@ type clusterCache struct {
watchResyncTimeout time.Duration
// sync retry timeout for cluster when sync error happens
clusterSyncRetryTimeout time.Duration
// ticker interval for events processing
eventProcessingInterval time.Duration
// size of a page for list operations pager.
listPageSize int64
@ -206,8 +244,11 @@ type clusterCache struct {
populateResourceInfoHandler OnPopulateResourceInfoHandler
resourceUpdatedHandlers map[uint64]OnResourceUpdatedHandler
eventHandlers map[uint64]OnEventHandler
processEventsHandlers map[uint64]OnProcessEventsHandler
openAPISchema openapi.Resources
gvkParser *managedfields.GvkParser
respectRBAC int
}
type clusterCacheSync struct {
@ -222,12 +263,12 @@ type clusterCacheSync struct {
}
// ListRetryFuncNever never retries on errors
func ListRetryFuncNever(err error) bool {
func ListRetryFuncNever(_ error) bool {
return false
}
// ListRetryFuncAlways always retries on errors
func ListRetryFuncAlways(err error) bool {
func ListRetryFuncAlways(_ error) bool {
return true
}
@ -279,16 +320,41 @@ func (c *clusterCache) getEventHandlers() []OnEventHandler {
return handlers
}
// OnProcessEventsHandler register event handler that is executed every time when events were processed
func (c *clusterCache) OnProcessEventsHandler(handler OnProcessEventsHandler) Unsubscribe {
c.handlersLock.Lock()
defer c.handlersLock.Unlock()
key := c.handlerKey
c.handlerKey++
c.processEventsHandlers[key] = handler
return func() {
c.handlersLock.Lock()
defer c.handlersLock.Unlock()
delete(c.processEventsHandlers, key)
}
}
func (c *clusterCache) getProcessEventsHandlers() []OnProcessEventsHandler {
c.handlersLock.Lock()
defer c.handlersLock.Unlock()
handlers := make([]OnProcessEventsHandler, 0, len(c.processEventsHandlers))
for _, h := range c.processEventsHandlers {
handlers = append(handlers, h)
}
return handlers
}
// GetServerVersion returns observed cluster version
func (c *clusterCache) GetServerVersion() string {
return c.serverVersion
}
// GetAPIResources returns information about observed API resources
// This method is called frequently during reconciliation to pass API resource info to `helm template`
// NOTE: we do not provide any consistency guarantees about the returned list. The list might be
// updated in place (anytime new CRDs are introduced or removed). If necessary, a separate method
// would need to be introduced to return a copy of the list so it can be iterated consistently.
func (c *clusterCache) GetAPIResources() []kube.APIResourceInfo {
c.lock.RLock()
defer c.lock.RUnlock()
return c.apiResources
}
@ -356,7 +422,7 @@ func (c *clusterCache) newResource(un *unstructured.Unstructured) *Resource {
ownerRefs, isInferredParentOf := c.resolveResourceReferences(un)
cacheManifest := false
var info interface{}
var info any
if c.populateResourceInfoHandler != nil {
info, cacheManifest = c.populateResourceInfoHandler(un, len(ownerRefs) == 0)
}
@ -419,6 +485,10 @@ func (c *clusterCache) Invalidate(opts ...UpdateSettingsFunc) {
for i := range opts {
opts[i](c)
}
if c.batchEventsProcessing {
c.invalidateEventMeta()
}
c.apisMeta = nil
c.namespacedResources = nil
c.log.Info("Invalidated cluster")
@ -452,15 +522,19 @@ func (c *clusterCache) stopWatching(gk schema.GroupKind, ns string) {
}
}
// startMissingWatches lists supported cluster resources and start watching for changes unless watch is already running
// startMissingWatches lists supported cluster resources and starts watching for changes unless watch is already running
func (c *clusterCache) startMissingWatches() error {
apis, err := c.kubectl.GetAPIResources(c.config, true, c.settings.ResourcesFilter)
if err != nil {
return err
return fmt.Errorf("failed to get APIResources: %w", err)
}
client, err := c.kubectl.NewDynamicClient(c.config)
if err != nil {
return err
return fmt.Errorf("failed to create client: %w", err)
}
clientset, err := kubernetes.NewForConfig(c.config)
if err != nil {
return fmt.Errorf("failed to create clientset: %w", err)
}
namespacedResources := make(map[schema.GroupKind]bool)
for i := range apis {
@ -470,8 +544,25 @@ func (c *clusterCache) startMissingWatches() error {
ctx, cancel := context.WithCancel(context.Background())
c.apisMeta[api.GroupKind] = &apiMeta{namespaced: api.Meta.Namespaced, watchCancel: cancel}
err = c.processApi(client, api, func(resClient dynamic.ResourceInterface, ns string) error {
go c.watchEvents(ctx, api, resClient, ns, "")
err := c.processApi(client, api, func(resClient dynamic.ResourceInterface, ns string) error {
resourceVersion, err := c.loadInitialState(ctx, api, resClient, ns, false) // don't lock here, we are already in a lock before startMissingWatches is called inside watchEvents
if err != nil && c.isRestrictedResource(err) {
keep := false
if c.respectRBAC == RespectRbacStrict {
k, permErr := c.checkPermission(ctx, clientset.AuthorizationV1().SelfSubjectAccessReviews(), api)
if permErr != nil {
return fmt.Errorf("failed to check permissions for resource %s: %w, original error=%v", api.GroupKind.String(), permErr, err.Error())
}
keep = k
}
// if we are not allowed to list the resource, remove it from the watch list
if !keep {
delete(c.apisMeta, api.GroupKind)
delete(namespacedResources, api.GroupKind)
return nil
}
}
go c.watchEvents(ctx, api, resClient, ns, resourceVersion)
return nil
})
if err != nil {
@ -490,12 +581,14 @@ func runSynced(lock sync.Locker, action func() error) error {
}
// listResources creates list pager and enforces number of concurrent list requests
// The callback should not wait on any locks that may be held by other callers.
func (c *clusterCache) listResources(ctx context.Context, resClient dynamic.ResourceInterface, callback func(*pager.ListPager) error) (string, error) {
if err := c.listSemaphore.Acquire(ctx, 1); err != nil {
return "", err
return "", fmt.Errorf("failed to acquire list semaphore: %w", err)
}
defer c.listSemaphore.Release(1)
var retryCount int64 = 0
var retryCount int64
resourceVersion := ""
listPager := pager.New(func(ctx context.Context, opts metav1.ListOptions) (runtime.Object, error) {
var res *unstructured.UnstructuredList
@ -514,15 +607,19 @@ func (c *clusterCache) listResources(ctx context.Context, resClient dynamic.Reso
if ierr != nil {
// Log out a retry
if c.listRetryLimit > 1 && c.listRetryFunc(ierr) {
retryCount += 1
retryCount++
c.log.Info(fmt.Sprintf("Error while listing resources: %v (try %d/%d)", ierr, retryCount, c.listRetryLimit))
}
//nolint:wrapcheck // wrap outside the retry
return ierr
}
resourceVersion = res.GetResourceVersion()
return nil
})
return res, err
if err != nil {
return res, fmt.Errorf("failed to list resources: %w", err)
}
return res, nil
})
listPager.PageBufferSize = c.listPageBufferSize
listPager.PageSize = c.listPageSize
@ -530,53 +627,61 @@ func (c *clusterCache) listResources(ctx context.Context, resClient dynamic.Reso
return resourceVersion, callback(listPager)
}
// loadInitialState loads the state of all the resources retrieved by the given resource client.
func (c *clusterCache) loadInitialState(ctx context.Context, api kube.APIResourceInfo, resClient dynamic.ResourceInterface, ns string, lock bool) (string, error) {
var items []*Resource
resourceVersion, err := c.listResources(ctx, resClient, func(listPager *pager.ListPager) error {
return listPager.EachListItem(ctx, metav1.ListOptions{}, func(obj runtime.Object) error {
if un, ok := obj.(*unstructured.Unstructured); !ok {
return fmt.Errorf("object %s/%s has an unexpected type", un.GroupVersionKind().String(), un.GetName())
} else {
items = append(items, c.newResource(un))
}
return nil
})
})
if err != nil {
return "", fmt.Errorf("failed to load initial state of resource %s: %w", api.GroupKind.String(), err)
}
if lock {
return resourceVersion, runSynced(&c.lock, func() error {
c.replaceResourceCache(api.GroupKind, items, ns)
return nil
})
}
c.replaceResourceCache(api.GroupKind, items, ns)
return resourceVersion, nil
}
func (c *clusterCache) watchEvents(ctx context.Context, api kube.APIResourceInfo, resClient dynamic.ResourceInterface, ns string, resourceVersion string) {
kube.RetryUntilSucceed(ctx, watchResourcesRetryTimeout, fmt.Sprintf("watch %s on %s", api.GroupKind, c.config.Host), c.log, func() (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Recovered from panic: %+v\n%s", r, debug.Stack())
err = fmt.Errorf("recovered from panic: %+v\n%s", r, debug.Stack())
}
}()
// load API initial state if no resource version provided
if resourceVersion == "" {
resourceVersion, err = c.listResources(ctx, resClient, func(listPager *pager.ListPager) error {
var items []*Resource
err := listPager.EachListItem(ctx, metav1.ListOptions{}, func(obj runtime.Object) error {
if un, ok := obj.(*unstructured.Unstructured); !ok {
return fmt.Errorf("object %s/%s has an unexpected type", un.GroupVersionKind().String(), un.GetName())
} else {
items = append(items, c.newResource(un))
}
return nil
})
if err != nil {
return fmt.Errorf("failed to load initial state of resource %s: %v", api.GroupKind.String(), err)
}
return runSynced(&c.lock, func() error {
c.replaceResourceCache(api.GroupKind, items, ns)
return nil
})
})
resourceVersion, err = c.loadInitialState(ctx, api, resClient, ns, true)
if err != nil {
return err
}
}
w, err := watchutil.NewRetryWatcher(resourceVersion, &cache.ListWatch{
w, err := watchutil.NewRetryWatcherWithContext(ctx, resourceVersion, &cache.ListWatch{
WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
res, err := resClient.Watch(ctx, options)
if errors.IsNotFound(err) {
if apierrors.IsNotFound(err) {
c.stopWatching(api.GroupKind, ns)
}
//nolint:wrapcheck // wrap outside the retry
return res, err
},
})
if err != nil {
return err
return fmt.Errorf("failed to create resource watcher: %w", err)
}
defer func() {
@ -599,26 +704,26 @@ func (c *clusterCache) watchEvents(ctx context.Context, api kube.APIResourceInfo
// re-synchronize API state and restart watch periodically
case <-watchResyncTimeoutCh:
return fmt.Errorf("Resyncing %s on %s during to timeout", api.GroupKind, c.config.Host)
return fmt.Errorf("resyncing %s on %s due to timeout", api.GroupKind, c.config.Host)
// re-synchronize API state and restart watch if retry watcher failed to continue watching using provided resource version
case <-w.Done():
return fmt.Errorf("Watch %s on %s has closed", api.GroupKind, c.config.Host)
return fmt.Errorf("watch %s on %s has closed", api.GroupKind, c.config.Host)
case event, ok := <-w.ResultChan():
if !ok {
return fmt.Errorf("Watch %s on %s has closed", api.GroupKind, c.config.Host)
return fmt.Errorf("watch %s on %s has closed", api.GroupKind, c.config.Host)
}
obj, ok := event.Object.(*unstructured.Unstructured)
if !ok {
return fmt.Errorf("Failed to convert to *unstructured.Unstructured: %v", event.Object)
return fmt.Errorf("failed to convert to *unstructured.Unstructured: %v", event.Object)
}
c.processEvent(event.Type, obj)
c.recordEvent(event.Type, obj)
if kube.IsCRD(obj) {
var resources []kube.APIResourceInfo
crd := v1.CustomResourceDefinition{}
crd := apiextensionsv1.CustomResourceDefinition{}
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &crd)
if err != nil {
c.log.Error(err, "Failed to extract CRD resources")
@ -626,13 +731,15 @@ func (c *clusterCache) watchEvents(ctx context.Context, api kube.APIResourceInfo
for _, v := range crd.Spec.Versions {
resources = append(resources, kube.APIResourceInfo{
GroupKind: schema.GroupKind{
Group: crd.Spec.Group, Kind: crd.Spec.Names.Kind},
Group: crd.Spec.Group, Kind: crd.Spec.Names.Kind,
},
GroupVersionResource: schema.GroupVersionResource{
Group: crd.Spec.Group, Version: v.Name, Resource: crd.Spec.Names.Plural},
Group: crd.Spec.Group, Version: v.Name, Resource: crd.Spec.Names.Plural,
},
Meta: metav1.APIResource{
Group: crd.Spec.Group,
SingularName: crd.Spec.Names.Singular,
Namespaced: crd.Spec.Scope == v1.NamespaceScoped,
Namespaced: crd.Spec.Scope == apiextensionsv1.NamespaceScoped,
Name: crd.Spec.Names.Plural,
Kind: crd.Spec.Names.Singular,
Version: v.Name,
@ -646,6 +753,7 @@ func (c *clusterCache) watchEvents(ctx context.Context, api kube.APIResourceInfo
c.deleteAPIResource(resources[i])
}
} else {
c.log.Info("Updating Kubernetes APIs, watches, and Open API schemas due to CRD event", "eventType", event.Type, "groupKind", crd.GroupVersionKind().GroupKind().String())
// add new CRD's groupkind to c.apigroups
if event.Type == watch.Added {
for i := range resources {
@ -662,11 +770,7 @@ func (c *clusterCache) watchEvents(ctx context.Context, api kube.APIResourceInfo
err = runSynced(&c.lock, func() error {
openAPISchema, gvkParser, err := c.kubectl.LoadOpenAPISchema(c.config)
if err != nil {
e, ok := err.(*kube.CreateGVKParserError)
if !ok {
return err
}
c.log.Error(e, "warning loading openapi schema")
return fmt.Errorf("failed to load open api schema while handling CRD change: %w", err)
}
if gvkParser != nil {
c.gvkParser = gvkParser
@ -683,11 +787,14 @@ func (c *clusterCache) watchEvents(ctx context.Context, api kube.APIResourceInfo
})
}
// processApi processes all the resources for a given API. First we construct an API client for the given API. Then we
// call the callback. If we're managing the whole cluster, we call the callback with the client and an empty namespace.
// If we're managing specific namespaces, we call the callback for each namespace.
func (c *clusterCache) processApi(client dynamic.Interface, api kube.APIResourceInfo, callback func(resClient dynamic.ResourceInterface, ns string) error) error {
resClient := client.Resource(api.GroupVersionResource)
switch {
// if manage whole cluster or resource is cluster level and cluster resources enabled
case len(c.namespaces) == 0 || !api.Meta.Namespaced && c.clusterResources:
case len(c.namespaces) == 0 || (!api.Meta.Namespaced && c.clusterResources):
return callback(resClient, "")
// if manage some namespaces and resource is namespaced
case len(c.namespaces) != 0 && api.Meta.Namespaced:
@ -702,35 +809,97 @@ func (c *clusterCache) processApi(client dynamic.Interface, api kube.APIResource
return nil
}
// isRestrictedResource checks if the kube api call is unauthorized or forbidden
func (c *clusterCache) isRestrictedResource(err error) bool {
return c.respectRBAC != RespectRbacDisabled && (apierrors.IsForbidden(err) || apierrors.IsUnauthorized(err))
}
// checkPermission runs a self subject access review to check if the controller has permissions to list the resource
func (c *clusterCache) checkPermission(ctx context.Context, reviewInterface authType1.SelfSubjectAccessReviewInterface, api kube.APIResourceInfo) (keep bool, err error) {
sar := &authorizationv1.SelfSubjectAccessReview{
Spec: authorizationv1.SelfSubjectAccessReviewSpec{
ResourceAttributes: &authorizationv1.ResourceAttributes{
Namespace: "*",
Verb: "list", // uses list verb to check for permissions
Resource: api.GroupVersionResource.Resource,
},
},
}
switch {
// if manage whole cluster or resource is cluster level and cluster resources enabled
case len(c.namespaces) == 0 || (!api.Meta.Namespaced && c.clusterResources):
resp, err := reviewInterface.Create(ctx, sar, metav1.CreateOptions{})
if err != nil {
return false, fmt.Errorf("failed to create self subject access review: %w", err)
}
if resp != nil && resp.Status.Allowed {
return true, nil
}
// unsupported, remove from watch list
return false, nil
// if manage some namespaces and resource is namespaced
case len(c.namespaces) != 0 && api.Meta.Namespaced:
for _, ns := range c.namespaces {
sar.Spec.ResourceAttributes.Namespace = ns
resp, err := reviewInterface.Create(ctx, sar, metav1.CreateOptions{})
if err != nil {
return false, fmt.Errorf("failed to create self subject access review: %w", err)
}
if resp != nil && resp.Status.Allowed {
return true, nil
}
// unsupported, remove from watch list
//nolint:staticcheck //FIXME
return false, nil
}
}
// checkPermission follows the same logic of determining namespace/cluster resource as the processApi function
// so if neither of the cases match it means the controller will not watch for it so it is safe to return true.
return true, nil
}
// sync retrieves the current state of the cluster and stores relevant information in the clusterCache fields.
//
// First we get some metadata from the cluster, like the server version, OpenAPI document, and the list of all API
// resources.
//
// Then we get a list of the preferred versions of all API resources which are to be monitored (it's possible to exclude
// resources from monitoring). We loop through those APIs asynchronously and for each API we list all resources. We also
// kick off a goroutine to watch the resources for that API and update the cache constantly.
//
// When this function exits, the cluster cache is up to date, and the appropriate resources are being watched for
// changes.
func (c *clusterCache) sync() error {
c.log.Info("Start syncing cluster")
for i := range c.apisMeta {
c.apisMeta[i].watchCancel()
}
if c.batchEventsProcessing {
c.invalidateEventMeta()
c.eventMetaCh = make(chan eventMeta)
}
c.apisMeta = make(map[schema.GroupKind]*apiMeta)
c.resources = make(map[kube.ResourceKey]*Resource)
c.namespacedResources = make(map[schema.GroupKind]bool)
config := c.config
version, err := c.kubectl.GetServerVersion(config)
if err != nil {
return err
return fmt.Errorf("failed to get server version: %w", err)
}
c.serverVersion = version
apiResources, err := c.kubectl.GetAPIResources(config, false, NewNoopSettings())
if err != nil {
return err
return fmt.Errorf("failed to get api resources: %w", err)
}
c.apiResources = apiResources
openAPISchema, gvkParser, err := c.kubectl.LoadOpenAPISchema(config)
if err != nil {
e, ok := err.(*kube.CreateGVKParserError)
if !ok {
return err
}
c.log.Error(e, "warning loading openapi schema")
return fmt.Errorf("failed to load open api schema while syncing cluster cache: %w", err)
}
if gvkParser != nil {
@ -740,14 +909,23 @@ func (c *clusterCache) sync() error {
c.openAPISchema = openAPISchema
apis, err := c.kubectl.GetAPIResources(c.config, true, c.settings.ResourcesFilter)
if err != nil {
return err
return fmt.Errorf("failed to get api resources: %w", err)
}
client, err := c.kubectl.NewDynamicClient(c.config)
if err != nil {
return err
return fmt.Errorf("failed to create client: %w", err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return fmt.Errorf("failed to create clientset: %w", err)
}
if c.batchEventsProcessing {
go c.processEvents()
}
// Each API is processed in parallel, so we need to take out a lock when we update clusterCache fields.
lock := sync.Mutex{}
err = kube.RunAllAsync(len(apis), func(i int) error {
api := apis[i]
@ -765,15 +943,34 @@ func (c *clusterCache) sync() error {
if un, ok := obj.(*unstructured.Unstructured); !ok {
return fmt.Errorf("object %s/%s has an unexpected type", un.GroupVersionKind().String(), un.GetName())
} else {
newRes := c.newResource(un)
lock.Lock()
c.setNode(c.newResource(un))
c.setNode(newRes)
lock.Unlock()
}
return nil
})
})
if err != nil {
return fmt.Errorf("failed to load initial state of resource %s: %v", api.GroupKind.String(), err)
if c.isRestrictedResource(err) {
keep := false
if c.respectRBAC == RespectRbacStrict {
k, permErr := c.checkPermission(ctx, clientset.AuthorizationV1().SelfSubjectAccessReviews(), api)
if permErr != nil {
return fmt.Errorf("failed to check permissions for resource %s: %w, original error=%v", api.GroupKind.String(), permErr, err.Error())
}
keep = k
}
// if we are not allowed to list the resource, remove it from the watch list
if !keep {
lock.Lock()
delete(c.apisMeta, api.GroupKind)
delete(c.namespacedResources, api.GroupKind)
lock.Unlock()
return nil
}
}
return fmt.Errorf("failed to load initial state of resource %s: %w", api.GroupKind.String(), err)
}
go c.watchEvents(ctx, api, resClient, ns, resourceVersion)
@ -781,15 +978,23 @@ func (c *clusterCache) sync() error {
return nil
})
})
if err != nil {
return fmt.Errorf("failed to sync cluster %s: %v", c.config.Host, err)
c.log.Error(err, "Failed to sync cluster")
return fmt.Errorf("failed to sync cluster %s: %w", c.config.Host, err)
}
c.log.Info("Cluster successfully synced")
return nil
}
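A minimal caller-side sketch of the flow documented above (a populated *rest.Config and the standard-library log package are assumed; EnsureSynced, below, triggers sync() on first use):

	// Sketch only, not part of this changeset.
	clusterCache := cache.NewClusterCache(config, cache.SetNamespaces([]string{"default"}))
	if err := clusterCache.EnsureSynced(); err != nil {
		log.Fatalf("initial cluster sync failed: %v", err)
	}
	info := clusterCache.GetClusterInfo()
	log.Printf("synced %d resources across %d APIs (k8s %s)", info.ResourcesCount, info.APIsCount, info.K8SVersion)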
// invalidateEventMeta closes the eventMeta channel if it is open
func (c *clusterCache) invalidateEventMeta() {
if c.eventMetaCh != nil {
close(c.eventMetaCh)
c.eventMetaCh = nil
}
}
// EnsureSynced checks cache state and synchronizes it if necessary
func (c *clusterCache) EnsureSynced() error {
syncStatus := &c.syncStatus
@ -890,12 +1095,112 @@ func (c *clusterCache) IterateHierarchy(key kube.ResourceKey, action func(resour
}
}
// IterateHierarchyV2 iterates the resource tree starting from the specified top-level resources and executes the callback for each resource in the tree
func (c *clusterCache) IterateHierarchyV2(keys []kube.ResourceKey, action func(resource *Resource, namespaceResources map[kube.ResourceKey]*Resource) bool) {
c.lock.RLock()
defer c.lock.RUnlock()
keysPerNamespace := make(map[string][]kube.ResourceKey)
for _, key := range keys {
_, ok := c.resources[key]
if !ok {
continue
}
keysPerNamespace[key.Namespace] = append(keysPerNamespace[key.Namespace], key)
}
for namespace, namespaceKeys := range keysPerNamespace {
nsNodes := c.nsIndex[namespace]
graph := buildGraph(nsNodes)
visited := make(map[kube.ResourceKey]int)
for _, key := range namespaceKeys {
visited[key] = 0
}
for _, key := range namespaceKeys {
// The check for existence of key is done above.
res := c.resources[key]
if visited[key] == 2 || !action(res, nsNodes) {
continue
}
visited[key] = 1
if _, ok := graph[key]; ok {
for _, child := range graph[key] {
if visited[child.ResourceKey()] == 0 && action(child, nsNodes) {
child.iterateChildrenV2(graph, nsNodes, visited, func(err error, child *Resource, namespaceResources map[kube.ResourceKey]*Resource) bool {
if err != nil {
c.log.V(2).Info(err.Error())
return false
}
return action(child, namespaceResources)
})
}
}
}
visited[key] = 2
}
}
}
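An illustrative caller for the V2 traversal above; topLevelKeys is an assumed []kube.ResourceKey, and returning false from the callback prunes that resource's subtree.

	// Sketch only, not part of this changeset.
	clusterCache.IterateHierarchyV2(topLevelKeys, func(res *cache.Resource, _ map[kube.ResourceKey]*cache.Resource) bool {
		fmt.Println(res.ResourceKey())
		return true // returning false would prune this resource's subtree
	})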
func buildGraph(nsNodes map[kube.ResourceKey]*Resource) map[kube.ResourceKey]map[types.UID]*Resource {
// Prepare to construct a graph
nodesByUID := make(map[types.UID][]*Resource, len(nsNodes))
for _, node := range nsNodes {
nodesByUID[node.Ref.UID] = append(nodesByUID[node.Ref.UID], node)
}
// In the graph, the key is the parent's resource key and the value is the set of its children, keyed by UID.
graph := make(map[kube.ResourceKey]map[types.UID]*Resource)
// Loop through all nodes, calling each one "childNode," because we're only bothering with it if it has a parent.
for _, childNode := range nsNodes {
for i, ownerRef := range childNode.OwnerRefs {
// First, backfill UID of inferred owner child references.
if ownerRef.UID == "" {
group, err := schema.ParseGroupVersion(ownerRef.APIVersion)
if err != nil {
// APIVersion is invalid, so we couldn't find the parent.
continue
}
graphKeyNode, ok := nsNodes[kube.ResourceKey{Group: group.Group, Kind: ownerRef.Kind, Namespace: childNode.Ref.Namespace, Name: ownerRef.Name}]
if !ok {
// No resource found with the given graph key, so move on.
continue
}
ownerRef.UID = graphKeyNode.Ref.UID
childNode.OwnerRefs[i] = ownerRef
}
// Now that we have the UID of the parent, update the graph.
uidNodes, ok := nodesByUID[ownerRef.UID]
if ok {
for _, uidNode := range uidNodes {
// Update the graph for this owner to include the child.
if _, ok := graph[uidNode.ResourceKey()]; !ok {
graph[uidNode.ResourceKey()] = make(map[types.UID]*Resource)
}
r, ok := graph[uidNode.ResourceKey()][childNode.Ref.UID]
if !ok {
graph[uidNode.ResourceKey()][childNode.Ref.UID] = childNode
} else if r != nil {
// The object might have multiple children with the same UID (e.g. replicaset from apps and extensions group).
// It is ok to pick any object, but we need to make sure we pick the same child after every refresh.
key1 := r.ResourceKey()
key2 := childNode.ResourceKey()
if strings.Compare(key1.String(), key2.String()) > 0 {
graph[uidNode.ResourceKey()][childNode.Ref.UID] = childNode
}
}
}
}
}
}
return graph
}
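To make the duplicate-UID tie-break above concrete: the child whose key sorts first is kept, so the same child is chosen after every refresh. A tiny sketch with hypothetical names (kube.NewResourceKey is assumed to be the usual helper from the kube utils package):

	// Sketch only, not part of this changeset.
	key1 := kube.NewResourceKey("apps", "ReplicaSet", "default", "web-5d9f")
	key2 := kube.NewResourceKey("extensions", "ReplicaSet", "default", "web-5d9f")
	// "apps/..." sorts before "extensions/...", so the apps-group child stays in the graph.
	fmt.Println(strings.Compare(key1.String(), key2.String()) < 0) // true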
// IsNamespaced reports whether the specified group/kind is a namespaced resource API
func (c *clusterCache) IsNamespaced(gk schema.GroupKind) (bool, error) {
if isNamespaced, ok := c.namespacedResources[gk]; ok {
return isNamespaced, nil
}
return false, errors.NewNotFound(schema.GroupResource{Group: gk.Group}, "")
return false, apierrors.NewNotFound(schema.GroupResource{Group: gk.Group}, "")
}
func (c *clusterCache) managesNamespace(namespace string) bool {
@ -917,9 +1222,9 @@ func (c *clusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructure
for _, o := range targetObjs {
if len(c.namespaces) > 0 {
if o.GetNamespace() == "" && !c.clusterResources {
return nil, fmt.Errorf("Cluster level %s %q can not be managed when in namespaced mode", o.GetKind(), o.GetName())
return nil, fmt.Errorf("cluster level %s %q can not be managed when in namespaced mode", o.GetKind(), o.GetName())
} else if o.GetNamespace() != "" && !c.managesNamespace(o.GetNamespace()) {
return nil, fmt.Errorf("Namespace %q for %s %q is not managed", o.GetNamespace(), o.GetKind(), o.GetName())
return nil, fmt.Errorf("namespace %q for %s %q is not managed", o.GetNamespace(), o.GetKind(), o.GetName())
}
}
}
@ -948,20 +1253,20 @@ func (c *clusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructure
var err error
managedObj, err = c.kubectl.GetResource(context.TODO(), c.config, targetObj.GroupVersionKind(), existingObj.Ref.Name, existingObj.Ref.Namespace)
if err != nil {
if errors.IsNotFound(err) {
if apierrors.IsNotFound(err) {
return nil
}
return err
return fmt.Errorf("unexpected error getting managed object: %w", err)
}
}
} else if _, watched := c.apisMeta[key.GroupKind()]; !watched {
var err error
managedObj, err = c.kubectl.GetResource(context.TODO(), c.config, targetObj.GroupVersionKind(), targetObj.GetName(), targetObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
if apierrors.IsNotFound(err) {
return nil
}
return err
return fmt.Errorf("unexpected error getting managed object: %w", err)
}
}
}
@ -973,10 +1278,10 @@ func (c *clusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructure
c.log.V(1).Info(fmt.Sprintf("Failed to convert resource: %v", err))
managedObj, err = c.kubectl.GetResource(context.TODO(), c.config, targetObj.GroupVersionKind(), managedObj.GetName(), managedObj.GetNamespace())
if err != nil {
if errors.IsNotFound(err) {
if apierrors.IsNotFound(err) {
return nil
}
return err
return fmt.Errorf("unexpected error getting managed object: %w", err)
}
} else {
managedObj = converted
@ -988,13 +1293,13 @@ func (c *clusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructure
return nil
})
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to get managed objects: %w", err)
}
return managedObjs, nil
}
func (c *clusterCache) processEvent(event watch.EventType, un *unstructured.Unstructured) {
func (c *clusterCache) recordEvent(event watch.EventType, un *unstructured.Unstructured) {
for _, h := range c.getEventHandlers() {
h(event, un)
}
@ -1003,15 +1308,74 @@ func (c *clusterCache) processEvent(event watch.EventType, un *unstructured.Unst
return
}
if c.batchEventsProcessing {
c.eventMetaCh <- eventMeta{event, un}
} else {
c.lock.Lock()
defer c.lock.Unlock()
c.processEvent(key, eventMeta{event, un})
}
}
func (c *clusterCache) processEvents() {
log := c.log.WithValues("functionName", "processItems")
log.V(1).Info("Start processing events")
c.lock.Lock()
ch := c.eventMetaCh
c.lock.Unlock()
eventMetas := make([]eventMeta, 0)
ticker := time.NewTicker(c.eventProcessingInterval)
defer ticker.Stop()
for {
select {
case evMeta, ok := <-ch:
if !ok {
log.V(2).Info("Event processing channel closed, finish processing")
return
}
eventMetas = append(eventMetas, evMeta)
case <-ticker.C:
if len(eventMetas) > 0 {
c.processEventsBatch(eventMetas)
eventMetas = eventMetas[:0]
}
}
}
}
func (c *clusterCache) processEventsBatch(eventMetas []eventMeta) {
log := c.log.WithValues("functionName", "processEventsBatch")
start := time.Now()
c.lock.Lock()
log.V(1).Info("Lock acquired (ms)", "duration", time.Since(start).Milliseconds())
defer func() {
c.lock.Unlock()
duration := time.Since(start)
// Update the metric with the duration of the events processing
for _, handler := range c.getProcessEventsHandlers() {
handler(duration, len(eventMetas))
}
}()
for _, evMeta := range eventMetas {
key := kube.GetResourceKey(evMeta.un)
c.processEvent(key, evMeta)
}
log.V(1).Info("Processed events (ms)", "count", len(eventMetas), "duration", time.Since(start).Milliseconds())
}
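Batching is opt-in; a hedged configuration sketch using the two options introduced in settings.go later in this diff (config and the time import are assumed):

	// Sketch only: buffer watch events and flush them under a single lock every 100ms.
	clusterCache := cache.NewClusterCache(config,
		cache.SetBatchEventsProcessing(true),
		cache.SetEventProcessingInterval(100*time.Millisecond),
	)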
func (c *clusterCache) processEvent(key kube.ResourceKey, evMeta eventMeta) {
existingNode, exists := c.resources[key]
if event == watch.Deleted {
if evMeta.event == watch.Deleted {
if exists {
c.onNodeRemoved(key)
}
} else if event != watch.Deleted {
c.onNodeUpdated(existingNode, c.newResource(un))
} else {
c.onNodeUpdated(existingNode, c.newResource(evMeta.un))
}
}
@ -1047,11 +1411,9 @@ func (c *clusterCache) onNodeRemoved(key kube.ResourceKey) {
}
}
var (
ignoredRefreshResources = map[string]bool{
"/" + kube.EndpointsKind: true,
}
)
var ignoredRefreshResources = map[string]bool{
"/" + kube.EndpointsKind: true,
}
// GetClusterInfo returns cluster cache statistics
func (c *clusterCache) GetClusterInfo() ClusterInfo {

File diff suppressed because it is too large.

@ -1,4 +1,4 @@
// Code generated by mockery v1.0.0. DO NOT EDIT.
// Code generated by mockery v2.43.2. DO NOT EDIT.
package mocks
@ -26,6 +26,10 @@ type ClusterCache struct {
func (_m *ClusterCache) EnsureSynced() error {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for EnsureSynced")
}
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
@ -47,6 +51,10 @@ func (_m *ClusterCache) FindResources(namespace string, predicates ...func(*cach
_ca = append(_ca, _va...)
ret := _m.Called(_ca...)
if len(ret) == 0 {
panic("no return value specified for FindResources")
}
var r0 map[kube.ResourceKey]*cache.Resource
if rf, ok := ret.Get(0).(func(string, ...func(*cache.Resource) bool) map[kube.ResourceKey]*cache.Resource); ok {
r0 = rf(namespace, predicates...)
@ -63,6 +71,10 @@ func (_m *ClusterCache) FindResources(namespace string, predicates ...func(*cach
func (_m *ClusterCache) GetAPIResources() []kube.APIResourceInfo {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for GetAPIResources")
}
var r0 []kube.APIResourceInfo
if rf, ok := ret.Get(0).(func() []kube.APIResourceInfo); ok {
r0 = rf()
@ -79,6 +91,10 @@ func (_m *ClusterCache) GetAPIResources() []kube.APIResourceInfo {
func (_m *ClusterCache) GetClusterInfo() cache.ClusterInfo {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for GetClusterInfo")
}
var r0 cache.ClusterInfo
if rf, ok := ret.Get(0).(func() cache.ClusterInfo); ok {
r0 = rf()
@ -93,6 +109,10 @@ func (_m *ClusterCache) GetClusterInfo() cache.ClusterInfo {
func (_m *ClusterCache) GetGVKParser() *managedfields.GvkParser {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for GetGVKParser")
}
var r0 *managedfields.GvkParser
if rf, ok := ret.Get(0).(func() *managedfields.GvkParser); ok {
r0 = rf()
@ -109,7 +129,15 @@ func (_m *ClusterCache) GetGVKParser() *managedfields.GvkParser {
func (_m *ClusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructured, isManaged func(*cache.Resource) bool) (map[kube.ResourceKey]*unstructured.Unstructured, error) {
ret := _m.Called(targetObjs, isManaged)
if len(ret) == 0 {
panic("no return value specified for GetManagedLiveObjs")
}
var r0 map[kube.ResourceKey]*unstructured.Unstructured
var r1 error
if rf, ok := ret.Get(0).(func([]*unstructured.Unstructured, func(*cache.Resource) bool) (map[kube.ResourceKey]*unstructured.Unstructured, error)); ok {
return rf(targetObjs, isManaged)
}
if rf, ok := ret.Get(0).(func([]*unstructured.Unstructured, func(*cache.Resource) bool) map[kube.ResourceKey]*unstructured.Unstructured); ok {
r0 = rf(targetObjs, isManaged)
} else {
@ -118,7 +146,6 @@ func (_m *ClusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructur
}
}
var r1 error
if rf, ok := ret.Get(1).(func([]*unstructured.Unstructured, func(*cache.Resource) bool) error); ok {
r1 = rf(targetObjs, isManaged)
} else {
@ -132,6 +159,10 @@ func (_m *ClusterCache) GetManagedLiveObjs(targetObjs []*unstructured.Unstructur
func (_m *ClusterCache) GetOpenAPISchema() openapi.Resources {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for GetOpenAPISchema")
}
var r0 openapi.Resources
if rf, ok := ret.Get(0).(func() openapi.Resources); ok {
r0 = rf()
@ -148,6 +179,10 @@ func (_m *ClusterCache) GetOpenAPISchema() openapi.Resources {
func (_m *ClusterCache) GetServerVersion() string {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for GetServerVersion")
}
var r0 string
if rf, ok := ret.Get(0).(func() string); ok {
r0 = rf()
@ -173,14 +208,21 @@ func (_m *ClusterCache) Invalidate(opts ...cache.UpdateSettingsFunc) {
func (_m *ClusterCache) IsNamespaced(gk schema.GroupKind) (bool, error) {
ret := _m.Called(gk)
if len(ret) == 0 {
panic("no return value specified for IsNamespaced")
}
var r0 bool
var r1 error
if rf, ok := ret.Get(0).(func(schema.GroupKind) (bool, error)); ok {
return rf(gk)
}
if rf, ok := ret.Get(0).(func(schema.GroupKind) bool); ok {
r0 = rf(gk)
} else {
r0 = ret.Get(0).(bool)
}
var r1 error
if rf, ok := ret.Get(1).(func(schema.GroupKind) error); ok {
r1 = rf(gk)
} else {
@ -195,10 +237,19 @@ func (_m *ClusterCache) IterateHierarchy(key kube.ResourceKey, action func(*cach
_m.Called(key, action)
}
// IterateHierarchyV2 provides a mock function with given fields: keys, action
func (_m *ClusterCache) IterateHierarchyV2(keys []kube.ResourceKey, action func(*cache.Resource, map[kube.ResourceKey]*cache.Resource) bool) {
_m.Called(keys, action)
}
// OnEvent provides a mock function with given fields: handler
func (_m *ClusterCache) OnEvent(handler cache.OnEventHandler) cache.Unsubscribe {
ret := _m.Called(handler)
if len(ret) == 0 {
panic("no return value specified for OnEvent")
}
var r0 cache.Unsubscribe
if rf, ok := ret.Get(0).(func(cache.OnEventHandler) cache.Unsubscribe); ok {
r0 = rf(handler)
@ -211,10 +262,34 @@ func (_m *ClusterCache) OnEvent(handler cache.OnEventHandler) cache.Unsubscribe
return r0
}
// OnProcessEventsHandler provides a mock function with given fields: handler
func (_m *ClusterCache) OnProcessEventsHandler(handler cache.OnProcessEventsHandler) cache.Unsubscribe {
ret := _m.Called(handler)
if len(ret) == 0 {
panic("no return value specified for OnProcessEventsHandler")
}
var r0 cache.Unsubscribe
if rf, ok := ret.Get(0).(func(cache.OnProcessEventsHandler) cache.Unsubscribe); ok {
r0 = rf(handler)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(cache.Unsubscribe)
}
}
return r0
}
// OnResourceUpdated provides a mock function with given fields: handler
func (_m *ClusterCache) OnResourceUpdated(handler cache.OnResourceUpdatedHandler) cache.Unsubscribe {
ret := _m.Called(handler)
if len(ret) == 0 {
panic("no return value specified for OnResourceUpdated")
}
var r0 cache.Unsubscribe
if rf, ok := ret.Get(0).(func(cache.OnResourceUpdatedHandler) cache.Unsubscribe); ok {
r0 = rf(handler)
@ -226,3 +301,17 @@ func (_m *ClusterCache) OnResourceUpdated(handler cache.OnResourceUpdatedHandler
return r0
}
// NewClusterCache creates a new instance of ClusterCache. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewClusterCache(t interface {
mock.TestingT
Cleanup(func())
}) *ClusterCache {
mock := &ClusterCache{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
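An illustrative test fragment for the regenerated mock; the expectation values are placeholders (testify's require/assert and the mocks package are assumed imports).

	// Sketch only, not part of this changeset.
	func TestUsesClusterCache(t *testing.T) {
		clusterCache := mocks.NewClusterCache(t) // expectations are asserted automatically on cleanup
		clusterCache.On("EnsureSynced").Return(nil)
		clusterCache.On("GetServerVersion").Return("1.33")

		require.NoError(t, clusterCache.EnsureSynced())
		assert.Equal(t, "1.33", clusterCache.GetServerVersion())
	}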


@ -9,7 +9,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
v1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/rest"
@ -25,7 +25,7 @@ func TestResourceOfGroupKind(t *testing.T) {
Name: "deploy",
},
}
service := &v1.Service{
service := &corev1.Service{
TypeMeta: metav1.TypeMeta{
APIVersion: "",
Kind: "Service",
@ -82,12 +82,12 @@ func TestGetNamespaceResources(t *testing.T) {
resources := cluster.FindResources("default", TopLevelResource)
assert.Len(t, resources, 2)
assert.Equal(t, resources[getResourceKey(t, defaultNamespaceTopLevel1)].Ref.Name, "helm-guestbook1")
assert.Equal(t, resources[getResourceKey(t, defaultNamespaceTopLevel2)].Ref.Name, "helm-guestbook2")
assert.Equal(t, "helm-guestbook1", resources[getResourceKey(t, defaultNamespaceTopLevel1)].Ref.Name)
assert.Equal(t, "helm-guestbook2", resources[getResourceKey(t, defaultNamespaceTopLevel2)].Ref.Name)
resources = cluster.FindResources("kube-system", TopLevelResource)
assert.Len(t, resources, 1)
assert.Equal(t, resources[getResourceKey(t, kubesystemNamespaceTopLevel2)].Ref.Name, "helm-guestbook3")
assert.Equal(t, "helm-guestbook3", resources[getResourceKey(t, kubesystemNamespaceTopLevel2)].Ref.Name)
}
func ExampleNewClusterCache_inspectNamespaceResources() {
@ -98,7 +98,7 @@ func ExampleNewClusterCache_inspectNamespaceResources() {
// cache default namespace only
SetNamespaces([]string{"default", "kube-system"}),
// configure custom logic to cache resources manifest and additional metadata
SetPopulateResourceInfoHandler(func(un *unstructured.Unstructured, isRoot bool) (info interface{}, cacheManifest bool) {
SetPopulateResourceInfoHandler(func(un *unstructured.Unstructured, _ bool) (info any, cacheManifest bool) {
// if the resource belongs to the 'extensions' group then mark it with a 'deprecated' label
if un.GroupVersionKind().Group == "extensions" {
info = []string{"deprecated"}


@ -3,9 +3,9 @@ package cache
import (
"encoding/json"
"fmt"
"strings"
"regexp"
v1 "k8s.io/api/apps/v1"
appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
@ -24,9 +24,8 @@ func (c *clusterCache) resolveResourceReferences(un *unstructured.Unstructured)
gvk := un.GroupVersionKind()
switch {
// Special case for endpoint. Remove after https://github.com/kubernetes/kubernetes/issues/28483 is fixed
case gvk.Group == "" && gvk.Kind == kube.EndpointsKind && len(un.GetOwnerReferences()) == 0:
case gvk.Group == "" && gvk.Kind == kube.EndpointsKind && len(ownerRefs) == 0:
ownerRefs = append(ownerRefs, metav1.OwnerReference{
Name: un.GetName(),
Kind: kube.ServiceKind,
@ -34,7 +33,7 @@ func (c *clusterCache) resolveResourceReferences(un *unstructured.Unstructured)
})
// Special case for Operator Lifecycle Manager ClusterServiceVersion:
case un.GroupVersionKind().Group == "operators.coreos.com" && un.GetKind() == "ClusterServiceVersion":
case gvk.Group == "operators.coreos.com" && gvk.Kind == "ClusterServiceVersion":
if un.GetAnnotations()["olm.operatorGroup"] != "" {
ownerRefs = append(ownerRefs, metav1.OwnerReference{
Name: un.GetAnnotations()["olm.operatorGroup"],
@ -44,12 +43,12 @@ func (c *clusterCache) resolveResourceReferences(un *unstructured.Unstructured)
}
// Edge case: consider auto-created service account tokens as a child of service account objects
case un.GetKind() == kube.SecretKind && un.GroupVersionKind().Group == "":
case gvk.Kind == kube.SecretKind && gvk.Group == "":
if yes, ref := isServiceAccountTokenSecret(un); yes {
ownerRefs = append(ownerRefs, ref)
}
case (un.GroupVersionKind().Group == "apps" || un.GroupVersionKind().Group == "extensions") && un.GetKind() == kube.StatefulSetKind:
case (gvk.Group == "apps" || gvk.Group == "extensions") && gvk.Kind == kube.StatefulSetKind:
if refs, err := isStatefulSetChild(un); err != nil {
c.log.Error(err, fmt.Sprintf("Failed to extract StatefulSet %s/%s PVC references", un.GetNamespace(), un.GetName()))
} else {
@ -61,21 +60,21 @@ func (c *clusterCache) resolveResourceReferences(un *unstructured.Unstructured)
}
func isStatefulSetChild(un *unstructured.Unstructured) (func(kube.ResourceKey) bool, error) {
sts := v1.StatefulSet{}
sts := appsv1.StatefulSet{}
data, err := json.Marshal(un)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to marshal unstructured object: %w", err)
}
err = json.Unmarshal(data, &sts)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to unmarshal statefulset: %w", err)
}
templates := sts.Spec.VolumeClaimTemplates
return func(key kube.ResourceKey) bool {
if key.Kind == kube.PersistentVolumeClaimKind && key.GroupKind().Group == "" {
for _, templ := range templates {
if strings.HasPrefix(key.Name, fmt.Sprintf("%s-%s-", templ.Name, un.GetName())) {
if match, _ := regexp.MatchString(fmt.Sprintf(`%s-%s-\d+$`, templ.Name, un.GetName()), key.Name); match {
return true
}
}

pkg/cache/references_test.go

@ -0,0 +1,106 @@
package cache
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func Test_isStatefulSetChild(t *testing.T) {
type args struct {
un *unstructured.Unstructured
}
statefulSet := &appsv1.StatefulSet{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "StatefulSet",
},
ObjectMeta: metav1.ObjectMeta{
Name: "sw-broker",
},
Spec: appsv1.StatefulSetSpec{
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{
{
ObjectMeta: metav1.ObjectMeta{
Name: "emqx-data",
},
},
},
},
}
// Create a new unstructured object from the JSON string
un, err := kube.ToUnstructured(statefulSet)
require.NoErrorf(t, err, "Failed to convert StatefulSet to unstructured: %v", err)
tests := []struct {
name string
args args
wantErr bool
checkFunc func(func(kube.ResourceKey) bool) bool
}{
{
name: "Valid PVC for sw-broker",
args: args{un: un},
wantErr: false,
checkFunc: func(fn func(kube.ResourceKey) bool) bool {
// Check a valid PVC name for "sw-broker"
return fn(kube.ResourceKey{Kind: "PersistentVolumeClaim", Name: "emqx-data-sw-broker-0"})
},
},
{
name: "Invalid PVC for sw-broker",
args: args{un: un},
wantErr: false,
checkFunc: func(fn func(kube.ResourceKey) bool) bool {
// Check an invalid PVC name that should belong to "sw-broker-internal"
return !fn(kube.ResourceKey{Kind: "PersistentVolumeClaim", Name: "emqx-data-sw-broker-internal-0"})
},
},
{
name: "Mismatch PVC for sw-broker",
args: args{un: &unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "apps/v1",
"kind": "StatefulSet",
"metadata": map[string]any{
"name": "sw-broker",
},
"spec": map[string]any{
"volumeClaimTemplates": []any{
map[string]any{
"metadata": map[string]any{
"name": "volume-2",
},
},
},
},
},
}},
wantErr: false,
checkFunc: func(fn func(kube.ResourceKey) bool) bool {
// Check a PVC name that does not match any volumeClaimTemplate pattern for "sw-broker"
return !fn(kube.ResourceKey{Kind: "PersistentVolumeClaim", Name: "volume-2"})
},
},
}
// Execute test cases
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := isStatefulSetChild(tt.args.un)
assert.Equal(t, tt.wantErr, err != nil, "isStatefulSetChild() error = %v, wantErr %v", err, tt.wantErr)
if err == nil {
assert.True(t, tt.checkFunc(got), "Check function failed for %v", tt.name)
}
})
}
}

pkg/cache/resource.go

@ -3,7 +3,9 @@ package cache
import (
"fmt"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@ -15,13 +17,13 @@ type Resource struct {
// ResourceVersion holds most recent observed resource version
ResourceVersion string
// Resource reference
Ref v1.ObjectReference
Ref corev1.ObjectReference
// References to resource owners
OwnerRefs []metav1.OwnerReference
// Optional creation timestamp of the resource
CreationTimestamp *metav1.Time
// Optional additional information about the resource
Info interface{}
Info any
// Optional whole resource manifest
Resource *unstructured.Unstructured
@ -35,7 +37,6 @@ func (r *Resource) ResourceKey() kube.ResourceKey {
func (r *Resource) isParentOf(child *Resource) bool {
for i, ownerRef := range child.OwnerRefs {
// backfill UID of inferred owner child references
if ownerRef.UID == "" && r.Ref.Kind == ownerRef.Kind && r.Ref.APIVersion == ownerRef.APIVersion && r.Ref.Name == ownerRef.Name {
ownerRef.UID = r.Ref.UID
@ -91,10 +92,39 @@ func (r *Resource) iterateChildren(ns map[kube.ResourceKey]*Resource, parents ma
if parents[childKey] {
key := r.ResourceKey()
_ = action(fmt.Errorf("circular dependency detected. %s is child and parent of %s", childKey.String(), key.String()), child, ns)
} else {
if action(nil, child, ns) {
child.iterateChildren(ns, newResourceKeySet(parents, r.ResourceKey()), action)
}
} else if action(nil, child, ns) {
child.iterateChildren(ns, newResourceKeySet(parents, r.ResourceKey()), action)
}
}
}
}
// iterateChildrenV2 is a depth-first traversal of the graph of resources starting from the current resource.
func (r *Resource) iterateChildrenV2(graph map[kube.ResourceKey]map[types.UID]*Resource, ns map[kube.ResourceKey]*Resource, visited map[kube.ResourceKey]int, action func(err error, child *Resource, namespaceResources map[kube.ResourceKey]*Resource) bool) {
key := r.ResourceKey()
if visited[key] == 2 {
return
}
// this indicates that we've started processing this node's children
visited[key] = 1
defer func() {
// this indicates that we've finished processing this node's children
visited[key] = 2
}()
children, ok := graph[key]
if !ok || children == nil {
return
}
for _, c := range children {
childKey := c.ResourceKey()
child := ns[childKey]
switch visited[childKey] {
case 1:
// Since we encountered a node that we're currently processing, we know we have a circular dependency.
_ = action(fmt.Errorf("circular dependency detected. %s is child and parent of %s", childKey.String(), key.String()), child, ns)
case 0:
if action(nil, child, ns) {
child.iterateChildrenV2(graph, ns, visited, action)
}
}
}


@ -7,12 +7,12 @@ import (
"k8s.io/client-go/rest"
)
var c = NewClusterCache(&rest.Config{})
var cacheTest = NewClusterCache(&rest.Config{})
func TestIsParentOf(t *testing.T) {
child := c.newResource(mustToUnstructured(testPod()))
parent := c.newResource(mustToUnstructured(testRS()))
grandParent := c.newResource(mustToUnstructured(testDeploy()))
child := cacheTest.newResource(mustToUnstructured(testPod1()))
parent := cacheTest.newResource(mustToUnstructured(testRS()))
grandParent := cacheTest.newResource(mustToUnstructured(testDeploy()))
assert.True(t, parent.isParentOf(child))
assert.False(t, grandParent.isParentOf(child))
@ -22,14 +22,14 @@ func TestIsParentOfSameKindDifferentGroupAndUID(t *testing.T) {
rs := testRS()
rs.APIVersion = "somecrd.io/v1"
rs.SetUID("123")
child := c.newResource(mustToUnstructured(testPod()))
invalidParent := c.newResource(mustToUnstructured(rs))
child := cacheTest.newResource(mustToUnstructured(testPod1()))
invalidParent := cacheTest.newResource(mustToUnstructured(rs))
assert.False(t, invalidParent.isParentOf(child))
}
func TestIsServiceParentOfEndPointWithTheSameName(t *testing.T) {
nonMatchingNameEndPoint := c.newResource(strToUnstructured(`
nonMatchingNameEndPoint := cacheTest.newResource(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
@ -37,7 +37,7 @@ metadata:
namespace: default
`))
matchingNameEndPoint := c.newResource(strToUnstructured(`
matchingNameEndPoint := cacheTest.newResource(strToUnstructured(`
apiVersion: v1
kind: Endpoints
metadata:
@ -45,7 +45,7 @@ metadata:
namespace: default
`))
parent := c.newResource(testService)
parent := cacheTest.newResource(testService)
assert.True(t, parent.isParentOf(matchingNameEndPoint))
assert.Equal(t, parent.Ref.UID, matchingNameEndPoint.OwnerRefs[0].UID)
@ -53,7 +53,7 @@ metadata:
}
func TestIsServiceAccountParentOfSecret(t *testing.T) {
serviceAccount := c.newResource(strToUnstructured(`
serviceAccount := cacheTest.newResource(strToUnstructured(`
apiVersion: v1
kind: ServiceAccount
metadata:
@ -63,7 +63,7 @@ metadata:
secrets:
- name: default-token-123
`))
tokenSecret := c.newResource(strToUnstructured(`
tokenSecret := cacheTest.newResource(strToUnstructured(`
apiVersion: v1
kind: Secret
metadata:

pkg/cache/settings.go

@ -17,8 +17,7 @@ func NewNoopSettings() *noopSettings {
return &noopSettings{}
}
type noopSettings struct {
}
type noopSettings struct{}
func (f *noopSettings) GetResourceHealth(_ *unstructured.Unstructured) (*health.HealthStatus, error) {
return nil, nil
@ -158,3 +157,29 @@ func SetRetryOptions(maxRetries int32, useBackoff bool, retryFunc ListRetryFunc)
cache.listRetryFunc = retryFunc
}
}
// SetRespectRBAC sets whether the controller's RBAC permissions are respected for list/watch operations
func SetRespectRBAC(respectRBAC int) UpdateSettingsFunc {
return func(cache *clusterCache) {
// if an invalid value is provided, disable RBAC enforcement
if respectRBAC < RespectRbacDisabled || respectRBAC > RespectRbacStrict {
cache.respectRBAC = RespectRbacDisabled
} else {
cache.respectRBAC = respectRBAC
}
}
}
// SetBatchEventsProcessing sets whether watch events are processed in batches
func SetBatchEventsProcessing(batchProcessing bool) UpdateSettingsFunc {
return func(cache *clusterCache) {
cache.batchEventsProcessing = batchProcessing
}
}
// SetEventProcessingInterval sets the interval at which batched events are processed
func SetEventProcessingInterval(interval time.Duration) UpdateSettingsFunc {
return func(cache *clusterCache) {
cache.eventProcessingInterval = interval
}
}


@ -55,3 +55,20 @@ func TestSetWatchResyncTimeout(t *testing.T) {
cache = NewClusterCache(&rest.Config{}, SetWatchResyncTimeout(timeout))
assert.Equal(t, timeout, cache.watchResyncTimeout)
}
func TestSetBatchEventsProcessing(t *testing.T) {
cache := NewClusterCache(&rest.Config{})
assert.False(t, cache.batchEventsProcessing)
cache.Invalidate(SetBatchEventsProcessing(true))
assert.True(t, cache.batchEventsProcessing)
}
func TestSetEventsProcessingInterval(t *testing.T) {
cache := NewClusterCache(&rest.Config{})
assert.Equal(t, defaultEventProcessingInterval, cache.eventProcessingInterval)
interval := 1 * time.Second
cache.Invalidate(SetEventProcessingInterval(interval))
assert.Equal(t, interval, cache.eventProcessingInterval)
}

File diff suppressed because it is too large.

@ -1,8 +1,13 @@
package diff
import (
"context"
"github.com/go-logr/logr"
"k8s.io/klog/v2/klogr"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/managedfields"
"k8s.io/klog/v2/textlogger"
cmdutil "k8s.io/kubectl/pkg/cmd/util"
)
type Option func(*options)
@ -13,13 +18,20 @@ type options struct {
ignoreAggregatedRoles bool
normalizer Normalizer
log logr.Logger
structuredMergeDiff bool
gvkParser *managedfields.GvkParser
manager string
serverSideDiff bool
serverSideDryRunner ServerSideDryRunner
ignoreMutationWebhook bool
}
func applyOptions(opts []Option) options {
o := options{
ignoreAggregatedRoles: false,
ignoreMutationWebhook: true,
normalizer: GetNoopNormalizer(),
log: klogr.New(),
log: textlogger.NewLogger(textlogger.NewConfig()),
}
for _, opt := range opts {
opt(&o)
@ -27,6 +39,37 @@ func applyOptions(opts []Option) options {
return o
}
type KubeApplier interface {
ApplyResource(ctx context.Context, obj *unstructured.Unstructured, dryRunStrategy cmdutil.DryRunStrategy, force, validate, serverSideApply bool, manager string) (string, error)
}
// ServerSideDryRunner defines the contract to run a server-side apply in
// dryrun mode.
type ServerSideDryRunner interface {
Run(ctx context.Context, obj *unstructured.Unstructured, manager string) (string, error)
}
// K8sServerSideDryRunner is the Kubernetes implementation of ServerSideDryRunner.
type K8sServerSideDryRunner struct {
dryrunApplier KubeApplier
}
// NewK8sServerSideDryRunner will instantiate a new K8sServerSideDryRunner with
// the given kubeApplier.
func NewK8sServerSideDryRunner(kubeApplier KubeApplier) *K8sServerSideDryRunner {
return &K8sServerSideDryRunner{
dryrunApplier: kubeApplier,
}
}
// Run will invoke a kubernetes server-side apply with the given
// obj and the given manager in dryrun mode. It returns the predicted live state
// as a JSON string.
func (kdr *K8sServerSideDryRunner) Run(ctx context.Context, obj *unstructured.Unstructured, manager string) (string, error) {
//nolint:wrapcheck // trivial function, don't bother wrapping
return kdr.dryrunApplier.ApplyResource(ctx, obj, cmdutil.DryRunServer, false, false, true, manager)
}
func IgnoreAggregatedRoles(ignore bool) Option {
return func(o *options) {
o.ignoreAggregatedRoles = ignore
@ -44,3 +87,39 @@ func WithLogr(log logr.Logger) Option {
o.log = log
}
}
func WithStructuredMergeDiff(smd bool) Option {
return func(o *options) {
o.structuredMergeDiff = smd
}
}
func WithGVKParser(parser *managedfields.GvkParser) Option {
return func(o *options) {
o.gvkParser = parser
}
}
func WithManager(manager string) Option {
return func(o *options) {
o.manager = manager
}
}
func WithServerSideDiff(ssd bool) Option {
return func(o *options) {
o.serverSideDiff = ssd
}
}
func WithIgnoreMutationWebhook(mw bool) Option {
return func(o *options) {
o.ignoreMutationWebhook = mw
}
}
func WithServerSideDryRunner(ssadr ServerSideDryRunner) Option {
return func(o *options) {
o.serverSideDryRunner = ssadr
}
}
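A hedged sketch wiring the options above into a server-side diff. The Option constructors are from this file; diff.Diff, kubeApplier, gvkParser, and the objects being compared are assumed to come from the surrounding packages and are not shown in this excerpt. The "argocd-controller" manager matches the one used in the testdata below.

	// Sketch only: run a server-side-apply dry run and diff against the live object.
	dryRunner := diff.NewK8sServerSideDryRunner(kubeApplier) // kubeApplier: a diff.KubeApplier implementation, assumed
	res, err := diff.Diff(configObj, liveObj,
		diff.WithServerSideDiff(true),
		diff.WithServerSideDryRunner(dryRunner),
		diff.WithManager("argocd-controller"),
		diff.WithGVKParser(gvkParser),
	)
	if err != nil {
		log.Fatalf("server-side diff failed: %v", err)
	}
	fmt.Println("modified:", res.Modified)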

File diff suppressed because it is too large.

@ -0,0 +1,2 @@
Please check the doc.go file for more details about
how to use and maintain the code in this package.


@ -0,0 +1,47 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldmanager
import (
"bytes"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
)
// EmptyFields represents a set with no paths
// It looks like metav1.Fields{Raw: []byte("{}")}
var EmptyFields = func() metav1.FieldsV1 {
f, err := SetToFields(*fieldpath.NewSet())
if err != nil {
panic("should never happen")
}
return f
}()
// FieldsToSet creates a set paths from an input trie of fields
func FieldsToSet(f metav1.FieldsV1) (s fieldpath.Set, err error) {
err = s.FromJSON(bytes.NewReader(f.Raw))
return s, err
}
// SetToFields creates a trie of fields from an input set of paths
func SetToFields(s fieldpath.Set) (f metav1.FieldsV1, err error) {
f.Raw, err = s.ToJSON()
return f, err
}
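A small round-trip sketch for the two helpers above; the FieldsV1 payload is a hypothetical managed-fields trie.

	// Sketch only, not part of the borrowed file.
	raw := metav1.FieldsV1{Raw: []byte(`{"f:metadata":{"f:labels":{"f:app":{}}}}`)}
	set, err := fieldmanager.FieldsToSet(raw)
	if err != nil {
		panic(err)
	}
	back, err := fieldmanager.SetToFields(set)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(back.Raw)) // the same trie, re-serialized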


@ -0,0 +1,248 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldmanager
import (
"encoding/json"
"fmt"
"sort"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
)
// ManagedInterface groups a fieldpath.ManagedFields together with the timestamps associated with each operation.
type ManagedInterface interface {
// Fields gets the fieldpath.ManagedFields.
Fields() fieldpath.ManagedFields
// Times gets the timestamps associated with each operation.
Times() map[string]*metav1.Time
}
type managedStruct struct {
fields fieldpath.ManagedFields
times map[string]*metav1.Time
}
var _ ManagedInterface = &managedStruct{}
// Fields implements ManagedInterface.
func (m *managedStruct) Fields() fieldpath.ManagedFields {
return m.fields
}
// Times implements ManagedInterface.
func (m *managedStruct) Times() map[string]*metav1.Time {
return m.times
}
// NewEmptyManaged creates an empty ManagedInterface.
func NewEmptyManaged() ManagedInterface {
return NewManaged(fieldpath.ManagedFields{}, map[string]*metav1.Time{})
}
// NewManaged creates a ManagedInterface from a fieldpath.ManagedFields and the timestamps associated with each operation.
func NewManaged(f fieldpath.ManagedFields, t map[string]*metav1.Time) ManagedInterface {
return &managedStruct{
fields: f,
times: t,
}
}
// RemoveObjectManagedFields removes the ManagedFields from the object
// before we merge so that it doesn't appear in the ManagedFields
// recursively.
func RemoveObjectManagedFields(obj runtime.Object) {
accessor, err := meta.Accessor(obj)
if err != nil {
panic(fmt.Sprintf("couldn't get accessor: %v", err))
}
accessor.SetManagedFields(nil)
}
// EncodeObjectManagedFields converts and stores the fieldpathManagedFields into the objects ManagedFields
func EncodeObjectManagedFields(obj runtime.Object, managed ManagedInterface) error {
accessor, err := meta.Accessor(obj)
if err != nil {
panic(fmt.Sprintf("couldn't get accessor: %v", err))
}
encodedManagedFields, err := encodeManagedFields(managed)
if err != nil {
return fmt.Errorf("failed to convert back managed fields to API: %v", err)
}
accessor.SetManagedFields(encodedManagedFields)
return nil
}
// DecodeManagedFields converts ManagedFields from the wire format (api format)
// to the format used by sigs.k8s.io/structured-merge-diff
func DecodeManagedFields(encodedManagedFields []metav1.ManagedFieldsEntry) (ManagedInterface, error) {
managed := managedStruct{}
managed.fields = make(fieldpath.ManagedFields, len(encodedManagedFields))
managed.times = make(map[string]*metav1.Time, len(encodedManagedFields))
for i, encodedVersionedSet := range encodedManagedFields {
switch encodedVersionedSet.Operation {
case metav1.ManagedFieldsOperationApply, metav1.ManagedFieldsOperationUpdate:
default:
return nil, fmt.Errorf("operation must be `Apply` or `Update`")
}
if len(encodedVersionedSet.APIVersion) < 1 {
return nil, fmt.Errorf("apiVersion must not be empty")
}
switch encodedVersionedSet.FieldsType {
case "FieldsV1":
// Valid case.
case "":
return nil, fmt.Errorf("missing fieldsType in managed fields entry %d", i)
default:
return nil, fmt.Errorf("invalid fieldsType %q in managed fields entry %d", encodedVersionedSet.FieldsType, i)
}
manager, err := BuildManagerIdentifier(&encodedVersionedSet)
if err != nil {
return nil, fmt.Errorf("error decoding manager from %v: %v", encodedVersionedSet, err)
}
managed.fields[manager], err = decodeVersionedSet(&encodedVersionedSet)
if err != nil {
return nil, fmt.Errorf("error decoding versioned set from %v: %v", encodedVersionedSet, err)
}
managed.times[manager] = encodedVersionedSet.Time
}
return &managed, nil
}
// BuildManagerIdentifier creates a manager identifier string from a ManagedFieldsEntry
func BuildManagerIdentifier(encodedManager *metav1.ManagedFieldsEntry) (manager string, err error) {
encodedManagerCopy := *encodedManager
// Never include fields type in the manager identifier
encodedManagerCopy.FieldsType = ""
// Never include the fields in the manager identifier
encodedManagerCopy.FieldsV1 = nil
// Never include the time in the manager identifier
encodedManagerCopy.Time = nil
// For appliers, don't include the APIVersion in the manager identifier,
// so it will always have the same manager identifier each time it applied.
if encodedManager.Operation == metav1.ManagedFieldsOperationApply {
encodedManagerCopy.APIVersion = ""
}
// Use the remaining fields to build the manager identifier
b, err := json.Marshal(&encodedManagerCopy)
if err != nil {
return "", fmt.Errorf("error marshalling manager identifier: %v", err)
}
return string(b), nil
}
func decodeVersionedSet(encodedVersionedSet *metav1.ManagedFieldsEntry) (versionedSet fieldpath.VersionedSet, err error) {
fields := EmptyFields
if encodedVersionedSet.FieldsV1 != nil {
fields = *encodedVersionedSet.FieldsV1
}
set, err := FieldsToSet(fields)
if err != nil {
return nil, fmt.Errorf("error decoding set: %v", err)
}
return fieldpath.NewVersionedSet(&set, fieldpath.APIVersion(encodedVersionedSet.APIVersion), encodedVersionedSet.Operation == metav1.ManagedFieldsOperationApply), nil
}
// encodeManagedFields converts ManagedFields from the format used by
// sigs.k8s.io/structured-merge-diff to the wire format (api format)
func encodeManagedFields(managed ManagedInterface) (encodedManagedFields []metav1.ManagedFieldsEntry, err error) {
if len(managed.Fields()) == 0 {
return nil, nil
}
encodedManagedFields = []metav1.ManagedFieldsEntry{}
for manager := range managed.Fields() {
versionedSet := managed.Fields()[manager]
v, err := encodeManagerVersionedSet(manager, versionedSet)
if err != nil {
return nil, fmt.Errorf("error encoding versioned set for %v: %v", manager, err)
}
if t, ok := managed.Times()[manager]; ok {
v.Time = t
}
encodedManagedFields = append(encodedManagedFields, *v)
}
return sortEncodedManagedFields(encodedManagedFields)
}
func sortEncodedManagedFields(encodedManagedFields []metav1.ManagedFieldsEntry) (sortedManagedFields []metav1.ManagedFieldsEntry, err error) {
sort.Slice(encodedManagedFields, func(i, j int) bool {
p, q := encodedManagedFields[i], encodedManagedFields[j]
if p.Operation != q.Operation {
return p.Operation < q.Operation
}
pSeconds, qSeconds := int64(0), int64(0)
if p.Time != nil {
pSeconds = p.Time.Unix()
}
if q.Time != nil {
qSeconds = q.Time.Unix()
}
if pSeconds != qSeconds {
return pSeconds < qSeconds
}
if p.Manager != q.Manager {
return p.Manager < q.Manager
}
if p.APIVersion != q.APIVersion {
return p.APIVersion < q.APIVersion
}
return p.Subresource < q.Subresource
})
return encodedManagedFields, nil
}
func encodeManagerVersionedSet(manager string, versionedSet fieldpath.VersionedSet) (encodedVersionedSet *metav1.ManagedFieldsEntry, err error) {
encodedVersionedSet = &metav1.ManagedFieldsEntry{}
// Get as many fields as we can from the manager identifier
err = json.Unmarshal([]byte(manager), encodedVersionedSet)
if err != nil {
return nil, fmt.Errorf("error unmarshalling manager identifier %v: %v", manager, err)
}
// Get the APIVersion, Operation, and Fields from the VersionedSet
encodedVersionedSet.APIVersion = string(versionedSet.APIVersion())
if versionedSet.Applied() {
encodedVersionedSet.Operation = metav1.ManagedFieldsOperationApply
}
encodedVersionedSet.FieldsType = "FieldsV1"
fields, err := SetToFields(*versionedSet.Set())
if err != nil {
return nil, fmt.Errorf("error encoding set: %v", err)
}
encodedVersionedSet.FieldsV1 = &fields
return encodedVersionedSet, nil
}


@ -0,0 +1,130 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldmanager
import (
"fmt"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/managedfields"
"k8s.io/kube-openapi/pkg/util/proto"
"sigs.k8s.io/structured-merge-diff/v4/typed"
"sigs.k8s.io/structured-merge-diff/v4/value"
)
// TypeConverter allows you to convert from runtime.Object to
// typed.TypedValue and the other way around.
type TypeConverter interface {
ObjectToTyped(runtime.Object) (*typed.TypedValue, error)
TypedToObject(*typed.TypedValue) (runtime.Object, error)
}
// DeducedTypeConverter is a TypeConverter for CRDs that don't have a
// schema. It does implement the same interface though (and create the
// same types of objects), so that everything can still work the same.
// CRDs are merged with all their fields being "atomic" (lists
// included).
//
// Note that this is not going to be sufficient for converting to/from
// CRDs that have a schema defined (we don't support that schema yet).
// TODO(jennybuckley): Use the schema provided by a CRD if it exists.
type DeducedTypeConverter struct{}
var _ TypeConverter = DeducedTypeConverter{}
// ObjectToTyped converts an object into a TypedValue with a "deduced type".
func (DeducedTypeConverter) ObjectToTyped(obj runtime.Object) (*typed.TypedValue, error) {
switch o := obj.(type) {
case *unstructured.Unstructured:
return typed.DeducedParseableType.FromUnstructured(o.UnstructuredContent())
default:
return typed.DeducedParseableType.FromStructured(obj)
}
}
// TypedToObject transforms the typed value into a runtime.Object. That
// is not specific to deduced type.
func (DeducedTypeConverter) TypedToObject(value *typed.TypedValue) (runtime.Object, error) {
return valueToObject(value.AsValue())
}
type typeConverter struct {
parser *managedfields.GvkParser
}
var _ TypeConverter = &typeConverter{}
// NewTypeConverter builds a TypeConverter from a proto.Models. This
// will automatically find the proper version of the object, and the
// corresponding schema information.
func NewTypeConverter(models proto.Models, preserveUnknownFields bool) (TypeConverter, error) {
parser, err := managedfields.NewGVKParser(models, preserveUnknownFields)
if err != nil {
return nil, err
}
return &typeConverter{parser: parser}, nil
}
func (c *typeConverter) ObjectToTyped(obj runtime.Object) (*typed.TypedValue, error) {
gvk := obj.GetObjectKind().GroupVersionKind()
t := c.parser.Type(gvk)
if t == nil {
return nil, newNoCorrespondingTypeError(gvk)
}
switch o := obj.(type) {
case *unstructured.Unstructured:
return t.FromUnstructured(o.UnstructuredContent())
default:
return t.FromStructured(obj)
}
}
func (c *typeConverter) TypedToObject(value *typed.TypedValue) (runtime.Object, error) {
return valueToObject(value.AsValue())
}
func valueToObject(val value.Value) (runtime.Object, error) {
vu := val.Unstructured()
switch o := vu.(type) {
case map[string]any:
return &unstructured.Unstructured{Object: o}, nil
default:
return nil, fmt.Errorf("failed to convert value to unstructured for type %T", vu)
}
}
type noCorrespondingTypeErr struct {
gvk schema.GroupVersionKind
}
func newNoCorrespondingTypeError(gvk schema.GroupVersionKind) error {
return &noCorrespondingTypeErr{gvk: gvk}
}
func (k *noCorrespondingTypeErr) Error() string {
return fmt.Sprintf("no corresponding type for %v", k.gvk)
}
func isNoCorrespondingTypeError(err error) bool {
if err == nil {
return false
}
_, ok := err.(*noCorrespondingTypeErr)
return ok
}
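A hedged sketch of the converter in use; obtaining proto.Models from an OpenAPI document (such as the one embedded in the testdata package below) is assumed to happen elsewhere.

	// Sketch only, not part of the borrowed file.
	tc, err := fieldmanager.NewTypeConverter(models, false) // models: proto.Models, assumed
	if err != nil {
		panic(err)
	}
	typedLive, err := tc.ObjectToTyped(liveObj) // liveObj: *unstructured.Unstructured, assumed
	if err != nil {
		panic(err)
	}
	_ = typedLive // ready for structured-merge-diff operations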


@ -0,0 +1,101 @@
/*
Copyright 2018 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package fieldmanager
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/structured-merge-diff/v4/fieldpath"
"sigs.k8s.io/structured-merge-diff/v4/merge"
"sigs.k8s.io/structured-merge-diff/v4/typed"
)
// versionConverter is an implementation of
// sigs.k8s.io/structured-merge-diff/merge.Converter
type versionConverter struct {
typeConverter TypeConverter
objectConvertor runtime.ObjectConvertor
hubGetter func(from schema.GroupVersion) schema.GroupVersion
}
var _ merge.Converter = &versionConverter{}
// NewVersionConverter builds a VersionConverter from a TypeConverter and an ObjectConvertor.
func newVersionConverter(t TypeConverter, o runtime.ObjectConvertor, h schema.GroupVersion) merge.Converter {
return &versionConverter{
typeConverter: t,
objectConvertor: o,
hubGetter: func(from schema.GroupVersion) schema.GroupVersion {
return schema.GroupVersion{
Group: from.Group,
Version: h.Version,
}
},
}
}
// NewCRDVersionConverter builds a VersionConverter for CRDs from a TypeConverter and an ObjectConvertor.
func newCRDVersionConverter(t TypeConverter, o runtime.ObjectConvertor, h schema.GroupVersion) merge.Converter {
return &versionConverter{
typeConverter: t,
objectConvertor: o,
hubGetter: func(from schema.GroupVersion) schema.GroupVersion {
return h
},
}
}
// Convert implements sigs.k8s.io/structured-merge-diff/merge.Converter
func (v *versionConverter) Convert(object *typed.TypedValue, version fieldpath.APIVersion) (*typed.TypedValue, error) {
// Convert the smd typed value to a kubernetes object.
objectToConvert, err := v.typeConverter.TypedToObject(object)
if err != nil {
return object, err
}
// Parse the target groupVersion.
groupVersion, err := schema.ParseGroupVersion(string(version))
if err != nil {
return object, err
}
// If attempting to convert to the same version as we already have, just return it.
fromVersion := objectToConvert.GetObjectKind().GroupVersionKind().GroupVersion()
if fromVersion == groupVersion {
return object, nil
}
// Convert to internal
internalObject, err := v.objectConvertor.ConvertToVersion(objectToConvert, v.hubGetter(fromVersion))
if err != nil {
return object, err
}
// Convert the object into the target version
convertedObject, err := v.objectConvertor.ConvertToVersion(internalObject, groupVersion)
if err != nil {
return object, err
}
// Convert the object back to a smd typed value and return it.
return v.typeConverter.ObjectToTyped(convertedObject)
}
// IsMissingVersionError
func (v *versionConverter) IsMissingVersionError(err error) bool {
return runtime.IsNotRegisteredError(err) || isNoCorrespondingTypeError(err)
}


@ -0,0 +1,25 @@
/*
Package fieldmanager is a special package as its main purpose
is to expose the dependencies required by structured-merge-diff
library to calculate diffs when server-side apply option is enabled.
The dependency tree necessary to have a `merge.Updater` instance
isn't trivial to implement and the strategy used is borrowing a copy
from Kubernetes apiserver codebase in order to expose the required
functionality.
Below there is a list of borrowed files and a reference to which
package/file in Kubernetes they were copied from:
- borrowed_fields.go: k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/internal/fields.go
- borrowed_managedfields.go: k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/internal/managedfields.go
- borrowed_typeconverter.go: k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/typeconverter.go
- borrowed_versionconverter.go: k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/versionconverter.go
In order to keep maintenance as minimal as possible the borrowed
files are verbatim copy from Kubernetes. The private objects that
need to be exposed are wrapped in the wrapper.go file. Updating
the borrowed files should be trivial in most cases but must be done
manually as we have no control over future refactorings Kubernetes
might do.
*/
package fieldmanager


@ -0,0 +1,22 @@
package fieldmanager
/*
In order to keep maintenance as minimal as possible the borrowed
files in this package are verbatim copy from Kubernetes. The
private objects that need to be exposed are wrapped and exposed
in this file.
*/
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/managedfields"
"sigs.k8s.io/structured-merge-diff/v4/merge"
)
// NewVersionConverter will expose the version converter from the
// borrowed private function from k8s apiserver handler.
func NewVersionConverter(gvkParser *managedfields.GvkParser, o runtime.ObjectConvertor, h schema.GroupVersion) merge.Converter {
tc := &typeConverter{parser: gvkParser}
return newVersionConverter(tc, o, h)
}


@ -0,0 +1,58 @@
// Code generated by mockery v2.38.0. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
// ServerSideDryRunner is an autogenerated mock type for the ServerSideDryRunner type
type ServerSideDryRunner struct {
mock.Mock
}
// Run provides a mock function with given fields: ctx, obj, manager
func (_m *ServerSideDryRunner) Run(ctx context.Context, obj *unstructured.Unstructured, manager string) (string, error) {
ret := _m.Called(ctx, obj, manager)
if len(ret) == 0 {
panic("no return value specified for Run")
}
var r0 string
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *unstructured.Unstructured, string) (string, error)); ok {
return rf(ctx, obj, manager)
}
if rf, ok := ret.Get(0).(func(context.Context, *unstructured.Unstructured, string) string); ok {
r0 = rf(ctx, obj, manager)
} else {
r0 = ret.Get(0).(string)
}
if rf, ok := ret.Get(1).(func(context.Context, *unstructured.Unstructured, string) error); ok {
r1 = rf(ctx, obj, manager)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// NewServerSideDryRunner creates a new instance of ServerSideDryRunner. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewServerSideDryRunner(t interface {
mock.TestingT
Cleanup(func())
}) *ServerSideDryRunner {
mock := &ServerSideDryRunner{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
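An illustrative test fragment for this mock; obj and the predicted-live JSON are placeholder values (testify's mock, require, and assert packages are assumed imports).

	// Sketch only, not part of this changeset.
	dryRunner := mocks.NewServerSideDryRunner(t)
	dryRunner.On("Run", mock.Anything, mock.Anything, "argocd-controller").
		Return(`{"apiVersion":"v1","kind":"Service"}`, nil)

	predicted, err := dryRunner.Run(context.Background(), obj, "argocd-controller")
	require.NoError(t, err)
	assert.JSONEq(t, `{"apiVersion":"v1","kind":"Service"}`, predicted)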

pkg/diff/testdata/data.go

@ -0,0 +1,80 @@
package testdata
import _ "embed"
var (
//go:embed smd-service-config.yaml
ServiceConfigYAML string
//go:embed smd-service-live.yaml
ServiceLiveYAML string
//go:embed smd-service-config-2-ports.yaml
ServiceConfigWith2Ports string
//go:embed smd-service-live-with-type.yaml
LiveServiceWithTypeYAML string
//go:embed smd-service-config-ports.yaml
ServiceConfigWithSamePortsYAML string
//go:embed smd-deploy-live.yaml
DeploymentLiveYAML string
//go:embed smd-deploy-config.yaml
DeploymentConfigYAML string
//go:embed smd-deploy2-live.yaml
Deployment2LiveYAML string
//go:embed smd-deploy2-config.yaml
Deployment2ConfigYAML string
//go:embed smd-deploy2-predicted-live.json
Deployment2PredictedLiveJSONSSD string
// OpenAPIV2Doc is a binary representation of the openapi
// document available in a given k8s instance. To update
// this file the following commands can be executed:
// kubectl proxy --port=7777 &
// curl -s -H Accept:application/com.github.proto-openapi.spec.v2@v1.0+protobuf http://localhost:7777/openapi/v2 > openapiv2.bin
//
//go:embed openapiv2.bin
OpenAPIV2Doc []byte
//go:embed ssd-service-config.yaml
ServiceConfigYAMLSSD string
//go:embed ssd-service-live.yaml
ServiceLiveYAMLSSD string
//go:embed ssd-service-predicted-live.json
ServicePredictedLiveJSONSSD string
//go:embed ssd-deploy-nested-config.yaml
DeploymentNestedConfigYAMLSSD string
//go:embed ssd-deploy-nested-live.yaml
DeploymentNestedLiveYAMLSSD string
//go:embed ssd-deploy-nested-predicted-live.json
DeploymentNestedPredictedLiveJSONSSD string
//go:embed ssd-deploy-with-manual-apply-config.yaml
DeploymentApplyConfigYAMLSSD string
//go:embed ssd-deploy-with-manual-apply-live.yaml
DeploymentApplyLiveYAMLSSD string
//go:embed ssd-deploy-with-manual-apply-predicted-live.json
DeploymentApplyPredictedLiveJSONSSD string
//go:embed ssd-svc-label-live.yaml
ServiceLiveLabelYAMLSSD string
//go:embed ssd-svc-no-label-config.yaml
ServiceConfigNoLabelYAMLSSD string
//go:embed ssd-svc-no-label-predicted-live.json
ServicePredictedLiveNoLabelJSONSSD string
)
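
A hedged sketch of how these embedded fixtures are typically consumed: unmarshal a config/live pair into unstructured objects and hand them to the diff package. The test name is hypothetical and the diff.Diff call mentioned in the comment is assumed from the package under test:

package testdata_test

import (
	"testing"

	"github.com/stretchr/testify/require"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"

	"github.com/argoproj/gitops-engine/pkg/diff/testdata"
)

// Hypothetical example: load a config/live fixture pair into unstructured objects.
func TestLoadServiceFixtures(t *testing.T) {
	var config, live unstructured.Unstructured
	require.NoError(t, yaml.Unmarshal([]byte(testdata.ServiceConfigYAML), &config))
	require.NoError(t, yaml.Unmarshal([]byte(testdata.ServiceLiveYAML), &live))
	// The pair is then fed to the diff package, e.g. diff.Diff(&config, &live, ...)
	// in the real tests (call name assumed, not shown in this excerpt).
	require.Equal(t, "Service", live.GetKind())
}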

72907
pkg/diff/testdata/openapiv2.bin vendored Normal file

File diff suppressed because it is too large

View File

@ -0,0 +1,33 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: missing
applications.argoproj.io/app-name: nginx
something-else: bla
name: nginx-deployment
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
applications.argoproj.io/app-name: nginx
spec:
containers:
- image: 'nginx:1.23.1'
imagePullPolicy: Never
livenessProbe:
exec:
command:
- cat
- non-existent-file
initialDelaySeconds: 5
periodSeconds: 180
name: nginx
ports:
- containerPort: 80

149
pkg/diff/testdata/smd-deploy-live.yaml vendored Normal file
View File

@ -0,0 +1,149 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: '1'
creationTimestamp: '2022-09-18T23:50:25Z'
generation: 1
labels:
app: missing
applications.argoproj.io/app-name: nginx
something-else: bla
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:labels':
'f:app': {}
'f:applications.argoproj.io/app-name': {}
'f:something-else': {}
'f:spec':
'f:replicas': {}
'f:selector': {}
'f:template':
'f:metadata':
'f:labels':
'f:app': {}
'f:applications.argoproj.io/app-name': {}
'f:spec':
'f:containers':
'k:{"name":"nginx"}':
.: {}
'f:image': {}
'f:imagePullPolicy': {}
'f:livenessProbe':
'f:exec':
'f:command': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:name': {}
'f:ports':
'k:{"containerPort":80,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
manager: argocd-controller
operation: Apply
time: '2022-09-18T23:50:25Z'
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:deployment.kubernetes.io/revision': {}
'f:status':
'f:availableReplicas': {}
'f:conditions':
.: {}
'k:{"type":"Available"}':
.: {}
'f:lastTransitionTime': {}
'f:lastUpdateTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'k:{"type":"Progressing"}':
.: {}
'f:lastTransitionTime': {}
'f:lastUpdateTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'f:observedGeneration': {}
'f:readyReplicas': {}
'f:replicas': {}
'f:updatedReplicas': {}
manager: kube-controller-manager
operation: Update
subresource: status
time: '2022-09-23T18:30:59Z'
name: nginx-deployment
namespace: default
resourceVersion: '7492752'
uid: 731f7434-d3d9-47fa-b179-d9368a84f7c9
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
applications.argoproj.io/app-name: nginx
spec:
containers:
- image: 'nginx:1.23.1'
imagePullPolicy: Never
livenessProbe:
exec:
command:
- cat
- non-existent-file
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 180
successThreshold: 1
timeoutSeconds: 1
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
conditions:
- lastTransitionTime: '2022-09-18T23:50:25Z'
lastUpdateTime: '2022-09-18T23:50:26Z'
message: ReplicaSet "nginx-deployment-6d68ff5f86" has successfully progressed.
reason: NewReplicaSetAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2022-09-23T18:30:59Z'
lastUpdateTime: '2022-09-23T18:30:59Z'
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: 'True'
type: Available
observedGeneration: 1
readyReplicas: 2
replicas: 2
updatedReplicas: 2

View File

@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: missing
applications.argoproj.io/app-name: nginx
something-else: bla
name: nginx-deployment
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
applications.argoproj.io/app-name: nginx
spec:
containers:
- image: 'nginx:1.23.1'
imagePullPolicy: Never
livenessProbe:
exec:
command:
- cat
- non-existent-file
initialDelaySeconds: 5
periodSeconds: 180
name: nginx
ports:
- containerPort: 8081
protocol: UDP
- containerPort: 80
protocol: TCP

161
pkg/diff/testdata/smd-deploy2-live.yaml vendored Normal file
View File

@ -0,0 +1,161 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: '1'
creationTimestamp: '2022-09-18T23:50:25Z'
generation: 1
labels:
app: missing
applications.argoproj.io/app-name: nginx
something-else: bla
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:labels':
'f:app': {}
'f:applications.argoproj.io/app-name': {}
'f:something-else': {}
'f:spec':
'f:replicas': {}
'f:selector': {}
'f:template':
'f:metadata':
'f:labels':
'f:app': {}
'f:applications.argoproj.io/app-name': {}
'f:spec':
'f:containers':
'k:{"name":"nginx"}':
.: {}
'f:image': {}
'f:imagePullPolicy': {}
'f:livenessProbe':
'f:exec':
'f:command': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:name': {}
'f:ports':
'k:{"containerPort":80,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:protocol': {}
'f:resources':
'f:requests':
'f:cpu': {}
'f:memory': {}
manager: argocd-controller
operation: Apply
time: '2022-09-18T23:50:25Z'
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:deployment.kubernetes.io/revision': {}
'f:status':
'f:availableReplicas': {}
'f:conditions':
.: {}
'k:{"type":"Available"}':
.: {}
'f:lastTransitionTime': {}
'f:lastUpdateTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'k:{"type":"Progressing"}':
.: {}
'f:lastTransitionTime': {}
'f:lastUpdateTime': {}
'f:message': {}
'f:reason': {}
'f:status': {}
'f:type': {}
'f:observedGeneration': {}
'f:readyReplicas': {}
'f:replicas': {}
'f:updatedReplicas': {}
manager: kube-controller-manager
operation: Update
subresource: status
time: '2022-09-23T18:30:59Z'
name: nginx-deployment
namespace: default
resourceVersion: '7492752'
uid: 731f7434-d3d9-47fa-b179-d9368a84f7c9
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
applications.argoproj.io/app-name: nginx
spec:
containers:
- image: 'nginx:1.23.1'
imagePullPolicy: Never
livenessProbe:
exec:
command:
- cat
- non-existent-file
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 180
successThreshold: 1
timeoutSeconds: 1
name: nginx
ports:
- containerPort: 80
protocol: TCP
- containerPort: 8080
protocol: TCP
- containerPort: 8081
protocol: UDP
resources:
requests:
memory: 512Mi
cpu: 500m
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
conditions:
- lastTransitionTime: '2022-09-18T23:50:25Z'
lastUpdateTime: '2022-09-18T23:50:26Z'
message: ReplicaSet "nginx-deployment-6d68ff5f86" has successfully progressed.
reason: NewReplicaSetAvailable
status: 'True'
type: Progressing
- lastTransitionTime: '2022-09-23T18:30:59Z'
lastUpdateTime: '2022-09-23T18:30:59Z'
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: 'True'
type: Available
observedGeneration: 1
readyReplicas: 2
replicas: 2
updatedReplicas: 2

View File

@ -0,0 +1,124 @@
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"labels": {
"app": "missing",
"applications.argoproj.io/app-name": "nginx",
"something-else": "bla"
},
"name": "nginx-deployment",
"namespace": "default",
"managedFields": [
{
"apiVersion": "apps/v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:labels": {
"f:app": {},
"f:applications.argoproj.io/app-name": {},
"f:something-else": {}
}
},
"f:spec": {
"f:replicas": {},
"f:selector": {},
"f:template": {
"f:metadata": {
"f:labels": {
"f:app": {},
"f:applications.argoproj.io/app-name": {}
}
},
"f:spec": {
"f:containers": {
"k:{\"name\":\"nginx\"}": {
".": {},
"f:image": {},
"f:imagePullPolicy": {},
"f:livenessProbe": {
"f:exec": {
"f:command": {}
},
"f:initialDelaySeconds": {},
"f:periodSeconds": {}
},
"f:name": {},
"f:ports": {
"k:{\"containerPort\":80,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:protocol": {}
}
},
"f:resources": {
"f:requests": {
"f:cpu": {},
"f:memory": {}
}
}
}
}
}
}
}
},
"manager": "argocd-controller",
"operation": "Apply",
"time": "2022-09-18T23:50:25Z"
}
]
},
"spec": {
"replicas": 2,
"selector": {
"matchLabels": {
"app": "nginx"
}
},
"template": {
"metadata": {
"labels": {
"app": "nginx",
"applications.argoproj.io/app-name": "nginx"
}
},
"spec": {
"containers": [
{
"image": "nginx:1.23.1",
"imagePullPolicy": "Never",
"livenessProbe": {
"exec": {
"command": [
"cat",
"non-existent-file"
]
},
"initialDelaySeconds": 5,
"periodSeconds": 180
},
"name": "nginx",
"ports": [
{
"containerPort": 8081,
"protocol": "UDP"
},
{
"containerPort": 80,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"memory": "512Mi",
"cpu": "500m"
}
}
}
]
}
}
}
}

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
annotations:
argocd.argoproj.io/sync-options: ServerSideApply=true
labels:
app.kubernetes.io/instance: big-crd
name: multiple-protocol-port-svc
namespace: default
spec:
ports:
- name: rtmpk
port: 1986
protocol: UDP
targetPort: 1986
- name: rtmp
port: 1935
targetPort: 1935

View File

@ -0,0 +1,29 @@
apiVersion: v1
kind: Service
metadata:
annotations:
argocd.argoproj.io/sync-options: ServerSideApply=true
labels:
app.kubernetes.io/instance: big-crd
name: multiple-protocol-port-svc
namespace: default
spec:
ports:
- name: rtmpk
port: 1986
protocol: UDP
targetPort: 1986
- name: rtmp
port: 1935
targetPort: 1935
- name: rtmpq
port: 1935
protocol: UDP
targetPort: 1935
- name: https
port: 443
targetPort: 443
- name: http3
port: 443
protocol: UDP
targetPort: 443

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Service
metadata:
annotations:
argocd.argoproj.io/sync-options: ServerSideApply=true
labels:
app.kubernetes.io/instance: big-crd
name: multiple-protocol-port-svc
namespace: default
spec:
ports:
- name: rtmpk
port: 1986
protocol: UDP
targetPort: 1986
- name: rtmp
port: 1935
targetPort: 1936
- name: https
port: 443
targetPort: 443

View File

@ -0,0 +1,110 @@
apiVersion: v1
kind: Service
metadata:
annotations:
argocd.argoproj.io/sync-options: ServerSideApply=true
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"argocd.argoproj.io/sync-options":"ServerSideApply=true"},"name":"multiple-protocol-port-svc","namespace":"default"},"spec":{"ports":[{"name":"rtmpk","port":1986,"protocol":"UDP","targetPort":1986},{"name":"rtmp","port":1935,"protocol":"TCP","targetPort":1935},{"name":"rtmpq","port":1935,"protocol":"UDP","targetPort":1935}]}}
creationTimestamp: '2022-06-24T19:37:02Z'
labels:
app.kubernetes.io/instance: big-crd
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:argocd.argoproj.io/sync-options': {}
'f:labels':
'f:app.kubernetes.io/instance': {}
'f:spec':
'f:ports':
'k:{"port":1935,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:targetPort': {}
'k:{"port":1986,"protocol":"UDP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":443,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:targetPort': {}
'f:type': {}
manager: argocd-controller
operation: Apply
time: '2022-06-30T16:28:09Z'
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:spec':
'f:internalTrafficPolicy': {}
'f:ports':
.: {}
'k:{"port":1935,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":1986,"protocol":"UDP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'f:sessionAffinity': {}
manager: kubectl-client-side-apply
operation: Update
time: '2022-06-25T04:18:10Z'
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:loadBalancer':
'f:ingress': {}
manager: kube-vpnkit-forwarder
operation: Update
subresource: status
time: '2022-06-29T12:36:34Z'
name: multiple-protocol-port-svc
namespace: default
resourceVersion: '2138591'
uid: af42e800-bd33-4412-bc77-d204d298613d
spec:
clusterIP: 10.111.193.74
clusterIPs:
- 10.111.193.74
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: rtmpk
nodePort: 31648
port: 1986
protocol: UDP
targetPort: 1986
- name: rtmp
nodePort: 30018
port: 1935
protocol: TCP
targetPort: 1935
- name: https
nodePort: 31975
port: 443
protocol: TCP
targetPort: 443
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}

83
pkg/diff/testdata/smd-service-live.yaml vendored Normal file
View File

@ -0,0 +1,83 @@
apiVersion: v1
kind: Service
metadata:
annotations:
argocd.argoproj.io/sync-options: ServerSideApply=true
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"argocd.argoproj.io/sync-options":"ServerSideApply=true"},"name":"multiple-protocol-port-svc","namespace":"default"},"spec":{"ports":[{"name":"rtmpk","port":1986,"protocol":"UDP","targetPort":1986},{"name":"rtmp","port":1935,"targetPort":1935},{"name":"https","port":443,"targetPort":443}]}}
creationTimestamp: '2022-06-24T19:37:02Z'
labels:
app.kubernetes.io/instance: big-crd
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:argocd.argoproj.io/sync-options': {}
'f:labels':
'f:app.kubernetes.io/instance': {}
'f:spec':
'f:ports':
'k:{"port":1935,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:targetPort': {}
'k:{"port":1986,"protocol":"UDP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":443,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:targetPort': {}
manager: argocd-controller
operation: Apply
time: '2022-06-24T19:45:02Z'
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:argocd.argoproj.io/sync-options': {}
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:spec':
'f:internalTrafficPolicy': {}
'f:sessionAffinity': {}
'f:type': {}
manager: kubectl-client-side-apply
operation: Update
time: '2022-06-24T19:37:02Z'
name: multiple-protocol-port-svc
namespace: default
resourceVersion: '1825080'
uid: af42e800-bd33-4412-bc77-d204d298613d
spec:
clusterIP: 10.111.193.74
clusterIPs:
- 10.111.193.74
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: rtmpk
port: 1986
protocol: UDP
targetPort: 1986
- name: rtmp
port: 1935
protocol: TCP
targetPort: 1935
- name: https
port: 443
protocol: TCP
targetPort: 443
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}

View File

@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nested-test-deployment
namespace: default
labels:
app: nested-test
applications.argoproj.io/app-name: nested-app
spec:
replicas: 1
selector:
matchLabels:
app: nested-test
template:
metadata:
labels:
app: nested-test
spec:
automountServiceAccountToken: false
containers:
- name: main-container
image: 'nginx:latest'
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
env:
- name: ENV_VAR1
value: "value1"
- name: ENV_VAR2
value: "value2"
resources:
limits:
memory: 100Mi

View File

@ -0,0 +1,70 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: nested-test-deployment
namespace: default
labels:
app: nested-test
applications.argoproj.io/app-name: nested-app
annotations:
deployment.kubernetes.io/revision: '1'
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:app: {}
f:applications.argoproj.io/app-name: {}
f:spec:
f:replicas: {}
f:selector: {}
f:template:
f:metadata:
f:labels:
f:app: {}
f:spec:
f:containers:
k:{"name":"main-container"}:
.: {}
f:image: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":80,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
f:env:
.: {}
k:{"name":"ENV_VAR1"}:
.: {}
f:name: {}
f:value: {}
manager: argocd-controller
operation: Apply
spec:
replicas: 1
selector:
matchLabels:
app: nested-test
template:
metadata:
labels:
app: nested-test
spec:
automountServiceAccountToken: false
containers:
- name: main-container
image: 'nginx:latest'
ports:
- containerPort: 80
name: http
protocol: TCP
env:
- name: ENV_VAR1
value: "value1"
resources:
limits:
memory: "100Mi"

View File

@ -0,0 +1,131 @@
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"name": "nested-test-deployment",
"namespace": "default",
"labels": {
"app": "nested-test",
"applications.argoproj.io/app-name": "nested-app"
},
"annotations": {
"deployment.kubernetes.io/revision": "2"
},
"managedFields": [
{
"apiVersion": "apps/v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:labels": {
"f:app": {},
"f:applications.argoproj.io/app-name": {}
}
},
"f:spec": {
"f:replicas": {},
"f:selector": {},
"f:template": {
"f:metadata": {
"f:labels": {
"f:app": {}
}
},
"f:spec": {
"f:containers": {
"k:{\"name\":\"main-container\"}": {
".": {},
"f:image": {},
"f:name": {},
"f:ports": {
".": {},
"k:{\"containerPort\":80,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:name": {},
"f:protocol": {}
},
"k:{\"containerPort\":443,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:name": {},
"f:protocol": {}
}
},
"f:env": {
".": {},
"k:{\"name\":\"ENV_VAR1\"}": {
".": {},
"f:name": {},
"f:value": {}
},
"k:{\"name\":\"ENV_VAR2\"}": {
".": {},
"f:name": {},
"f:value": {}
}
}
}
}
}
}
}
},
"manager": "argocd-controller",
"operation": "Apply",
"time": "2023-12-19T00:00:00Z"
}
]
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "nested-test"
}
},
"template": {
"metadata": {
"labels": {
"app": "nested-test"
}
},
"spec": {
"automountServiceAccountToken": false,
"containers": [
{
"name": "main-container",
"image": "nginx:latest",
"ports": [
{
"containerPort": 80,
"name": "http",
"protocol": "TCP"
},
{
"containerPort": 443,
"name": "https",
"protocol": "TCP"
}
],
"env": [
{
"name": "ENV_VAR1",
"value": "value1"
},
{
"name": "ENV_VAR2",
"value": "value2"
}
],
"resources": {
"limits": {
"memory": "100Mi"
}
}
}
]
}
}
}
}

View File

@ -0,0 +1,30 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: manual-apply-test-deployment
namespace: default
labels:
app: manual-apply-app
applications.argoproj.io/app-name: manual-apply-app
spec:
replicas: 1
selector:
matchLabels:
app: manual-apply-test
template:
metadata:
labels:
app: manual-apply-test
spec:
automountServiceAccountToken: false
containers:
- name: main-container
image: 'nginx:latest'
ports:
- containerPort: 80
name: http
- containerPort: 40
name: https
resources:
limits:
memory: "100Mi"

View File

@ -0,0 +1,181 @@
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "4"
creationTimestamp: "2025-02-25T00:20:45Z"
generation: 4
labels:
app: manual-apply-app
applications.argoproj.io/app-name: manual-apply-app
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations: {}
f:labels:
.: {}
f:app: {}
f:applications.argoproj.io/app-name: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app: {}
f:spec:
f:automountServiceAccountToken: {}
f:containers:
k:{"name":"main-container"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":80,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
f:resources:
.: {}
f:limits:
.: {}
f:memory: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: argocd-controller
operation: Update
time: "2025-02-25T01:19:32Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:template:
f:spec:
f:containers:
k:{"name":"idle"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":8080,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:name: {}
f:protocol: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
manager: kubectl-client-side-apply
operation: Update
time: "2025-02-25T01:29:34Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
subresource: status
time: "2025-02-25T01:29:44Z"
name: manual-apply-test-deployment
namespace: default
resourceVersion: "46835"
uid: c2ff066f-cbbd-408d-a015-85f1b6332193
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: manual-apply-test
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: manual-apply-test
spec:
automountServiceAccountToken: false
containers:
- image: nginx:latest
imagePullPolicy: Always
name: main-container
ports:
- containerPort: 80
name: http
protocol: TCP
resources:
limits:
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- image: spurin/idle:latest
imagePullPolicy: Always
name: idle
ports:
- containerPort: 8080
name: web
protocol: TCP
resources:
limits:
memory: 100Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30

View File

@ -0,0 +1,310 @@
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "4",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"manual-apply-app\",\"applications.argoproj.io/app-name\":\"manual-apply-app\"},\"name\":\"manual-apply-test-deployment\",\"namespace\":\"default\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"app\":\"manual-apply-test\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"manual-apply-test\"}},\"spec\":{\"automountServiceAccountToken\":false,\"containers\":[{\"image\":\"nginx:latest\",\"name\":\"main-container\",\"ports\":[{\"containerPort\":80,\"name\":\"http\"}],\"resources\":{\"limits\":{\"memory\":\"100Mi\"}}},{\"image\":\"spurin/idle:latest\",\"name\":\"idle\",\"ports\":[{\"containerPort\":8080,\"name\":\"web\",\"protocol\":\"TCP\"}]}]}}}}\n"
},
"creationTimestamp": "2025-02-25T00:20:45Z",
"generation": 5,
"labels": {
"app": "manual-apply-app",
"applications.argoproj.io/app-name": "manual-apply-app",
"mutation-test": "FROM-MUTATION-WEBHOOK"
},
"managedFields": [
{
"apiVersion": "apps/v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:labels": {
"f:app": {},
"f:applications.argoproj.io/app-name": {}
}
},
"f:spec": {
"f:replicas": {},
"f:selector": {},
"f:template": {
"f:metadata": {
"f:labels": {
"f:app": {}
}
},
"f:spec": {
"f:automountServiceAccountToken": {},
"f:containers": {
"k:{\"name\":\"main-container\"}": {
".": {},
"f:image": {},
"f:name": {},
"f:ports": {
"k:{\"containerPort\":40,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:name": {}
},
"k:{\"containerPort\":80,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:name": {}
}
},
"f:resources": {
"f:limits": {
"f:memory": {}
}
}
}
}
}
}
}
},
"manager": "argocd-controller",
"operation": "Apply",
"time": "2025-02-25T01:31:03Z"
},
{
"apiVersion": "apps/v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:annotations": {},
"f:labels": {
".": {},
"f:app": {},
"f:applications.argoproj.io/app-name": {}
}
},
"f:spec": {
"f:progressDeadlineSeconds": {},
"f:replicas": {},
"f:revisionHistoryLimit": {},
"f:selector": {},
"f:strategy": {
"f:rollingUpdate": {
".": {},
"f:maxSurge": {},
"f:maxUnavailable": {}
},
"f:type": {}
},
"f:template": {
"f:metadata": {
"f:labels": {
".": {},
"f:app": {}
}
},
"f:spec": {
"f:automountServiceAccountToken": {},
"f:containers": {
"k:{\"name\":\"main-container\"}": {
".": {},
"f:image": {},
"f:imagePullPolicy": {},
"f:name": {},
"f:ports": {
".": {},
"k:{\"containerPort\":80,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:name": {},
"f:protocol": {}
}
},
"f:resources": {
".": {},
"f:limits": {
".": {},
"f:memory": {}
}
},
"f:terminationMessagePath": {},
"f:terminationMessagePolicy": {}
}
},
"f:dnsPolicy": {},
"f:restartPolicy": {},
"f:schedulerName": {},
"f:securityContext": {},
"f:terminationGracePeriodSeconds": {}
}
}
}
},
"manager": "argocd-controller",
"operation": "Update",
"time": "2025-02-25T01:19:32Z"
},
{
"apiVersion": "apps/v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:annotations": {
"f:kubectl.kubernetes.io/last-applied-configuration": {}
}
},
"f:spec": {
"f:template": {
"f:spec": {
"f:containers": {
"k:{\"name\":\"idle\"}": {
".": {},
"f:image": {},
"f:imagePullPolicy": {},
"f:name": {},
"f:ports": {
".": {},
"k:{\"containerPort\":8080,\"protocol\":\"TCP\"}": {
".": {},
"f:containerPort": {},
"f:name": {},
"f:protocol": {}
}
},
"f:resources": {},
"f:terminationMessagePath": {},
"f:terminationMessagePolicy": {}
}
}
}
}
}
},
"manager": "kubectl-client-side-apply",
"operation": "Update",
"time": "2025-02-25T01:29:34Z"
},
{
"apiVersion": "apps/v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:annotations": {
"f:deployment.kubernetes.io/revision": {}
}
},
"f:status": {
"f:availableReplicas": {},
"f:conditions": {
".": {},
"k:{\"type\":\"Available\"}": {
".": {},
"f:lastTransitionTime": {},
"f:lastUpdateTime": {},
"f:message": {},
"f:reason": {},
"f:status": {},
"f:type": {}
},
"k:{\"type\":\"Progressing\"}": {
".": {},
"f:lastTransitionTime": {},
"f:lastUpdateTime": {},
"f:message": {},
"f:reason": {},
"f:status": {},
"f:type": {}
}
},
"f:observedGeneration": {},
"f:readyReplicas": {},
"f:replicas": {},
"f:updatedReplicas": {}
}
},
"manager": "kube-controller-manager",
"operation": "Update",
"subresource": "status",
"time": "2025-02-25T01:29:44Z"
}
],
"name": "manual-apply-test-deployment",
"namespace": "default",
"resourceVersion": "46835",
"uid": "c2ff066f-cbbd-408d-a015-85f1b6332193"
},
"spec": {
"progressDeadlineSeconds": 600,
"replicas": 1,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"app": "manual-apply-test"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": "25%",
"maxUnavailable": "25%"
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "manual-apply-test"
}
},
"spec": {
"automountServiceAccountToken": false,
"containers": [
{
"image": "nginx:latest",
"imagePullPolicy": "Always",
"name": "main-container",
"ports": [
{
"containerPort": 80,
"name": "http",
"protocol": "TCP"
},
{
"containerPort": 40,
"name": "https",
"protocol": "TCP"
}
],
"resources": {
"limits": {
"memory": "100Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File"
},
{
"image": "spurin/idle:latest",
"imagePullPolicy": "Always",
"name": "idle",
"ports": [
{
"containerPort": 8080,
"name": "web",
"protocol": "TCP"
}
],
"resources": {
"limits": {
"memory": "100Mi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File"
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"terminationGracePeriodSeconds": 30
}
}
}
}

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: httpbin
name: httpbin-svc
namespace: httpbin
spec:
ports:
- name: http-port
port: 7777
targetPort: 80
- name: test
port: 333
selector:
app: httpbin

55
pkg/diff/testdata/ssd-service-live.yaml vendored Normal file
View File

@ -0,0 +1,55 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: '2023-12-18T00:34:22Z'
labels:
app.kubernetes.io/instance: httpbin
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:labels':
'f:app.kubernetes.io/instance': {}
'f:spec':
'f:ports':
'k:{"port":333,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'k:{"port":7777,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:targetPort': {}
'f:selector': {}
manager: argocd-controller
operation: Apply
time: '2023-12-18T00:34:22Z'
name: httpbin-svc
namespace: httpbin
resourceVersion: '2836'
uid: 0e898e6f-c275-476d-9b4f-5e96072cc129
spec:
clusterIP: 10.43.223.115
clusterIPs:
- 10.43.223.115
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http-port
port: 7777
protocol: TCP
targetPort: 80
- name: test
port: 333
protocol: TCP
targetPort: 333
selector:
app: httpbin
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}

View File

@ -0,0 +1,74 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2023-12-18T00:34:22Z",
"labels": {
"event": "FROM-MUTATION-WEBHOOK"
},
"managedFields": [
{
"apiVersion": "v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:spec": {
"f:ports": {
"k:{\"port\":333,\"protocol\":\"TCP\"}": {
".": {},
"f:name": {},
"f:port": {}
},
"k:{\"port\":7777,\"protocol\":\"TCP\"}": {
".": {},
"f:name": {},
"f:port": {},
"f:targetPort": {}
}
},
"f:selector": {}
}
},
"manager": "argocd-controller",
"operation": "Apply",
"time": "2023-12-18T00:38:28Z"
}
],
"name": "httpbin-svc",
"namespace": "httpbin",
"resourceVersion": "2836",
"uid": "0e898e6f-c275-476d-9b4f-5e96072cc129"
},
"spec": {
"clusterIP": "10.43.223.115",
"clusterIPs": [
"10.43.223.115"
],
"internalTrafficPolicy": "Cluster",
"ipFamilies": [
"IPv4"
],
"ipFamilyPolicy": "SingleStack",
"ports": [
{
"name": "http-port",
"port": 7777,
"protocol": "TCP",
"targetPort": 80
},
{
"name": "test",
"port": 333,
"protocol": "TCP",
"targetPort": 333
}
],
"selector": {
"app": "httpbin"
},
"sessionAffinity": "None",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
}

View File

@ -0,0 +1,50 @@
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2025-05-16T19:01:22Z"
labels:
app.kubernetes.io/instance: httpbin
delete-me: delete-value
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:app.kubernetes.io/instance: {}
f:delete-me: {}
f:spec:
f:ports:
k:{"port":7777,"protocol":"TCP"}:
.: {}
f:name: {}
f:port: {}
f:protocol: {}
f:targetPort: {}
f:selector: {}
manager: argocd-controller
operation: Apply
time: "2025-05-16T19:01:22Z"
name: httpbin-svc
namespace: httpbin
resourceVersion: "159005"
uid: 61a7a0c2-d973-4333-bbd6-c06ba1c00190
spec:
clusterIP: 10.96.59.144
clusterIPs:
- 10.96.59.144
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http-port
port: 7777
protocol: TCP
targetPort: 80
selector:
app: httpbin
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: httpbin
name: httpbin-svc
namespace: httpbin
spec:
ports:
- name: http-port
port: 7777
protocol: TCP
targetPort: 80
selector:
app: httpbin

View File

@ -0,0 +1,69 @@
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"creationTimestamp": "2025-05-16T19:01:22Z",
"labels": {
"app.kubernetes.io/instance": "httpbin"
},
"managedFields": [
{
"apiVersion": "v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:labels": {
"f:app.kubernetes.io/instance": {}
}
},
"f:spec": {
"f:ports": {
"k:{\"port\":7777,\"protocol\":\"TCP\"}": {
".": {},
"f:name": {},
"f:port": {},
"f:protocol": {},
"f:targetPort": {}
}
},
"f:selector": {}
}
},
"manager": "argocd-controller",
"operation": "Apply",
"time": "2025-05-16T19:02:57Z"
}
],
"name": "httpbin-svc",
"namespace": "httpbin",
"resourceVersion": "159005",
"uid": "61a7a0c2-d973-4333-bbd6-c06ba1c00190"
},
"spec": {
"clusterIP": "10.96.59.144",
"clusterIPs": [
"10.96.59.144"
],
"internalTrafficPolicy": "Cluster",
"ipFamilies": [
"IPv4"
],
"ipFamilyPolicy": "SingleStack",
"ports": [
{
"name": "http-port",
"port": 7777,
"protocol": "TCP",
"targetPort": 80
}
],
"selector": {
"app": "httpbin"
},
"sessionAffinity": "None",
"type": "ClusterIP"
},
"status": {
"loadBalancer": {}
}
}

View File

@ -59,7 +59,7 @@ func NewEngine(config *rest.Config, clusterCache cache.ClusterCache, opts ...Opt
func (e *gitOpsEngine) Run() (StopFunc, error) {
err := e.cache.EnsureSynced()
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to ensure the cache is synced: %w", err)
}
return func() {
@ -76,23 +76,23 @@ func (e *gitOpsEngine) Sync(ctx context.Context,
) ([]common.ResourceSyncResult, error) {
managedResources, err := e.cache.GetManagedLiveObjs(resources, isManaged)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to get managed live objects: %w", err)
}
result := sync.Reconcile(resources, managedResources, namespace, e.cache)
diffRes, err := diff.DiffArray(result.Target, result.Live, diff.WithLogr(e.log))
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to diff objects: %w", err)
}
opts = append(opts, sync.WithSkipHooks(!diffRes.Modified))
syncCtx, cleanup, err := sync.NewSyncContext(revision, result, e.config, e.config, e.kubectl, namespace, e.cache.GetOpenAPISchema(), opts...)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to create sync context: %w", err)
}
defer cleanup()
resUpdated := make(chan bool)
resIgnore := make(chan struct{})
unsubscribe := e.cache.OnResourceUpdated(func(newRes *cache.Resource, oldRes *cache.Resource, namespaceResources map[kube.ResourceKey]*cache.Resource) {
unsubscribe := e.cache.OnResourceUpdated(func(newRes *cache.Resource, oldRes *cache.Resource, _ map[kube.ResourceKey]*cache.Resource) {
var key kube.ResourceKey
if newRes != nil {
key = newRes.ResourceKey()
@ -120,6 +120,7 @@ func (e *gitOpsEngine) Sync(ctx context.Context,
select {
case <-ctx.Done():
syncCtx.Terminate()
//nolint:wrapcheck // don't wrap context errors
return resources, ctx.Err()
case <-time.After(operationRefreshTimeout):
case <-resUpdated:
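
These hunks switch bare `return nil, err` sites to fmt.Errorf with %w, so callers keep the original cause. A small self-contained sketch (all names are illustrative) of what that buys:

package main

import (
	"context"
	"errors"
	"fmt"
)

// ensureSynced is a stand-in for a cache sync that was cancelled.
func ensureSynced(ctx context.Context) error {
	return ctx.Err()
}

func run(ctx context.Context) error {
	if err := ensureSynced(ctx); err != nil {
		// %w wraps instead of flattening, mirroring the change above.
		return fmt.Errorf("failed to ensure the cache is synced: %w", err)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	err := run(ctx)
	// The wrapped chain still matches the sentinel.
	fmt.Println(errors.Is(err, context.Canceled)) // true
}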

View File

@ -2,7 +2,7 @@ package engine
import (
"github.com/go-logr/logr"
"k8s.io/klog/v2/klogr"
"k8s.io/klog/v2/textlogger"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/gitops-engine/pkg/utils/tracing"
@ -16,7 +16,7 @@ type options struct {
}
func applyOptions(opts []Option) options {
log := klogr.New()
log := textlogger.NewLogger(textlogger.NewConfig())
o := options{
log: log,
kubectl: &kube.KubectlCmd{
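
The deprecated klogr constructor is replaced by klog's textlogger. A brief sketch of building an equivalent logger with explicit verbosity; the Verbosity option comes from the textlogger package and the value 4 is just an example:

package main

import (
	"k8s.io/klog/v2/textlogger"
)

func main() {
	// Same shape as the new default, but with verbosity raised for debugging.
	cfg := textlogger.NewConfig(textlogger.Verbosity(4))
	log := textlogger.NewLogger(cfg)
	// The resulting logr.Logger can be passed to any option that accepts one,
	// e.g. diff.WithLogr(log) as seen in the engine changes above.
	log.V(4).Info("sync starting", "namespace", "default")
}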

View File

@ -1,9 +1,13 @@
package health
import (
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"fmt"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
// Represents resource health status
@ -64,7 +68,7 @@ func IsWorse(current, new HealthStatusCode) bool {
// GetResourceHealth returns the health of a k8s resource
func GetResourceHealth(obj *unstructured.Unstructured, healthOverride HealthOverride) (health *HealthStatus, err error) {
if obj.GetDeletionTimestamp() != nil {
if obj.GetDeletionTimestamp() != nil && !hook.HasHookFinalizer(obj) {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: "Pending deletion",
@ -78,7 +82,7 @@ func GetResourceHealth(obj *unstructured.Unstructured, healthOverride HealthOver
Status: HealthStatusUnknown,
Message: err.Error(),
}
return health, err
return health, fmt.Errorf("failed to get resource health for %s/%s: %w", obj.GetNamespace(), obj.GetName(), err)
}
if health != nil {
return health, nil
@ -94,7 +98,6 @@ func GetResourceHealth(obj *unstructured.Unstructured, healthOverride HealthOver
}
}
return health, err
}
// GetHealthCheckFunc returns built-in health check function or nil if health check is not supported
@ -112,23 +115,19 @@ func GetHealthCheckFunc(gvk schema.GroupVersionKind) func(obj *unstructured.Unst
return getDaemonSetHealth
}
case "extensions":
switch gvk.Kind {
case kube.IngressKind:
if gvk.Kind == kube.IngressKind {
return getIngressHealth
}
case "argoproj.io":
switch gvk.Kind {
case "Workflow":
if gvk.Kind == "Workflow" {
return getArgoWorkflowHealth
}
case "apiregistration.k8s.io":
switch gvk.Kind {
case kube.APIServiceKind:
if gvk.Kind == kube.APIServiceKind {
return getAPIServiceHealth
}
case "networking.k8s.io":
switch gvk.Kind {
case kube.IngressKind:
if gvk.Kind == kube.IngressKind {
return getIngressHealth
}
case "":
@ -141,13 +140,11 @@ func GetHealthCheckFunc(gvk schema.GroupVersionKind) func(obj *unstructured.Unst
return getPodHealth
}
case "batch":
switch gvk.Kind {
case kube.JobKind:
if gvk.Kind == kube.JobKind {
return getJobHealth
}
case "autoscaling":
switch gvk.Kind {
case kube.HorizontalPodAutoscalerKind:
if gvk.Kind == kube.HorizontalPodAutoscalerKind {
return getHPAHealth
}
}
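
The notable behavior change here is the hook-finalizer guard: a resource with a deletion timestamp no longer reports "Pending deletion" while it still carries a hook finalizer. A hedged sketch of calling the exported entry point; the nil override and the ConfigMap object are illustrative only:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/argoproj/gitops-engine/pkg/health"
)

func main() {
	obj := &unstructured.Unstructured{Object: map[string]any{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata":   map[string]any{"name": "demo", "namespace": "default"},
	}}
	// nil means no custom health override; built-in checks are used where they exist.
	hs, err := health.GetResourceHealth(obj, nil)
	if err != nil {
		panic(err)
	}
	if hs != nil {
		fmt.Println(hs.Status, hs.Message)
	} else {
		fmt.Println("no health check registered for this kind")
	}
}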

View File

@ -3,11 +3,12 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
apiregistrationv1beta1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getAPIServiceHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -17,14 +18,14 @@ func getAPIServiceHealth(obj *unstructured.Unstructured) (*HealthStatus, error)
var apiService apiregistrationv1.APIService
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &apiService)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured APIService to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured APIService to typed: %w", err)
}
return getApiregistrationv1APIServiceHealth(&apiService)
case apiregistrationv1beta1.SchemeGroupVersion.WithKind(kube.APIServiceKind):
var apiService apiregistrationv1beta1.APIService
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &apiService)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured APIService to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured APIService to typed: %w", err)
}
return getApiregistrationv1beta1APIServiceHealth(&apiService)
default:
@ -34,19 +35,17 @@ func getAPIServiceHealth(obj *unstructured.Unstructured) (*HealthStatus, error)
func getApiregistrationv1APIServiceHealth(apiservice *apiregistrationv1.APIService) (*HealthStatus, error) {
for _, c := range apiservice.Status.Conditions {
switch c.Type {
case apiregistrationv1.Available:
if c.Type == apiregistrationv1.Available {
if c.Status == apiregistrationv1.ConditionTrue {
return &HealthStatus{
Status: HealthStatusHealthy,
Message: fmt.Sprintf("%s: %s", c.Reason, c.Message),
}, nil
} else {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("%s: %s", c.Reason, c.Message),
}, nil
}
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("%s: %s", c.Reason, c.Message),
}, nil
}
}
return &HealthStatus{
@ -57,19 +56,17 @@ func getApiregistrationv1APIServiceHealth(apiservice *apiregistrationv1.APIServi
func getApiregistrationv1beta1APIServiceHealth(apiservice *apiregistrationv1beta1.APIService) (*HealthStatus, error) {
for _, c := range apiservice.Status.Conditions {
switch c.Type {
case apiregistrationv1beta1.Available:
if c.Type == apiregistrationv1beta1.Available {
if c.Status == apiregistrationv1beta1.ConditionTrue {
return &HealthStatus{
Status: HealthStatusHealthy,
Message: fmt.Sprintf("%s: %s", c.Reason, c.Message),
}, nil
} else {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("%s: %s", c.Reason, c.Message),
}, nil
}
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("%s: %s", c.Reason, c.Message),
}, nil
}
}
return &HealthStatus{

View File

@ -1,6 +1,8 @@
package health
import (
"fmt"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
)
@ -30,7 +32,7 @@ func getArgoWorkflowHealth(obj *unstructured.Unstructured) (*HealthStatus, error
var wf argoWorkflow
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &wf)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to convert unstructured to argoworkflow: %w", err)
}
switch wf.Status.Phase {
case "", nodePending, nodeRunning:

View File

@ -3,10 +3,11 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getDaemonSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -16,7 +17,7 @@ func getDaemonSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
var daemon appsv1.DaemonSet
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &daemon)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured DaemonSet to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured DaemonSet to typed: %w", err)
}
return getAppsv1DaemonSetHealth(&daemon)
default:
@ -26,29 +27,28 @@ func getDaemonSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
func getAppsv1DaemonSetHealth(daemon *appsv1.DaemonSet) (*HealthStatus, error) {
// Borrowed at kubernetes/kubectl/rollout_status.go https://github.com/kubernetes/kubernetes/blob/5232ad4a00ec93942d0b2c6359ee6cd1201b46bc/pkg/kubectl/rollout_status.go#L110
if daemon.Generation <= daemon.Status.ObservedGeneration {
if daemon.Spec.UpdateStrategy.Type == appsv1.OnDeleteDaemonSetStrategyType {
return &HealthStatus{
Status: HealthStatusHealthy,
Message: fmt.Sprintf("daemon set %d out of %d new pods have been updated", daemon.Status.UpdatedNumberScheduled, daemon.Status.DesiredNumberScheduled),
}, nil
}
if daemon.Status.UpdatedNumberScheduled < daemon.Status.DesiredNumberScheduled {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for daemon set %q rollout to finish: %d out of %d new pods have been updated...", daemon.Name, daemon.Status.UpdatedNumberScheduled, daemon.Status.DesiredNumberScheduled),
}, nil
}
if daemon.Status.NumberAvailable < daemon.Status.DesiredNumberScheduled {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for daemon set %q rollout to finish: %d of %d updated pods are available...", daemon.Name, daemon.Status.NumberAvailable, daemon.Status.DesiredNumberScheduled),
}, nil
}
} else {
if daemon.Generation > daemon.Status.ObservedGeneration {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: "Waiting for rollout to finish: observed daemon set generation less then desired generation",
Message: "Waiting for rollout to finish: observed daemon set generation less than desired generation",
}, nil
}
if daemon.Spec.UpdateStrategy.Type == appsv1.OnDeleteDaemonSetStrategyType {
return &HealthStatus{
Status: HealthStatusHealthy,
Message: fmt.Sprintf("daemon set %d out of %d new pods have been updated", daemon.Status.UpdatedNumberScheduled, daemon.Status.DesiredNumberScheduled),
}, nil
}
if daemon.Status.UpdatedNumberScheduled < daemon.Status.DesiredNumberScheduled {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for daemon set %q rollout to finish: %d out of %d new pods have been updated...", daemon.Name, daemon.Status.UpdatedNumberScheduled, daemon.Status.DesiredNumberScheduled),
}, nil
}
if daemon.Status.NumberAvailable < daemon.Status.DesiredNumberScheduled {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for daemon set %q rollout to finish: %d of %d updated pods are available...", daemon.Name, daemon.Status.NumberAvailable, daemon.Status.DesiredNumberScheduled),
}, nil
}
return &HealthStatus{

View File

@ -3,10 +3,11 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getDeploymentHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -16,7 +17,7 @@ func getDeploymentHealth(obj *unstructured.Unstructured) (*HealthStatus, error)
var deployment appsv1.Deployment
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &deployment)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured Deployment to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured Deployment to typed: %w", err)
}
return getAppsv1DeploymentHealth(&deployment)
default:
@ -34,22 +35,23 @@ func getAppsv1DeploymentHealth(deployment *appsv1.Deployment) (*HealthStatus, er
// Borrowed at kubernetes/kubectl/rollout_status.go https://github.com/kubernetes/kubernetes/blob/5232ad4a00ec93942d0b2c6359ee6cd1201b46bc/pkg/kubectl/rollout_status.go#L80
if deployment.Generation <= deployment.Status.ObservedGeneration {
cond := getAppsv1DeploymentCondition(deployment.Status, appsv1.DeploymentProgressing)
if cond != nil && cond.Reason == "ProgressDeadlineExceeded" {
switch {
case cond != nil && cond.Reason == "ProgressDeadlineExceeded":
return &HealthStatus{
Status: HealthStatusDegraded,
Message: fmt.Sprintf("Deployment %q exceeded its progress deadline", deployment.Name),
}, nil
} else if deployment.Spec.Replicas != nil && deployment.Status.UpdatedReplicas < *deployment.Spec.Replicas {
case deployment.Spec.Replicas != nil && deployment.Status.UpdatedReplicas < *deployment.Spec.Replicas:
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for rollout to finish: %d out of %d new replicas have been updated...", deployment.Status.UpdatedReplicas, *deployment.Spec.Replicas),
}, nil
} else if deployment.Status.Replicas > deployment.Status.UpdatedReplicas {
case deployment.Status.Replicas > deployment.Status.UpdatedReplicas:
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for rollout to finish: %d old replicas are pending termination...", deployment.Status.Replicas-deployment.Status.UpdatedReplicas),
}, nil
} else if deployment.Status.AvailableReplicas < deployment.Status.UpdatedReplicas {
case deployment.Status.AvailableReplicas < deployment.Status.UpdatedReplicas:
return &HealthStatus{
Status: HealthStatusProgressing,
Message: fmt.Sprintf("Waiting for rollout to finish: %d of %d updated replicas are available...", deployment.Status.AvailableReplicas, deployment.Status.UpdatedReplicas),
@ -58,7 +60,7 @@ func getAppsv1DeploymentHealth(deployment *appsv1.Deployment) (*HealthStatus, er
} else {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: "Waiting for rollout to finish: observed deployment generation less then desired generation",
Message: "Waiting for rollout to finish: observed deployment generation less than desired generation",
}, nil
}

View File

@ -14,12 +14,10 @@ import (
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
var (
progressingStatus = &HealthStatus{
Status: HealthStatusProgressing,
Message: "Waiting to Autoscale",
}
)
var progressingStatus = &HealthStatus{
Status: HealthStatusProgressing,
Message: "Waiting to Autoscale",
}
type hpaCondition struct {
Type string

View File

@ -3,10 +3,13 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
corev1 "k8s.io/api/core/v1"
batchv1 "k8s.io/api/batch/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getJobHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -16,7 +19,7 @@ func getJobHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
var job batchv1.Job
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &job)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured Job to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured Job to typed: %w", err)
}
return getBatchv1JobHealth(&job)
default:
@ -29,6 +32,7 @@ func getBatchv1JobHealth(job *batchv1.Job) (*HealthStatus, error) {
var failMsg string
complete := false
var message string
isSuspended := false
for _, condition := range job.Status.Conditions {
switch condition.Type {
case batchv1.JobFailed:
@ -38,19 +42,31 @@ func getBatchv1JobHealth(job *batchv1.Job) (*HealthStatus, error) {
case batchv1.JobComplete:
complete = true
message = condition.Message
case batchv1.JobSuspended:
complete = true
message = condition.Message
if condition.Status == corev1.ConditionTrue {
isSuspended = true
}
}
}
if !complete {
switch {
case !complete:
return &HealthStatus{
Status: HealthStatusProgressing,
Message: message,
}, nil
} else if failed {
case failed:
return &HealthStatus{
Status: HealthStatusDegraded,
Message: failMsg,
}, nil
} else {
case isSuspended:
return &HealthStatus{
Status: HealthStatusSuspended,
Message: failMsg,
}, nil
default:
return &HealthStatus{
Status: HealthStatusHealthy,
Message: message,

View File

@ -4,11 +4,12 @@ import (
"fmt"
"strings"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/kubectl/pkg/util/podutils"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getPodHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -18,7 +19,7 @@ func getPodHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
var pod corev1.Pod
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &pod)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured Pod to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured Pod to typed: %w", err)
}
return getCorev1PodHealth(&pod)
default:

View File

@ -3,10 +3,11 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getPVCHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -16,7 +17,7 @@ func getPVCHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
var pvc corev1.PersistentVolumeClaim
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &pvc)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured PersistentVolumeClaim to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured PersistentVolumeClaim to typed: %w", err)
}
return getCorev1PVCHealth(&pvc)
default:

View File

@ -3,11 +3,12 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getReplicaSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -17,7 +18,7 @@ func getReplicaSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error)
var replicaSet appsv1.ReplicaSet
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &replicaSet)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured ReplicaSet to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured ReplicaSet to typed: %w", err)
}
return getAppsv1ReplicaSetHealth(&replicaSet)
default:
@ -42,7 +43,7 @@ func getAppsv1ReplicaSetHealth(replicaSet *appsv1.ReplicaSet) (*HealthStatus, er
} else {
return &HealthStatus{
Status: HealthStatusProgressing,
Message: "Waiting for rollout to finish: observed replica set generation less then desired generation",
Message: "Waiting for rollout to finish: observed replica set generation less than desired generation",
}, nil
}

View File

@ -3,10 +3,11 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getServiceHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -16,7 +17,7 @@ func getServiceHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
var service corev1.Service
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &service)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured Service to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured Service to typed: %w", err)
}
return getCorev1ServiceHealth(&service)
default:

View File

@ -3,10 +3,11 @@ package health
import (
"fmt"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
func getStatefulSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error) {
@ -16,7 +17,7 @@ func getStatefulSetHealth(obj *unstructured.Unstructured) (*HealthStatus, error)
var sts appsv1.StatefulSet
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, &sts)
if err != nil {
return nil, fmt.Errorf("failed to convert unstructured StatefulSet to typed: %v", err)
return nil, fmt.Errorf("failed to convert unstructured StatefulSet to typed: %w", err)
}
return getAppsv1StatefulSetHealth(&sts)
default:

View File

@ -5,7 +5,7 @@ Package provides functionality that allows assessing the health state of a Kuber
package health
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
@ -15,13 +15,15 @@ import (
)
func assertAppHealth(t *testing.T, yamlPath string, expectedStatus HealthStatusCode) {
health := getHealthStatus(yamlPath, t)
t.Helper()
health := getHealthStatus(t, yamlPath)
assert.NotNil(t, health)
assert.Equal(t, expectedStatus, health.Status)
}
func getHealthStatus(yamlPath string, t *testing.T) *HealthStatus {
yamlBytes, err := ioutil.ReadFile(yamlPath)
func getHealthStatus(t *testing.T, yamlPath string) *HealthStatus {
t.Helper()
yamlBytes, err := os.ReadFile(yamlPath)
require.NoError(t, err)
var obj unstructured.Unstructured
err = yaml.Unmarshal(yamlBytes, &obj)
@ -49,6 +51,7 @@ func TestStatefulSetOnDeleteHealth(t *testing.T) {
func TestDaemonSetOnDeleteHealth(t *testing.T) {
assertAppHealth(t, "./testdata/daemonset-ondelete.yaml", HealthStatusHealthy)
}
func TestPVCHealth(t *testing.T) {
assertAppHealth(t, "./testdata/pvc-bound.yaml", HealthStatusHealthy)
assertAppHealth(t, "./testdata/pvc-pending.yaml", HealthStatusProgressing)
@ -68,13 +71,14 @@ func TestIngressHealth(t *testing.T) {
}
func TestCRD(t *testing.T) {
assert.Nil(t, getHealthStatus("./testdata/knative-service.yaml", t))
assert.Nil(t, getHealthStatus(t, "./testdata/knative-service.yaml"))
}
func TestJob(t *testing.T) {
assertAppHealth(t, "./testdata/job-running.yaml", HealthStatusProgressing)
assertAppHealth(t, "./testdata/job-failed.yaml", HealthStatusDegraded)
assertAppHealth(t, "./testdata/job-succeeded.yaml", HealthStatusHealthy)
assertAppHealth(t, "./testdata/job-suspended.yaml", HealthStatusSuspended)
}
func TestHPA(t *testing.T) {
@ -106,8 +110,8 @@ func TestPod(t *testing.T) {
}
func TestApplication(t *testing.T) {
assert.Nil(t, getHealthStatus("./testdata/application-healthy.yaml", t))
assert.Nil(t, getHealthStatus("./testdata/application-degraded.yaml", t))
assert.Nil(t, getHealthStatus(t, "./testdata/application-healthy.yaml"))
assert.Nil(t, getHealthStatus(t, "./testdata/application-degraded.yaml"))
}
func TestAPIService(t *testing.T) {
@ -118,16 +122,17 @@ func TestAPIService(t *testing.T) {
}
func TestGetArgoWorkflowHealth(t *testing.T) {
sampleWorkflow := unstructured.Unstructured{Object: map[string]interface{}{
"spec": map[string]interface{}{
"entrypoint": "sampleEntryPoint",
"extraneousKey": "we are agnostic to extraneous keys",
sampleWorkflow := unstructured.Unstructured{
Object: map[string]any{
"spec": map[string]any{
"entrypoint": "sampleEntryPoint",
"extraneousKey": "we are agnostic to extraneous keys",
},
"status": map[string]any{
"phase": "Running",
"message": "This node is running",
},
},
"status": map[string]interface{}{
"phase": "Running",
"message": "This node is running",
},
},
}
health, err := getArgoWorkflowHealth(&sampleWorkflow)
@ -135,16 +140,17 @@ func TestGetArgoWorkflowHealth(t *testing.T) {
assert.Equal(t, HealthStatusProgressing, health.Status)
assert.Equal(t, "This node is running", health.Message)
sampleWorkflow = unstructured.Unstructured{Object: map[string]interface{}{
"spec": map[string]interface{}{
"entrypoint": "sampleEntryPoint",
"extraneousKey": "we are agnostic to extraneous keys",
sampleWorkflow = unstructured.Unstructured{
Object: map[string]any{
"spec": map[string]any{
"entrypoint": "sampleEntryPoint",
"extraneousKey": "we are agnostic to extraneous keys",
},
"status": map[string]any{
"phase": "Succeeded",
"message": "This node is has succeeded",
},
},
"status": map[string]interface{}{
"phase": "Succeeded",
"message": "This node is has succeeded",
},
},
}
health, err = getArgoWorkflowHealth(&sampleWorkflow)
@ -152,17 +158,17 @@ func TestGetArgoWorkflowHealth(t *testing.T) {
assert.Equal(t, HealthStatusHealthy, health.Status)
assert.Equal(t, "This node is has succeeded", health.Message)
sampleWorkflow = unstructured.Unstructured{Object: map[string]interface{}{
"spec": map[string]interface{}{
"entrypoint": "sampleEntryPoint",
"extraneousKey": "we are agnostic to extraneous keys",
sampleWorkflow = unstructured.Unstructured{
Object: map[string]any{
"spec": map[string]any{
"entrypoint": "sampleEntryPoint",
"extraneousKey": "we are agnostic to extraneous keys",
},
},
},
}
health, err = getArgoWorkflowHealth(&sampleWorkflow)
require.NoError(t, err)
assert.Equal(t, HealthStatusProgressing, health.Status)
assert.Equal(t, "", health.Message)
assert.Empty(t, health.Message)
}
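
These tests exercise the health checks through the package's exported entry point. A minimal caller sketch, assuming the GetResourceHealth function and HealthStatus fields used elsewhere in this changeset (the nil argument means no health override is supplied):

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/argoproj/gitops-engine/pkg/health"
)

// printHealth reports the computed health of a live object, or nothing if no
// health check is registered for its kind (e.g. the CRD cases above).
func printHealth(obj *unstructured.Unstructured) error {
	hs, err := health.GetResourceHealth(obj, nil)
	if err != nil {
		return fmt.Errorf("failed to get resource health: %w", err)
	}
	if hs != nil {
		fmt.Printf("%s: %s\n", hs.Status, hs.Message)
	}
	return nil
}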

51 pkg/health/testdata/job-suspended.yaml vendored Normal file

@ -0,0 +1,51 @@
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: 2018-12-02T08:19:13Z
  labels:
    controller-uid: f3fe3a46-f60a-11e8-aa53-42010a80021b
    job-name: succeed
  name: succeed
  namespace: argoci-workflows
  resourceVersion: "46535949"
  selfLink: /apis/batch/v1/namespaces/argoci-workflows/jobs/succeed
  uid: f3fe3a46-f60a-11e8-aa53-42010a80021b
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: f3fe3a46-f60a-11e8-aa53-42010a80021b
  suspend: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: f3fe3a46-f60a-11e8-aa53-42010a80021b
        job-name: succeed
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 10
        image: alpine:latest
        imagePullPolicy: Always
        name: succeed
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastProbeTime: "2022-12-08T22:27:20Z"
    lastTransitionTime: "2022-12-08T22:27:20Z"
    message: Job suspended
    reason: JobSuspended
    status: "True"
    type: Suspended


@ -16,6 +16,7 @@ const (
AnnotationKeyHook = "argocd.argoproj.io/hook"
// AnnotationKeyHookDeletePolicy is the policy of deleting a hook
AnnotationKeyHookDeletePolicy = "argocd.argoproj.io/hook-delete-policy"
// AnnotationDeletionApproved is the annotation used to mark a resource's deletion as approved
AnnotationDeletionApproved = "argocd.argoproj.io/deletion-approved"
// Sync option that disables dry run if the resource is missing in the cluster
SyncOptionSkipDryRunOnMissingResource = "SkipDryRunOnMissingResource=true"
@ -27,8 +28,27 @@ const (
SyncOptionPruneLast = "PruneLast=true"
// Sync option that enables use of replace or create command instead of apply
SyncOptionReplace = "Replace=true"
// Sync option that enables use of --force flag, delete and re-create
SyncOptionForce = "Force=true"
// Sync option that enables use of --server-side flag instead of client-side
SyncOptionServerSideApply = "ServerSideApply=true"
// Sync option that disables use of the --server-side flag, forcing client-side apply
SyncOptionDisableServerSideApply = "ServerSideApply=false"
// Sync option that disables resource deletion
SyncOptionDisableDeletion = "Delete=false"
// Sync option that syncs only out-of-sync resources
SyncOptionApplyOutOfSyncOnly = "ApplyOutOfSyncOnly=true"
// Sync option that requires confirmation before deleting the resource
SyncOptionDeleteRequireConfirm = "Delete=confirm"
// Sync option that requires confirmation before pruning the resource
SyncOptionPruneRequireConfirm = "Prune=confirm"
// Sync option that enables client-side apply migration
SyncOptionClientSideApplyMigration = "ClientSideApplyMigration=true"
// Sync option that disables client-side apply migration
SyncOptionDisableClientSideApplyMigration = "ClientSideApplyMigration=false"
// Default field manager for client-side apply migration
DefaultClientSideApplyMigrationManager = "kubectl-client-side-apply"
)
type PermissionValidator func(un *unstructured.Unstructured, res *metav1.APIResource) error
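
For reference, the options above are consumed from the argocd.argoproj.io/sync-options annotation (the common.AnnotationSyncOptions key used by the sync code further down). A minimal sketch of checking one of them on a manifest; the helper name here is illustrative:

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

	"github.com/argoproj/gitops-engine/pkg/sync/common"
	resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
)

// wantsServerSideApply reports whether the manifest opts into server-side apply,
// e.g. via the annotation value "ServerSideApply=true,PruneLast=true".
func wantsServerSideApply(obj *unstructured.Unstructured) bool {
	return resourceutil.HasAnnotationOption(obj, common.AnnotationSyncOptions, common.SyncOptionServerSideApply)
}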
@ -103,7 +123,6 @@ func NewHookType(t string) (HookType, bool) {
t == string(HookTypePostSync) ||
t == string(HookTypeSyncFail) ||
t == string(HookTypeSkip)
}
type HookDeletePolicy string
@ -124,6 +143,10 @@ func NewHookDeletePolicy(p string) (HookDeletePolicy, bool) {
type ResourceSyncResult struct {
// holds associated resource key
ResourceKey kube.ResourceKey
// Images holds the images associated with the resource. These images are collected on a best-effort basis
// from fields used by known workload resources. This does not necessarily reflect the exact list of images
// used by workloads in the application.
Images []string
// holds resource version
Version string
// holds the execution order


@ -6,58 +6,58 @@ Package implements Kubernetes resources synchronization and provides the followi
- sync waves
- sync options
Basic Syncing
# Basic Syncing
Executes the equivalent of `kubectl apply` for each specified resource. The apply operations are executed in a predefined
order depending on resource type: namespaces and custom resource definitions first, workload resources last.
Resource Pruning
# Resource Pruning
The ability to delete resources that should no longer exist in the cluster. By default, obsolete resources are not deleted
and are only reported in the sync operation result.
Resource Hooks
# Resource Hooks
Hooks provide the ability to create resources, such as a Pod, Job or any other resource, that are 'executed' before, after
or even during the synchronization process. Hooks enable use cases such as database migrations and post-sync notifications.
Hooks are regular Kubernetes resources that have `argocd.argoproj.io/hook` annotation:
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
annotations:
argocd.argoproj.io/hook: PreSync
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
annotations:
argocd.argoproj.io/hook: PreSync
The annotation value indicates the sync operation phase:
- PreSync - executes prior to the apply of the manifests.
- PostSync - executes after all Sync hooks completed and were successful, a successful apply, and all resources in a Healthy state.
- SyncFail - executes when the sync operation fails.
- Sync - executes after all PreSync hooks completed and were successful, at the same time as the apply of the manifests.
- PreSync - executes prior to the apply of the manifests.
- PostSync - executes after all Sync hooks completed and were successful, a successful apply, and all resources in a Healthy state.
- SyncFail - executes when the sync operation fails.
- Sync - executes after all PreSync hooks completed and were successful, at the same time as the apply of the manifests.
Named hooks (i.e. ones with /metadata/name) will only be created once. If you want a hook to be re-created each time
either use BeforeHookCreation policy (see below) or /metadata/generateName.
The same resource hook might be executed in several sync phases:
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
annotations:
argocd.argoproj.io/hook: PreSync,PostSync
apiVersion: batch/v1
kind: Job
metadata:
generateName: schema-migrate-
annotations:
argocd.argoproj.io/hook: PreSync,PostSync
Hooks can be deleted in an automatic fashion using the annotation: argocd.argoproj.io/hook-delete-policy.
apiVersion: batch/v1
kind: Job
metadata:
generateName: integration-test-
annotations:
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: HookSucceeded
apiVersion: batch/v1
kind: Job
metadata:
generateName: integration-test-
annotations:
argocd.argoproj.io/hook: PostSync
argocd.argoproj.io/hook-delete-policy: HookSucceeded
The following policies define when the hook will be deleted.
@ -65,17 +65,17 @@ The following policies define when the hook will be deleted.
- HookFailed - the hook resource is deleted after the hook failed.
- BeforeHookCreation - any existing hook resource is deleted before the new one is created
Sync Waves
# Sync Waves
Waves allow grouping the sync execution into batches, where each batch is executed sequentially, one after
another. Hooks and resources are assigned to wave zero by default. The wave can be negative, so you can create a wave
that runs before all other resources. The `argocd.argoproj.io/sync-wave` annotation assigns a resource to a wave:
metadata:
annotations:
argocd.argoproj.io/sync-wave: "5"
metadata:
annotations:
argocd.argoproj.io/sync-wave: "5"
Sync Options
# Sync Options
The sync options allow customizing the synchronization of selected resources. The options are specified using the
annotation 'argocd.argoproj.io/sync-options'. The following sync options are supported:
@ -97,9 +97,8 @@ It then determines which the number of the next wave to apply. This is the first
out-of-sync or unhealthy. It applies resources in that wave. It repeats this process until all phases and waves are
in sync and healthy.
Example
# Example
Find real-life example in https://github.com/argoproj/gitops-engine/blob/master/pkg/engine/engine.go
*/
package sync


@ -6,15 +6,15 @@ import (
"github.com/stretchr/testify/assert"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestDeletePolicies(t *testing.T) {
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyBeforeHookCreation}, DeletePolicies(NewPod()))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyBeforeHookCreation}, DeletePolicies(Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "garbage")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyBeforeHookCreation}, DeletePolicies(Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyHookSucceeded}, DeletePolicies(Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyHookFailed}, DeletePolicies(Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "HookFailed")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyBeforeHookCreation}, DeletePolicies(testingutils.NewPod()))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyBeforeHookCreation}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook-delete-policy", "garbage")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyBeforeHookCreation}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyHookSucceeded}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyHookFailed}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook-delete-policy", "HookFailed")))
// Helm test
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyHookSucceeded}, DeletePolicies(Annotate(NewPod(), "helm.sh/hook-delete-policy", "hook-succeeded")))
assert.Equal(t, []common.HookDeletePolicy{common.HookDeletePolicyHookSucceeded}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook-delete-policy", "hook-succeeded")))
}


@ -6,14 +6,14 @@ import (
"github.com/stretchr/testify/assert"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestDeletePolicies(t *testing.T) {
assert.Nil(t, DeletePolicies(NewPod()))
assert.Equal(t, []DeletePolicy{BeforeHookCreation}, DeletePolicies(Annotate(NewPod(), "helm.sh/hook-delete-policy", "before-hook-creation")))
assert.Equal(t, []DeletePolicy{HookSucceeded}, DeletePolicies(Annotate(NewPod(), "helm.sh/hook-delete-policy", "hook-succeeded")))
assert.Equal(t, []DeletePolicy{HookFailed}, DeletePolicies(Annotate(NewPod(), "helm.sh/hook-delete-policy", "hook-failed")))
assert.Nil(t, DeletePolicies(testingutils.NewPod()))
assert.Equal(t, []DeletePolicy{BeforeHookCreation}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook-delete-policy", "before-hook-creation")))
assert.Equal(t, []DeletePolicy{HookSucceeded}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook-delete-policy", "hook-succeeded")))
assert.Equal(t, []DeletePolicy{HookFailed}, DeletePolicies(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook-delete-policy", "hook-failed")))
}
func TestDeletePolicy_DeletePolicy(t *testing.T) {


@ -5,12 +5,12 @@ import (
"github.com/stretchr/testify/assert"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestIsHook(t *testing.T) {
assert.False(t, IsHook(NewPod()))
assert.True(t, IsHook(Annotate(NewPod(), "helm.sh/hook", "anything")))
assert.False(t, IsHook(testingutils.NewPod()))
assert.True(t, IsHook(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "anything")))
// helm calls "crd-install" a hook, but it really can't be treated as such
assert.False(t, IsHook(Annotate(NewCRD(), "helm.sh/hook", "crd-install")))
assert.False(t, IsHook(testingutils.Annotate(testingutils.NewCRD(), "helm.sh/hook", "crd-install")))
}


@ -6,22 +6,22 @@ import (
"github.com/stretchr/testify/assert"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestTypes(t *testing.T) {
assert.Nil(t, Types(NewPod()))
assert.Equal(t, []Type{PreInstall}, Types(Annotate(NewPod(), "helm.sh/hook", "pre-install")))
assert.Equal(t, []Type{PreUpgrade}, Types(Annotate(NewPod(), "helm.sh/hook", "pre-upgrade")))
assert.Equal(t, []Type{PostUpgrade}, Types(Annotate(NewPod(), "helm.sh/hook", "post-upgrade")))
assert.Equal(t, []Type{PostInstall}, Types(Annotate(NewPod(), "helm.sh/hook", "post-install")))
assert.Nil(t, Types(testingutils.NewPod()))
assert.Equal(t, []Type{PreInstall}, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "pre-install")))
assert.Equal(t, []Type{PreUpgrade}, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "pre-upgrade")))
assert.Equal(t, []Type{PostUpgrade}, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "post-upgrade")))
assert.Equal(t, []Type{PostInstall}, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "post-install")))
// helm calls "crd-install" a hook, but it really can't be treated as such
assert.Empty(t, Types(Annotate(NewPod(), "helm.sh/hook", "crd-install")))
assert.Empty(t, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "crd-install")))
// we do not consider these supported hooks
assert.Nil(t, Types(Annotate(NewPod(), "helm.sh/hook", "pre-rollback")))
assert.Nil(t, Types(Annotate(NewPod(), "helm.sh/hook", "post-rollback")))
assert.Nil(t, Types(Annotate(NewPod(), "helm.sh/hook", "test-success")))
assert.Nil(t, Types(Annotate(NewPod(), "helm.sh/hook", "test-failure")))
assert.Nil(t, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "pre-rollback")))
assert.Nil(t, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "post-rollback")))
assert.Nil(t, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "test-success")))
assert.Nil(t, Types(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "test-failure")))
}
func TestType_HookType(t *testing.T) {


@ -3,12 +3,12 @@ package helm
import (
"testing"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
"github.com/stretchr/testify/assert"
)
func TestWeight(t *testing.T) {
assert.Equal(t, Weight(NewPod()), 0)
assert.Equal(t, Weight(Annotate(NewPod(), "helm.sh/hook-weight", "1")), 1)
assert.Equal(t, 0, Weight(testingutils.NewPod()))
assert.Equal(t, 1, Weight(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook-weight", "1")))
}


@ -8,6 +8,21 @@ import (
resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
)
const (
// HookFinalizer is the finalizer added to hooks to ensure they are deleted only after the sync phase is completed.
HookFinalizer = "argocd.argoproj.io/hook-finalizer"
)
func HasHookFinalizer(obj *unstructured.Unstructured) bool {
finalizers := obj.GetFinalizers()
for _, finalizer := range finalizers {
if finalizer == HookFinalizer {
return true
}
}
return false
}
func IsHook(obj *unstructured.Unstructured) bool {
_, ok := obj.GetAnnotations()[common.AnnotationKeyHook]
if ok {


@ -7,7 +7,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestNoHooks(t *testing.T) {
@ -74,14 +74,14 @@ func TestGarbageAndHook(t *testing.T) {
}
func TestHelmHook(t *testing.T) {
obj := Annotate(NewPod(), "helm.sh/hook", "pre-install")
obj := testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "pre-install")
assert.True(t, IsHook(obj))
assert.False(t, Skip(obj))
assert.Equal(t, []common.HookType{common.HookTypePreSync}, Types(obj))
}
func TestGarbageHelmHook(t *testing.T) {
obj := Annotate(NewPod(), "helm.sh/hook", "garbage")
obj := testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", "garbage")
assert.True(t, IsHook(obj))
assert.False(t, Skip(obj))
assert.Nil(t, Types(obj))
@ -89,10 +89,10 @@ func TestGarbageHelmHook(t *testing.T) {
// we should ignore Helm hooks if we have an Argo CD hook
func TestBothHooks(t *testing.T) {
obj := Annotate(example("Sync"), "helm.sh/hook", "pre-install")
obj := testingutils.Annotate(example("Sync"), "helm.sh/hook", "pre-install")
assert.Equal(t, []common.HookType{common.HookTypeSync}, Types(obj))
}
func example(hook string) *unstructured.Unstructured {
return Annotate(NewPod(), "argocd.argoproj.io/hook", hook)
return testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", hook)
}


@ -9,17 +9,17 @@ import (
"github.com/stretchr/testify/assert"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func newHook(obj *unstructured.Unstructured, hookType common.HookType) *unstructured.Unstructured {
return Annotate(obj, "argocd.argoproj.io/hook", string(hookType))
return testingutils.Annotate(obj, "argocd.argoproj.io/hook", string(hookType))
}
func TestIgnore(t *testing.T) {
assert.False(t, Ignore(NewPod()))
assert.False(t, Ignore(newHook(NewPod(), "Sync")))
assert.True(t, Ignore(newHook(NewPod(), "garbage")))
assert.False(t, Ignore(HelmHook(NewPod(), "pre-install")))
assert.True(t, Ignore(HelmHook(NewPod(), "garbage")))
assert.False(t, Ignore(testingutils.NewPod()))
assert.False(t, Ignore(newHook(testingutils.NewPod(), "Sync")))
assert.True(t, Ignore(newHook(testingutils.NewPod(), "garbage")))
assert.False(t, Ignore(testingutils.HelmHook(testingutils.NewPod(), "pre-install")))
assert.True(t, Ignore(testingutils.HelmHook(testingutils.NewPod(), "garbage")))
}


@ -6,7 +6,6 @@ import (
hookutil "github.com/argoproj/gitops-engine/pkg/sync/hook"
"github.com/argoproj/gitops-engine/pkg/sync/ignore"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
"github.com/argoproj/gitops-engine/pkg/utils/text"
)
@ -69,25 +68,43 @@ type ReconciliationResult struct {
Hooks []*unstructured.Unstructured
}
func Reconcile(targetObjs []*unstructured.Unstructured, liveObjByKey map[kube.ResourceKey]*unstructured.Unstructured, namespace string, resInfo kubeutil.ResourceInfoProvider) ReconciliationResult {
func Reconcile(targetObjs []*unstructured.Unstructured, liveObjByKey map[kubeutil.ResourceKey]*unstructured.Unstructured, namespace string, resInfo kubeutil.ResourceInfoProvider) ReconciliationResult {
targetObjs, hooks := splitHooks(targetObjs)
dedupLiveResources(targetObjs, liveObjByKey)
managedLiveObj := make([]*unstructured.Unstructured, len(targetObjs))
for i, obj := range targetObjs {
gvk := obj.GroupVersionKind()
ns := text.FirstNonEmpty(obj.GetNamespace(), namespace)
if namespaced := kubeutil.IsNamespacedOrUnknown(resInfo, obj.GroupVersionKind().GroupKind()); !namespaced {
ns = ""
namespaced, err := resInfo.IsNamespaced(gvk.GroupKind())
unknownScope := err != nil
var keysToCheck []kubeutil.ResourceKey
// If we get an error, we don't know whether the resource is namespaced. So we need to check for both in the
// live objects. If we don't check for both, then we risk missing the object and deleting it.
if namespaced || unknownScope {
keysToCheck = append(keysToCheck, kubeutil.NewResourceKey(gvk.Group, gvk.Kind, ns, obj.GetName()))
}
key := kubeutil.NewResourceKey(gvk.Group, gvk.Kind, ns, obj.GetName())
if liveObj, ok := liveObjByKey[key]; ok {
managedLiveObj[i] = liveObj
delete(liveObjByKey, key)
} else {
if !namespaced || unknownScope {
keysToCheck = append(keysToCheck, kubeutil.NewResourceKey(gvk.Group, gvk.Kind, "", obj.GetName()))
}
found := false
for _, key := range keysToCheck {
if liveObj, ok := liveObjByKey[key]; ok {
managedLiveObj[i] = liveObj
delete(liveObjByKey, key)
found = true
break
}
}
if !found {
managedLiveObj[i] = nil
}
}
for _, obj := range liveObjByKey {
targetObjs = append(targetObjs, nil)
managedLiveObj = append(managedLiveObj, obj)


@ -0,0 +1,52 @@
package sync
import (
"errors"
"testing"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
)
type unknownResourceInfoProvider struct{}
func (e *unknownResourceInfoProvider) IsNamespaced(_ schema.GroupKind) (bool, error) {
return false, errors.New("unknown")
}
func TestReconcileWithUnknownDiscoveryDataForClusterScopedResources(t *testing.T) {
targetObjs := []*unstructured.Unstructured{
{
Object: map[string]any{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": map[string]any{
"name": "my-namespace",
},
},
},
}
liveNS := &unstructured.Unstructured{
Object: map[string]any{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": map[string]any{
"name": "my-namespace",
"uid": "c99ff56d-1921-495d-8512-d66cdfcb5740",
},
},
}
liveObjByKey := map[kube.ResourceKey]*unstructured.Unstructured{
kube.NewResourceKey("", "Namespace", "", "my-namespace"): liveNS,
}
result := Reconcile(targetObjs, liveObjByKey, "some-namespace", &unknownResourceInfoProvider{})
require.Len(t, result.Target, 1)
require.Equal(t, result.Target[0], targetObjs[0])
require.Len(t, result.Live, 1)
require.Equal(t, result.Live[0], liveNS)
}


@ -2,11 +2,18 @@ package resource
import (
"strings"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
func GetAnnotationCSVs(obj *unstructured.Unstructured, key string) []string {
// AnnotationGetter defines the operations required to inspect if a resource
// has annotations
type AnnotationGetter interface {
GetAnnotations() map[string]string
}
// GetAnnotationCSVs will return the value of the annotation identified by
// the given key. If the annotation has comma separated values, the returned
// list will contain all deduped values.
func GetAnnotationCSVs(obj AnnotationGetter, key string) []string {
// map for de-duping
valuesToBool := make(map[string]bool)
for _, item := range strings.Split(obj.GetAnnotations()[key], ",") {
@ -22,7 +29,9 @@ func GetAnnotationCSVs(obj *unstructured.Unstructured, key string) []string {
return values
}
func HasAnnotationOption(obj *unstructured.Unstructured, key, val string) bool {
// HasAnnotationOption returns whether the given obj has an annotation with the given key
// whose comma-separated values include val.
func HasAnnotationOption(obj AnnotationGetter, key, val string) bool {
for _, item := range GetAnnotationCSVs(obj, key) {
if item == val {
return true
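
Since the helpers now take the narrow AnnotationGetter interface instead of *unstructured.Unstructured, any value with a GetAnnotations method can be passed. A small illustrative sketch (the fakeObj type is hypothetical, for demonstration only):

type fakeObj map[string]string

// GetAnnotations satisfies AnnotationGetter.
func (f fakeObj) GetAnnotations() map[string]string { return f }

func exampleUsage() {
	obj := fakeObj{"argocd.argoproj.io/sync-options": "Prune=false,Prune=false,Replace=true"}
	vals := GetAnnotationCSVs(obj, "argocd.argoproj.io/sync-options")                  // two values after de-duplication
	ok := HasAnnotationOption(obj, "argocd.argoproj.io/sync-options", "Replace=true") // true
	_, _ = vals, ok
}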


@ -6,7 +6,7 @@ import (
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestHasAnnotationOption(t *testing.T) {
@ -21,7 +21,7 @@ func TestHasAnnotationOption(t *testing.T) {
wantVals []string
want bool
}{
{"Nil", args{NewPod(), "foo", "bar"}, nil, false},
{"Nil", args{testingutils.NewPod(), "foo", "bar"}, nil, false},
{"Empty", args{example(""), "foo", "bar"}, nil, false},
{"Single", args{example("bar"), "foo", "bar"}, []string{"bar"}, true},
{"DeDup", args{example("bar,bar"), "foo", "bar"}, []string{"bar"}, true},
@ -37,5 +37,5 @@ func TestHasAnnotationOption(t *testing.T) {
}
func example(val string) *unstructured.Unstructured {
return Annotate(NewPod(), "foo", val)
return testingutils.Annotate(testingutils.NewPod(), "foo", val)
}


@ -5,23 +5,24 @@ import (
"encoding/json"
"fmt"
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/go-logr/logr"
v1 "k8s.io/api/core/v1"
v1extensions "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
apierr "k8s.io/apimachinery/pkg/api/errors"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/discovery"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
"k8s.io/klog/v2/klogr"
"k8s.io/client-go/util/retry"
"k8s.io/klog/v2/textlogger"
cmdutil "k8s.io/kubectl/pkg/cmd/util"
"k8s.io/kubectl/pkg/util/openapi"
@ -30,7 +31,6 @@ import (
"github.com/argoproj/gitops-engine/pkg/sync/common"
"github.com/argoproj/gitops-engine/pkg/sync/hook"
resourceutil "github.com/argoproj/gitops-engine/pkg/sync/resource"
"github.com/argoproj/gitops-engine/pkg/utils/kube"
kubeutil "github.com/argoproj/gitops-engine/pkg/utils/kube"
)
@ -39,11 +39,11 @@ type reconciledResource struct {
Live *unstructured.Unstructured
}
func (r *reconciledResource) key() kube.ResourceKey {
func (r *reconciledResource) key() kubeutil.ResourceKey {
if r.Live != nil {
return kube.GetResourceKey(r.Live)
return kubeutil.GetResourceKey(r.Live)
}
return kube.GetResourceKey(r.Target)
return kubeutil.GetResourceKey(r.Target)
}
// SyncContext defines an interface that allows to execute sync operation step or terminate it.
@ -95,7 +95,7 @@ func WithInitialState(phase common.OperationPhase, message string, results []com
}
// WithResourcesFilter sets sync operation resources filter
func WithResourcesFilter(resourcesFilter func(key kube.ResourceKey, target *unstructured.Unstructured, live *unstructured.Unstructured) bool) SyncOpt {
func WithResourcesFilter(resourcesFilter func(key kubeutil.ResourceKey, target *unstructured.Unstructured, live *unstructured.Unstructured) bool) SyncOpt {
return func(ctx *syncContext) {
ctx.resourcesFilter = resourcesFilter
}
@ -115,6 +115,13 @@ func WithPrune(prune bool) SyncOpt {
}
}
// WithPruneConfirmed specifies if prune is confirmed for resources that require confirmation
func WithPruneConfirmed(confirmed bool) SyncOpt {
return func(ctx *syncContext) {
ctx.pruneConfirmed = confirmed
}
}
// WithOperationSettings allows to set sync operation settings
func WithOperationSettings(dryRun bool, prune bool, force bool, skipHooks bool) SyncOpt {
return func(ctx *syncContext) {
@ -151,11 +158,13 @@ func WithResourceModificationChecker(enabled bool, diffResults *diff.DiffResultL
}
}
// WithNamespaceCreation will create non-exist namespace
func WithNamespaceCreation(createNamespace bool, namespaceModifier func(*unstructured.Unstructured) bool) SyncOpt {
// WithNamespaceModifier will create a namespace with the metadata passed in the `*unstructured.Unstructured` argument
// of the `namespaceModifier` function, in the case it returns `true`. If the namespace already exists, the metadata
// will overwrite what is already present if `namespaceModifier` returns `true`. If `namespaceModifier` returns `false`,
// this will be a no-op.
func WithNamespaceModifier(namespaceModifier func(*unstructured.Unstructured, *unstructured.Unstructured) (bool, error)) SyncOpt {
return func(ctx *syncContext) {
ctx.createNamespace = createNamespace
ctx.namespaceModifier = namespaceModifier
ctx.syncNamespace = namespaceModifier
}
}
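
A minimal sketch of a namespaceModifier under the new signature: it receives the managed (desired) namespace and the live namespace, may mutate the managed one, and reports whether the namespace task should be synced. The label key/value below are purely illustrative:

func exampleNamespaceOpt() SyncOpt {
	return WithNamespaceModifier(func(managedNs, liveNs *unstructured.Unstructured) (bool, error) {
		labels := managedNs.GetLabels()
		if labels == nil {
			labels = map[string]string{}
		}
		labels["example.com/managed-by"] = "gitops-engine" // hypothetical label
		managedNs.SetLabels(labels)
		// true: include the namespace task; false: namespace handling is a no-op.
		return true, nil
	})
}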
@ -179,12 +188,36 @@ func WithReplace(replace bool) SyncOpt {
}
}
func WithSkipDryRunOnMissingResource(skipDryRunOnMissingResource bool) SyncOpt {
return func(ctx *syncContext) {
ctx.skipDryRunOnMissingResource = skipDryRunOnMissingResource
}
}
func WithServerSideApply(serverSideApply bool) SyncOpt {
return func(ctx *syncContext) {
ctx.serverSideApply = serverSideApply
}
}
func WithServerSideApplyManager(manager string) SyncOpt {
return func(ctx *syncContext) {
ctx.serverSideApplyManager = manager
}
}
// WithClientSideApplyMigration configures client-side apply migration for server-side apply.
// When enabled, fields managed by the specified manager will be migrated to server-side apply.
// Defaults to enabled=true with manager="kubectl-client-side-apply" if not configured.
func WithClientSideApplyMigration(enabled bool, manager string) SyncOpt {
return func(ctx *syncContext) {
ctx.enableClientSideApplyMigration = enabled
if enabled && manager != "" {
ctx.clientSideApplyMigrationManager = manager
}
}
}
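
Taken together, these functional options are applied to the sync context by NewSyncContext. A hedged sketch of composing them inside the sync package (the field manager name is illustrative, and the full NewSyncContext argument list is elided in this diff):

func exampleSyncOpts() []SyncOpt {
	return []SyncOpt{
		WithServerSideApply(true),
		WithServerSideApplyManager("argocd-controller"), // hypothetical field manager name
		WithClientSideApplyMigration(true, common.DefaultClientSideApplyMigrationManager),
		WithPrune(true),
		WithPruneConfirmed(false),
	}
}
// Each SyncOpt mutates the syncContext; NewSyncContext applies them in order.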
// NewSyncContext creates new instance of a SyncContext
func NewSyncContext(
revision string,
@ -198,36 +231,38 @@ func NewSyncContext(
) (SyncContext, func(), error) {
dynamicIf, err := dynamic.NewForConfig(restConfig)
if err != nil {
return nil, nil, err
return nil, nil, fmt.Errorf("failed to create dynamic client: %w", err)
}
disco, err := discovery.NewDiscoveryClientForConfig(restConfig)
if err != nil {
return nil, nil, err
return nil, nil, fmt.Errorf("failed to create discovery client: %w", err)
}
extensionsclientset, err := clientset.NewForConfig(restConfig)
if err != nil {
return nil, nil, err
return nil, nil, fmt.Errorf("failed to create extensions client: %w", err)
}
resourceOps, cleanup, err := kubectl.ManageResources(rawConfig, openAPISchema)
if err != nil {
return nil, nil, err
return nil, nil, fmt.Errorf("failed to manage resources: %w", err)
}
ctx := &syncContext{
revision: revision,
resources: groupResources(reconciliationResult),
hooks: reconciliationResult.Hooks,
config: restConfig,
rawConfig: rawConfig,
dynamicIf: dynamicIf,
disco: disco,
extensionsclientset: extensionsclientset,
kubectl: kubectl,
resourceOps: resourceOps,
namespace: namespace,
log: klogr.New(),
validate: true,
startedAt: time.Now(),
syncRes: map[string]common.ResourceSyncResult{},
revision: revision,
resources: groupResources(reconciliationResult),
hooks: reconciliationResult.Hooks,
config: restConfig,
rawConfig: rawConfig,
dynamicIf: dynamicIf,
disco: disco,
extensionsclientset: extensionsclientset,
kubectl: kubectl,
resourceOps: resourceOps,
namespace: namespace,
log: textlogger.NewLogger(textlogger.NewConfig()),
validate: true,
startedAt: time.Now(),
syncRes: map[string]common.ResourceSyncResult{},
clientSideApplyMigrationManager: common.DefaultClientSideApplyMigrationManager,
enableClientSideApplyMigration: true,
permissionValidator: func(_ *unstructured.Unstructured, _ *metav1.APIResource) error {
return nil
},
@ -239,7 +274,7 @@ func NewSyncContext(
}
func groupResources(reconciliationResult ReconciliationResult) map[kubeutil.ResourceKey]reconciledResource {
resources := make(map[kube.ResourceKey]reconciledResource)
resources := make(map[kubeutil.ResourceKey]reconciledResource)
for i := 0; i < len(reconciliationResult.Target); i++ {
res := reconciledResource{
Target: reconciliationResult.Target[i],
@ -252,14 +287,14 @@ func groupResources(reconciliationResult ReconciliationResult) map[kubeutil.Reso
} else {
obj = res.Target
}
resources[kube.GetResourceKey(obj)] = res
resources[kubeutil.GetResourceKey(obj)] = res
}
return resources
}
// generates a map of resource and its modification result based on diffResultList
func groupDiffResults(diffResultList *diff.DiffResultList) map[kubeutil.ResourceKey]bool {
modifiedResources := make(map[kube.ResourceKey]bool)
modifiedResources := make(map[kubeutil.ResourceKey]bool)
for _, res := range diffResultList.Diffs {
var obj unstructured.Unstructured
var err error
@ -271,7 +306,7 @@ func groupDiffResults(diffResultList *diff.DiffResultList) map[kubeutil.Resource
if err != nil {
continue
}
modifiedResources[kube.GetResourceKey(&obj)] = res.Modified
modifiedResources[kubeutil.GetResourceKey(&obj)] = res.Modified
}
return modifiedResources
}
@ -280,14 +315,14 @@ const (
crdReadinessTimeout = time.Duration(3) * time.Second
)
// getOperationPhase returns a hook status from an _live_ unstructured object
func (sc *syncContext) getOperationPhase(hook *unstructured.Unstructured) (common.OperationPhase, string, error) {
// getOperationPhase returns a health status from a _live_ unstructured object
func (sc *syncContext) getOperationPhase(obj *unstructured.Unstructured) (common.OperationPhase, string, error) {
phase := common.OperationSucceeded
message := fmt.Sprintf("%s created", hook.GetName())
message := obj.GetName() + " created"
resHealth, err := health.GetResourceHealth(hook, sc.healthOverride)
resHealth, err := health.GetResourceHealth(obj, sc.healthOverride)
if err != nil {
return "", "", err
return "", "", fmt.Errorf("failed to get resource health: %w", err)
}
if resHealth != nil {
switch resHealth.Status {
@ -308,27 +343,32 @@ func (sc *syncContext) getOperationPhase(hook *unstructured.Unstructured) (commo
type syncContext struct {
healthOverride health.HealthOverride
permissionValidator common.PermissionValidator
resources map[kube.ResourceKey]reconciledResource
resources map[kubeutil.ResourceKey]reconciledResource
hooks []*unstructured.Unstructured
config *rest.Config
rawConfig *rest.Config
dynamicIf dynamic.Interface
disco discovery.DiscoveryInterface
extensionsclientset *clientset.Clientset
kubectl kube.Kubectl
resourceOps kube.ResourceOperations
kubectl kubeutil.Kubectl
resourceOps kubeutil.ResourceOperations
namespace string
dryRun bool
force bool
validate bool
skipHooks bool
resourcesFilter func(key kube.ResourceKey, target *unstructured.Unstructured, live *unstructured.Unstructured) bool
prune bool
replace bool
serverSideApply bool
pruneLast bool
prunePropagationPolicy *metav1.DeletionPropagation
dryRun bool
skipDryRunOnMissingResource bool
force bool
validate bool
skipHooks bool
resourcesFilter func(key kubeutil.ResourceKey, target *unstructured.Unstructured, live *unstructured.Unstructured) bool
prune bool
replace bool
serverSideApply bool
serverSideApplyManager string
pruneLast bool
prunePropagationPolicy *metav1.DeletionPropagation
pruneConfirmed bool
clientSideApplyMigrationManager string
enableClientSideApplyMigration bool
syncRes map[string]common.ResourceSyncResult
startedAt time.Time
@ -340,14 +380,15 @@ type syncContext struct {
// lock to protect concurrent updates of the result list
lock sync.Mutex
createNamespace bool
namespaceModifier func(*unstructured.Unstructured) bool
// syncNamespace is a function that will determine if the managed
// namespace should be synced
syncNamespace func(*unstructured.Unstructured, *unstructured.Unstructured) (bool, error)
syncWaveHook common.SyncWaveHook
applyOutOfSyncOnly bool
// stores whether the resource is modified or not
modificationResult map[kube.ResourceKey]bool
modificationResult map[kubeutil.ResourceKey]bool
}
func (sc *syncContext) setRunningPhase(tasks []*syncTask, isPendingDeletion bool) {
@ -389,10 +430,25 @@ func (sc *syncContext) Sync() {
// to perform the sync. we only wish to do this once per operation, performing additional dry-runs
// is harmless, but redundant. The indicator we use to detect if we have already performed
// the dry-run for this operation, is if the resource or hook list is empty.
dryRunTasks := tasks
// Before doing any validation, we have to create the application namespace if it does not exist.
// The validation is expected to fail in multiple scenarios if a namespace does not exist.
if nsCreateTask := sc.getNamespaceCreationTask(dryRunTasks); nsCreateTask != nil {
nsSyncTasks := syncTasks{nsCreateTask}
// No need to perform a dry-run on the namespace creation, because if it fails we stop anyway
sc.log.WithValues("task", nsCreateTask).Info("Creating namespace")
if sc.runTasks(nsSyncTasks, false) == failed {
sc.setOperationFailed(syncTasks{}, nsSyncTasks, "the namespace failed to apply")
return
}
// The namespace was created, we can remove this task from the dry-run
dryRunTasks = tasks.Filter(func(t *syncTask) bool { return t != nsCreateTask })
}
if sc.applyOutOfSyncOnly {
dryRunTasks = sc.filterOutOfSyncTasks(tasks)
dryRunTasks = sc.filterOutOfSyncTasks(dryRunTasks)
}
sc.log.WithValues("tasks", dryRunTasks).Info("Tasks (dry-run)")
@ -445,6 +501,27 @@ func (sc *syncContext) Sync() {
return
}
// if pruned tasks pending deletion, then wait...
prunedTasksPendingDelete := tasks.Filter(func(t *syncTask) bool {
if t.pruned() && t.liveObj != nil {
return t.liveObj.GetDeletionTimestamp() != nil
}
return false
})
if prunedTasksPendingDelete.Len() > 0 {
sc.setRunningPhase(prunedTasksPendingDelete, true)
return
}
hooksCompleted := tasks.Filter(func(task *syncTask) bool {
return task.isHook() && task.completed()
})
for _, task := range hooksCompleted {
if err := sc.removeHookFinalizer(task); err != nil {
sc.setResourceResult(task, task.syncStatus, common.OperationError, fmt.Sprintf("Failed to remove hook finalizer: %v", err))
}
}
// collect all completed hooks which have appropriate delete policy
hooksPendingDeletionSuccessful := tasks.Filter(func(task *syncTask) bool {
return task.isHook() && task.liveObj != nil && !task.running() && task.deleteOnPhaseSuccessful()
@ -546,10 +623,87 @@ func (sc *syncContext) filterOutOfSyncTasks(tasks syncTasks) syncTasks {
})
}
// getNamespaceCreationTask returns a task that will create the current namespace
// or nil if the syncTasks does not contain one
func (sc *syncContext) getNamespaceCreationTask(tasks syncTasks) *syncTask {
creationTasks := tasks.Filter(func(task *syncTask) bool {
return task.liveObj == nil && isNamespaceWithName(task.targetObj, sc.namespace)
})
if len(creationTasks) > 0 {
return creationTasks[0]
}
return nil
}
func (sc *syncContext) removeHookFinalizer(task *syncTask) error {
if task.liveObj == nil {
return nil
}
removeFinalizerMutation := func(obj *unstructured.Unstructured) bool {
finalizers := obj.GetFinalizers()
for i, finalizer := range finalizers {
if finalizer == hook.HookFinalizer {
obj.SetFinalizers(append(finalizers[:i], finalizers[i+1:]...))
return true
}
}
return false
}
// The cached live object may be stale in the controller cache, and the actual object may have been updated in the meantime,
// and Kubernetes API will return a conflict error on the Update call.
// In that case, we need to get the latest version of the object and retry the update.
//nolint:wrapcheck // wrap inside the retried function instead
return retry.RetryOnConflict(retry.DefaultRetry, func() error {
mutated := removeFinalizerMutation(task.liveObj)
if !mutated {
return nil
}
updateErr := sc.updateResource(task)
if apierrors.IsConflict(updateErr) {
sc.log.WithValues("task", task).V(1).Info("Retrying hook finalizer removal due to conflict on update")
resIf, err := sc.getResourceIf(task, "get")
if err != nil {
return fmt.Errorf("failed to get resource interface: %w", err)
}
liveObj, err := resIf.Get(context.TODO(), task.liveObj.GetName(), metav1.GetOptions{})
if apierrors.IsNotFound(err) {
sc.log.WithValues("task", task).V(1).Info("Resource is already deleted")
return nil
} else if err != nil {
return fmt.Errorf("failed to get resource: %w", err)
}
task.liveObj = liveObj
} else if apierrors.IsNotFound(updateErr) {
// If the resource is already deleted, it is a no-op
sc.log.WithValues("task", task).V(1).Info("Resource is already deleted")
return nil
}
if updateErr != nil {
return fmt.Errorf("failed to update resource: %w", updateErr)
}
return nil
})
}
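
To illustrate the retry path above: if another client updates the hook between the cached read and the Update call, the API server returns a 409 Conflict; the retry then re-fetches the live object, re-applies the finalizer removal to the fresh copy, and tries the update again, while a 404 Not Found at any point is treated as success because the hook is already gone.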
func (sc *syncContext) updateResource(task *syncTask) error {
sc.log.WithValues("task", task).V(1).Info("Updating resource")
resIf, err := sc.getResourceIf(task, "update")
if err != nil {
return err
}
_, err = resIf.Update(context.TODO(), task.liveObj, metav1.UpdateOptions{})
if err != nil {
return fmt.Errorf("failed to update resource: %w", err)
}
return nil
}
func (sc *syncContext) deleteHooks(hooksPendingDeletion syncTasks) {
for _, task := range hooksPendingDeletion {
err := sc.deleteResource(task)
if err != nil && !apierr.IsNotFound(err) {
if err != nil && !apierrors.IsNotFound(err) {
sc.setResourceResult(task, "", common.OperationError, fmt.Sprintf("failed to delete resource: %v", err))
}
}
@ -567,8 +721,8 @@ func (sc *syncContext) GetState() (common.OperationPhase, string, []common.Resou
}
func (sc *syncContext) setOperationFailed(syncFailTasks, syncFailedTasks syncTasks, message string) {
errorMessageFactory := func(tasks []*syncTask, message string) string {
messages := syncFailedTasks.Map(func(task *syncTask) string {
errorMessageFactory := func(tasks syncTasks, message string) string {
messages := tasks.Map(func(task *syncTask) string {
return task.message
})
if len(messages) > 0 {
@ -589,7 +743,9 @@ func (sc *syncContext) setOperationFailed(syncFailTasks, syncFailedTasks syncTas
// the phase, so we make sure we have at least one more sync
sc.log.WithValues("syncFailTasks", syncFailTasks).V(1).Info("Running sync fail tasks")
if sc.runTasks(syncFailTasks, false) == failed {
sc.setOperationPhase(common.OperationFailed, errorMessage)
failedSyncFailTasks := syncFailTasks.Filter(func(t *syncTask) bool { return t.syncStatus == common.ResultCodeSyncFailed })
syncFailTasksMessage := errorMessageFactory(failedSyncFailTasks, "one or more SyncFail hooks failed")
sc.setOperationPhase(common.OperationFailed, fmt.Sprintf("%s\n%s", errorMessage, syncFailTasksMessage))
}
} else {
sc.setOperationPhase(common.OperationFailed, errorMessage)
@ -601,7 +757,7 @@ func (sc *syncContext) started() bool {
}
func (sc *syncContext) containsResource(resource reconciledResource) bool {
return sc.resourcesFilter == nil || sc.resourcesFilter(resource.key(), resource.Live, resource.Target)
return sc.resourcesFilter == nil || sc.resourcesFilter(resource.key(), resource.Target, resource.Live)
}
// generates the list of sync tasks we will be performing during this sync.
@ -649,7 +805,9 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
generateName := obj.GetGenerateName()
targetObj.SetName(fmt.Sprintf("%s%s", generateName, postfix))
}
if !hook.HasHookFinalizer(targetObj) {
targetObj.SetFinalizers(append(targetObj.GetFinalizers(), hook.HookFinalizer))
}
hookTasks = append(hookTasks, &syncTask{phase: phase, targetObj: targetObj})
}
}
@ -676,7 +834,7 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
}
}
if sc.createNamespace && sc.namespace != "" {
if sc.syncNamespace != nil && sc.namespace != "" {
tasks = sc.autoCreateNamespace(tasks)
}
@ -688,20 +846,48 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
task.liveObj = sc.liveObj(task.targetObj)
}
isRetryable := apierrors.IsUnauthorized
serverResCache := make(map[schema.GroupVersionKind]*metav1.APIResource)
// check permissions
for _, task := range tasks {
serverRes, err := kube.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind())
var serverRes *metav1.APIResource
var err error
if val, ok := serverResCache[task.groupVersionKind()]; ok {
serverRes = val
err = nil
} else {
err = retry.OnError(retry.DefaultRetry, isRetryable, func() error {
serverRes, err = kubeutil.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind(), "get")
//nolint:wrapcheck // complicated function, not wrapping to avoid failure of error type checks
return err
})
if serverRes != nil {
serverResCache[task.groupVersionKind()] = serverRes
}
}
shouldSkipDryRunOnMissingResource := func() bool {
// skip dry run on missing resource error for all application resources
if sc.skipDryRunOnMissingResource {
return true
}
return (task.targetObj != nil && resourceutil.HasAnnotationOption(task.targetObj, common.AnnotationSyncOptions, common.SyncOptionSkipDryRunOnMissingResource)) ||
sc.hasCRDOfGroupKind(task.group(), task.kind())
}
if err != nil {
// Special case for custom resources: if CRD is not yet known by the K8s API server,
// and the CRD is part of this sync or the resource is annotated with SkipDryRunOnMissingResource=true,
// then skip verification during `kubectl apply --dry-run` since we expect the CRD
// to be created during app synchronization.
if apierr.IsNotFound(err) &&
((task.targetObj != nil && resourceutil.HasAnnotationOption(task.targetObj, common.AnnotationSyncOptions, common.SyncOptionSkipDryRunOnMissingResource)) ||
sc.hasCRDOfGroupKind(task.group(), task.kind())) {
switch {
case apierrors.IsNotFound(err) && shouldSkipDryRunOnMissingResource():
// Special case for custom resources: if CRD is not yet known by the K8s API server,
// and the CRD is part of this sync or the resource is annotated with SkipDryRunOnMissingResource=true,
// then skip verification during `kubectl apply --dry-run` since we expect the CRD
// to be created during app synchronization.
sc.log.WithValues("task", task).V(1).Info("Skip dry-run for custom resource")
task.skipDryRun = true
} else {
default:
sc.setResourceResult(task, common.ResultCodeSyncFailed, "", err.Error())
successful = false
}
@ -713,11 +899,42 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
}
}
// for pruneLast tasks, modify the wave to sync phase last wave of non prune task +1
// for prune tasks, modify the waves for proper cleanup i.e reverse of sync wave (creation order)
pruneTasks := make(map[int][]*syncTask)
for _, task := range tasks {
if task.isPrune() {
pruneTasks[task.wave()] = append(pruneTasks[task.wave()], task)
}
}
var uniquePruneWaves []int
for k := range pruneTasks {
uniquePruneWaves = append(uniquePruneWaves, k)
}
sort.Ints(uniquePruneWaves)
// reorder waves for pruning tasks using symmetric swap on prune waves
n := len(uniquePruneWaves)
for i := 0; i < n/2; i++ {
// waves to swap
startWave := uniquePruneWaves[i]
endWave := uniquePruneWaves[n-1-i]
for _, task := range pruneTasks[startWave] {
task.waveOverride = &endWave
}
for _, task := range pruneTasks[endWave] {
task.waveOverride = &startWave
}
}
// for pruneLast tasks, modify the wave to sync phase last wave of tasks + 1
// to ensure proper cleanup, syncPhaseLastWave should also consider prune tasks to determine last wave
syncPhaseLastWave := 0
for _, task := range tasks {
if task.phase == common.SyncPhaseSync {
if task.wave() > syncPhaseLastWave && !task.isPrune() {
if task.wave() > syncPhaseLastWave {
syncPhaseLastWave = task.wave()
}
}
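
To illustrate the reordering above: if prune tasks exist in waves 1, 3 and 5, the symmetric swap overrides the wave-1 tasks to wave 5 and the wave-5 tasks to wave 1, while the wave-3 tasks keep their wave, so resources are pruned in the reverse of the order in which they were created. syncPhaseLastWave is then computed over all Sync-phase tasks, prune tasks included, so that PruneLast tasks can be deferred until after everything else.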
@ -727,12 +944,7 @@ func (sc *syncContext) getSyncTasks() (_ syncTasks, successful bool) {
for _, task := range tasks {
if task.isPrune() &&
(sc.pruneLast || resourceutil.HasAnnotationOption(task.liveObj, common.AnnotationSyncOptions, common.SyncOptionPruneLast)) {
annotations := task.liveObj.GetAnnotations()
if annotations == nil {
annotations = make(map[string]string)
}
annotations[common.AnnotationSyncWave] = strconv.Itoa(syncPhaseLastWave)
task.liveObj.SetAnnotations(annotations)
task.waveOverride = &syncPhaseLastWave
}
}
@ -768,36 +980,49 @@ func (sc *syncContext) autoCreateNamespace(tasks syncTasks) syncTasks {
}
if isNamespaceCreationNeeded {
nsSpec := &v1.Namespace{TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: kube.NamespaceKind}, ObjectMeta: metav1.ObjectMeta{Name: sc.namespace}}
unstructuredObj, err := kube.ToUnstructured(nsSpec)
nsSpec := &corev1.Namespace{TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: kubeutil.NamespaceKind}, ObjectMeta: metav1.ObjectMeta{Name: sc.namespace}}
managedNs, err := kubeutil.ToUnstructured(nsSpec)
if err == nil {
liveObj, err := sc.kubectl.GetResource(context.TODO(), sc.config, unstructuredObj.GroupVersionKind(), unstructuredObj.GetName(), metav1.NamespaceNone)
if err == nil {
nsTask := &syncTask{phase: common.SyncPhasePreSync, targetObj: unstructuredObj, liveObj: liveObj}
liveObj, err := sc.kubectl.GetResource(context.TODO(), sc.config, managedNs.GroupVersionKind(), managedNs.GetName(), metav1.NamespaceNone)
switch {
case err == nil:
nsTask := &syncTask{phase: common.SyncPhasePreSync, targetObj: managedNs, liveObj: liveObj}
_, ok := sc.syncRes[nsTask.resultKey()]
if ok {
tasks = append(tasks, nsTask)
} else {
if !ok && liveObj != nil {
sc.log.WithValues("namespace", sc.namespace).Info("Namespace already exists")
liveObjCopy := liveObj.DeepCopy()
if sc.namespaceModifier(liveObjCopy) {
tasks = append(tasks, &syncTask{phase: common.SyncPhasePreSync, targetObj: liveObjCopy, liveObj: liveObj})
}
}
} else if apierr.IsNotFound(err) {
tasks = append(tasks, &syncTask{phase: common.SyncPhasePreSync, targetObj: unstructuredObj, liveObj: nil})
} else {
task := &syncTask{phase: common.SyncPhasePreSync, targetObj: unstructuredObj}
sc.setResourceResult(task, common.ResultCodeSyncFailed, common.OperationError, fmt.Sprintf("Namespace auto creation failed: %s", err))
tasks = append(tasks, task)
tasks = sc.appendNsTask(tasks, nsTask, managedNs, liveObj)
case apierrors.IsNotFound(err):
tasks = sc.appendNsTask(tasks, &syncTask{phase: common.SyncPhasePreSync, targetObj: managedNs, liveObj: nil}, managedNs, nil)
default:
tasks = sc.appendFailedNsTask(tasks, managedNs, fmt.Errorf("namespace auto creation failed: %w", err))
}
} else {
sc.setOperationPhase(common.OperationFailed, fmt.Sprintf("Namespace auto creation failed: %s", err))
sc.setOperationPhase(common.OperationFailed, fmt.Sprintf("namespace auto creation failed: %s", err))
}
}
return tasks
}
func (sc *syncContext) appendNsTask(tasks syncTasks, preTask *syncTask, managedNs, liveNs *unstructured.Unstructured) syncTasks {
modified, err := sc.syncNamespace(managedNs, liveNs)
if err != nil {
tasks = sc.appendFailedNsTask(tasks, managedNs, fmt.Errorf("namespaceModifier error: %w", err))
} else if modified {
tasks = append(tasks, preTask)
}
return tasks
}
func (sc *syncContext) appendFailedNsTask(tasks syncTasks, unstructuredObj *unstructured.Unstructured, err error) syncTasks {
task := &syncTask{phase: common.SyncPhasePreSync, targetObj: unstructuredObj}
sc.setResourceResult(task, common.ResultCodeSyncFailed, common.OperationError, err.Error())
tasks = append(tasks, task)
return tasks
}
func isNamespaceWithName(res *unstructured.Unstructured, ns string) bool {
return isNamespaceKind(res) &&
res.GetName() == ns
@ -806,15 +1031,14 @@ func isNamespaceWithName(res *unstructured.Unstructured, ns string) bool {
func isNamespaceKind(res *unstructured.Unstructured) bool {
return res != nil &&
res.GetObjectKind().GroupVersionKind().Group == "" &&
res.GetKind() == kube.NamespaceKind
res.GetKind() == kubeutil.NamespaceKind
}
func obj(a, b *unstructured.Unstructured) *unstructured.Unstructured {
if a != nil {
return a
} else {
return b
}
return b
}
func (sc *syncContext) liveObj(obj *unstructured.Unstructured) *unstructured.Unstructured {
@ -840,34 +1064,121 @@ func (sc *syncContext) setOperationPhase(phase common.OperationPhase, message st
// ensureCRDReady waits until specified CRD is ready (established condition is true).
func (sc *syncContext) ensureCRDReady(name string) error {
return wait.PollImmediate(time.Duration(100)*time.Millisecond, crdReadinessTimeout, func() (bool, error) {
err := wait.PollUntilContextTimeout(context.Background(), time.Duration(100)*time.Millisecond, crdReadinessTimeout, true, func(_ context.Context) (bool, error) {
crd, err := sc.extensionsclientset.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
if err != nil {
//nolint:wrapcheck // wrapped outside the retry
return false, err
}
for _, condition := range crd.Status.Conditions {
if condition.Type == v1extensions.Established {
return condition.Status == v1extensions.ConditionTrue, nil
if condition.Type == apiextensionsv1.Established {
return condition.Status == apiextensionsv1.ConditionTrue, nil
}
}
return false, nil
})
if err != nil {
return fmt.Errorf("failed to ensure CRD ready: %w", err)
}
return nil
}
func (sc *syncContext) applyObject(t *syncTask, dryRun, force, validate bool) (common.ResultCode, string) {
func (sc *syncContext) shouldUseServerSideApply(targetObj *unstructured.Unstructured, dryRun bool) bool {
// if it is a dry run, disable server side apply, as the goal is to validate only the
// yaml correctness of the rendered manifests.
// running dry-run in server mode breaks the auto create namespace feature
// https://github.com/argoproj/argo-cd/issues/13874
if sc.dryRun || dryRun {
return false
}
resourceHasDisableSSAAnnotation := resourceutil.HasAnnotationOption(targetObj, common.AnnotationSyncOptions, common.SyncOptionDisableServerSideApply)
if resourceHasDisableSSAAnnotation {
return false
}
return sc.serverSideApply || resourceutil.HasAnnotationOption(targetObj, common.AnnotationSyncOptions, common.SyncOptionServerSideApply)
}
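
In other words: during any dry-run, server-side apply is always off; a resource annotated with ServerSideApply=false is applied client-side even when sc.serverSideApply is true; otherwise, server-side apply is used if it is enabled at the operation level or requested by the resource's own ServerSideApply=true annotation.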
// needsClientSideApplyMigration checks if a resource has fields managed by the specified manager
// that need to be migrated to the server-side apply manager
func (sc *syncContext) needsClientSideApplyMigration(liveObj *unstructured.Unstructured, fieldManager string) bool {
if liveObj == nil || fieldManager == "" {
return false
}
managedFields := liveObj.GetManagedFields()
if len(managedFields) == 0 {
return false
}
for _, field := range managedFields {
if field.Manager == fieldManager {
return true
}
}
return false
}
// performClientSideApplyMigration performs a client-side-apply using the specified field manager.
// This moves the 'last-applied-configuration' field to be managed by the specified manager.
// The next time server-side apply is performed, kubernetes automatically migrates all fields from the manager
// that owns 'last-applied-configuration' to the manager that uses server-side apply. This will remove the
// specified manager from the resources managed fields. 'kubectl-client-side-apply' is used as the default manager.
func (sc *syncContext) performClientSideApplyMigration(targetObj *unstructured.Unstructured, fieldManager string) error {
sc.log.WithValues("resource", kubeutil.GetResourceKey(targetObj)).V(1).Info("Performing client-side apply migration step")
// Apply with the specified manager to set up the migration
_, err := sc.resourceOps.ApplyResource(
context.TODO(),
targetObj,
cmdutil.DryRunNone,
false,
false,
false,
fieldManager,
)
if err != nil {
return fmt.Errorf("failed to perform client-side apply migration on manager %s: %w", fieldManager, err)
}
return nil
}
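The migration decision above only inspects the live object's .metadata.managedFields. A small illustrative sketch (not part of the engine) that lists the recorded field managers makes the before/after concrete: as long as the legacy manager (by default "kubectl-client-side-apply") still appears in the list, the migration apply is needed; after it and the next server-side apply, that entry should be gone. The simulated entry below is made up for the example.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// fieldManagers lists the manager names recorded in .metadata.managedFields.
func fieldManagers(live *unstructured.Unstructured) []string {
	var names []string
	for _, entry := range live.GetManagedFields() {
		names = append(names, entry.Manager)
	}
	return names
}

func main() {
	live := &unstructured.Unstructured{Object: map[string]any{
		"apiVersion": "v1",
		"kind":       "ConfigMap",
		"metadata":   map[string]any{"name": "example"},
	}}
	// Simulated managed-fields entry, for illustration only.
	live.SetManagedFields([]metav1.ManagedFieldsEntry{
		{Manager: "kubectl-client-side-apply", Operation: metav1.ManagedFieldsOperationUpdate},
	})
	fmt.Println(fieldManagers(live)) // [kubectl-client-side-apply] -> migration still needed
}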
func (sc *syncContext) applyObject(t *syncTask, dryRun, validate bool) (common.ResultCode, string) {
dryRunStrategy := cmdutil.DryRunNone
if dryRun {
// Irrespective of the dry-run mode set in the sync context, always run
// in client dry-run mode, as the goal is only to validate the
// YAML correctness of the rendered manifests.
// Running a dry run in server mode breaks the auto-create-namespace feature:
// https://github.com/argoproj/argo-cd/issues/13874
dryRunStrategy = cmdutil.DryRunClient
}
var err error
var message string
shouldReplace := sc.replace || resourceutil.HasAnnotationOption(t.targetObj, common.AnnotationSyncOptions, common.SyncOptionReplace)
serverSideApply := sc.serverSideApply || resourceutil.HasAnnotationOption(t.targetObj, common.AnnotationSyncOptions, common.SyncOptionServerSideApply)
force := sc.force || resourceutil.HasAnnotationOption(t.targetObj, common.AnnotationSyncOptions, common.SyncOptionForce)
serverSideApply := sc.shouldUseServerSideApply(t.targetObj, dryRun)
// Check if we need to perform client-side apply migration for server-side apply
if serverSideApply && !dryRun && sc.enableClientSideApplyMigration {
if sc.needsClientSideApplyMigration(t.liveObj, sc.clientSideApplyMigrationManager) {
err = sc.performClientSideApplyMigration(t.targetObj, sc.clientSideApplyMigrationManager)
if err != nil {
return common.ResultCodeSyncFailed, fmt.Sprintf("Failed to perform client-side apply migration: %v", err)
}
}
}
if shouldReplace {
if t.liveObj != nil {
// Avoid using `kubectl replace` for CRDs since 'replace' might recreate the resource and thus delete all CRD instances
if kube.IsCRD(t.targetObj) {
// Avoid using `kubectl replace` for CRDs since 'replace' might recreate the resource and thus delete all CRD instances.
// The same thing applies for namespaces, which would delete the namespace as well as everything within it,
// so we want to avoid using `kubectl replace` in that case as well.
if kubeutil.IsCRD(t.targetObj) || t.targetObj.GetKind() == kubeutil.NamespaceKind {
update := t.targetObj.DeepCopy()
update.SetResourceVersion(t.liveObj.GetResourceVersion())
_, err = sc.resourceOps.UpdateResource(context.TODO(), update, dryRunStrategy)
@ -883,12 +1194,12 @@ func (sc *syncContext) applyObject(t *syncTask, dryRun, force, validate bool) (c
message, err = sc.resourceOps.CreateResource(context.TODO(), t.targetObj, dryRunStrategy, validate)
}
} else {
message, err = sc.resourceOps.ApplyResource(context.TODO(), t.targetObj, dryRunStrategy, force, validate, serverSideApply)
message, err = sc.resourceOps.ApplyResource(context.TODO(), t.targetObj, dryRunStrategy, force, validate, serverSideApply, sc.serverSideApplyManager)
}
if err != nil {
return common.ResultCodeSyncFailed, err.Error()
}
if kube.IsCRD(t.targetObj) && !dryRun {
if kubeutil.IsCRD(t.targetObj) && !dryRun {
crdName := t.targetObj.GetName()
if err = sc.ensureCRDReady(crdName); err != nil {
sc.log.Error(err, fmt.Sprintf("failed to ensure that CRD %s is ready", crdName))
@ -903,21 +1214,19 @@ func (sc *syncContext) pruneObject(liveObj *unstructured.Unstructured, prune, dr
return common.ResultCodePruneSkipped, "ignored (requires pruning)"
} else if resourceutil.HasAnnotationOption(liveObj, common.AnnotationSyncOptions, common.SyncOptionDisablePrune) {
return common.ResultCodePruneSkipped, "ignored (no prune)"
} else {
if dryRun {
return common.ResultCodePruned, "pruned (dry run)"
} else {
// Skip deletion if object is already marked for deletion, so we don't cause a resource update hotloop
deletionTimestamp := liveObj.GetDeletionTimestamp()
if deletionTimestamp == nil || deletionTimestamp.IsZero() {
err := sc.kubectl.DeleteResource(context.TODO(), sc.config, liveObj.GroupVersionKind(), liveObj.GetName(), liveObj.GetNamespace(), sc.getDeleteOptions())
if err != nil {
return common.ResultCodeSyncFailed, err.Error()
}
}
return common.ResultCodePruned, "pruned"
}
if dryRun {
return common.ResultCodePruned, "pruned (dry run)"
}
// Skip deletion if object is already marked for deletion, so we don't cause a resource update hotloop
deletionTimestamp := liveObj.GetDeletionTimestamp()
if deletionTimestamp == nil || deletionTimestamp.IsZero() {
err := sc.kubectl.DeleteResource(context.TODO(), sc.config, liveObj.GroupVersionKind(), liveObj.GetName(), liveObj.GetNamespace(), sc.getDeleteOptions())
if err != nil {
return common.ResultCodeSyncFailed, err.Error()
}
}
return common.ResultCodePruned, "pruned"
}
func (sc *syncContext) getDeleteOptions() metav1.DeleteOptions {
@ -940,7 +1249,7 @@ func (sc *syncContext) targetObjs() []*unstructured.Unstructured {
}
func isCRDOfGroupKind(group string, kind string, obj *unstructured.Unstructured) bool {
if kube.IsCRD(obj) {
if kubeutil.IsCRD(obj) {
crdGroup, ok, err := unstructured.NestedString(obj.Object, "spec", "group")
if err != nil || !ok {
return false
@ -974,6 +1283,11 @@ func (sc *syncContext) Terminate() {
if !task.isHook() || task.liveObj == nil {
continue
}
if err := sc.removeHookFinalizer(task); err != nil {
sc.setResourceResult(task, task.syncStatus, common.OperationError, fmt.Sprintf("Failed to remove hook finalizer: %v", err))
terminateSuccessful = false
continue
}
phase, msg, err := sc.getOperationPhase(task.liveObj)
if err != nil {
sc.setOperationPhase(common.OperationError, fmt.Sprintf("Failed to get hook health: %v", err))
@ -981,7 +1295,7 @@ func (sc *syncContext) Terminate() {
}
if phase == common.OperationRunning {
err := sc.deleteResource(task)
if err != nil {
if err != nil && !apierrors.IsNotFound(err) {
sc.setResourceResult(task, "", common.OperationFailed, fmt.Sprintf("Failed to delete: %v", err))
terminateSuccessful = false
} else {
@ -1000,21 +1314,25 @@ func (sc *syncContext) Terminate() {
func (sc *syncContext) deleteResource(task *syncTask) error {
sc.log.WithValues("task", task).V(1).Info("Deleting resource")
resIf, err := sc.getResourceIf(task)
resIf, err := sc.getResourceIf(task, "delete")
if err != nil {
return err
}
return resIf.Delete(context.TODO(), task.name(), sc.getDeleteOptions())
err = resIf.Delete(context.TODO(), task.name(), sc.getDeleteOptions())
if err != nil {
return fmt.Errorf("failed to delete resource: %w", err)
}
return nil
}
func (sc *syncContext) getResourceIf(task *syncTask) (dynamic.ResourceInterface, error) {
apiResource, err := kube.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind())
func (sc *syncContext) getResourceIf(task *syncTask, verb string) (dynamic.ResourceInterface, error) {
apiResource, err := kubeutil.ServerResourceForGroupVersionKind(sc.disco, task.groupVersionKind(), verb)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to get api resource: %w", err)
}
res := kube.ToGroupVersionResource(task.groupVersionKind().GroupVersion().String(), apiResource)
resIf := kube.ToResourceInterface(sc.dynamicIf, apiResource, res, task.namespace())
return resIf, err
res := kubeutil.ToGroupVersionResource(task.groupVersionKind().GroupVersion().String(), apiResource)
resIf := kubeutil.ToResourceInterface(sc.dynamicIf, apiResource, res, task.namespace())
return resIf, nil
}
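getResourceIf resolves a task's GroupVersionKind to a dynamic ResourceInterface. The bodies of kubeutil.ToGroupVersionResource and kubeutil.ToResourceInterface are not shown in this diff; the sketch below is only the standard client-go pattern they presumably wrap, with the namespaced/cluster-scoped distinction made explicit.

package main

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// resourceInterfaceFor returns the interface used to operate on a resource,
// scoping it to a namespace only when the resource is namespaced.
func resourceInterfaceFor(dyn dynamic.Interface, gvr schema.GroupVersionResource, namespaced bool, ns string) dynamic.ResourceInterface {
	if namespaced {
		return dyn.Resource(gvr).Namespace(ns)
	}
	return dyn.Resource(gvr)
}

func main() {} // compile check only; a real caller would pass a configured dynamic client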
var operationPhases = map[common.ResultCode]common.OperationPhase{
@ -1051,6 +1369,24 @@ func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) runState {
}
// prune first
{
if !sc.pruneConfirmed {
var resources []string
for _, task := range pruneTasks {
if resourceutil.HasAnnotationOption(task.liveObj, common.AnnotationSyncOptions, common.SyncOptionPruneRequireConfirm) {
resources = append(resources, fmt.Sprintf("%s/%s/%s", task.obj().GetAPIVersion(), task.obj().GetKind(), task.name()))
}
}
if len(resources) > 0 {
sc.log.WithValues("resources", resources).Info("Prune requires confirmation")
andMessage := ""
if len(resources) > 1 {
andMessage = fmt.Sprintf(" and %d more resources", len(resources)-1)
}
sc.message = fmt.Sprintf("Waiting for pruning confirmation of %s%s", resources[0], andMessage)
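// Illustration (not from the source): with three matching resources
// ["v1/Pod/a", "v1/Pod/b", "v1/Pod/c"], resources[0] is "v1/Pod/a" and
// andMessage is " and 2 more resources", so the operation reports
// "Waiting for pruning confirmation of v1/Pod/a and 2 more resources".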
return pending
}
}
ss := newStateSync(state)
for _, task := range pruneTasks {
t := task
@ -1088,7 +1424,7 @@ func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) runState {
if err != nil {
// It is possible to get a race condition here, such that the resource does not exist when
// the delete is requested; we treat this as a no-op
if !apierr.IsNotFound(err) {
if !apierrors.IsNotFound(err) {
state = failed
sc.setResourceResult(t, "", common.OperationError, fmt.Sprintf("failed to delete resource: %v", err))
}
@ -1111,7 +1447,7 @@ func (sc *syncContext) runTasks(tasks syncTasks, dryRun bool) runState {
// finally create resources
var tasksGroup syncTasks
for _, task := range createTasks {
//Only wait if the type of the next task is different than the previous type
// Only wait if the type of the next task is different than the previous type
if len(tasksGroup) > 0 && tasksGroup[0].targetObj.GetKind() != task.kind() {
state = sc.processCreateTasks(state, tasksGroup, dryRun)
tasksGroup = syncTasks{task}
@ -1136,7 +1472,7 @@ func (sc *syncContext) processCreateTasks(state runState, tasks syncTasks, dryRu
logCtx := sc.log.WithValues("dryRun", dryRun, "task", t)
logCtx.V(1).Info("Applying")
validate := sc.validate && !resourceutil.HasAnnotationOption(t.targetObj, common.AnnotationSyncOptions, common.SyncOptionsDisableValidation)
result, message := sc.applyObject(t, dryRun, sc.force, validate)
result, message := sc.applyObject(t, dryRun, validate)
if result == common.ResultCodeSyncFailed {
logCtx.WithValues("message", message).Info("Apply failed")
state = failed
@ -1170,7 +1506,8 @@ func (sc *syncContext) setResourceResult(task *syncTask, syncStatus common.Resul
existing, ok := sc.syncRes[task.resultKey()]
res := common.ResourceSyncResult{
ResourceKey: kube.GetResourceKey(task.obj()),
ResourceKey: kubeutil.GetResourceKey(task.obj()),
Images: kubeutil.GetResourceImages(task.obj()),
Version: task.version(),
Status: task.syncStatus,
Message: task.message,

File diff suppressed because it is too large


@ -23,7 +23,6 @@ func syncPhases(obj *unstructured.Unstructured) []common.SyncPhase {
phases = append(phases, phase)
}
return phases
} else {
return []common.SyncPhase{common.SyncPhaseSync}
}
return []common.SyncPhase{common.SyncPhaseSync}
}


@ -7,7 +7,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestSyncPhaseNone(t *testing.T) {
@ -49,9 +49,9 @@ func TestSyncDuplicatedPhases(t *testing.T) {
}
func pod(hookType string) *unstructured.Unstructured {
return Annotate(NewPod(), "argocd.argoproj.io/hook", hookType)
return testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", hookType)
}
func podWithHelmHook(hookType string) *unstructured.Unstructured {
return Annotate(NewPod(), "helm.sh/hook", hookType)
return testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook", hookType)
}


@ -29,9 +29,8 @@ type syncTask struct {
func ternary(val bool, a, b string) string {
if val {
return a
} else {
return b
}
return b
}
func (t *syncTask) String() string {
@ -71,6 +70,7 @@ func (t *syncTask) isHook() bool {
func (t *syncTask) group() string {
return t.groupVersionKind().Group
}
func (t *syncTask) kind() string {
return t.groupVersionKind().Kind
}
@ -107,12 +107,15 @@ func (t *syncTask) successful() bool {
return t.operationState.Successful()
}
func (t *syncTask) pruned() bool {
return t.syncStatus == common.ResultCodePruned
}
func (t *syncTask) hookType() common.HookType {
if t.isHook() {
return common.HookType(t.phase)
} else {
return ""
}
return ""
}
func (t *syncTask) hasHookDeletePolicy(policy common.HookDeletePolicy) bool {


@ -7,11 +7,11 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func newHook(hookType common.HookType) *unstructured.Unstructured {
return Annotate(NewPod(), "argocd.argoproj.io/hook", string(hookType))
return testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", string(hookType))
}
func Test_syncTask_hookType(t *testing.T) {
@ -24,7 +24,7 @@ func Test_syncTask_hookType(t *testing.T) {
fields fields
want common.HookType
}{
{"Empty", fields{common.SyncPhaseSync, NewPod()}, ""},
{"Empty", fields{common.SyncPhaseSync, testingutils.NewPod()}, ""},
{"PreSyncHook", fields{common.SyncPhasePreSync, newHook(common.HookTypePreSync)}, common.HookTypePreSync},
{"SyncHook", fields{common.SyncPhaseSync, newHook(common.HookTypeSync)}, common.HookTypeSync},
{"PostSyncHook", fields{common.SyncPhasePostSync, newHook(common.HookTypePostSync)}, common.HookTypePostSync},
@ -36,41 +36,40 @@ func Test_syncTask_hookType(t *testing.T) {
liveObj: tt.fields.liveObj,
}
hookType := task.hookType()
assert.EqualValues(t, tt.want, hookType)
assert.Equal(t, tt.want, hookType)
})
}
}
func Test_syncTask_hasHookDeletePolicy(t *testing.T) {
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(common.HookDeletePolicyBeforeHookCreation))
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(common.HookDeletePolicyHookSucceeded))
assert.False(t, (&syncTask{targetObj: NewPod()}).hasHookDeletePolicy(common.HookDeletePolicyHookFailed))
assert.False(t, (&syncTask{targetObj: testingutils.NewPod()}).hasHookDeletePolicy(common.HookDeletePolicyBeforeHookCreation))
assert.False(t, (&syncTask{targetObj: testingutils.NewPod()}).hasHookDeletePolicy(common.HookDeletePolicyHookSucceeded))
assert.False(t, (&syncTask{targetObj: testingutils.NewPod()}).hasHookDeletePolicy(common.HookDeletePolicyHookFailed))
// must be hook
assert.False(t, (&syncTask{targetObj: Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(common.HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(common.HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).hasHookDeletePolicy(common.HookDeletePolicyHookSucceeded))
assert.True(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).hasHookDeletePolicy(common.HookDeletePolicyHookFailed))
assert.False(t, (&syncTask{targetObj: testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(common.HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).hasHookDeletePolicy(common.HookDeletePolicyBeforeHookCreation))
assert.True(t, (&syncTask{targetObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).hasHookDeletePolicy(common.HookDeletePolicyHookSucceeded))
assert.True(t, (&syncTask{targetObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).hasHookDeletePolicy(common.HookDeletePolicyHookFailed))
}
func Test_syncTask_deleteOnPhaseCompletion(t *testing.T) {
assert.False(t, (&syncTask{liveObj: NewPod()}).deleteOnPhaseCompletion())
assert.False(t, (&syncTask{liveObj: testingutils.NewPod()}).deleteOnPhaseCompletion())
// must be hook
assert.True(t, (&syncTask{operationState: common.OperationSucceeded, liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).deleteOnPhaseCompletion())
assert.True(t, (&syncTask{operationState: common.OperationFailed, liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).deleteOnPhaseCompletion())
assert.True(t, (&syncTask{operationState: common.OperationSucceeded, liveObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookSucceeded")}).deleteOnPhaseCompletion())
assert.True(t, (&syncTask{operationState: common.OperationFailed, liveObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "HookFailed")}).deleteOnPhaseCompletion())
}
func Test_syncTask_deleteBeforeCreation(t *testing.T) {
assert.False(t, (&syncTask{liveObj: NewPod()}).deleteBeforeCreation())
assert.False(t, (&syncTask{liveObj: testingutils.NewPod()}).deleteBeforeCreation())
// must be hook
assert.False(t, (&syncTask{liveObj: Annotate(NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
assert.False(t, (&syncTask{liveObj: testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
// no need to delete if no live obj
assert.False(t, (&syncTask{targetObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
assert.True(t, (&syncTask{liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
assert.True(t, (&syncTask{liveObj: Annotate(Annotate(NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
assert.False(t, (&syncTask{targetObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
assert.True(t, (&syncTask{liveObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
assert.True(t, (&syncTask{liveObj: testingutils.Annotate(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/hook", "Sync"), "argocd.argoproj.io/hook-delete-policy", "BeforeHookCreation")}).deleteBeforeCreation())
}
func Test_syncTask_wave(t *testing.T) {
assert.Equal(t, 0, (&syncTask{targetObj: NewPod()}).wave())
assert.Equal(t, 1, (&syncTask{targetObj: Annotate(NewPod(), "argocd.argoproj.io/sync-wave", "1")}).wave())
assert.Equal(t, 0, (&syncTask{targetObj: testingutils.NewPod()}).wave())
assert.Equal(t, 1, (&syncTask{targetObj: testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/sync-wave", "1")}).wave())
}


@ -83,7 +83,6 @@ func (s syncTasks) Swap(i, j int) {
// 3. kind
// 4. name
func (s syncTasks) Less(i, j int) bool {
tA := s[i]
tB := s[j]
@ -198,7 +197,7 @@ func (s syncTasks) Split(predicate func(task *syncTask) bool) (trueTasks, falseT
}
func (s syncTasks) Map(predicate func(task *syncTask) string) []string {
messagesMap := make(map[string]interface{})
messagesMap := make(map[string]any)
for _, task := range s {
messagesMap[predicate(task)] = nil
}


@ -5,11 +5,11 @@ import (
"testing"
"github.com/stretchr/testify/assert"
apiv1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/gitops-engine/pkg/sync/common"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func Test_syncTasks_kindOrder(t *testing.T) {
@ -58,24 +58,24 @@ func TestSplitSyncTasks(t *testing.T) {
var unsortedTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
@ -85,9 +85,9 @@ var unsortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
"argocd.argoproj.io/sync-wave": "1",
},
},
@ -96,8 +96,8 @@ var unsortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"name": "b",
},
},
@ -105,8 +105,8 @@ var unsortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"name": "a",
},
},
@ -114,9 +114,9 @@ var unsortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
"argocd.argoproj.io/sync-wave": "-1",
},
},
@ -125,8 +125,8 @@ var unsortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
},
},
},
@ -139,8 +139,8 @@ var unsortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
@ -154,9 +154,9 @@ var sortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
"argocd.argoproj.io/sync-wave": "-1",
},
},
@ -165,47 +165,47 @@ var sortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"name": "a",
},
},
@ -213,8 +213,8 @@ var sortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"name": "b",
},
},
@ -222,9 +222,9 @@ var sortedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
"argocd.argoproj.io/sync-wave": "1",
},
},
@ -244,8 +244,8 @@ var sortedTasks = syncTasks{
var namedObjTasks = syncTasks{
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"name": "a",
},
},
@ -253,8 +253,8 @@ var namedObjTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"name": "b",
},
},
@ -269,9 +269,9 @@ var unnamedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
"argocd.argoproj.io/sync-wave": "-1",
},
},
@ -280,48 +280,48 @@ var unnamedTasks = syncTasks{
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "ConfigMap",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "PersistentVolume",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "Service",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
"kind": "Pod",
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"GroupVersion": apiv1.SchemeGroupVersion.String(),
Object: map[string]any{
"GroupVersion": corev1.SchemeGroupVersion.String(),
},
},
},
{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"annotations": map[string]interface{}{
Object: map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
"argocd.argoproj.io/sync-wave": "1",
},
},
@ -349,13 +349,14 @@ func Test_syncTasks_Filter(t *testing.T) {
func TestSyncNamespaceAgainstCRD(t *testing.T) {
crd := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Workflow",
},
}}
},
}
namespace := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Namespace",
},
},
@ -371,30 +372,32 @@ func TestSyncTasksSort_NamespaceAndObjectInNamespace(t *testing.T) {
hook1 := &syncTask{
phase: common.SyncPhasePreSync,
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Job",
"metadata": map[string]interface{}{
"metadata": map[string]any{
"namespace": "myNamespace1",
"name": "mySyncHookJob1",
},
},
}}
},
}
hook2 := &syncTask{
phase: common.SyncPhasePreSync,
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Job",
"metadata": map[string]interface{}{
"metadata": map[string]any{
"namespace": "myNamespace2",
"name": "mySyncHookJob2",
},
},
}}
},
}
namespace1 := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Namespace",
"metadata": map[string]interface{}{
"metadata": map[string]any{
"name": "myNamespace1",
"annotations": map[string]string{
"argocd.argoproj.io/sync-wave": "1",
@ -405,9 +408,9 @@ func TestSyncTasksSort_NamespaceAndObjectInNamespace(t *testing.T) {
}
namespace2 := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Namespace",
"metadata": map[string]interface{}{
"metadata": map[string]any{
"name": "myNamespace2",
"annotations": map[string]string{
"argocd.argoproj.io/sync-wave": "2",
@ -431,7 +434,7 @@ func TestSyncTasksSort_CRDAndCR(t *testing.T) {
cr := &syncTask{
phase: common.SyncPhasePreSync,
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"kind": "Workflow",
"apiVersion": "argoproj.io/v1",
},
@ -439,17 +442,18 @@ func TestSyncTasksSort_CRDAndCR(t *testing.T) {
}
crd := &syncTask{
targetObj: &unstructured.Unstructured{
Object: map[string]interface{}{
Object: map[string]any{
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"spec": map[string]interface{}{
"spec": map[string]any{
"group": "argoproj.io",
"names": map[string]interface{}{
"names": map[string]any{
"kind": "Workflow",
},
},
},
}}
},
}
unsorted := syncTasks{cr, crd}
unsorted.Sort()
@ -459,7 +463,7 @@ func TestSyncTasksSort_CRDAndCR(t *testing.T) {
func Test_syncTasks_multiStep(t *testing.T) {
t.Run("Single", func(t *testing.T) {
tasks := syncTasks{{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: common.SyncPhaseSync}}
tasks := syncTasks{{liveObj: testingutils.Annotate(testingutils.NewPod(), common.AnnotationSyncWave, "-1"), phase: common.SyncPhaseSync}}
assert.Equal(t, common.SyncPhaseSync, string(tasks.phase()))
assert.Equal(t, -1, tasks.wave())
assert.Equal(t, common.SyncPhaseSync, string(tasks.lastPhase()))
@ -468,8 +472,8 @@ func Test_syncTasks_multiStep(t *testing.T) {
})
t.Run("Double", func(t *testing.T) {
tasks := syncTasks{
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "-1"), phase: common.SyncPhasePreSync},
{liveObj: Annotate(NewPod(), common.AnnotationSyncWave, "1"), phase: common.SyncPhasePostSync},
{liveObj: testingutils.Annotate(testingutils.NewPod(), common.AnnotationSyncWave, "-1"), phase: common.SyncPhasePreSync},
{liveObj: testingutils.Annotate(testingutils.NewPod(), common.AnnotationSyncWave, "1"), phase: common.SyncPhasePostSync},
}
assert.Equal(t, common.SyncPhasePreSync, string(tasks.phase()))
assert.Equal(t, -1, tasks.wave())


@ -5,11 +5,11 @@ import (
"github.com/stretchr/testify/assert"
. "github.com/argoproj/gitops-engine/pkg/utils/testing"
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
)
func TestWave(t *testing.T) {
assert.Equal(t, 0, Wave(NewPod()))
assert.Equal(t, 1, Wave(Annotate(NewPod(), "argocd.argoproj.io/sync-wave", "1")))
assert.Equal(t, 1, Wave(Annotate(NewPod(), "helm.sh/hook-weight", "1")))
assert.Equal(t, 0, Wave(testingutils.NewPod()))
assert.Equal(t, 1, Wave(testingutils.Annotate(testingutils.NewPod(), "argocd.argoproj.io/sync-wave", "1")))
assert.Equal(t, 1, Wave(testingutils.Annotate(testingutils.NewPod(), "helm.sh/hook-weight", "1")))
}


@ -4,9 +4,8 @@ import (
"os"
)
var (
TempDir string
)
// TempDir is set to '/dev/shm' if it exists; otherwise it is "", which makes os.CreateTemp fall back to os.TempDir()
var TempDir string
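A brief illustrative sketch of the fallback described in the comment above: when TempDir is "" (because /dev/shm does not exist), os.CreateTemp uses os.TempDir(). The prefix string and local variable below are made up for the example.

package main

import (
	"fmt"
	"os"
)

func main() {
	tempDir := "" // would be "/dev/shm" when that directory exists
	f, err := os.CreateTemp(tempDir, "gitops-engine-*")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	fmt.Println(f.Name()) // lives under os.TempDir() when tempDir is ""
}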
func init() {
fileInfo, err := os.Stat("/dev/shm")


@ -1,31 +1,28 @@
package json
// https://github.com/ksonnet/ksonnet/blob/master/pkg/kubecfg/diff.go
func removeFields(config, live interface{}) interface{} {
func removeFields(config, live any) any {
switch c := config.(type) {
case map[string]interface{}:
l, ok := live.(map[string]interface{})
case map[string]any:
l, ok := live.(map[string]any)
if ok {
return RemoveMapFields(c, l)
} else {
return live
}
case []interface{}:
l, ok := live.([]interface{})
return live
case []any:
l, ok := live.([]any)
if ok {
return RemoveListFields(c, l)
} else {
return live
}
return live
default:
return live
}
}
// RemoveMapFields removes from the live map all fields that do not exist in the config map
func RemoveMapFields(config, live map[string]interface{}) map[string]interface{} {
result := map[string]interface{}{}
func RemoveMapFields(config, live map[string]any) map[string]any {
result := map[string]any{}
for k, v1 := range config {
v2, ok := live[k]
if !ok {
@ -39,10 +36,10 @@ func RemoveMapFields(config, live map[string]interface{}) map[string]interface{}
return result
}
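A self-contained, simplified illustration of the RemoveMapFields semantics (flat maps only; the real function recurses into nested maps and lists via removeFields): keys present only in live are dropped, and the values for shared keys come from live. The helper name and sample data below are invented for the example.

package main

import "fmt"

func keepConfigFields(config, live map[string]any) map[string]any {
	result := map[string]any{}
	for k := range config {
		if v, ok := live[k]; ok {
			result[k] = v
		}
	}
	return result
}

func main() {
	config := map[string]any{"replicas": 1}
	live := map[string]any{"replicas": 3, "status": "Ready"}
	fmt.Println(keepConfigFields(config, live)) // map[replicas:3] — "status" is removed from the diffable live state
}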
func RemoveListFields(config, live []interface{}) []interface{} {
func RemoveListFields(config, live []any) []any {
// If live is longer than config, then the extra elements at the end of the
// list will be returned as-is so they appear in the diff.
result := make([]interface{}, 0, len(live))
result := make([]any, 0, len(live))
for i, v2 := range live {
if len(config) > i {
if v2 != nil {


@ -1,26 +1,28 @@
package kube
import (
"github.com/argoproj/gitops-engine/pkg/utils/kube/scheme"
"fmt"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/gitops-engine/pkg/utils/kube/scheme"
)
func convertToVersionWithScheme(obj *unstructured.Unstructured, group string, version string) (*unstructured.Unstructured, error) {
s := scheme.Scheme
object, err := s.ConvertToVersion(obj, runtime.InternalGroupVersioner)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to convert to version using internal group versioner: %w", err)
}
unmarshalledObj, err := s.ConvertToVersion(object, schema.GroupVersion{Group: group, Version: version})
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to convert to version: %w", err)
}
unstrBody, err := runtime.DefaultUnstructuredConverter.ToUnstructured(unmarshalledObj)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to convert to unstructured object: %w", err)
}
return &unstructured.Unstructured{Object: unstrBody}, nil
}


@ -6,6 +6,7 @@ import (
testingutils "github.com/argoproj/gitops-engine/pkg/utils/testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/yaml"
)
@ -76,7 +77,7 @@ func Test_convertToVersionWithScheme(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
obj := testingutils.UnstructuredFromFile("testdata/" + tt.file)
target, err := schema.ParseGroupVersion(tt.outputVersion)
assert.NoError(t, err)
require.NoError(t, err)
out, err := convertToVersionWithScheme(obj, target.Group, target.Version)
if assert.NoError(t, err) {
assert.NotNil(t, out)

Some files were not shown because too many files have changed in this diff