Compare commits

...

218 Commits
v1.1.6 ... main

Author SHA1 Message Date
Sergen Yalçın 4c6bfc216d
Merge pull request #519 from erhancagirici/xp-v2-module-update
bump upjet go module to v2
2025-08-04 14:49:37 +03:00
Erhan Cagirici 8ff62171a7
Merge pull request #518 from crossplane/xp-v2
Generate crossplane v2 providers
2025-08-01 17:31:26 +03:00
Erhan Cagirici d1183ac46f bump upjet go module to v2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-08-01 01:50:50 +03:00
Erhan Cagirici b4eb48bcba go mod: bump crossplane-runtime to v2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 21:05:12 +03:00
Erhan Cagirici cf5fef8c8a fix: use astutils for import manipulation in resolver
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 21:05:12 +03:00
Erhan Cagirici 4b39d500a1 rename crossplane-runtime import paths for v2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 21:05:12 +03:00
Erhan Cagirici 2c719f0b96 examples: generate namespaced examples
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 3157e7da7e regenerate unittest mocks
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici b3b8ef0938 remove connection publisher options from controller template
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 88e0308220 generate gated controller setup methods
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 72f730ba32 update unit tests for namespaced MRs
update sensitive tests
update types builder tests
add tests for FileProducer connection secret ref resolution
add namespaced tests for api callbacks

Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 7e1dd50d8f remove migration framework
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici c4427804fd make MR metric recording keys namespaced
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 0636518853 add MR namespace to various logs
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici a30bbb511d generate local cross-resource references for namespaced crds
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici b64c7f1799 remove ESS configuration
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici b20c833a70 handle local secret references for sensitive observations
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 96746e6b15 runtime resolution of local secret refs in sensitive parameters of namespaced MRs
- inject namespace to the local secret ref if MR is namespaced

- cross-namespace secret refs are effectively not allowed for namespaced MRs

Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
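The namespace-injection rule this commit describes can be sketched as follows (the types and function name here are hypothetical stand-ins, not upjet's actual API):

```go
package main

import "fmt"

// SecretReference is a minimal stand-in for the runtime's secret reference
// type; the real upjet/crossplane-runtime types are richer.
type SecretReference struct {
	Name      string
	Namespace string
}

// resolveLocalRef injects the managed resource's own namespace into a local
// secret reference when the MR is namespaced. Because the namespace always
// comes from the MR itself, cross-namespace secret references are effectively
// impossible for namespaced MRs.
func resolveLocalRef(ref SecretReference, mrNamespace string) SecretReference {
	if mrNamespace != "" && ref.Namespace == "" {
		ref.Namespace = mrNamespace
	}
	return ref
}

func main() {
	r := resolveLocalRef(SecretReference{Name: "db-creds"}, "team-a")
	fmt.Println(r.Namespace) // team-a
}
```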
Erhan Cagirici 4b64e6ecb8 generate local secret refs for sensitive fields in namespaced MRs
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 6ea3bc96f2 generate namespace-friendly Go structs for namespaced MR types
- namespaced MR Go structs now inline v2-style ManagedResourceSpec in the type template
as a result:
- writeConnectionSecretToRef becomes a local secret ref in namespaced MRs
- providerConfigRef becomes a typed reference with kind included
- deletionPolicy gets removed in namespaced MRs
- publishConnectionDetailsTo gets removed from all MRs

Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
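The spec-shape changes listed above can be illustrated with a trimmed-down sketch (hypothetical field and type names; the generated upjet types are far more involved):

```go
package main

import "fmt"

// LocalSecretReference has no Namespace field: writeConnectionSecretToRef
// becomes a local reference in namespaced MRs.
type LocalSecretReference struct{ Name string }

// ProviderConfigReference now carries a Kind, making it a typed reference.
type ProviderConfigReference struct {
	Kind string
	Name string
}

// NamespacedSpec inlines a v2-style managed resource spec: note the absence
// of deletionPolicy and publishConnectionDetailsTo fields.
type NamespacedSpec struct {
	WriteConnectionSecretToRef *LocalSecretReference
	ProviderConfigRef          *ProviderConfigReference
}

func main() {
	s := NamespacedSpec{
		WriteConnectionSecretToRef: &LocalSecretReference{Name: "out"},
		ProviderConfigRef:          &ProviderConfigReference{Kind: "ProviderConfig", Name: "default"},
	}
	fmt.Println(s.ProviderConfigRef.Kind) // ProviderConfig
}
```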
Jared Watts 3d1ab3b9fb refactor: pipeline Run should return early after handling cluster only scenario
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts 9d7ed5871c test: update unit tests for namespaced usage
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts a0e93d3827 make external client and friends namespace aware
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts a590156eb9 enabled namespaced conversions to be registered too
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts 76770f82b2 Pipeline Run accepts both cluster scoped provider config and optional namespaced config
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts 8e9fcfd9c3 feat: make namespaced resource generation opt-in
Also take into account whether there is a main.go template file when
generating controller setup and provider main.go files. There will
be no template in the monolith case, meaning we should generate
a consolidated controller setup and shouldn't generate main.go files
for each group.

Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Nic Cope 387fb83493 Generate namespaced Go types and controllers
This is mostly a case of finding hard-coded paths and assumptions and making
them configurable.

Signed-off-by: Nic Cope <nicc@rk0n.org>
2025-07-31 14:27:49 +03:00
Nic Cope 40bae8497f POC: Make api and controller paths configurable
This PR is a no-op when I run it on provider-upjet-aws. It generates
exactly the same code as before this PR.

My goal is to allow invoking the main apis/controllers loop twice.
Once for cluster scoped and once for namespaced resources. I'll do
that in a follow-up PR.

Signed-off-by: Nic Cope <nicc@rk0n.org>
2025-07-31 14:27:49 +03:00
Sergen Yalçın 96241b0ae5
Merge pull request #512 from sergenyalcin/ignore-identity-in-diff
Sanitize Identity field in InstanceDiff
2025-07-31 13:17:36 +03:00
Sergen Yalçın ffb68a034a
Fix unit tests
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-31 12:32:12 +03:00
Sergen Yalçın 99c75ab7cb
Sanitize Identity field in Diff
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-31 12:32:09 +03:00
Sergen Yalçın 766236448e
Merge pull request #515 from sergenyalcin/custom-state-check
Custom state check configuration for TF plugin framework resources
2025-07-28 15:18:18 +03:00
Sergen Yalçın 9505d31da7
Change function name to TerraformPluginFrameworkIsStateEmptyFn
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-28 15:11:59 +03:00
Sergen Yalçın 5ac5cb0b35
Custom nil check for state
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-25 17:18:40 +03:00
Erhan Cagirici 0af42ca259
Merge pull request #507 from erhancagirici/remove-id-check-in-externalname-fw
remove id validation from setExternalName for resources without id field
2025-06-19 13:09:11 +03:00
Fatih Türken f794e5eddf remove id validation from setExternalName for resources without id field
Signed-off-by: Fatih Türken <turkenf@gmail.com>
Co-authored-by: Erhan Cagirici <erhan@upbound.io>
2025-06-19 11:28:35 +03:00
Sergen Yalçın c4332e6ed1
Merge pull request #506 from sergenyalcin/fix-sensitive-parameter-generation
Fix incorrectly generated connection string map
2025-06-18 16:55:31 +03:00
Sergen Yalçın dd08349e54
Fix linter
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-06-18 16:43:38 +03:00
Sergen Yalçın c42638efc0
Fix incorrectly generated connection string map
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-06-18 15:56:56 +03:00
Sergen Yalçın 9098842035
Merge pull request #504 from sergenyalcin/fix-wildcard-expand-during-conversion
Fix wildcard expand behavior when the field path is not found during conversion
2025-06-17 18:23:52 +03:00
Sergen Yalçın c275d5ec5c
Fix wildcard expand behavior when the field path is not found during conversion
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-06-17 18:16:04 +03:00
Erhan Cagirici f6111127e7
Merge pull request #500 from nikimanoledaki/nm/debug-provider
Validate that `ts.FrameworkProvider` is not nil to avoid panic
2025-06-04 12:31:12 +03:00
nikimanoledaki 60517ef9af
Validate that ts.FrameworkProvider is not nil to avoid panic
Fetching the ts.FrameworkProvider.Schema field panics if the FrameworkProvider
struct is not set / nil. Validate that the FrameworkProvider is not nil before
continuing. Return an error message if it is nil.

Signed-off-by: nikimanoledaki <niki.manoledaki@grafana.com>
2025-06-03 18:01:46 +02:00
Sergen Yalçın 55edf18c68
Merge pull request #493 from sergenyalcin/bump-crossplane-runtime
Bump crossplane-runtime dependency
2025-05-09 15:48:45 +03:00
Sergen Yalçın fe88167010
Bump crossplane-runtime dependency
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-05-09 15:39:59 +03:00
Jean du Plessis 6fdbab083c
Merge pull request #491 from jbw976/changelogs 2025-05-09 10:43:48 +02:00
Jared Watts b8639959bc
feat(changelogs): add support for change logs in controller templates
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-04-30 16:28:57 +01:00
Sergen Yalçın aea6e7c546
Merge pull request #488 from grafana/duologic/update_deps
chore(deps): update dependencies
2025-04-28 12:51:37 +03:00
Sergen Yalçın 16e038b6f7
Merge pull request #440 from digna-ionos/main
call ApplyTFConversions in Update function from terraform plugin sdk external client
2025-04-28 12:36:11 +03:00
Duologic 362869c714 chore: update crossplane/crossplane-tools
Signed-off-by: Duologic <jeroen@simplistic.be>
2025-04-25 18:36:14 +02:00
Duologic 32c12301c3 chore: update crossplane/crossplane
Signed-off-by: Duologic <jeroen@simplistic.be>
2025-04-24 15:27:48 +02:00
Duologic 1ab808957f chore(deps): update dependencies
Update crossplane-runtime and k8s.io dependencies

Fix linting errors as well.

Signed-off-by: Duologic <jeroen@simplistic.be>
2025-04-23 00:33:04 +02:00
Sergen Yalçın ae37e28e15
Merge pull request #458 from mergenci/bump-crossplane-runtime-v1.18.0
Bump crossplane-runtime version to v1.18.0
2025-04-22 13:54:37 +03:00
Sergen Yalçın c745c8cbe9
Remove the patch types that are no longer supported from migration framework
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-22 13:50:13 +03:00
Sergen Yalçın f890ff1448
Bump crossplane/crossplane to v1.18.0
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-22 13:19:17 +03:00
Cem Mergenci b3885e63ee
Replace deprecated API with typed versions.
References:
https://github.com/kubernetes/kubernetes/pull/124263
https://github.com/kubernetes-sigs/controller-runtime/pull/2799

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-04-22 11:23:04 +03:00
Cem Mergenci a514684a84
Pass context to panic handlers.
References:
https://github.com/kubernetes/kubernetes/pull/121970
126f5cee56

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-04-22 11:23:04 +03:00
Cem Mergenci 635a4f8d39
Bump crossplane-runtime version to v1.18.0.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-04-22 11:23:00 +03:00
Sergen Yalçın 7a4c1dd211
Merge pull request #485 from sergenyalcin/support-count-usage-in-registry
Add support for parsing registry examples that use count, and bump the Go version and dependencies
2025-04-16 22:18:08 +03:00
Sergen Yalçın e02e0871aa
Use commit hash instead of version for actions/cache
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-16 10:17:40 +03:00
Sergen Yalçın 32c53e5a72
Update the actions/cache version
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-15 19:44:09 +03:00
Sergen Yalçın 203e41eb7e
Bump go version and some dependencies
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-15 19:24:22 +03:00
Sergen Yalçın 67e73bb368
Add support for parsing registry examples that use count
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-15 19:21:35 +03:00
Sergen Yalçın 72675757bb
Merge pull request #466 from sergenyalcin/update-loop-prevention
Add a new configuration option for preventing the possible update loops
2025-02-11 15:46:00 +03:00
Sergen Yalçın 1a6d69bbd2
Merge pull request #465 from turkenh/fix-conversion
Expose conversion option to inject key/values in the conversion to list
2025-02-06 19:21:50 +03:00
Sergen Yalçın 3d9beef672
Add a new configuration option for Update Loop Prevention
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-02-06 17:19:27 +03:00
Hasan Turken 163784981f
Expose conversion option to inject key/values in the conversion to list
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2025-02-06 14:29:33 +03:00
Cem Mergenci, PhD ce71033d45
Merge pull request #461 from mergenci/remove-diff-in-observe-only
Remove diff calculation in observe-only reconciliation
2025-01-30 16:40:24 +03:00
Cem Mergenci 43311a8459 Add generics expression for compatibility with local environments.
We discovered that GoLand failed to build without the generics
expression, whereas VS Code warned that it was unnecessary.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 16:33:18 +03:00
Cem Mergenci 525abba0fa Add TODOs for other external clients.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 09:49:43 +03:00
Cem Mergenci 2f967cf07c Rename `noDiff` to `hasDiff`.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 02:17:12 +03:00
Cem Mergenci 9c244cdd10 Remove diff calculation in observe-only reconciliation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 02:17:12 +03:00
Sergen Yalçın 40ef4d8de7
Merge pull request #462 from sergenyalcin/parametrize-registry
Parametrize the registry name of the provider
2025-01-23 17:56:48 +03:00
Sergen Yalçın 5dbd74a606
Parametrize the registry name of the provider
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-01-23 16:13:22 +03:00
Cem Mergenci, PhD db86f70a16
Merge pull request #437 from smcavallo/crossplane-runtime_v1.17.0
Upgrade crossplane-runtime to v1.17.0
2025-01-08 17:22:16 +03:00
smcavallo bd4838eb82 update toolchain and move disconnect method
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
smcavallo 33de4d42f6 upgrade go to 1.22.7
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
smcavallo b6820b0a55 Implement `Disconnect` on `ExternalClient` interface
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
smcavallo 9f51ffe663 Upgrade crossplane-runtime to v1.17.0
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
Cem Mergenci, PhD beb604beb6
Merge pull request #451 from erhancagirici/ci-chores
Update GH action dependencies and linter config
2025-01-08 14:37:50 +03:00
Cem Mergenci 04d0b9f42e Remove unintentional indentation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-08 14:01:43 +03:00
Cem Mergenci, PhD a10b5b985a
Merge pull request #450 from fernandezcuesta/patch-1
fix: typo in example
2024-12-29 17:14:11 +03:00
Alper Rifat Ulucinar 4d4ed3d890
Merge pull request #454 from ulucinar/fix-kssource-test
Fix sporadic TestNewKubernetesSource failures
2024-11-23 15:57:23 +03:00
Alper Rifat Ulucinar 3982f4caac
Fix sporadic TestNewKubernetesSource failures
- Compare sorted expected and observed slices.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-11-22 18:16:22 +03:00
Cem Mergenci, PhD ba35c31702
Merge pull request #424 from ulucinar/fix-conversion-typemeta
Fix empty TypeMeta while running API conversions
2024-11-15 20:35:42 +03:00
Erhan Cagirici be5e036794
fix unintentional modification of slice in GetSensitiveParameters (#449)
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-15 14:06:40 +03:00
Erhan Cagirici deff69065f update GH action dependency versions
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-13 20:48:50 +03:00
Erhan Cagirici 57535ad9fa update linter config & remove deprecations
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-13 20:46:47 +03:00
Erhan Cagirici fcb3112de2 switch build submodule to crossplane/build repo
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-13 20:45:16 +03:00
J. Fernández 8642d46957
fix: typo in example
Signed-off-by: Jesús Fernández <7312236+fernandezcuesta@users.noreply.github.com>
2024-11-13 18:24:34 +01:00
Alper Rifat Ulucinar a08ecd7fe3 Fix empty TypeMeta while running API conversions
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-11-04 23:50:01 +03:00
Jean du Plessis a18bd41b7a
Merge pull request #289 from sergenyalcin/migration-framework 2024-10-20 23:11:40 +02:00
Jean du Plessis 7eaaf8a403
Merge pull request #413 from yordis/chore-2 2024-10-20 23:10:29 +02:00
Jean du Plessis 5cdf36996e
Merge pull request #441 from rickard-von-essen/bug/ref-subnet 2024-10-07 17:47:51 +02:00
Rickard von Essen 1ae4c81e89
Add license header
Signed-off-by: Rickard von Essen <rickard.von.essen@gmail.com>
2024-10-07 09:14:42 +02:00
Rickard von Essen 6dae02b730
Fix scraping Refs from attributes containing lists
This correctly parses references contained in lists and adds them to the map of
references.

Example 1):

```hcl
    require_attestations_by = [google_binary_authorization_attestor.attestor.name]
```

Correctly generates a ref and building `provider-upjet-gcp` with this change
produces the expected `examples-generated/binaryauthorization/v1beta2/policy.yaml`
with this diff compared to without this change.

```
@@ -14,8 +14,8 @@ spec:
     - cluster: us-central1-a.prod-cluster
       enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
       evaluationMode: REQUIRE_ATTESTATION
-      requireAttestationsBy:
-      - ${google_binary_authorization_attestor.attestor.name}
+      requireAttestationsByRefs:
+      - name: attestor
     defaultAdmissionRule:
     - enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
       evaluationMode: ALWAYS_ALLOW
```

1) https://github.com/hashicorp/terraform-provider-google/blob/v5.39.0/website/docs/r/binary_authorization_policy.html.markdown?plain=1#L49

Signed-off-by: Rickard von Essen <rickard.von.essen@gmail.com>
2024-10-05 13:00:12 +02:00
Rickard von Essen ed6ae5a806
bug: Surface bug - scraping lists of references does not work
Scraping does not handle `TupleConsExpr` when parsing example HCL code from
Terraform documentation.

Example 1):

```hcl
    require_attestations_by = [google_binary_authorization_attestor.attestor.name]
```

Does not add:

```
cluster_admission_rules.require_attestations_by: google_binary_authorization_attestor.attestor.name
```

As it should since the reference is contained in a _list_.

1) https://github.com/hashicorp/terraform-provider-google/blob/v5.39.0/website/docs/r/binary_authorization_policy.html.markdown?plain=1#L49

Signed-off-by: Rickard von Essen <rickard.von.essen@gmail.com>
2024-10-05 13:00:00 +02:00
digna-ionos 750f770b0f call ApplyTFConversions in the Update function of the Terraform plugin SDK external client, as omitting it was causing unmarshalling errors.
Signed-off-by: digna-ionos <darius-andrei.igna@ionos.com>
2024-09-12 19:45:39 +03:00
Fatih Türken 3afbb7796d
Merge pull request #435 from turkenf/fix-nil-err
Fix the issue of hiding errors
2024-09-11 21:49:56 +03:00
Fatih Türken af44144929 Update recoverIfPanic() func comments
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-09-11 12:50:23 +03:00
Fatih Türken 11cd2f50f0 Fix the issue of hiding errors
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-09-09 18:57:36 +03:00
Jean du Plessis 34c30b90d6
Merge pull request #432 from mergenci/rename-referenced-branches 2024-09-03 15:03:49 +02:00
Cem Mergenci 7293dbb39e Rename referenced master branches to main.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-09-03 14:56:12 +02:00
Jean du Plessis 42ad41bb93
Merge pull request #429 from displague/patch-1 2024-08-28 15:26:00 +02:00
Marques Johansson 4149a0c367 docs: add link in README to new adding-new-resource guide
Follow-up to #405

Signed-off-by: Marques Johansson <mjohansson@equinix.com>
2024-08-28 13:04:54 +00:00
Cem Mergenci, PhD 2e361ad3b6
Merge pull request #428 from mergenci/async-panic-handler
Recover from panics in async external clients
2024-08-22 17:16:23 +03:00
Cem Mergenci 3dc4f0f69c Streamline async panic handler implementation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-08-22 17:09:59 +03:00
Cem Mergenci 78890e711d Recover from panics in async external clients.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-08-21 17:08:11 +03:00
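The recovery pattern behind these two commits can be sketched like this (a minimal version; the actual upjet handler also deals with contexts and async callbacks):

```go
package main

import (
	"errors"
	"fmt"
)

// recoverIfPanic converts a panic in an async operation into an ordinary
// error, so a panicking external client cannot crash the provider process.
// It must be called via defer with a pointer to the named return error.
func recoverIfPanic(err *error) {
	if r := recover(); r != nil {
		*err = fmt.Errorf("recovered from panic: %v", r)
	}
}

// asyncCreate stands in for an async external-client operation that panics.
func asyncCreate() (err error) {
	defer recoverIfPanic(&err)
	panic(errors.New("boom"))
}

func main() {
	fmt.Println(asyncCreate()) // recovered from panic: boom
}
```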
Sergen Yalçın 1644827c94
Merge pull request #425 from tchinmai7/main 2024-08-02 12:29:52 +03:00
Tarun Chinmai Sekar 926e623f08 Refactor to a method
Signed-off-by: Tarun Chinmai Sekar <schinmai@akamai.com>
2024-08-01 08:35:23 -07:00
Tarun Chinmai Sekar 67bd255fdb Check for nil before calling IsKnown()
Signed-off-by: Tarun Chinmai Sekar <schinmai@akamai.com>
2024-07-25 15:13:35 -07:00
Jean du Plessis e295e17c7b
Merge pull request #405 from turkenf/add-resource-guide 2024-07-02 11:39:23 +02:00
Fatih Türken 79e8359981 Add section about running uptest locally and resolve review comments
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-06-26 15:54:52 +03:00
Fatih Türken de9edaebe5 Add new guide about adding a new resource
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-06-26 11:14:52 +03:00
Alper Rifat Ulucinar 37c7f4e91d
Merge pull request #411 from ulucinar/fix-embedded-conversion
Add config.Resource.RemoveSingletonListConversion
2024-06-12 12:39:27 +00:00
Alper Rifat Ulucinar 361331e820
Add unit tests for traverser.maxItemsSync
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-06-12 15:35:16 +03:00
Alper Rifat Ulucinar 58f4ba3fb8
Merge pull request #403 from turkenf/update-resource-config-doc
Update reference usage example and fix broken links in configuring a resource doc
2024-06-10 10:33:11 +00:00
Alper Rifat Ulucinar 6c305d0fb2
Merge pull request #410 from ulucinar/fix-ex-conv
Fix singleton list example conversion if there's no annotation
2024-06-07 09:29:28 +00:00
Alper Rifat Ulucinar 7ab5e2085d
Merge pull request #417 from ulucinar/fix-416
Do not prefix JSON fieldpaths starting with status.atProvider in resource.GetSensitiveParameters
2024-06-06 11:22:33 +00:00
Alper Rifat Ulucinar 91d382de43
Do not prefix JSON fieldpaths starting with status.atProvider in resource.GetSensitiveParameters
- If the MR API has a spec.forProvider.status field and there are sensitive attributes, then
  fieldpath.Paved.ExpandWildcards complains instead of expanding as an empty slice, which
  breaks the reconciliation.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-06-06 12:38:35 +03:00
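The prefixing rule this commit fixes can be sketched as follows (hypothetical function name; upjet's resource.GetSensitiveParameters is more general):

```go
package main

import (
	"fmt"
	"strings"
)

// prefixSensitivePath mirrors the rule described above: sensitive JSON field
// paths are normally rooted under spec.forProvider, but paths that already
// start with status.atProvider must be left untouched, otherwise wildcard
// expansion fails when the MR API itself has a spec.forProvider.status field.
func prefixSensitivePath(p string) string {
	if strings.HasPrefix(p, "status.atProvider.") {
		return p
	}
	return "spec.forProvider." + p
}

func main() {
	fmt.Println(prefixSensitivePath("passwordSecretRef"))
	fmt.Println(prefixSensitivePath("status.atProvider.status"))
}
```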
Fatih Türken d10aa6e84a Update reference usage example and fix broken links in configuring a resource doc
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-06-06 10:39:07 +03:00
Yordis Prieto d37f2e3157
chore: improve references docs
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-06-01 18:02:14 -04:00
Alper Rifat Ulucinar f4f87bab85
Add traverser.maxItemsSync schema traverser for syncing the MaxItems
constraints between the JSON & Go schemas.

- We've observed that some MaxItems constraints in the JSON schemas are not set
  where the corresponding MaxItems constraints in the Go schemas are set to 1.
- This inconsistency results in some singleton lists not being properly converted
  in the MR API.
- This traverser can mitigate such inconsistencies.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-31 02:52:40 +03:00
Alper Rifat Ulucinar 5318cd959a
Add traverser.AccessSchema to access the Terraform schema elements
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 18:35:33 +03:00
Alper Rifat Ulucinar 40733472e6
Merge pull request #389 from gravufo/replace-kingpin-v2
Replace gopkg.in/alecthomas/kingpin.v2 by github.com/alecthomas/kingpin/v2
2024-05-30 14:00:05 +00:00
Alper Rifat Ulucinar cc76abb788
Add config.Provider.TraverseTFSchemas to traverse the Terraform schemas of
all the resources of a Provider.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 15:45:22 +03:00
Alper Rifat Ulucinar 5265292696
Export config.TraverseSchemas
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 15:27:41 +03:00
Christian Artin 14ac95fd84 Run go mod tidy
Signed-off-by: Christian Artin <gravufo@gmail.com>
2024-05-30 07:18:25 -04:00
Christian Artin 16b23263b1 Upgrade github.com/alecthomas/kingpin/v2
Signed-off-by: Christian Artin <gravufo@gmail.com>
2024-05-30 07:17:03 -04:00
Christian Artin 75bea9acf5 Replace gopkg.in/alecthomas/kingpin.v2 by github.com/alecthomas/kingpin/v2
Signed-off-by: Christian Artin <gravufo@gmail.com>
2024-05-30 07:17:03 -04:00
Alper Rifat Ulucinar 72ab08c2fe
Add config.Resource.RemoveSingletonListConversion to be able to remove
already configured singleton list conversions.

- The main use case is to prevent singleton list conversions for
  configuration arguments with single nested blocks.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 10:54:05 +03:00
Alper Rifat Ulucinar 7e3bb74eb1
Report in the error message the example manifest path if an object conversion fails
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-29 21:49:58 +03:00
Alper Rifat Ulucinar c537babf6d
Do not panic if the example manifest being converted with a singleton list
does not have any annotations set.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-29 21:34:17 +03:00
Sergen Yalçın a444cc1abc
Merge pull request #407 from sergenyalcin/cond-late-init
Add a new late-init configuration to skip already-filled fields in spec.initProvider
2024-05-24 11:48:02 +03:00
Alper Rifat Ulucinar 89a7d0afb5
Merge pull request #406 from ulucinar/sensitive-initProvider
Generate Secret References for Sensitive Parameters under the spec.initProvider API tree
2024-05-24 08:13:29 +00:00
Sergen Yalçın be0cde1fe9
Add conditionalIgnoredCanonicalFieldPaths to IgnoreFields to fix the unit tests
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-22 16:33:51 +03:00
Alper Rifat Ulucinar a6de4b1306
Add unit tests for spec.initProvider secret references
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-22 16:17:44 +03:00
Sergen Yalçın eadba75a4f
Add a new late-init API to skip already-filled fields in spec.initProvider
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-22 16:14:35 +03:00
Alper Rifat Ulucinar ec75edfd1d
Add support for resolving secret references from spec.initProvider
- If both spec.forProvider and spec.initProvider tree reference
  secrets for the same target field, spec.forProvider overrides
  spec.initProvider.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-22 15:06:06 +03:00
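The precedence rule in this commit can be sketched with a simple merge (a map of field path to secret name is a simplification of the real resolution logic):

```go
package main

import "fmt"

// mergeSensitive applies the rule described above: when both
// spec.initProvider and spec.forProvider reference a secret for the same
// target field, the spec.forProvider reference wins.
func mergeSensitive(initProvider, forProvider map[string]string) map[string]string {
	out := make(map[string]string, len(initProvider)+len(forProvider))
	for k, v := range initProvider {
		out[k] = v
	}
	for k, v := range forProvider { // forProvider overrides initProvider
		out[k] = v
	}
	return out
}

func main() {
	m := mergeSensitive(
		map[string]string{"password": "init-secret", "token": "init-token"},
		map[string]string{"password": "for-secret"},
	)
	fmt.Println(m["password"], m["token"]) // for-secret init-token
}
```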
Alper Rifat Ulucinar f577a5483e
Generate the corresponding Kubernetes secret references for the sensitive Terraform
configuration arguments also under the spec.initProvider API tree.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-20 20:29:47 +03:00
Alper Rifat Ulucinar 92d1af84d2
Merge pull request #402 from ulucinar/available-versions
Add config.Resource.PreviousVersions to specify the previous versions of an MR API
2024-05-15 19:33:17 +00:00
Sergen Yalçın 942508c537
Merge pull request #397 from sergenyalcin/example-converter
Add example converter for conversion of singleton lists to embedded objects
2024-05-14 17:10:30 +03:00
Sergen Yalçın e3647026b8
Address review comments
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-14 17:04:42 +03:00
Alper Rifat Ulucinar b5622a10e1
Deprecate config.Resource.OverrideFieldNames in favor of
config.Resource.PreviousVersions.

- The only known use case for config.Resource.OverrideFieldNames is
  to resolve type conflicts between the older versions of the CRDs
  and the ones being generated. The "PreviousVersions" API allows
  loading of the existing types from the filesystem so that the upjet
  code generation pipeline's type conflict resolution mechanisms
  can prevent such name collisions.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-14 14:15:12 +03:00
Alper Rifat Ulucinar 50e8284f1f
Add config.Resource.PreviousVersions to be able to specify the known previous
versions of an MR API.

- upjet code generation pipelines can then utilize this information to load
  the already existing type names for these previous versions and prevent
  collisions for the generated CRD types.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-14 14:12:12 +03:00
Alper Rifat Ulucinar c33a66dc58
Merge pull request #400 from ulucinar/tf-conversion
Allow Specification of the CRD API Version a Controller Watches & Reconciles
2024-05-14 11:07:23 +00:00
Sergen Yalçın 0b73a9fdeb
Add an example manifest converter for conversion of singleton lists to embedded object
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-12 13:23:29 +03:00
Alper Rifat Ulucinar 94634891cc
Deprecate config.Reference.Type in favor of config.Reference.TerraformName
- TerraformName will automatically handle name & version configurations that will affect
  the generated cross-resource reference. This is crucial especially if the
  provider generates multiple versions for its MR APIs.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-10 18:00:48 +03:00
Alper Rifat Ulucinar c606b4cbc0
Add config.NewTFSingletonConversion that returns a new TerraformConversion to convert between
singleton lists and embedded objects when parameters pass Crossplane and Terraform boundaries.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 19:23:09 +03:00
Alper Rifat Ulucinar b5dbcc5e33
conversion.singletonListConverter now operates on a configured set of path prefixes
- Previously, it only converted parameters under spec.forProvider
- This missed CRD API conversions (of singleton lists) under
  spec.initProvider & status.atProvider

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
(cherry picked from commit 08b3f3260ce86457271dda7401dfd9a69a10f656)
2024-05-09 16:55:40 +03:00
Alper Rifat Ulucinar 83b3b8e41d
Add config.TerraformConversion to abstract the runtime parameter conversions between
the Crossplane & Terraform layers.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 14:19:12 +03:00
Alper Rifat Ulucinar f5b0d82844
Rename conversion.Mode to conversion.ListConversionMode
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 11:52:24 +03:00
Alper Rifat Ulucinar cc7324eb98
Add config.Resource.ControllerReconcileVersion to be able to control the specific CR API version
the associated controller will watch & reconcile.

- For backwards-compatibility, ControllerReconcileVersion defaults to Version if unspecified.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 11:42:25 +03:00
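The backwards-compatible defaulting described here can be sketched as (a pared-down stand-in for config.Resource, showing only the relevant fields):

```go
package main

import "fmt"

// Resource keeps only the two fields relevant to the defaulting rule.
type Resource struct {
	Version                    string
	ControllerReconcileVersion string
}

// reconcileVersion returns the CR API version the controller watches:
// the explicitly configured ControllerReconcileVersion if set, otherwise
// the version being generated (Version), preserving old behavior.
func reconcileVersion(r Resource) string {
	if r.ControllerReconcileVersion != "" {
		return r.ControllerReconcileVersion
	}
	return r.Version
}

func main() {
	fmt.Println(reconcileVersion(Resource{Version: "v1beta2"}))
	fmt.Println(reconcileVersion(Resource{
		Version:                    "v1beta2",
		ControllerReconcileVersion: "v1beta1",
	}))
}
```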
Alper Rifat Ulucinar 03a207b641
Merge pull request #387 from ulucinar/embed-singleton
Generate singleton lists as embedded objects
2024-05-08 13:47:14 +00:00
Jean du Plessis b5d344b3cd
Merge pull request #399 from yordis/chore-improve-docs 2024-05-08 08:57:38 +02:00
Alper Rifat Ulucinar 44c139ef80
Add tests for config.SingletonListEmbedder
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:54 +03:00
Alper Rifat Ulucinar 26df74c447
Add unit tests for runtime API conversions
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:54 +03:00
Alper Rifat Ulucinar e82d6242c2
Call hub & spoke generation pipelines after the CRD generation pipeline
- Prevent execution of these pipelines multiple times for each available
  version of an API group.
- Improve conversion.RoundTrip paved conversion error message

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:54 +03:00
Alper Rifat Ulucinar cf00700ffe
Fix connection details state value map conversion
- Fix runtime conversion for expanded field paths of length
  greater than 1.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 5a6fd9541b
Initialize config.Resource.OverrideFieldNames with an empty map
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 136fabd8ab
Fix slice value length assertion in conversion.Convert
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar a87bc73b65
Add conversion.identityConversion API converter and allow excluding field paths
when conversion.RoundTrip copies same-named fields from src to dst.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 914df98fcc
Fix the error messages in the template implementations of conversion.Converter
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 8031da83b4
Add config.Resource.crd{Storage,Hub}Version to be able to configure
the storage & hub API versions independently.

- The default value for both the storage & hub versions is
  the version being generated, i.e., Resource.Version.
- Replace pipeline.ConversionHubGenerator & pipeline.ConversionSpokeGenerator
  with a common pipeline.ConversionNodeGenerator implementation
- The hub generator can now also inspect the generated files to regenerate
  the hub versions according to the latest resource configuration; the
  assumption that the hub version is always the latest generated version
  has been removed.
- Fix duplicated GKVs issue in zz_register.go.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 9e917b3d85
Add conversion.singletonConverter conversion function for CRD API conversions between
singleton list & embedded object API versions.

- Export conversion.Convert to be reused in embedded singleton list webhook API conversions.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
Alper Rifat Ulucinar 668de344b5
Add a config.Resource configuration option to be able to mark
the generated CRD API version as the storage version.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
Alper Rifat Ulucinar fb97ee5f28
Make singleton-list-to-embedded-object API conversions optional
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
Alper Rifat Ulucinar 7e87a7fc47
Generate singleton lists as embedded objects
- Terraform configuration blocks, even if they have a MaxItems
  constraint of 1, are (almost) always generated as lists. We
  now generate the lists with a MaxItems constraint of 1 as
  embedded objects in our MR APIs.
- This also helps when updating or patching the (previously
  list-typed) objects via SSA. The merge strategy implemented
  by SSA requires extra configuration for associative lists;
  converting singleton lists into embedded objects removes
  that need.
- A schema traverser is introduced, which can decouple the Terraform
  schema traversal logic from the actions (such as code generation,
  inspection, or singleton-list-to-embedded-object conversion)
  taken while traversing the schema.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
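The singleton-list-to-embedded-object conversion described in the commit above can be sketched as follows. This is an illustrative, simplified model assuming a generic map-based parameter representation; the function name and shapes are not upjet's actual API:

```go
package main

import "fmt"

// convertSingleton illustrates the singleton-list-to-embedded-object
// conversion: a field generated as a Terraform list with a MaxItems
// constraint of 1 is replaced by its single element, so the MR API
// exposes an embedded object instead of a one-element list.
func convertSingleton(params map[string]any) map[string]any {
	out := make(map[string]any, len(params))
	for k, v := range params {
		if l, ok := v.([]any); ok && len(l) == 1 {
			out[k] = l[0] // embed the single element directly
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	// A singleton "networking" block, as Terraform would model it.
	src := map[string]any{
		"name":       "example",
		"networking": []any{map[string]any{"subnetId": "subnet-123"}},
	}
	fmt.Println(convertSingleton(src)["networking"])
}
```

Embedding the element this way is what removes the need for SSA associative-list merge configuration: the field becomes a plain object with ordinary map-merge semantics.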
Yordis Prieto 6a230300a8
chore: improve docs about selector field name
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-05-07 16:22:47 -07:00
Jean du Plessis 0fb9c98a22
Merge pull request #394 from jeanduplessis/update-notices 2024-04-29 13:28:24 +02:00
Jean du Plessis ddffe7b362
Updates the MPL code used in NOTICE
Signed-off-by: Jean du Plessis <jean@upbound.io>
2024-04-29 14:25:35 +03:00
Jean du Plessis 4f6628bd74
Merge pull request #393 from yordis/yordis/chore-2 2024-04-27 01:43:52 +02:00
Yordis Prieto ff539fc32d
chore: clarify documentation around reference type name
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-04-26 19:11:01 -04:00
Jean du Plessis e0436d3a1c
Merge pull request #392 from yordis/yordis/chore-1 2024-04-26 18:23:21 +02:00
Yordis Prieto 113d9e4190
chore: improve doc about configuring a resource external name
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-04-26 10:29:36 -04:00
Alper Rifat Ulucinar c3efc56108
Post release commit after v1.3.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-04-25 15:55:55 +03:00
Alper Rifat Ulucinar 577bfa78fb
Merge pull request #391 from ulucinar/fix-sync-state
Cache the error from the last asynchronous reconciliation
2024-04-25 11:40:10 +00:00
Cem Mergenci, PhD c3cccedcbc
Merge pull request #390 from mergenci/mr-metrics
Introduce MR metrics
2024-04-25 14:28:53 +03:00
Cem Mergenci 461dcc3739 Introduce MR metrics.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-04-25 14:09:13 +03:00
Alper Rifat Ulucinar 77cc776e62
Cache the error from the last asynchronous reconciliation to return it in
the next asynchronous reconciliation for the Terraform plugin SDK &
framework based external clients.

- Set the "Synced" status condition to "False" in the async CallbackFn
  to immediately update it when the async operation fails.
- Set the "Synced" status condition to "True" when the async operation
  succeeds, or when the external client's Observe call reveals an
  up-to-date external resource which is not scheduled for deletion.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-04-25 00:10:12 +03:00
Cem Mergenci, PhD 4c67d8ebd3
Merge pull request #385 from mergenci/external-api-calls-metric
Add external API calls metric
2024-03-28 15:33:50 +03:00
Cem Mergenci 5f977ad584 Add external API calls metric.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-03-28 15:26:55 +03:00
Sergen Yalçın 50919febc5
Merge pull request #381 from sergenyalcin/add-required-configuration-option
Add a new configuration option for required field generation
2024-03-19 15:47:50 +03:00
Sergen Yalçın 845dbf6b1b
Add doc for RequiredFields function
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-19 15:26:10 +03:00
Sergen Yalçın b73a85f49b
Add requiredFields to ignoreUnexported for fixing unit tests
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-19 12:49:33 +03:00
Sergen Yalçın f25329f346
- Move the config.ExternalName.RequiredFields to config.Resource.requiredFields
- Deprecate config.MarkAsRequired in favor of a new configuration function on *config.Resource that still accepts a slice, so that multiple fields can be marked as required without intervening in the native field schema.

Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-19 12:42:36 +03:00
Sergen Yalçın bdfbe67ab3
Add a new `Required` configuration option
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-18 18:17:54 +03:00
Sergen Yalçın 2ef7077f6d
Merge pull request #376 from sergenyalcin/add-header-to-setup
Add the `Header` Go template variable to setup.go.tmpl
2024-03-14 19:27:45 +03:00
Sergen Yalçın 85d8bf7b54
Add Header Go template variable to setup.go.tmpl
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-14 18:32:31 +03:00
Sergen Yalçın 84abd051e6
Merge pull request #373 from sergenyalcin/move-license-statements-tmpl
Move license statements to separate files (for tmpl files) to prevent license statement duplication
2024-03-14 14:14:22 +03:00
Sergen Yalçın 2d71c5b36d
Remove blank line on top
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-14 14:06:50 +03:00
Sergen Yalçın a648048b9d
Move license statements to separate files to prevent license statement duplication
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-14 00:02:10 +03:00
Sergen Yalçın 363f66c52d
Merge pull request #358 from sergenyalcin/fix-statefunc-call
Stop applying StateFuncs to parameters
2024-03-06 13:50:15 +03:00
Jean du Plessis 05703b568d
Merge pull request #363 from jaylevin/patch-1 2024-03-06 09:32:41 +02:00
Sergen Yalçın 956c7d489b
Remove the unnecessary case and add nolint
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-05 21:06:55 +03:00
Alper Rifat Ulucinar e2a229705f
Merge pull request #362 from bobh66/fix_make
Remove the empty img.build make target & the image.mk include
2024-03-05 18:41:17 +03:00
Bob Haddleton 712feee32d Remove img.build make target
Signed-off-by: Bob Haddleton <bob.haddleton@nokia.com>
2024-03-05 08:11:07 -06:00
Alper Rifat Ulucinar f043e2e5dd
Merge pull request #360 from bobh66/swap_synced_ready
Swap SYNCED and READY columns in output
2024-03-05 16:21:25 +03:00
Jordan Levin 6d2bf28827
Fix go code spacing in configuration-a-resource.md 2024-03-04 14:17:33 -08:00
Bob Haddleton f0b7317e96 Swap SYNCED and READY columns in output
Signed-off-by: Bob Haddleton <bob.haddleton@nokia.com>
2024-03-01 09:19:07 -06:00
Sergen Yalçın 93af08a988
Remove StateFunc calls
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-01 12:04:28 +03:00
Jean du Plessis 902acfe539
Merge pull request #357 from tomasmota/main 2024-02-27 15:04:00 +02:00
tomasmota 16d3101d6d
small docs corrections
Signed-off-by: tomasmota <tomasrebelomota@gmail.com>
2024-02-27 13:35:59 +01:00
Jean du Plessis 6f23c91b94
Merge pull request #356 from tomasmota/main 2024-02-27 13:39:45 +02:00
tomasmota b424aafa9c
fix link in docs
Signed-off-by: tomasmota <tomasrebelomota@gmail.com>
2024-02-27 12:20:32 +01:00
Sergen Yalçın c13945f264
Merge pull request #355 from sergenyalcin/fix-sensitive-generation
Fix slice type sensitive fieldpath generation
2024-02-26 20:10:52 +03:00
Sergen Yalçın b63f33874c
Change cases from string to type of FieldType
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-02-26 20:05:02 +03:00
Sergen Yalçın 8bf78e8106
Fix non-primitive type sensitive field generation
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-02-23 18:27:14 +03:00
Alper Rifat Ulucinar 2eb7f71f91
Merge pull request #354 from ulucinar/json-license
Add .license files for the JSON test artifacts
2024-02-23 17:53:01 +03:00
Alper Rifat Ulucinar 22a7a64531
Add .license files for the JSON test artifacts
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-23 17:32:11 +03:00
Cem Mergenci, PhD eaa1e9a7ea
Merge pull request #350 from mergenci/external-client-check-lateinitialize-management-policy
Check LateInitialize management policy in Plugin Framework external client
2024-02-21 16:06:43 +03:00
Cem Mergenci af31423729 Check LateInitialize management policy in Plugin Framework client.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-02-21 15:57:15 +03:00
Jean du Plessis 3ca1fe281f
Merge pull request #352 from jeanduplessis/new-maintainers 2024-02-21 11:02:38 +02:00
Jean du Plessis 5c6932fd1d
Merge pull request #310 from danielsinai/patch-1 2024-02-21 10:46:56 +02:00
Jean du Plessis 9823bcb918
Adds erhancagirici and mergenci as maintainers
Signed-off-by: Jean du Plessis <jean@upbound.io>
2024-02-21 10:02:07 +02:00
Alper Rifat Ulucinar 2f05dbe00e
Post release commit after v1.2.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-15 21:46:17 +03:00
Daniel Sinai 0353e98655
Changed 404 routes 2023-12-08 11:26:21 +02:00
Sergen Yalçın dded5ce3f3
Add doc for migration framework
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-10-11 13:36:14 +03:00
224 changed files with 8851 additions and 9726 deletions

View File

@ -34,4 +34,4 @@ needs to be tested and shown to be correct. Briefly describe the testing that has
already been done or which is planned for this change.
-->
[contribution process]: https://github.com/crossplane/upjet/blob/master/CONTRIBUTING.md
[contribution process]: https://github.com/crossplane/upjet/blob/main/CONTRIBUTING.md

View File

@ -22,16 +22,16 @@ jobs:
# The main gotchas with this action are that it _only_ supports merge commits,
# and that PRs _must_ be labelled before they're merged to trigger a backport.
open-pr:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
if: github.event.pull_request.merged
steps:
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@v0.0.4
uses: zeebe-io/backport-action@be567af183754f6a5d831ae90f648954763f17f5 # v3.1.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}

View File

@ -14,9 +14,9 @@ on:
env:
# Common versions
GO_VERSION: "1.21"
GOLANGCI_VERSION: "v1.55.2"
DOCKER_BUILDX_VERSION: "v0.8.2"
GO_VERSION: "1.23"
GOLANGCI_VERSION: "v1.64.4"
DOCKER_BUILDX_VERSION: "v0.18.0"
# Common users. We can't run a step 'if secrets.AWS_USR != ""' but we can run
# a step 'if env.AWS_USR' != ""', so we copy these to succinctly test whether
@ -26,13 +26,13 @@ env:
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v5.3.0
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.png", "**.jpg"]'
@ -40,18 +40,23 @@ jobs:
concurrent_skipping: false
lint:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Cleanup Disk
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
large-packages: false
swap-storage: false
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version: ${{ env.GO_VERSION }}
@ -64,14 +69,14 @@ jobs:
run: echo "cache=$(go env GOCACHE)" >> $GITHUB_OUTPUT
- name: Cache the Go Build Cache
uses: actions/cache@v3
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go.outputs.cache }}
key: ${{ runner.os }}-build-lint-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-lint-
- name: Cache Go Dependencies
uses: actions/cache@v3
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@ -89,18 +94,23 @@ jobs:
version: ${{ env.GOLANGCI_VERSION }}
check-diff:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Cleanup Disk
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
large-packages: false
swap-storage: false
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version: ${{ env.GO_VERSION }}
@ -111,14 +121,14 @@ jobs:
echo "go-mod=$(make go.mod.cachedir)" >> $GITHUB_OUTPUT
- name: Cache the Go Build Cache
uses: actions/cache@v3
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-build }}
key: ${{ runner.os }}-build-check-diff-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-check-diff-
- name: Cache Go Dependencies
uses: actions/cache@v3
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-mod }}
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@ -131,13 +141,18 @@ jobs:
run: make check-diff
unit-tests:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Cleanup Disk
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
large-packages: false
swap-storage: false
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
@ -145,7 +160,7 @@ jobs:
run: git fetch --prune --unshallow
- name: Setup Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # v3
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version: ${{ env.GO_VERSION }}
@ -156,14 +171,14 @@ jobs:
echo "go-mod=$(make go.mod.cachedir)" >> $GITHUB_OUTPUT
- name: Cache the Go Build Cache
uses: actions/cache@v3
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-build }}
key: ${{ runner.os }}-build-unit-tests-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-unit-tests-
- name: Cache Go Dependencies
uses: actions/cache@v3
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-mod }}
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}

View File

@ -13,13 +13,13 @@ on:
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v5.3.0
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.png", "**.jpg"]'
@ -27,20 +27,20 @@ jobs:
concurrent_skipping: false
analyze:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
uses: github/codeql-action/init@396bb3e45325a47dd9ef434068033c6d5bb0d11a # v3.27.3
with:
languages: go
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
uses: github/codeql-action/analyze@396bb3e45325a47dd9ef434068033c6d5bb0d11a # v3.27.3

View File

@ -14,7 +14,7 @@ jobs:
steps:
- name: Extract Command
id: command
uses: xt0rted/slash-command-action@v1
uses: xt0rted/slash-command-action@bf51f8f5f4ea3d58abc7eca58f77104182b23e88 # v2.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
command: points
@ -23,7 +23,7 @@ jobs:
allow-edits: "false"
permission-level: write
- name: Handle Command
uses: actions/github-script@v4
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
env:
POINTS: ${{ steps.command.outputs.command-arguments }}
with:
@ -69,12 +69,12 @@ jobs:
# NOTE(negz): See also backport.yml, which is the variant that triggers on PR
# merge rather than on comment.
backport:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
if: github.event.issue.pull_request && startsWith(github.event.comment.body, '/backport')
steps:
- name: Extract Command
id: command
uses: xt0rted/slash-command-action@v1
uses: xt0rted/slash-command-action@bf51f8f5f4ea3d58abc7eca58f77104182b23e88 # v2.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
command: backport
@ -84,13 +84,12 @@ jobs:
permission-level: write
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@v0.0.4
uses: zeebe-io/backport-action@be567af183754f6a5d831ae90f648954763f17f5 # v3.1.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
version: v0.0.4

View File

@ -10,10 +10,10 @@ jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v2
uses: fsfe/reuse-action@3ae3c6bdf1257ab19397fab11fd3312144692083 # v4.0.0
- name: REUSE SPDX SBOM
uses: fsfe/reuse-action@v2
uses: fsfe/reuse-action@3ae3c6bdf1257ab19397fab11fd3312144692083 # v4.0.0
with:
args: spdx

View File

@ -16,14 +16,14 @@ on:
jobs:
create-tag:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 #v3
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Create Tag
uses: negz/create-tag@v1
uses: negz/create-tag@39bae1e0932567a58c20dea5a1a0d18358503320 # v1
with:
version: ${{ github.event.inputs.version }}
message: ${{ github.event.inputs.message }}

2
.gitmodules vendored
View File

@ -4,4 +4,4 @@
[submodule "build"]
path = build
url = https://github.com/upbound/build
url = https://github.com/crossplane/build

View File

@ -5,12 +5,12 @@
run:
timeout: 10m
skip-files:
- "zz_generated\\..+\\.go$"
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
formats:
- format: colored-line-number
print-linter-name: true
show-stats: true
linters-settings:
errcheck:
@ -22,14 +22,10 @@ linters-settings:
# default is false: such cases aren't reported by default.
check-blank: false
# [deprecated] comma-separated list of pairs of the form pkg:regex
# the regex is used to ignore names within pkg. (default "fmt:.*").
# see https://github.com/kisielk/errcheck#the-deprecated-method for details
ignore: fmt:.*,io/ioutil:^Read.*
govet:
# report about shadowed variables
check-shadowing: false
exclude-functions:
- io/ioutil.ReadFile
- io/ioutil.ReadDir
- io/ioutil.ReadAll
revive:
# confidence for issues, default is 0.8
@ -52,10 +48,6 @@ linters-settings:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
@ -70,13 +62,6 @@ linters-settings:
# tab width in spaces. Default to 1.
tab-width: 1
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
@ -116,29 +101,22 @@ linters-settings:
linters:
enable:
- megacheck
- govet
- gocyclo
- gocritic
- goconst
- gci
- gofmt # We enable this as well as goimports for its simplify mode.
- gosimple
- prealloc
- revive
- staticcheck
- unconvert
- unused
- misspell
- nakedret
- nolintlint
disable:
# These linters are all deprecated as of golangci-lint v1.49.0. We disable
# them explicitly to avoid the linter logging deprecation warnings.
- deadcode
- varcheck
- scopelint
- structcheck
- interfacer
presets:
- bugs
- unused
@ -168,37 +146,47 @@ issues:
# rather than using a pointer.
- text: "(hugeParam|rangeValCopy):"
linters:
- gocritic
- gocritic
# This "TestMain should call os.Exit to set exit code" warning is not clever
# enough to notice that we call a helper method that calls os.Exit.
- text: "SA3000:"
linters:
- staticcheck
- staticcheck
- text: "k8s.io/api/core/v1"
linters:
- goimports
- goimports
# This is a "potential hardcoded credentials" warning. It's triggered by
# any variable with 'secret' in the name, and thus hits a lot of false
# positives in Kubernetes land where a Secret is an object type.
- text: "G101:"
linters:
- gosec
- gas
- gosec
- gas
# This is an 'errors unhandled' warning that duplicates errcheck.
- text: "G104:"
linters:
- gosec
- gas
- gosec
- gas
# Some k8s dependencies do not have JSON tags on all fields in structs.
- path: k8s.io/
linters:
- musttag
# This exclusion is necessary because this package relies on deprecated
# functions and fields to maintain compatibility with older versions. This
# package acts as a migration framework, supporting users coming from
# previous versions. In the future, we may extract this package into a
# separate module and decouple its dependencies from upjet.
- path: pkg/migration/
linters:
- staticcheck
text: "SA1019:"
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
# excluded by default patterns execute `golangci-lint run --help`.
@ -214,7 +202,7 @@ issues:
new: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0

View File

@ -22,4 +22,4 @@
pkg/migrations/* @sergenyalcin
# Fallback owners
* @ulucinar @sergenyalcin
* @ulucinar @sergenyalcin @erhancagirici @mergenci

View File

@ -6,4 +6,4 @@ SPDX-License-Identifier: CC0-1.0
# Community Code of Conduct
This project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
This project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).

View File

@ -108,9 +108,9 @@ replace github.com/crossplane/upjet => github.com/<your user name>/upjet <hash o
```
[Slack]: https://crossplane.slack.com/archives/C01TRKD4623
[code of conduct]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
[code of conduct]: https://github.com/cncf/foundation/blob/main/code-of-conduct.md
[good git commit hygiene]: https://www.futurelearn.com/info/blog/telling-stories-with-your-git-history
[Developer Certificate of Origin]: https://github.com/apps/dco
[test review comments]: https://github.com/golang/go/wiki/TestComments
[docs]: docs/
[Coding Style]: https://github.com/crossplane/crossplane/blob/master/CONTRIBUTING.md#coding-style
[Coding Style]: https://github.com/crossplane/crossplane/blob/main/CONTRIBUTING.md#coding-style

View File

@ -8,11 +8,12 @@
PROJECT_NAME := upjet
PROJECT_REPO := github.com/crossplane/$(PROJECT_NAME)
GO_PROJECT := github.com/crossplane/$(PROJECT_NAME)/v2
# GOLANGCILINT_VERSION is inherited from build submodule by default.
# Uncomment below if you need to override the version.
GOLANGCILINT_VERSION ?= 1.55.2
GO_REQUIRED_VERSION ?= 1.21
GOLANGCILINT_VERSION ?= 1.64.4
# GO_REQUIRED_VERSION ?= 1.22
PLATFORMS ?= linux_amd64 linux_arm64
# -include will silently skip missing files, which allows us
@ -21,14 +22,6 @@ PLATFORMS ?= linux_amd64 linux_arm64
# to run a target until the include commands succeeded.
-include build/makelib/common.mk
# ====================================================================================
# Setup Images
# even though this repo doesn't build images (note the no-op img.build target below),
# some of the init is needed for the cross build container, e.g. setting BUILD_REGISTRY
-include build/makelib/image.mk
img.build:
# ====================================================================================
# Setup Go

29
NOTICE
View File

@ -11,32 +11,17 @@ Notably, this larger work combines with the following Terraform components,
which are licensed under the Mozilla Public License 2.0 (see
<https://www.mozilla.org/en-US/MPL/2.0/> or the individual projects listed
below).
<https://github.com/hashicorp/terraform>
<https://github.com/hashicorp/hcl>
<https://github.com/hashicorp/terraform-json>
<https://github.com/hashicorp/terraform-plugin-framework>
<https://github.com/hashicorp/terraform-plugin-go>
<https://github.com/hashicorp/terraform-plugin-sdk>
<https://github.com/hashicorp/go-getter>
<https://github.com/hashicorp/vault>
<https://github.com/hashicorp/errwrap>
<https://github.com/hashicorp/go-cleanhttp>
<https://github.com/hashicorp/go-cty>
<https://github.com/hashicorp/go-hclog>
<https://github.com/hashicorp/go-multierror>
<https://github.com/hashicorp/go-safetemp>
<https://github.com/hashicorp/go-plugin>
<https://github.com/hashicorp/go-uuid>
<https://github.com/hashicorp/go-version>
<https://github.com/hashicorp/hcl>
<https://github.com/hashicorp/logutils>
<https://github.com/hashicorp/terraform-plugin-go>
<https://github.com/hashicorp/terraform-plugin-log>
<https://github.com/hashicorp/terraform-registry-address>
<https://github.com/hashicorp/terraform-svchost>
<https://github.com/hashicorp/go-hclog>
<https://github.com/hashicorp/go-immutable-radix>
<https://github.com/hashicorp/go-plugin>
<https://github.com/hashicorp/go-retryablehttp>
<https://github.com/hashicorp/go-rootcerts>
<https://github.com/hashicorp/go-secure-stdlib>
<https://github.com/hashicorp/go-sockaddr>
<https://github.com/hashicorp/golang-lru>
<https://github.com/hashicorp/terraform-plugin-go>
<https://github.com/hashicorp/terraform-plugin-log>
<https://github.com/hashicorp/yamux>
<https://github.com/hashicorp/yamux>

View File

@ -15,5 +15,7 @@ repository maintainers in their own `OWNERS.md` file.
* Alper Ulucinar <alper@upbound.com> ([ulucinar](https://github.com/ulucinar))
* Sergen Yalcin <sergen@upbound.com> ([sergenyalcin](https://github.com/sergenyalcin))
* Jean du Plessis <jean@upbound.com> ([jeanduplessis](https://github.com/jeanduplessis))
* Erhan Cagirici <erhan@upbound.com> ([erhancagirici](https://github.com/erhancagirici))
* Cem Mergenci <cem@upbound.com> ([mergenci](https://github.com/mergenci))
See [CODEOWNERS](./CODEOWNERS) for automatic PR assignment.

View File

@ -54,5 +54,5 @@ Upjet originates from the [Terrajet][terrajet] project. See the original
Upjet is under [the Apache 2.0 license](LICENSE) with [notice](NOTICE).
[terrajet-design-doc]: https://github.com/crossplane/crossplane/blob/master/design/design-doc-terrajet.md
[terrajet-design-doc]: https://github.com/crossplane/crossplane/blob/main/design/design-doc-terrajet.md
[terrajet]: https://github.com/crossplane/terrajet

2
build

@ -1 +1 @@
Subproject commit bd5297bd16c113cbc5ed1905b1d96aa1cb3078ec
Subproject commit cc14f9cdac034e0eaaeb43479f57ee85d5490473

View File

@ -8,12 +8,12 @@ import (
"os"
"path/filepath"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/alecthomas/kingpin/v2"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/spf13/afero"
"gopkg.in/alecthomas/kingpin.v2"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"github.com/crossplane/upjet/pkg/transformers"
"github.com/crossplane/upjet/v2/pkg/transformers"
)
func main() {

View File

@ -8,9 +8,9 @@ import (
"os"
"path/filepath"
"gopkg.in/alecthomas/kingpin.v2"
"github.com/alecthomas/kingpin/v2"
"github.com/crossplane/upjet/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/registry"
)
func main() {

View File

@ -32,11 +32,13 @@ end to end.
- Guide on how to add support for
[management policies](adding-support-for-management-policies.md) to an existing
provider.
- Guide on how to [add a new resource](adding-new-resource.md) to an existing provider.
## Additional documentation
- [Provider identity based authentication](design-doc-provider-identity-based-auth.md)
- [Monitoring](monitoring.md) the Upjet runtime using Prometheus.
- [Migration Framework](migration-framework.md)
Feel free to ask your questions by opening an issue or starting a discussion in
the [#upjet](https://crossplane.slack.com/archives/C05T19TB729) channel in

753
docs/adding-new-resource.md Normal file
View File

@ -0,0 +1,753 @@
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
### Prerequisites
To follow this guide, you will need:
1. A Kubernetes cluster: for local development, a Kind cluster is generally
sufficient. For detailed information about Kind, see [this repo]. An
alternative way to obtain a cluster is [k3d].
2. [Go] installed and configured. Check the provider repo you will be working
with and install the version in the `go.mod` file.
3. [Terraform v1.5.5] installed locally. The last version we used before the
license change.
4. [goimports] installed.
# Adding a New Resource
There are long and detailed guides showing [how to bootstrap a
provider][provider-guide] and [how to configure resources][config-guide]. Here
we will go over the steps that will take us to `v1beta1` quality.
1. Fork the provider repo to which you will add resources and create a feature
branch.
2. Go to the Terraform Registry page of the resource you will add. In this
guide, we will add the [`aws_redshift_endpoint_access`] resource as an example.
We will use this page in the following steps, especially for determining the
external name configuration, conflicting fields, etc.
3. Determine the resource's external name configuration:
Our external name configuration relies on the Terraform ID format of the
resource which we find in the import section on the Terraform Registry page.
Here we'll look for clues about how the Terraform ID is shaped so that we can
infer the external name configuration. In this case, there is an `endpoint_name`
argument seen under the `Argument Reference` section and when we look at
[Import] section, we see that this is what's used to import, i.e. Terraform ID
is the same as the `endpoint_name` argument. This means that we can use
`config.ParameterAsIdentifier("endpoint_name")` configuration from Upjet as our
external name config. See the [External Name Cases] section for how to infer
the configuration from the many different Terraform ID formats.
4. Check the [source code] to determine whether the resource is a Terraform
Plugin SDK resource or a Terraform Plugin Framework resource.
- For SDK resources, you will see a comment line like `// @SDKResource` in the
source code.
The `aws_redshift_endpoint_access` resource is an SDK resource, so go to
`config/externalname.go` and add the following line to the
`TerraformPluginSDKExternalNameConfigs` table:
- Check for a `redshift` group; if the group already exists, add the external-name config under it:
```golang
// redshift
...
// Redshift endpoint access can be imported using the endpoint_name
"aws_redshift_endpoint_access": config.ParameterAsIdentifier("endpoint_name"),
```
- If the group does not exist yet, first add the group name as a comment line,
then add the config below it.
- For Framework resources, you will see a comment line like
`// @FrameworkResource` in the source code. If the resource is a Framework
resource, add the external-name config to the
`TerraformPluginFrameworkExternalNameConfigs` table.
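For illustration, a Framework-table entry has the same shape as an SDK-table one; the group and resource name below are purely hypothetical:

```golang
// config/externalname.go — the table name follows the description above,
// while the group comment and resource entry are made-up examples.
var TerraformPluginFrameworkExternalNameConfigs = map[string]config.ExternalName{
	// examplegroup
	"aws_example_framework_resource": config.ParameterAsIdentifier("name"),
}
```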
*Note: Check the `config/externalnamenottested.go` file; if it contains a
configuration for this resource, remove it from there.*
5. Run `make submodules` to initialize the build submodule, then run
`make generate`. When the command completes, you will see that the controller,
CRD, generated example, and other necessary files for the resource have been
created or modified.
```bash
> git status
On branch add-redshift-endpoint-access
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: apis/redshift/v1beta1/zz_generated.conversion_hubs.go
modified: apis/redshift/v1beta1/zz_generated.deepcopy.go
modified: apis/redshift/v1beta1/zz_generated.managed.go
modified: apis/redshift/v1beta1/zz_generated.managedlist.go
modified: apis/redshift/v1beta1/zz_generated.resolvers.go
modified: config/externalname.go
modified: config/externalnamenottested.go
modified: config/generated.lst
modified: internal/controller/zz_monolith_setup.go
modified: internal/controller/zz_redshift_setup.go
Untracked files:
(use "git add <file>..." to include in what will be committed)
apis/redshift/v1beta1/zz_endpointaccess_terraformed.go
apis/redshift/v1beta1/zz_endpointaccess_types.go
examples-generated/redshift/v1beta1/endpointaccess.yaml
internal/controller/redshift/endpointaccess/
package/crds/redshift.aws.upbound.io_endpointaccesses.yaml
```
6. Go through the "Warning" boxes (if any) in the Terraform Registry page to
see whether any of the fields are represented as separate resources as well.
It usually goes like this:
> Routes can be defined either directly on the azurerm_iothub
> resource, or using the azurerm_iothub_route resource - but the two cannot be
> used together.
In such cases, the field should be moved to status since we prefer to
represent it only as a separate CRD. Go ahead and add a configuration block
for that resource similar to the following:
```golang
p.AddResourceConfigurator("azurerm_iothub", func(r *config.Resource) {
// Mutually exclusive with azurerm_iothub_route
config.MoveToStatus(r.TerraformResource, "route")
})
```
7. Resource configuration is largely done, so we need to prepare the example
YAML for testing. Copy `examples-generated/redshift/v1beta1/endpointaccess.yaml`
into `examples/redshift/v1beta1/endpointaccess.yaml` and check whether the
dependent resources are included; if any are missing, add them to the YAML file.
```
NOTE: The resource you are trying to create may have dependencies. For
example, you might actually need resources Y and Z while trying to test resource
X. Many of the generated examples include these dependencies, but in some cases
a dependency may be missing. In those cases, please add the relevant
dependencies to your example manifest. This is important both for passing the
tests and for providing correct manifests.
```
- In our case, the generated example has required fields
`spec.forProvider.clusterIdentifierSelector` and
`spec.forProvider.subnetGroupNameSelector`. We need to check its argument list
in Terraform documentation and figure out which field needs a reference to
which resource. Let's check the [cluster_identifier] field, we see that the
field requires a reference to the `Cluster.redshift` resource identifier.
For the [subnet_group_name] field, we see that the field requires a reference
to the `SubnetGroup.redshift` resource ID.
Then add the `Cluster.redshift` and `SubnetGroup.redshift` resource examples
to our YAML file and edit the annotations and labels.
```yaml
apiVersion: redshift.aws.upbound.io/v1beta1
kind: EndpointAccess
metadata:
annotations:
meta.upbound.io/example-id: redshift/v1beta1/endpointaccess
labels:
testing.upbound.io/example-name: example
name: example-endpointaccess
spec:
forProvider:
clusterIdentifierSelector:
matchLabels:
testing.upbound.io/example-name: example-endpointaccess
region: us-west-1
subnetGroupNameSelector:
matchLabels:
testing.upbound.io/example-name: example-endpointaccess
---
apiVersion: redshift.aws.upbound.io/v1beta1
kind: Cluster
metadata:
annotations:
meta.upbound.io/example-id: redshift/v1beta1/endpointaccess
labels:
testing.upbound.io/example-name: example-endpointaccess
name: example-endpointaccess-c
spec:
forProvider:
clusterType: single-node
databaseName: mydb
masterPasswordSecretRef:
key: example-key
name: cluster-secret
namespace: upbound-system
masterUsername: exampleuser
nodeType: ra3.xlplus
region: us-west-1
skipFinalSnapshot: true
---
apiVersion: redshift.aws.upbound.io/v1beta1
kind: SubnetGroup
metadata:
annotations:
meta.upbound.io/example-id: redshift/v1beta1/endpointaccess
labels:
testing.upbound.io/example-name: example-endpointaccess
name: example-endpointaccess-sg
spec:
forProvider:
region: us-west-1
subnetIdRefs:
- name: foo
- name: bar
tags:
environment: Production
```
Here the references for `clusterIdentifier` and `subnetGroupName` are
[automatically] defined.
If it is not defined automatically or if you want to define a reference for
another field, please see [Cross Resource Referencing].
8. Squash all changes into a single commit to make review easier, with a
message like the following:
`Configure EndpointAccess.redshift resource and add example`
9. Run `make reviewable` to ensure this PR is ready for review.
10. That's pretty much all we need to do in the codebase; push the branch and
open a new PR: `git push --set-upstream origin add-redshift-endpoint-access`
# Testing Instructions
While configuring resources, testing is the longest part of the effort, because
the characteristics of cloud providers and services vary. Testing can be done
in two main ways: manually, or with [Uptest], an automated test tool for
Official Providers. `Uptest` provides a framework to test resources in an
end-to-end pipeline during the resource configuration process. Together with
the example manifest generation tool, it lets us avoid manual intervention and
shortens the testing process.
## Automated Tests - Uptest
After providing all the required fields of the new resource, and adding any
dependent resources, we can start automated testing. To trigger automated
tests, you must have one approved PR and be a contributor in the relevant repo.
Otherwise, maintainers will trigger the automated tests when your PR is ready.
To trigger a run, drop [a comment] on the PR containing the following:
```
/test-examples="examples/redshift/v1beta1/endpointaccess.yaml"
```
Once the automated tests pass, we're good to go. All you have to do is put
the link to the successful uptest run in the `How has this code been tested`
section in the PR description.
If the automatic test fails, click on the uptest run details, then click
`e2e/uptest` -> `Run uptest` and try to debug from the logs.
In the case of adding the `EndpointAccess.redshift` resource, we saw the
following error in the uptest run logs:
```
logger.go:42: 14:32:49 | case/0-apply | - lastTransitionTime: "2024-05-20T14:25:08Z"
logger.go:42: 14:32:49 | case/0-apply | message: 'cannot resolve references: mg.Spec.ForProvider.SubnetGroupName: no
logger.go:42: 14:32:49 | case/0-apply | resources matched selector'
logger.go:42: 14:32:49 | case/0-apply | reason: ReconcileError
logger.go:42: 14:32:49 | case/0-apply | status: "False"
logger.go:42: 14:32:49 | case/0-apply | type: Synced
```
Make the fixes, create a [new commit], and trigger the automated test again.
**Ignoring Some Resources in Automated Tests**
Some resources require manual intervention such as providing valid public keys
or using on-the-fly values. These cases can be handled in manual tests, but in
cases where we cannot provide generic values for automated tests, we can skip
some resources in the tests of the relevant group via an annotation:
```yaml
upjet.upbound.io/manual-intervention: "The Certificate needs to be provisioned successfully which requires a real domain."
```
The annotation key is what triggers the skip: when the
`upjet.upbound.io/manual-intervention` key is present, the related resource is
skipped. The annotation value also matters, as it documents why the resource is
skipped.
```
NOTE: For resources that are ignored during automated tests, manual testing is
a must, because we need to make sure that all resources published in the
`v1beta1` version are working.
```
### Running Uptest locally
For a faster feedback loop, you might want to run `uptest` locally in your
development setup. For this, you can use the e2e make target available in
the provider repositories. This target requires the following environment
variables to be set:
- `UPTEST_CLOUD_CREDENTIALS`: cloud credentials for the provider being tested.
- `UPTEST_EXAMPLE_LIST`: a comma-separated list of examples to test.
- `UPTEST_DATASOURCE_PATH`: (optional), see [Injecting Dynamic Values (and Datasource)]
You can check the e2e target in the Makefile for each provider. Let's check the [target]
in provider-upjet-aws and run a test for the resource `examples/ec2/v1beta1/vpc.yaml`.
- You can either save your credentials in a file as stated in the target's [comments],
or you can do it by adding your credentials to the command below.
```console
export UPTEST_CLOUD_CREDENTIALS="DEFAULT='[default]
aws_access_key_id = <YOUR-ACCESS_KEY_ID>
aws_secret_access_key = <YOUR-SECRET-ACCESS-KEY>'"
```
```console
export UPTEST_EXAMPLE_LIST="examples/ec2/v1beta1/vpc.yaml"
```
After setting the above environment variables, run `make e2e`. If the test
succeeds, you will see a log like the one below; please add this log to the PR
description:
```console
--- PASS: kuttl (37.41s)
--- PASS: kuttl/harness (0.00s)
--- PASS: kuttl/harness/case (36.62s)
PASS
14:02:30 [ OK ] running automated tests
```
## Manual Test
Configured resources can also be tested manually. This generally involves
preparing the environment and then creating the example manifests in the
Kubernetes cluster. The following steps prepare the environment:
1. Registering the CRDs (Custom Resource Definitions) to Cluster: We need to
apply the CRD manifests to the cluster. The relevant manifests are located in
the `package/crds` folder of provider subdirectories such as:
`provider-aws/package/crds`. For registering them please run the following
command: `kubectl apply -f package/crds`
2. Create ProviderConfig: ProviderConfig Custom Resource contains some
configurations and credentials for the provider. For example, to connect to the
cloud provider, we use the credentials field of ProviderConfig. For creating the
ProviderConfig with correct credentials, please see:
- [Create a Kubernetes secret with the AWS credentials]
- [Create a Kubernetes secret with the Azure credentials]
- [Create a Kubernetes secret with the GCP credentials]
3. Start the Provider: every Custom Resource has a controller, and these
controllers are part of the provider. So, to start the reconciliation of
Custom Resources, we need to run the provider (a collection of controllers):
Run `make run`
4. Now, you can create the examples you've generated and check events/logs to
spot problems and fix them.
- Start Testing: After completing the steps above, your environment is ready for
testing. There are 3 steps we need to verify in manual tests: `Apply`, `Import`,
`Delete`.
### Apply:
We need to apply the example manifest to the cluster.
```bash
kubectl apply -f examples/redshift/v1beta1/endpointaccess.yaml
```
Successfully applying the example manifests to the cluster is only the first
step. After successfully creating the Managed Resources, we need to check
whether their statuses are ready or not. So we need to expect a `True` value for
`Synced` and `Ready` conditions. To check the statuses of all created example
manifests quickly you can run the `kubectl get managed` command. We will wait
for all values to be `True` in this list:
```bash
NAME SYNCED READY EXTERNAL-NAME AGE
subnet.ec2.aws.upbound.io/bar True True subnet-0149bf6c20720d596 26m
subnet.ec2.aws.upbound.io/foo True True subnet-02971ebb943f5bb6e 26m
NAME SYNCED READY EXTERNAL-NAME AGE
vpc.ec2.aws.upbound.io/foo True True vpc-0ee6157df1f5a116a 26m
NAME SYNCED READY EXTERNAL-NAME AGE
cluster.redshift.aws.upbound.io/example-endpointaccess-c True True example-endpointaccess-c 26m
NAME SYNCED READY EXTERNAL-NAME AGE
endpointaccess.redshift.aws.upbound.io/example-endpointaccess True True example-endpointaccess 26m
NAME SYNCED READY EXTERNAL-NAME AGE
subnetgroup.redshift.aws.upbound.io/example-endpointaccess-sg True True example-endpointaccess-sg 26m
```
As a second step, we need to check the `UpToDate` status condition. This status
condition becomes visible only when you set the annotation
`upjet.upbound.io/test=true`; without the annotation you cannot see it. The
condition essentially verifies that the resource does not get stuck in an
update loop. To check the `UpToDate` condition for all MRs in the cluster, run:
```bash
kubectl annotate managed --all upjet.upbound.io/test=true --overwrite
# check the conditions
kubectl get endpointaccess.redshift.aws.upbound.io/example-endpointaccess -o yaml
```
You should see the output below:
```yaml
conditions:
- lastTransitionTime: "2024-05-20T17:37:20Z"
reason: Available
status: "True"
type: Ready
- lastTransitionTime: "2024-05-20T17:37:11Z"
reason: ReconcileSuccess
status: "True"
type: Synced
- lastTransitionTime: "2024-05-20T17:37:15Z"
reason: Success
status: "True"
type: LastAsyncOperation
- lastTransitionTime: "2024-05-20T17:37:48Z"
reason: UpToDate
status: "True"
type: Test
```
When all of the fields are `True`, the `Apply` test was successfully completed!
### Import
The import test takes a few steps: we stop the provider, delete the status
conditions, and check the conditions again after re-running the provider.
- Stop `make run`
- Delete the status conditions with the following command:
```bash
kubectl --subresource=status patch endpointaccess.redshift.aws.upbound.io/example-endpointaccess --type=merge -p '{"status":{"conditions":[]}}'
```
- Store the `status.atProvider.id` field for comparison
- Run `make run`
- Make sure that the `Ready`, `Synced`, and `UpToDate` conditions are `True`
- Compare the new `status.atProvider.id` with the one you stored and make sure
they are the same
The import test is successful when the above conditions are met.
### Delete
Make sure the resource has been successfully deleted by running the following
command:
```bash
kubectl delete endpointaccess.redshift.aws.upbound.io/example-endpointaccess
```
When the resource is successfully deleted, the manual testing steps are completed.
```
IMPORTANT NOTE: The `make generate` and `kubectl apply -f package/crds`
commands must be re-run after any change that affects the schema or controller
of the configured/tested resource.
In addition, the provider needs to be restarted after changes to the
controllers, because controller changes only take effect in the running code
after a restart.
You can look at the [PR] we created for the `EndpointAccess.redshift` resource
we added in this guide.
## External Name Cases
### Case 1: `name` As Identifier
There is a `name` argument under the `Argument Reference` section, and the
`Import` section suggests using `name` to import the resource.
Use `config.NameAsIdentifier`.
An example would be [`aws_eks_cluster`] and [here][eks-config] is its
configuration.
### Case 2: Parameter As Identifier
There is an argument under the `Argument Reference` section that is used like
name, i.e. `cluster_name` or `group_name`, and the `Import` section suggests
using the value of that argument to import the resource.
Use `config.ParameterAsIdentifier(<name of the argument parameter>)`.
An example would be [`aws_elasticache_cluster`] and [here][cache-config] is its
configuration.
### Case 3: Random Identifier From Provider
The ID used in the `Import` section is completely random and assigned by the
provider, like a UUID, where you don't have any means of impact on it.
Use `config.IdentifierFromProvider`.
An example would be [`aws_vpc`] and [here][vpc-config] is its configuration.
### Case 4: Random Identifier Substring From Provider
The ID used in the `Import` section is partially random and assigned by the
provider. For example, a node in a cluster could have a random ID like `13213`
but the Terraform Identifier could include the name of the cluster that's
represented as an argument field under `Argument Reference`, i.e.
`cluster-name:23123`. In that case, we'll use only the randomly assigned part
as external name and we need to tell Upjet how to construct the full ID back
and forth.
```golang
func resourceName() config.ExternalName {
	e := config.IdentifierFromProvider
	e.GetIDFn = func(_ context.Context, externalName string, parameters map[string]interface{}, _ map[string]interface{}) (string, error) {
		cl, ok := parameters["cluster_name"]
		if !ok {
			return "", errors.New("cluster_name cannot be empty")
		}
		return fmt.Sprintf("%s:%s", cl.(string), externalName), nil
	}
	e.GetExternalNameFn = func(tfstate map[string]interface{}) (string, error) {
		id, ok := tfstate["id"]
		if !ok {
			return "", errors.New("id in tfstate cannot be empty")
		}
		w := strings.Split(id.(string), ":")
		return w[len(w)-1], nil
	}
	return e
}
```
### Case 5: Non-random Substrings as Identifier
More than one argument under `Argument Reference` is concatenated to make up
the whole identifier, e.g. `<region>/<cluster name>/<node name>`. We need to
tell Upjet to use `<node name>` as the external name and take the rest from the
parameters.
Use `config.TemplatedStringAsIdentifier("<name argument>", "<go template>")` in
such cases. The following is the list of available parameters for you to use in
your go template:
```
parameters: A tree of parameters that you'd normally see in a Terraform HCL
file. You can use TF registry documentation of given resource to
see what's available.
terraformProviderConfig: The Terraform configuration object of the provider. You can
take a look at the TF registry provider configuration object
to see what's available. Not to be confused with ProviderConfig
custom resource of the Crossplane provider.
externalName: The value of external name annotation of the custom resource.
It is required to use this as part of the template.
```
You can see example usages in the big three providers below.
#### AWS
For `aws_glue_user_defined_function`, we see that the `name` argument is used
to name the resource and the import instructions read as the following:
> Glue User Defined Functions can be imported using the
> `catalog_id:database_name:function_name`. If you have not set a Catalog ID
> specify the AWS Account ID that the database is in, e.g.,
> `$ terraform import aws_glue_user_defined_function.func 123456789012:my_database:my_func`
Our configuration would look like the following:
```golang
"aws_glue_user_defined_function": config.TemplatedStringAsIdentifier("name", "{{ .parameters.catalog_id }}:{{ .parameters.database_name }}:{{ .externalName }}")
```
Another prevalent case in AWS is the usage of Amazon Resource Name (ARN) to
identify a resource. We can use `config.TemplatedStringAsIdentifier` in many of
those cases like the following:
```
"aws_glue_registry": config.TemplatedStringAsIdentifier("registry_name", "arn:aws:glue:{{ .parameters.region }}:{{ .setup.client_metadata.account_id }}:registry/{{ .external_name }}"),
```
However, there are cases where the ARN includes random substring and that would
fall under Case 4. The following is such an example:
```
// arn:aws:acm-pca:eu-central-1:609897127049:certificate-authority/ba0c7989-9641-4f36-a033-dee60121d595
"aws_acmpca_certificate_authority_certificate": config.IdentifierFromProvider,
```
#### Azure
Most Azure resources fall under this case since they use a fully qualified
identifier as the Terraform ID.
For `azurerm_mariadb_firewall_rule`, we see that the `name` argument is used to
name the resource and the import instructions read as the following:
> MariaDB Firewall rules can be imported using the resource ID, e.g.
>
> `terraform import azurerm_mariadb_firewall_rule.rule1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.DBforMariaDB/servers/server1/firewallRules/rule1`
Our configuration would look like the following:
```golang
"azurerm_mariadb_firewall_rule": config.TemplatedStringAsIdentifier("name", "/subscriptions/{{ .terraformProviderConfig.subscription_id }}/resourceGroups/{{ .parameters.resource_group_name }}/providers/Microsoft.DBforMariaDB/servers/{{ .parameters.server_name }}/firewallRules/{{ .externalName }}")
```
In some resources, an argument requires ID, like `azurerm_cosmosdb_sql_function`
where it has `container_id` and `name` but no separate `resource_group_name`
which would be required to build the full ID. Our configuration would look like
the following in this case:
```golang
config.TemplatedStringAsIdentifier("name", "{{ .parameters.container_id }}/userDefinedFunctions/{{ .externalName }}")
```
#### GCP
Most GCP resources fall under this case since they use a fully qualified
identifier as the Terraform ID.
For `google_container_cluster`, we see that the `name` argument is used to name
the resource and the import instructions read as the following:
> GKE clusters can be imported using the project, location, and name.
> If the project is omitted, the default provider value will be used.
> Examples:
>
> ```console
> $ terraform import google_container_cluster.mycluster projects/my-gcp-project/locations/us-east1-a/clusters/my-cluster
> $ terraform import google_container_cluster.mycluster my-gcp-project/us-east1-a/my-cluster
> $ terraform import google_container_cluster.mycluster us-east1-a/my-cluster
> ```
In cases where there are multiple ways to construct the ID, we should take the
one with the least parameters so that we rely only on required fields because
optional fields may have some defaults that are assigned after the creation
which may make it tricky to work with. In this case, the following would be our
configuration:
```golang
"google_container_cluster": config.TemplatedStringAsIdentifier("name", "{{ .parameters.location }}/{{ .externalName }}")
```
There are cases where one of the example import commands uses just `name`, like
`google_compute_instance`:
```console
terraform import google_compute_instance.default {{name}}
```
In such cases, we should use `config.NameAsIdentifier`, since we'd like to keep
our configuration as simple as possible.
### Case 6: No Import Statement
There are no instructions under the `Import` section of the resource page in
Terraform Registry, like `aws_acm_certificate_validation` from AWS.
Use the following in such cases with a comment indicating the case:
```golang
// No import documented.
"aws_acm_certificate_validation": config.IdentifierFromProvider,
```
### Case 7: Using Identifier of Another Resource
There are auxiliary resources that don't have an ID and since they map
one-to-one to another resource, they just opt to use the identifier of that
other resource. In many cases, the identifier is also a valid argument, maybe
even the only argument, to configure this resource.
An example would be
[`aws_ecrpublic_repository_policy`] from AWS where the identifier is
`repository_name`.
Use `config.IdentifierFromProvider` because, in these cases, `repository_name`
is more meaningful to users as an argument than as the name of the policy, so
we assume the ID comes from the provider.
### Case 8: Using Identifiers of Other Resources
There are resources that mostly represent a relation between two resources
without any particular name that identifies the relation. An example would be
[`azurerm_subnet_nat_gateway_association`] where the ID is made up of two
arguments `nat_gateway_id` and `subnet_id` without any particular field used
to give a name to the resource.
Use `config.IdentifierFromProvider` because in these cases, there is no name
argument to be used as external name and both creation and import scenarios
would work the same way even if you configured the resources with conversion
functions between arguments and ID.
## No Matching Case
If the resource doesn't match any of the cases above, we'll need to implement
the external name configuration from the ground up. Though in most cases it's
only slightly different, and we just need to override a few things on top of
the common functions.
One example is [`aws_route`] resource where the ID could use a different
argument depending on which one is given. You can take a look at the
implementation [here][route-impl]. [This section] in the
detailed guide could also help you.
[comment]: <> (References)
[this repo]: https://github.com/kubernetes-sigs/kind
[k3d]: https://k3d.io/
[Go]: https://go.dev/doc/install
[Terraform v1.5.5]: https://developer.hashicorp.com/terraform/install
[goimports]: https://pkg.go.dev/golang.org/x/tools/cmd/goimports
[provider-guide]: https://github.com/upbound/upjet/blob/main/docs/generating-a-provider.md
[config-guide]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md
[`aws_redshift_endpoint_access`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshift_endpoint_access
[Import]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshift_endpoint_access#import
[External Name Cases]: #external-name-cases
[source code]: https://github.com/hashicorp/terraform-provider-aws/blob/f222bd785228729dc1f5aad7d85c4d04a6109075/internal/service/redshift/endpoint_access.go#L24
[cluster_identifier]: https://registry.terraform.io/providers/hashicorp/aws/5.35.0/docs/resources/redshift_endpoint_access#cluster_identifier
[subnet_group_name]: https://registry.terraform.io/providers/hashicorp/aws/5.35.0/docs/resources/redshift_endpoint_access#subnet_group_name
[automatically]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md#auto-cross-resource-reference-generation
[Cross Resource Referencing]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md#cross-resource-referencing
[a comment]: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1314#issuecomment-2120539099
[new commit]: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1314/commits/b76e566eea5bd53450f2175e7e5a6e274934255b
[Create a Kubernetes secret with the AWS credentials]: https://docs.crossplane.io/latest/getting-started/provider-aws/#create-a-kubernetes-secret-with-the-aws-credentials
[Create a Kubernetes secret with the Azure credentials]: https://docs.crossplane.io/latest/getting-started/provider-azure/#create-a-kubernetes-secret-with-the-azure-credentials
[Create a Kubernetes secret with the GCP credentials]: https://docs.crossplane.io/latest/getting-started/provider-gcp/#create-a-kubernetes-secret-with-the-gcp-credentials
[PR]: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1314
[`aws_eks_cluster`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster
[eks-config]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L284
[`aws_elasticache_cluster`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster
[cache-config]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L299
[`aws_vpc`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc
[vpc-config]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L155
[`aws_ecrpublic_repository_policy`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecrpublic_repository_policy
[`azurerm_subnet_nat_gateway_association`]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet_nat_gateway_association
[`aws_route`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route
[route-impl]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L172
[This section]: #external-name-cases
[Injecting Dynamic Values (and Datasource)]: https://github.com/crossplane/uptest?tab=readme-ov-file#injecting-dynamic-values-and-datasource
[target]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/e4b8f222a4baf0ea37caf1d348fe109bf8235dc2/Makefile#L257
[comments]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/e4b8f222a4baf0ea37caf1d348fe109bf8235dc2/Makefile#L259
[Uptest]: https://github.com/crossplane/uptest


@ -21,7 +21,7 @@ directory on your local machine.
make generate
# Consume the latest crossplane-runtime:
go get github.com/crossplane/crossplane-runtime@master
go get github.com/crossplane/crossplane-runtime@main
go mod tidy
```


@ -20,13 +20,19 @@ which can be found by checking the Terraform documentation of the resource:
## External Name
Crossplane uses an annotation in managed resource CR to identify the external
resource which is managed by Crossplane. See [the external name documentation]
for more details. The format and source of the external name depends on the
cloud provider; sometimes it could simply be the name of resource (e.g. S3
Bucket), and sometimes it is an auto-generated id by cloud API (e.g. VPC id ).
This is something specific to resource, and we need some input configuration for
upjet to appropriately generate a resource.
Crossplane uses the `crossplane.io/external-name` annotation in a managed
resource CR to identify the external resource managed by Crossplane; the
annotation always shows the final value of the external resource name.
See [the external name documentation]
and [Naming Conventions - One Pager Managed Resource API Design] for more
details.
The format and source of the external name depend on the cloud provider;
sometimes it is simply the name of the resource (e.g. S3 Bucket), and
sometimes it is an id auto-generated by the cloud API (e.g. VPC id). That is
specific to each resource, and we need some input configuration for `upjet`
to appropriately generate a resource.
Since Terraform already needs [a similar identifier] to import a resource, the
most helpful part of the resource documentation is the [import section].
@ -110,7 +116,7 @@ conditions:
```go
import (
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config"
...
)
@ -137,7 +143,7 @@ also omit `bucket` and `bucket_prefix` arguments from the spec with
```go
import (
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config"
...
)
@ -169,7 +175,7 @@ Here, we can just use [IdentifierFromProvider] configuration:
```go
import (
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config"
...
)
@ -200,7 +206,7 @@ this id back (`GetIDFn`).
```go
import (
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config"
...
)
@ -313,9 +319,14 @@ In Upjet, we have a [configuration] to provide this information for a field:
// Reference represents the Crossplane options used to generate
// reference resolvers for fields
type Reference struct {
// Type is the type name of the CRD if it is in the same package or
// Type is the Go type name of the CRD if it is in the same package or
// <package-path>.<type-name> if it is in a different package.
Type string
// TerraformName is the name of the Terraform resource
// which will be referenced. The supplied resource name is
// converted to a type name of the corresponding CRD using
// the configured TerraformTypeMapper.
TerraformName string
// Extractor is the function to be used to extract value from the
// referenced type. Defaults to getting external name.
// Optional
@ -324,13 +335,20 @@ type Reference struct {
// <field-name>Ref or <field-name>Refs.
// Optional
RefFieldName string
// SelectorFieldName is the field name for the Selector field. Defaults to
// SelectorFieldName is the Go field name for the Selector field. Defaults to
// <field-name>Selector.
// Optional
SelectorFieldName string
}
```
> [!Warning]
> Please note that the `Reference.Type` field has been deprecated; use
`Reference.TerraformName` instead. `TerraformName` is a more stable and
less error-prone API than `Type` because it automatically accounts
for configuration changes affecting the cross-resource reference target's
kind name, group, or version.
For a resource that we want to generate, we need to check its argument list in
Terraform documentation and figure out which field needs reference to which
resource.
@ -343,26 +361,25 @@ following referencing configuration:
func Configure(p *config.Provider) {
p.AddResourceConfigurator("aws_iam_access_key", func (r *config.Resource) {
r.References["user"] = config.Reference{
Type: "User",
TerraformName: "aws_iam_user",
}
})
}
```
Please note the value of `Type` field needs to be a string representing the Go
type of the resource. Since, `AccessKey` and `User` resources are under the same
go package, we don't need to provide the package path. However, this is not
always the case and referenced resources might be in different package. In that
case, we would need to provide the full path. Referencing to a [kms key] from
`aws_ebs_volume` resource is a good example here:
`TerraformName` is the name of the Terraform resource, such as
`aws_iam_user`. Because `TerraformName` uniquely identifies a
Terraform resource, it remains the same when the referenced and
referencing resources are in different groups. A good example is
referencing a [kms key] from `aws_ebs_volume` resource:
```go
func Configure(p *config.Provider) {
p.AddResourceConfigurator("aws_ebs_volume", func(r *config.Resource) {
r.References["kms_key_id"] = config.Reference{
Type: "github.com/crossplane-contrib/provider-tf-aws/apis/kms/v1alpha1.Key",
}
})
p.AddResourceConfigurator("aws_ebs_volume", func(r *config.Resource) {
r.References["kms_key_id"] = config.Reference{
TerraformName: "aws_kms_key",
}
})
}
```
@ -372,9 +389,7 @@ Cross Resource Referencing is one of the key concepts of the resource
configuration. As a very common case, cloud services depend on other cloud
services. For example, an AWS Subnet resource needs an AWS VPC for creation. So,
to create a Subnet successfully, you first have to create a VPC resource.
Please see the [Dependencies] documentation for more details. And also, for
resource configuration-related parts of cross-resource referencing, please see
[this part] of [Configuring a Resource] documentation.
Please see the [Managed Resources] documentation for more details.
These documents focus on the general concepts and manual configuration
of Cross Resource References. However, the main topic of this documentation is
@ -403,7 +418,7 @@ Resource manifest, and we can use this manifest in our test efforts.
This is an example from Terraform Registry AWS Ebs Volume resource:
```go
```hcl
resource "aws_ebs_volume" "example" {
availability_zone = "us-west-2a"
size = 40
@ -485,10 +500,10 @@ reference generator. However, there are two cases where we miss generating the
references.
The first one is related to some bugs or improvement points in the generator.
This means that the generator can handle many references in the scraped
examples and generate correctly them. But we cannot say that the ratio is %100.
For some cases, the generator cannot generate references although, they are in
the scraped example manifests.
This means that the generator can handle many references in the scraped examples
and correctly generate them. But we cannot say that the ratio is 100%. For some
cases, the generator cannot generate references, even though they are in the
scraped example manifests.
The second one is related to the scraped example itself. As I mentioned above,
the source of the generator is the scraped example manifest. So, it checks the
@ -508,7 +523,7 @@ example manifest, this reference field will only be defined over Y. In this
case, since the reference pool of the relevant field will be narrowed, it would
be more appropriate to delete this reference. For example,
```go
```hcl
resource "aws_route53_record" "www" {
zone_id = aws_route53_zone.primary.zone_id
name = "example.com"
@ -524,14 +539,14 @@ resource "aws_route53_record" "www" {
The Route53 Record resource's `alias.name` field has a reference. In the example, this
reference is shown by using the `aws_elb` resource. However, when we check the
field documentation, we see that this field can also be used for reference
for other resources:
field documentation, we see that this field can also be referenced by other
resources:
```text
Alias
Alias records support the following:
name - (Required) DNS domain name for a CloudFront distribution, S3 bucket, ELB,
name - (Required) DNS domain name for a CloudFront distribution, S3 bucket, ELB,
or another resource record set in this hosted zone.
```
@ -541,9 +556,7 @@ As a result, mentioned scraper and example&reference generators are very useful
for easing the test efforts. But when using these functionalities, we must be
careful to avoid undesired states.
[Dependencies]: https://crossplane.io/docs/v1.7/concepts/managed-resources.html#dependencies
[this part]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md#cross-resource-referencing
[Configuring a Resource]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md
[Managed Resources]: https://docs.crossplane.io/latest/concepts/managed-resources/#referencing-other-resources
## Additional Sensitive Fields and Custom Connection Details
@ -725,8 +738,8 @@ like:
- Field contains sensitive information but not marked as `Sensitive` or vice
versa.
- An attribute does not make sense to have in CRD schema, like [tags_all for jet
AWS resources].
- An attribute does not make sense to have in CRD schema, like [tags_all for
provider-upjet-aws resources].
- Moving parameters from Terraform provider config to resources schema to fit
Crossplane model, e.g. [AWS region] parameter is part of provider config in
Terraform but Crossplane expects it in CR spec.
@ -871,17 +884,17 @@ initializers for a resource.
[import section of s3 bucket]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#import
[bucket]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#bucket
[cluster_identifier]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#cluster_identifier
[aws_rds_cluster]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster.
[aws_rds_cluster]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster
[import section of aws_vpc]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc#import
[arguments list]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc#argument-reference
[example usages]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc#example-usage
[IdentifierFromProvider]: https://github.com/crossplane/upjet/blob/main/config/externalname.go#L42
[IdentifierFromProvider]: https://github.com/crossplane/upjet/blob/main/pkg/config/externalname.go#L42
[a similar identifier]: https://www.terraform.io/docs/glossary#id
[import section of azurerm_sql_server]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/sql_server#import
[handle dependencies]: https://docs.crossplane.io/master/concepts/managed-resources/#referencing-other-resources
[user]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#user
[generate reference resolution methods]: https://github.com/crossplane/crossplane-tools/pull/35
[configuration]: https://github.com/crossplane/upjet/blob/main/pkg/config/resource.go#L123
[configuration]: https://github.com/crossplane/upjet/blob/942508c5370a697b1cb81d074933ba75d8f1fb4f/pkg/config/resource.go#L172
[iam_access_key]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#argument-reference
[kms key]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_volume#kms_key_id
[connection details]: https://docs.crossplane.io/master/concepts/managed-resources/#writeconnectionsecrettoref
@ -897,11 +910,12 @@ initializers for a resource.
[Description]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L120
[Optional]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L80
[Computed]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L139
[tags_all for jet AWS resources]: https://github.com/upbound/provider-aws/blob/main/config/overrides.go#L62
[AWS region]: https://github.com/upbound/provider-aws/blob/main/config/overrides.go#L32
[tags_all for provider-upjet-aws resources]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/199dbf93b8c67632db50b4f9c0adbd79021146a3/config/overrides.go#L72
[AWS region]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/199dbf93b8c67632db50b4f9c0adbd79021146a3/config/overrides.go#L42
[this figure]: ../docs/images/upjet-externalname.png
[Initializers]: #initializers
[InitializerFns]: https://github.com/crossplane/upjet/blob/main/pkg/config/resource.go#L297
[NewInitializerFn]: https://github.com/crossplane/upjet/blob/main/pkg/config/resource.go#L210
[InitializerFns]: https://github.com/crossplane/upjet/blob/92d1af84d24241bef08e6b4a2cfe1ab66a93308a/pkg/config/resource.go#L427
[NewInitializerFn]: https://github.com/crossplane/upjet/blob/92d1af84d24241bef08e6b4a2cfe1ab66a93308a/pkg/config/resource.go#L265
[crossplane-runtime]: https://github.com/crossplane/crossplane-runtime/blob/428b7c3903756bb0dcf5330f40298e1fa0c34301/pkg/reconciler/managed/reconciler.go#L138
[tagging convention]: https://github.com/crossplane/crossplane/blob/60c7df9/design/one-pager-managed-resource-api-design.md#external-resource-labeling
[Naming Conventions - One Pager Managed Resource API Design]: https://github.com/crossplane/crossplane/blob/main/design/one-pager-managed-resource-api-design.md#naming-conventions

View File

@ -57,7 +57,7 @@ variables in the `Makefile`:
export TERRAFORM_DOCS_PATH := website/docs/r
```
Refer to [the Dockerfile](https://github.com/crossplane/upjet-provider-template/blob/main/cluster/images/upjet-provider-template/Dockerfile) to see the variables called when building the provider.
Refer to [the Dockerfile](https://github.com/upbound/upjet-provider-template/blob/main/cluster/images/upjet-provider-template/Dockerfile) to see the variables called when building the provider.
## Configure the provider resources
@ -141,7 +141,7 @@ variables in the `Makefile`:
cat <<EOF > config/repository/config.go
package repository
import "github.com/crossplane/upjet/pkg/config"
import "github.com/crossplane/upjet/v2/pkg/config"
// Configure configures individual resources by adding custom ResourceConfigurators.
func Configure(p *config.Provider) {
@ -163,7 +163,7 @@ variables in the `Makefile`:
cat <<EOF > config/branch/config.go
package branch
import "github.com/crossplane/upjet/pkg/config"
import "github.com/crossplane/upjet/v2/pkg/config"
func Configure(p *config.Provider) {
p.AddResourceConfigurator("github_branch", func(r *config.Resource) {
@ -377,7 +377,7 @@ your provider, you can learn more about
[testing your resources](testing-with-uptest.md) with Uptest.
[Terraform GitHub provider]: https://registry.terraform.io/providers/integrations/github/latest/docs
[upjet-provider-template]: https://github.com/crossplane/upjet-provider-template
[upjet-provider-template]: https://github.com/upbound/upjet-provider-template
[upbound/build]: https://github.com/upbound/build
[github_repository]: https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository
[github_branch]: https://registry.terraform.io/providers/integrations/github/latest/docs/resources/branch

414
docs/migration-framework.md Normal file
View File

@ -0,0 +1,414 @@
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
## Migration Framework
The [migration package](https://github.com/crossplane/upjet/tree/main/pkg/migration)
in the [upjet](https://github.com/crossplane/upjet) repository contains a framework
that allows users to write converters and migration tools suitable for their
systems and use cases. This document focuses on the technical details of the
Migration Framework and how to use it.
The concept of migration in this document is, in its most basic form, the
conversion of a Crossplane resource from its current state in the source API to
its new state in the target API.
Let's explain this with an example: a user has a classic provider-based VPC
Managed Resource in her cluster and wants to migrate her system to upjet-based
providers. To do so, she needs to make various conversions, because there are
differences between the API of the VPC MR in the classic provider (source API)
and in the upjet-based provider (target API), such as the group name. While the
group value of the VPC MR in the source API is `ec2.aws.crossplane.io`, it is
`ec2.aws.upbound.io` in upjet-based providers. There may also be changes in
other API fields of the resource. Since values of an existing resource, such as
group and kind, cannot be changed, the resource must be recreated by the
migration procedure without affecting the user's workloads.
So, let's see how the framework can help us migrate such systems.
### Concepts
The Migration Framework has various concepts for users to perform an end-to-end
migration. These play an essential role in understanding the framework's
structure and how to use it.
#### What Is Migration?
There are two main topics when we talk about the concept of migration. The
first one is API Migration, which covers the example mentioned above. API
Migration, in its most general form, is the migration from the API/provider
that the user is currently using (e.g. Community Classic) to the target
API/provider (e.g. an upjet-based provider). The resources to be migrated here
are MRs and Compositions, because there are various API changes in the related
resources.
The other type of migration is the migration of Crossplane Configuration
packages. After the release of family providers, migrating users of monolithic
providers and configuration packages to family providers became necessary. The
migration framework was extended in this context, and monolithic package
references in Crossplane resources such as Provider, Lock, and Configuration
are replaced with family ones. There is no API migration here, because there is
no API change in the related resources between source and target. The purpose
is only to provide a smooth transition from monolithic packages to family
packages.
#### Converters
Converters convert a related resource kind from the source API or structure to
the target API or structure. There are many types of converters supported in
the migration framework. However, in this document, we will focus on converters
and examples for API migration.
1. **Resource Converter** converts a managed resource from the migration source
provider's schema to the migration target provider's schema. The function of
the interface is [Resource](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L32).
`Resource` takes a managed resource and returns zero or more managed resources to
be created.
```go
Resource(mg resource.Managed) ([]resource.Managed, error)
```
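Before looking at a real converter, the shape of such a function can be sketched with stand-in types. The `SourceVPC`/`TargetVPC` types and the hard-coded group names below are illustrative assumptions only; real converters receive and return `resource.Managed` values of the generated provider APIs.

```go
package main

import "fmt"

// SourceVPC and TargetVPC are stand-in types for illustration only; real
// converters operate on resource.Managed values of the generated APIs.
type SourceVPC struct {
	Group string // ec2.aws.crossplane.io in the classic provider
	Kind  string
	CIDR  string
}

type TargetVPC struct {
	Group string // ec2.aws.upbound.io in the upjet-based provider
	Kind  string
	CIDR  string
}

// convertVPC sketches what a ResourceConverter does: copy what carries over
// unchanged, fix what differs between the APIs (here, the group name), and
// return zero or more target resources.
func convertVPC(src SourceVPC) []TargetVPC {
	t := TargetVPC{Kind: src.Kind, CIDR: src.CIDR}
	t.Group = "ec2.aws.upbound.io"
	return []TargetVPC{t}
}

func main() {
	out := convertVPC(SourceVPC{Group: "ec2.aws.crossplane.io", Kind: "VPC", CIDR: "10.0.0.0/16"})
	fmt.Println(out[0].Group, out[0].CIDR)
}
```

Because the function returns a slice, one source resource can fan out into several target resources, which is exactly the case the framework supports.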
[Here](https://github.com/upbound/extensions-migration/blob/main/converters/provider-aws/kafka/cluster.go)
is an example.
As can be seen in the example, the [CopyInto](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/kafka/cluster.go#L29)
function is called before starting to change the resource. This function
copies all fields that can be copied from the source API to the target API. The
function may encounter an error when copying some fields; in that case, those
fields should be passed in the function's `skipFieldPaths` argument so that the
function does not try to copy them. In the Kafka Cluster resource in the
example, there are various changes, such as in the Group and Kind names and in
the Spec fields. Related converters should be written to handle these changes.
The main points to be addressed are listed below.
- Changes in Group and Kind names.
- Changes in the Spec field.
- Changes in the [Field](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/ec2/vpc.go#L38)
name, for example due to letter-case differences. What matters here is not the
field's Go name but the JSON path's name; therefore, the conversion should be
made considering the changes in the JSON name.
```go
target.Spec.ForProvider.EnableDNSSupport = source.Spec.ForProvider.EnableDNSSupport
target.Spec.ForProvider.EnableDNSHostnames = source.Spec.ForProvider.EnableDNSHostNames
```
- Fields with [completely changed](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/ec2/vpc.go#L39)
names. You may need to review the API documentation to understand them.
```go
target.Spec.ForProvider.AssignGeneratedIPv6CidrBlock = source.Spec.ForProvider.AmazonProvidedIpv6CIDRBlock
```
- Changes in the [field's type](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/ec2/vpc.go#L31).
Such as a value that was previously Integer
changing to [Float64](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/kafka/cluster.go#L38).
```go
target.Spec.ForProvider.Tags = make(map[string]*string, len(source.Spec.ForProvider.Tags))
for _, t := range source.Spec.ForProvider.Tags {
v := t.Value
target.Spec.ForProvider.Tags[t.Key] = &v
}
```
```go
target.Spec.ForProvider.ConfigurationInfo[0].Revision = common.PtrFloat64FromInt64(source.Spec.ForProvider.CustomConfigurationInfo.Revision)
```
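A helper like `common.PtrFloat64FromInt64` is a small pointer-type conversion. A sketch of such a helper, written in its spirit (the actual helper lives in the extensions-migration repository and its signature may differ), could look like:

```go
package main

import "fmt"

// ptrFloat64FromInt64 converts an *int64 to a *float64, preserving nil.
// Illustrative sketch only; the real common.PtrFloat64FromInt64 may differ.
func ptrFloat64FromInt64(v *int64) *float64 {
	if v == nil {
		return nil
	}
	f := float64(*v)
	return &f
}

func main() {
	rev := int64(3)
	fmt.Println(*ptrFloat64FromInt64(&rev)) // 3
}
```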
- In upjet-based providers, all structs in the API are defined as slices. This
is not the case in classic providers. For this reason, this must be taken into
consideration when making the relevant [struct transformations](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/kafka/cluster.go#L40).
```go
if source.Spec.ForProvider.EncryptionInfo != nil {
target.Spec.ForProvider.EncryptionInfo = make([]targetv1beta1.EncryptionInfoParameters, 1)
target.Spec.ForProvider.EncryptionInfo[0].EncryptionAtRestKMSKeyArn = source.Spec.ForProvider.EncryptionInfo.EncryptionAtRest.DataVolumeKMSKeyID
if source.Spec.ForProvider.EncryptionInfo.EncryptionInTransit != nil {
target.Spec.ForProvider.EncryptionInfo[0].EncryptionInTransit = make([]targetv1beta1.EncryptionInTransitParameters, 1)
target.Spec.ForProvider.EncryptionInfo[0].EncryptionInTransit[0].InCluster = source.Spec.ForProvider.EncryptionInfo.EncryptionInTransit.InCluster
target.Spec.ForProvider.EncryptionInfo[0].EncryptionInTransit[0].ClientBroker = source.Spec.ForProvider.EncryptionInfo.EncryptionInTransit.ClientBroker
}
}
```
- External name conventions may differ between upjet-based providers and classic
providers. For this reason, external name conversions of related resources
should also be done in converter functions.
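As a hypothetical illustration of such a conversion (external-name formats are resource-specific and must be checked against both providers' documentation; the ARN-to-ID convention below is an assumption, not a statement about any particular provider pair): suppose the classic provider records a full ARN as the external name while the upjet-based provider expects only the trailing resource ID.

```go
package main

import (
	"fmt"
	"strings"
)

// externalNameFromARN extracts the trailing resource ID from an ARN-style
// external name. Both conventions here are assumptions for illustration.
func externalNameFromARN(arn string) string {
	parts := strings.Split(arn, "/")
	return parts[len(parts)-1]
}

func main() {
	fmt.Println(externalNameFromARN("arn:aws:ec2:us-west-2:123456789012:vpc/vpc-0a1b2c")) // vpc-0a1b2c
}
```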
Another important case is when an MR in the source API corresponds to more than
one MR in the target API. Since the [Resource](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L32)
function returns a list of MRs, the framework already supports this. There is also an
example converter [here](https://github.com/upbound/extensions-migration/blob/main/converters/provider-aws/ec2/routetable.go).
2. **Composed Template Converter** converts a Composition's ComposedTemplate
from the migration source provider's schema to the migration target provider's
schema. Conversion of the `Base` must be handled by a ResourceConverter.
This interface has a function [ComposedTemplate](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L49).
`ComposedTemplate` receives a migration source v1.ComposedTemplate that has been
converted, by a resource converter, to the v1.ComposedTemplates with the new
shapes specified in the `convertedTemplates` argument. Conversion of the
v1.ComposedTemplate.Bases is handled via ResourceConverter.Resource, and
ComposedTemplate must only convert the other fields (`Patches`,
`ConnectionDetails`, `PatchSet`s, etc.). It returns any errors encountered.
```go
ComposedTemplate(sourceTemplate xpv1.ComposedTemplate, convertedTemplates ...*xpv1.ComposedTemplate) error
```
There is a generic Composed Template implementation [DefaultCompositionConverter](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/registry.go#L481).
`DefaultCompositionConverter` is a generic composition converter. It takes a
`conversionMap`, a field-path map for conversion: each key of the conversionMap
points to a source field and the corresponding value points to the target
field, for example `"spec.forProvider.assumeRolePolicyDocument": "spec.forProvider.assumeRolePolicy"`.
The `fns` are functions that manipulate the patch sets.
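What a `conversionMap` entry expresses can be sketched with a minimal, self-contained helper operating on an unstructured object. This is illustrative only; the real converter uses upjet's fieldpath machinery, and `moveField` here only handles simple dot-separated map paths.

```go
package main

import (
	"fmt"
	"strings"
)

// moveField moves the value at the dot-separated source path to the target
// path on an unstructured object, mimicking what a conversionMap entry
// expresses. Sketch only: nested maps, no list indices or escaping.
func moveField(obj map[string]any, src, dst string) {
	srcParts := strings.Split(src, ".")
	dstParts := strings.Split(dst, ".")

	// Walk to the parent map of the source leaf.
	cur := obj
	for _, p := range srcParts[:len(srcParts)-1] {
		next, ok := cur[p].(map[string]any)
		if !ok {
			return // path not present; nothing to move
		}
		cur = next
	}
	val, ok := cur[srcParts[len(srcParts)-1]]
	if !ok {
		return
	}
	delete(cur, srcParts[len(srcParts)-1])

	// Create intermediate maps on the target path as needed.
	cur = obj
	for _, p := range dstParts[:len(dstParts)-1] {
		next, ok := cur[p].(map[string]any)
		if !ok {
			next = map[string]any{}
			cur[p] = next
		}
		cur = next
	}
	cur[dstParts[len(dstParts)-1]] = val
}

func main() {
	mr := map[string]any{
		"spec": map[string]any{
			"forProvider": map[string]any{
				"assumeRolePolicyDocument": "{...}",
			},
		},
	}
	moveField(mr, "spec.forProvider.assumeRolePolicyDocument", "spec.forProvider.assumeRolePolicy")
	fmt.Println(mr["spec"].(map[string]any)["forProvider"])
}
```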
3. **PatchSetConverter** converts patch sets of Compositions. Any registered
PatchSetConverters will be called before any resource or ComposedTemplate
conversion is done. The rationale is to convert the Composition-wide patch sets
before any resource-specific conversions so that migration targets can
automatically inherit converted patch sets if their schemas match them.
Registered PatchSetConverters will be called in the order they are registered.
This interface has function [PatchSets](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L69).
`PatchSets` converts the `spec.patchSets` of a Composition from the migration
source provider's schema to the migration target provider's schema.
```go
PatchSets(psMap map[string]*xpv1.PatchSet) error
```
There is a [common PatchSets implementation](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/common/common.go#L14)
for provider-aws resources.
```
NOTE: Unlike MR converters, Composition converters and PatchSets converters can
contain very specific cases depending on the user's scenario. Therefore, it is
not possible to write a universal migrator in this context, because all
compositions are inherently different, although some common helper functions
can be used.
```
#### Registry
A Registry is a collection of converters. Every converter is keyed with the
associated `schema.GroupVersionKind`s and an associated `runtime.Scheme` with
which the corresponding types are registered. All converters intended to be
used during migration must be registered in the Registry. For Kinds that are
not registered in the Registry, no conversion will be performed, even if the
resource is included and read by the Source.
Before registering converters in the registry, the source and target API schemes
need to be added to the registry so that the respective Kinds are recognized.
```go
sourceF := sourceapis.AddToScheme
targetF := targetapis.AddToScheme
if err := common.AddToScheme(registry, sourceF, targetF); err != nil {
panic(err)
}
```
In addition, the Composition type and, if applicable, the Composite and Claim
types must also be defined in the registry before the converters are registered.
```go
if err := registry.AddCompositionTypes(); err != nil {
panic(err)
}
registry.AddClaimType(...)
registry.AddCompositeType(...)
```
`RegisterAPIConversionFunctions` is used for registering the API conversion
functions: Resource Converters, Composition Converters, and PatchSet
Converters.
```go
registry.RegisterAPIConversionFunctions(ec2v1beta1.VPCGroupVersionKind, ec2.VPCResource,
migration.DefaultCompositionConverter(nil, common.ConvertComposedTemplateTags),
common.DefaultPatchSetsConverter)
```
#### Source
[Source](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L114)
is a source for reading resource manifests. It is an interface used to read the
resources subject to migration.
There are currently two implementations of the Source interface.
[File System Source](https://github.com/crossplane/upjet/blob/main/pkg/migration/filesystem.go)
is a source implementation to read resources from the file system. It is the
source type that the user will use to read the resources that they want to
migrate for their cases, such as those in their local system, those in a GitHub
repository, etc.
[Kubernetes Source](https://github.com/crossplane/upjet/blob/main/pkg/migration/kubernetes.go)
is a source implementation to read resources from a Kubernetes cluster. It is
the source type that the user will use to read the resources they want to
migrate from their Kubernetes cluster.
The important point here is that Kubernetes Source does not read all the
resources in the cluster; it reads them in two ways. The first is when the user
specifies categories while initializing the Migration Plan. In that case,
Kubernetes Source reads all resources belonging to the specified categories
(example categories: managed, claim, etc.). The other way is via the Kinds of
the converters registered in the Registry: every converter is registered for a
specific Kind, and Kubernetes Source reads the resources of those Kinds. For
example, if a converter for the VPC Kind is registered in the Registry,
Kubernetes Source will read the VPC resources in the cluster.
```
NOTE: Multiple sources are allowed. While creating the PlanGenerator object,
you can register all of them by using the following option.
migration.WithMultipleSources(sources...)
```
#### Target
[Target](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L132)
is a target where resource manifests can be manipulated
(e.g., added, deleted, patched). It is the interface for storing the
manifests resulting from the migration steps. Currently, only the File System
Target is supported. In other words, the converted manifests that the user will
see as output when they start the migration process are stored on the file
system.
#### Migration Plan
Plan represents a migration plan for migrating managed resources, and associated
composites and claims from a migration source provider to a migration target provider.
PlanGenerator generates a Migration Plan reading the manifests available from
`source`, converting managed resources and compositions using the available
`Converter`s registered in the `registry` and writing the output manifests to
the specified `target`.
Here is an example plan:
```yaml
spec:
  steps:
  - patch:
      type: merge
      files:
      - pause-managed/sample-vpc.vpcs.fakesourceapi.yaml
    name: pause-managed
    manualExecution:
    - "kubectl patch --type='merge' -f pause-managed/sample-vpc.vpcs.fakesourceapi.yaml --patch-file pause-managed/sample-vpc.vpcs.fakesourceapi.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml
    name: pause-composites
    manualExecution:
    - "kubectl patch --type='merge' -f pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml --patch-file pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml"
    type: Patch
  - apply:
      files:
      - create-new-managed/sample-vpc.vpcs.faketargetapi.yaml
    name: create-new-managed
    manualExecution:
    - "kubectl apply -f create-new-managed/sample-vpc.vpcs.faketargetapi.yaml"
    type: Apply
  - apply:
      files:
      - new-compositions/example-migrated.compositions.apiextensions.crossplane.io.yaml
    name: new-compositions
    manualExecution:
    - "kubectl apply -f new-compositions/example-migrated.compositions.apiextensions.crossplane.io.yaml"
    type: Apply
  - patch:
      type: merge
      files:
      - edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml
    name: edit-composites
    manualExecution:
    - "kubectl patch --type='merge' -f edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml --patch-file edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - edit-claims/my-resource.myresources.test.com.yaml
    name: edit-claims
    manualExecution:
    - "kubectl patch --type='merge' -f edit-claims/my-resource.myresources.test.com.yaml --patch-file edit-claims/my-resource.myresources.test.com.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml
    name: deletion-policy-orphan
    manualExecution:
    - "kubectl patch --type='merge' -f deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml --patch-file deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml"
    type: Patch
  - delete:
      options:
        finalizerPolicy: Remove
      resources:
      - group: fakesourceapi
        kind: VPC
        name: sample-vpc
        version: v1alpha1
    name: delete-old-managed
    manualExecution:
    - "kubectl delete VPC.fakesourceapi sample-vpc"
    type: Delete
  - patch:
      type: merge
      files:
      - start-managed/sample-vpc.vpcs.faketargetapi.yaml
    name: start-managed
    manualExecution:
    - "kubectl patch --type='merge' -f start-managed/sample-vpc.vpcs.faketargetapi.yaml --patch-file start-managed/sample-vpc.vpcs.faketargetapi.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - start-composites/my-resource-dwjgh.xmyresources.test.com.yaml
    name: start-composites
    manualExecution:
    - "kubectl patch --type='merge' -f start-composites/my-resource-dwjgh.xmyresources.test.com.yaml --patch-file start-composites/my-resource-dwjgh.xmyresources.test.com.yaml"
    type: Patch
version: 0.1.0
```
As shown here, the Plan lists the steps for migrating the user's resources. The
manifests referenced in each step are the outputs of the converters registered
by the user. It is therefore worth underlining once again that the converters
are what perform the actual conversion during migration.
When creating a PlanGenerator, users may set options via option functions. The
two most important option functions are:
- `WithErrorOnInvalidPatchSchema` returns a PlanGeneratorOption that configures
whether the PlanGenerator should error out and stop plan generation when a
patch statement fails its conformance check against the migration source or
target.
- `WithSkipGVKs` configures the set of GVKs to skip for conversion during a
migration.
### Example Usage - Template Migrator
The [upbound/extensions-migration](https://github.com/upbound/extensions-migration)
repository contains two things that are important for the Migration Framework.
The first is the [common converters](https://github.com/upbound/extensions-migration/tree/main/converters/provider-aws):
previously written API converters collected there for reuse across different
migrators. Reviewing these converters, in addition to this document, will help
you better understand how to write one.
The second is the [template migrator](https://github.com/upbound/extensions-migration/blob/main/converters/template/cmd/main.go),
which uses the capabilities of the migration framework described above to
generate a Plan. Keep in mind that because every migration source has its own
characteristics, users will need to adapt the template accordingly.

go.mod

@@ -2,94 +2,89 @@
//
// SPDX-License-Identifier: CC0-1.0
module github.com/crossplane/upjet
module github.com/crossplane/upjet/v2
go 1.21
go 1.24.0
toolchain go1.24.5
require (
dario.cat/mergo v1.0.0
dario.cat/mergo v1.0.2
github.com/alecthomas/kingpin/v2 v2.4.0
github.com/antchfx/htmlquery v1.2.4
github.com/crossplane/crossplane v1.13.2
github.com/crossplane/crossplane-runtime v1.14.0-rc.0.0.20231011070344-cc691421c2e5
github.com/crossplane/crossplane-runtime/v2 v2.0.0-20250730220209-c306b1c8b181
github.com/fatih/camelcase v1.0.0
github.com/golang/mock v1.6.0
github.com/google/go-cmp v0.6.0
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320
github.com/hashicorp/hcl/v2 v2.19.1
github.com/hashicorp/terraform-json v0.17.1
github.com/hashicorp/terraform-plugin-framework v1.4.1
github.com/hashicorp/terraform-plugin-go v0.19.0
github.com/hashicorp/terraform-plugin-sdk/v2 v2.30.0
github.com/google/go-cmp v0.7.0
github.com/hashicorp/go-cty v1.5.0
github.com/hashicorp/hcl/v2 v2.23.0
github.com/hashicorp/terraform-json v0.25.0
github.com/hashicorp/terraform-plugin-framework v1.15.0
github.com/hashicorp/terraform-plugin-go v0.28.0
github.com/hashicorp/terraform-plugin-sdk/v2 v2.37.0
github.com/iancoleman/strcase v0.2.0
github.com/json-iterator/go v1.1.12
github.com/mitchellh/go-ps v1.0.0
github.com/muvaf/typewriter v0.0.0-20210910160850-80e49fe1eb32
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.16.0
github.com/spf13/afero v1.10.0
github.com/prometheus/client_golang v1.22.0
github.com/spf13/afero v1.12.0
github.com/tmccombs/hcl2json v0.3.3
github.com/yuin/goldmark v1.4.13
github.com/zclconf/go-cty v1.14.1
github.com/zclconf/go-cty v1.16.2
github.com/zclconf/go-cty-yaml v1.0.3
golang.org/x/net v0.17.0
golang.org/x/tools v0.13.0
gopkg.in/alecthomas/kingpin.v2 v2.2.6
golang.org/x/net v0.39.0
golang.org/x/tools v0.32.0
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.28.2
k8s.io/apimachinery v0.28.2
k8s.io/cli-runtime v0.28.2
k8s.io/client-go v0.28.2
k8s.io/utils v0.0.0-20230726121419-3b25d923346b
sigs.k8s.io/controller-runtime v0.16.2
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
sigs.k8s.io/yaml v1.3.0
k8s.io/api v0.33.0
k8s.io/apimachinery v0.33.0
k8s.io/client-go v0.33.0
k8s.io/utils v0.0.0-20250321185631-1f6e0b77f77e
sigs.k8s.io/controller-runtime v0.19.0
sigs.k8s.io/yaml v1.4.0
)
require (
github.com/agext/levenshtein v1.2.3 // indirect
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 // indirect
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b // indirect
github.com/antchfx/xpath v1.2.0 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.10.2 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/fatih/color v1.15.0 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logr/zapr v1.2.4 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.1 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gobuffalo/flect v1.0.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/hashicorp/go-hclog v1.5.0 // indirect
github.com/hashicorp/go-plugin v1.5.1 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-plugin v1.6.3 // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/hashicorp/go-version v1.6.0 // indirect
github.com/hashicorp/go-version v1.7.0 // indirect
github.com/hashicorp/logutils v1.0.0 // indirect
github.com/hashicorp/terraform-plugin-log v0.9.0 // indirect
github.com/hashicorp/terraform-registry-address v0.2.2 // indirect
github.com/hashicorp/terraform-registry-address v0.2.5 // indirect
github.com/hashicorp/terraform-svchost v0.1.1 // indirect
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-testing-interface v1.14.1 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
@@ -97,40 +92,45 @@ require (
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/oklog/run v1.0.0 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/spf13/cobra v1.9.1 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/vmihailenco/msgpack v4.0.4+incompatible // indirect
github.com/vmihailenco/msgpack/v5 v5.3.5 // indirect
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xhit/go-str2duration/v2 v2.1.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/mod v0.13.0 // indirect
golang.org/x/oauth2 v0.10.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/sys v0.14.0 // indirect
golang.org/x/term v0.13.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/oauth2 v0.29.0 // indirect
golang.org/x/sync v0.14.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/term v0.31.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/time v0.11.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 // indirect
google.golang.org/grpc v1.58.3 // indirect
google.golang.org/protobuf v1.31.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a // indirect
google.golang.org/grpc v1.72.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/apiextensions-apiserver v0.28.2 // indirect
k8s.io/component-base v0.28.2 // indirect
k8s.io/klog/v2 v2.100.1 // indirect
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
k8s.io/apiextensions-apiserver v0.33.0 // indirect
k8s.io/code-generator v0.33.0 // indirect
k8s.io/component-base v0.33.0 // indirect
k8s.io/gengo/v2 v2.0.0-20250207200755-1244d31929d7 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
sigs.k8s.io/controller-tools v0.18.0 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
)

go.sum

@@ -1,55 +1,13 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPTY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1 h1:EKPd1INOIyr5hWOWhvpmQpY6tKjeG0hT1s3AMC/9fic=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230106234847-43070de90fa1/go.mod h1:VzwV+t+dZ9j/H867F1M2ziD+yLHtB46oM35FxxMJ4d0=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/alecthomas/kingpin/v2 v2.4.0 h1:f48lwail6p8zpO1bC4TxtqACaGqHYA22qkHjHpqDjYY=
github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/kong v0.2.16/go.mod h1:kQOmtJgV+Lb4aj+I2LEn40cbtawdWJ9Y8QLq+lElKxE=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b h1:mimo19zliBX/vSQ6PWWSL9lK8qwHozUj03+zLoEB8O0=
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b/go.mod h1:fvzegU4vN3H1qMT+8wDmzjAcDONcgo2/SZ/TyfdUOFs=
github.com/antchfx/htmlquery v1.2.4 h1:qLteofCMe/KGovBI6SQgmou2QNyedFUW+pE+BpeZ494=
github.com/antchfx/htmlquery v1.2.4/go.mod h1:2xO6iu3EVWs7R2JYqBbp8YzG50gj/ofqs5/0VZoDZLc=
github.com/antchfx/xpath v1.2.0 h1:mbwv7co+x0RwgeGAOHdrKy89GvHaGvxxBtPK0uF9Zr8=
@@ -60,205 +18,131 @@ github.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJE
github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo=
github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bufbuild/protocompile v0.6.0 h1:Uu7WiSQ6Yj9DbkdnOe7U4mNKp58y9WDMKDn28/ZlunY=
github.com/bufbuild/protocompile v0.6.0/go.mod h1:YNP35qEYoYGme7QMtz5SBCoN4kL4g12jTtjuzRNdjpE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bufbuild/protocompile v0.4.0 h1:LbFKd2XowZvQ/kajzguUp2DC9UEIQhIq77fZZlaQsNA=
github.com/bufbuild/protocompile v0.4.0/go.mod h1:3v93+mbWn/v3xzN+31nwkJfrEpAUwp+BagBSZWx+TP8=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/crossplane/crossplane v1.13.2 h1:/qxoQvNV9+eJyWVP3pu3j7q0ltdZXPgrDIkbAyCd1uI=
github.com/crossplane/crossplane v1.13.2/go.mod h1:jjYHNF5j2JidsrFZ7sfTZoVnBDVEvZHC64GyH/cYMbU=
github.com/crossplane/crossplane-runtime v1.14.0-rc.0.0.20231011070344-cc691421c2e5 h1:K1Km6NCu9+VlZB3CmWSjrs09cDSbwQxJd2Qw2002dFs=
github.com/crossplane/crossplane-runtime v1.14.0-rc.0.0.20231011070344-cc691421c2e5/go.mod h1:kCS5576be8g++HhiDGEBUw+8nkW8p4jhURYeC0zx8jM=
github.com/cyphar/filepath-securejoin v0.2.3 h1:YX6ebbZCZP7VkM3scTTokDgBL2TY741X51MTk3ycuNI=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/crossplane/crossplane-runtime/v2 v2.0.0-20250730220209-c306b1c8b181 h1:yU9+BtCiiMmymYt499egG5FZ1IQ7WddSi1V6H0h0DTk=
github.com/crossplane/crossplane-runtime/v2 v2.0.0-20250730220209-c306b1c8b181/go.mod h1:pkd5UzmE8esaZAApevMutR832GjJ1Qgc5Ngr78ByxrI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/emicklei/go-restful/v3 v3.10.2 h1:hIovbnmBTLjHXkqEBUz3HGpXZdM7ZrE9fJIZIqlJLqE=
github.com/emicklei/go-restful/v3 v3.10.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJCLunww=
github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU=
github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v5.9.11+incompatible h1:ixHHqfcGvxhWkniF1tWxBHA0yb4Z+d1UQi45df52xW8=
github.com/evanphx/json-patch v5.9.11+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/fatih/camelcase v1.0.0 h1:hxNvNX/xYBp0ovncs8WyWZrOrpBNub/JfaMvbURyft8=
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/zapr v1.2.4 h1:QHVo+6stLbfJmYGkQ7uGHUCu5hnAFAj6mDe6Ea0SeOo=
github.com/go-logr/zapr v1.2.4/go.mod h1:FyHWQIzQORZ0QVE1BtVHv3cKtNLuXsbNLtpuhNapBOA=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=
github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/go-test/deep v1.0.7 h1:/VSMRlnY/JSyqxQUzQLKVMAskpY/NZKFA5j2P+0pP2M=
github.com/go-test/deep v1.0.7/go.mod h1:QV8Hv/iy04NyLBxAdO9njL0iVPN1S4d/A3NVv1V36o8=
github.com/gobuffalo/flect v1.0.3 h1:xeWBM2nui+qnVvNM4S3foBhCAL2XgPU+a7FdpelbTq4=
github.com/gobuffalo/flect v1.0.3/go.mod h1:A5msMlrHtLqh9umBSnvabjsMrCcCpAyzglnDvkbYKHs=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/addlicense v0.0.0-20210428195630-6d92264d7170/go.mod h1:EMjYTRimagHs1FwlIqKyX3wAM0u3rA+McvlIIWmSamA=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20230926050212-f7f687d19a98 h1:pUa4ghanp6q4IJHwE9RwLgmVFfReJN+KbQ8ExNEUUoQ=
github.com/google/pprof v0.0.0-20230926050212-f7f687d19a98/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 h1:pdN6V1QBWetyv/0+wjACpqVH+eVULgEjkurDLq3goeM=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 h1:1/D3zfFHttUKaCaGKZ/dR2roBXv0vKbSCnssIldfQdI=
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320/go.mod h1:EiZBMaudVLy8fmjf9Npq1dq9RalhveqZG5w/yz3mHWs=
github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c=
github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-plugin v1.5.1 h1:oGm7cWBaYIp3lJpx1RUEfLWophprE2EV/KUeqBYo+6k=
github.com/hashicorp/go-plugin v1.5.1/go.mod h1:w1sAEES3g3PuV/RzUrgow20W2uErMly84hhD3um1WL4=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/go-cty v1.5.0 h1:EkQ/v+dDNUqnuVpmS5fPqyY71NXVgT5gf32+57xY8g0=
github.com/hashicorp/go-cty v1.5.0/go.mod h1:lFUCG5kd8exDobgSfyj4ONE/dc822kiYMguVKdHGMLM=
github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-plugin v1.6.3 h1:xgHB+ZUSYeuJi96WtxEjzi23uh7YQpznjGh0U0UUrwg=
github.com/hashicorp/go-plugin v1.6.3/go.mod h1:MRobyh+Wc/nYy1V4KAXUiYfzxoYhs7V1mlH1Z7iY2h0=
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek=
github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/hcl/v2 v2.9.1/go.mod h1:FwWsfWEjyV/CMj8s/gqAuiviY72rJ1/oayI9WftqcKg=
github.com/hashicorp/hcl/v2 v2.19.1 h1://i05Jqznmb2EXqa39Nsvyan2o5XyMowW5fnCKW5RPI=
github.com/hashicorp/hcl/v2 v2.19.1/go.mod h1:ThLC89FV4p9MPW804KVbe/cEXoQ8NZEh+JtMeeGErHE=
github.com/hashicorp/hcl/v2 v2.23.0 h1:Fphj1/gCylPxHutVSEOf2fBOh1VE4AuLV7+kbJf3qos=
github.com/hashicorp/hcl/v2 v2.23.0/go.mod h1:62ZYHrXgPoX8xBnzl8QzbWq4dyDsDtfCRgIq1rbJEvA=
github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/terraform-json v0.17.1 h1:eMfvh/uWggKmY7Pmb3T85u86E2EQg6EQHgyRwf3RkyA=
github.com/hashicorp/terraform-json v0.17.1/go.mod h1:Huy6zt6euxaY9knPAFKjUITn8QxUFIe9VuSzb4zn/0o=
github.com/hashicorp/terraform-plugin-framework v1.4.1 h1:ZC29MoB3Nbov6axHdgPbMz7799pT5H8kIrM8YAsaVrs=
github.com/hashicorp/terraform-plugin-framework v1.4.1/go.mod h1:XC0hPcQbBvlbxwmjxuV/8sn8SbZRg4XwGMs22f+kqV0=
github.com/hashicorp/terraform-plugin-go v0.19.0 h1:BuZx/6Cp+lkmiG0cOBk6Zps0Cb2tmqQpDM3iAtnhDQU=
github.com/hashicorp/terraform-plugin-go v0.19.0/go.mod h1:EhRSkEPNoylLQntYsk5KrDHTZJh9HQoumZXbOGOXmec=
github.com/hashicorp/terraform-json v0.25.0 h1:rmNqc/CIfcWawGiwXmRuiXJKEiJu1ntGoxseG1hLhoQ=
github.com/hashicorp/terraform-json v0.25.0/go.mod h1:sMKS8fiRDX4rVlR6EJUMudg1WcanxCMoWwTLkgZP/vc=
github.com/hashicorp/terraform-plugin-framework v1.15.0 h1:LQ2rsOfmDLxcn5EeIwdXFtr03FVsNktbbBci8cOKdb4=
github.com/hashicorp/terraform-plugin-framework v1.15.0/go.mod h1:hxrNI/GY32KPISpWqlCoTLM9JZsGH3CyYlir09bD/fI=
github.com/hashicorp/terraform-plugin-go v0.28.0 h1:zJmu2UDwhVN0J+J20RE5huiF3XXlTYVIleaevHZgKPA=
github.com/hashicorp/terraform-plugin-go v0.28.0/go.mod h1:FDa2Bb3uumkTGSkTFpWSOwWJDwA7bf3vdP3ltLDTH6o=
github.com/hashicorp/terraform-plugin-log v0.9.0 h1:i7hOA+vdAItN1/7UrfBqBwvYPQ9TFvymaRGZED3FCV0=
github.com/hashicorp/terraform-plugin-log v0.9.0/go.mod h1:rKL8egZQ/eXSyDqzLUuwUYLVdlYeamldAHSxjUFADow=
github.com/hashicorp/terraform-plugin-sdk/v2 v2.30.0 h1:X7vB6vn5tON2b49ILa4W7mFAsndeqJ7bZFOGbVO+0Cc=
github.com/hashicorp/terraform-plugin-sdk/v2 v2.30.0/go.mod h1:ydFcxbdj6klCqYEPkPvdvFKiNGKZLUs+896ODUXCyao=
github.com/hashicorp/terraform-registry-address v0.2.2 h1:lPQBg403El8PPicg/qONZJDC6YlgCVbWDtNmmZKtBno=
github.com/hashicorp/terraform-registry-address v0.2.2/go.mod h1:LtwNbCihUoUZ3RYriyS2wF/lGPB6gF9ICLRtuDk7hSo=
github.com/hashicorp/terraform-plugin-sdk/v2 v2.37.0 h1:NFPMacTrY/IdcIcnUB+7hsore1ZaRWU9cnB6jFoBnIM=
github.com/hashicorp/terraform-plugin-sdk/v2 v2.37.0/go.mod h1:QYmYnLfsosrxjCnGY1p9c7Zj6n9thnEE+7RObeYs3fA=
github.com/hashicorp/terraform-registry-address v0.2.5 h1:2GTftHqmUhVOeuu9CW3kwDkRe4pcBDq0uuK5VJngU1M=
github.com/hashicorp/terraform-registry-address v0.2.5/go.mod h1:PpzXWINwB5kuVS5CA7m1+eO2f1jKb5ZDIxrOPfpnGkg=
github.com/hashicorp/terraform-svchost v0.1.1 h1:EZZimZ1GxdqFRinZ1tpJwVxxt49xc/S52uzrw4x0jKQ=
github.com/hashicorp/terraform-svchost v0.1.1/go.mod h1:mNsjQfZyf/Jhz35v6/0LWcv26+X7JPS+buii2c9/ctc=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hashicorp/yamux v0.1.1 h1:yrQxtgseBDrq9Y652vSRDvsKCJKOUD+GzTS4Y0Y8pvE=
github.com/hashicorp/yamux v0.1.1/go.mod h1:CtWFDAQgb7dxtzFs4tWbplKIe2jSi3+5vKbgIO0SLnQ=
github.com/iancoleman/strcase v0.2.0 h1:05I4QRnGpI0m37iZQRuskXh+w77mr6Z41lwQzuHLwW0=
github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4=
github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jhump/protoreflect v1.15.1 h1:HUMERORf3I3ZdX05WaQ6MIpd/NJ434hTp5YiKgfCL6c=
github.com/jhump/protoreflect v1.15.1/go.mod h1:jD/2GMKKE6OqX8qTjhADU1e6DShO+gavG9e0Q693nKo=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
@@ -279,10 +163,8 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
@@ -301,78 +183,75 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/muvaf/typewriter v0.0.0-20210910160850-80e49fe1eb32 h1:yBQlHXLeUJL3TWVmzup5uT3wG5FLxhiTAiTsmNVocys=
github.com/muvaf/typewriter v0.0.0-20210910160850-80e49fe1eb32/go.mod h1:SAAdeMEiFXR8LcHffvIdiLI1w243DCH2DuHq7UrA5YQ=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/onsi/ginkgo/v2 v2.11.0 h1:WgqUCUt/lT6yXoQ8Wef0fsNn5cAuMK7+KT9UFRz2tcU=
github.com/onsi/ginkgo/v2 v2.11.0/go.mod h1:ZhrRA5XmEE3x3rhlzamx/JJvujdZoJ2uvgI7kR0iZvM=
github.com/onsi/gomega v1.27.10 h1:naR28SdDFlqrG6kScpT8VWpu1xWY5nJRCF3XaYyBjhI=
github.com/onsi/gomega v1.27.10/go.mod h1:RsS8tutOdbdgzbPtzzATp12yT7kM5I5aElG3evPbQ0M=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v1.37.0 h1:CdEG8g0S133B4OswTDC/5XPSzE1OeP29QOioj2PID2Y=
github.com/onsi/gomega v1.37.0/go.mod h1:8D9+Txp43QWKhM24yyOBEdpkzN8FvJyAwecBgsU4KU0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8=
github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/procfs v0.10.1 h1:kYK1Va/YMlutzCGazswoHKo//tZVlFpKYh+PymziUAg=
github.com/prometheus/procfs v0.10.1/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/spf13/afero v1.10.0 h1:EaGW2JJh15aKOejeuJ+wpFSHnbd7GE6Wvp3TsNhb6LY=
github.com/spf13/afero v1.10.0/go.mod h1:UBogFpq8E9Hx+xc5CNTTEpTnuHVmXDwZcZcE1eb/UhQ=
github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs=
github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tmccombs/hcl2json v0.3.3 h1:+DLNYqpWE0CsOQiEZu+OZm5ZBImake3wtITYxQ8uLFQ=
github.com/tmccombs/hcl2json v0.3.3/go.mod h1:Y2chtz2x9bAeRTvSibVRVgbLJhLJXKlUeIvjeVdnm4w=
github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/vmihailenco/msgpack v4.0.4+incompatible h1:dSLoQfGFAo3F6OoNhwUmLwVgaUXK79GlxNBwueZn0xI=
github.com/vmihailenco/msgpack v4.0.4+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4=
github.com/vmihailenco/msgpack/v5 v5.3.5 h1:5gO0H1iULLWGhs2H5tbAHIZTV8/cYafcFOr9znI5mJU=
github.com/vmihailenco/msgpack/v5 v5.3.5/go.mod h1:7xyJ9e+0+9SaZT0Wt1RGleJXzli6Q/V5KbhBonMG9jc=
github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IUPn0Bjt8=
github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok=
github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI=
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds=
github.com/xlab/treeprint v1.2.0 h1:HzHnuAF1plUN2zGlAFHbSQP2qJ0ZAD3XF5XD7OesXRQ=
github.com/xlab/treeprint v1.2.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13 h1:fVcFKWvrslecOb/tg+Cc05dkeYx540o0FuFt3nUVDoE=
@@ -380,412 +259,174 @@ github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5t
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
github.com/zclconf/go-cty v1.8.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
github.com/zclconf/go-cty v1.8.1/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
github.com/zclconf/go-cty v1.14.1 h1:t9fyA35fwjjUMcmL5hLER+e/rEPqrbCK1/OSE4SI9KA=
github.com/zclconf/go-cty v1.14.1/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
github.com/zclconf/go-cty v1.16.2 h1:LAJSwc3v81IRBZyUVQDUdZ7hs3SYs9jv0eZJDWHD/70=
github.com/zclconf/go-cty v1.16.2/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM=
github.com/zclconf/go-cty-yaml v1.0.3 h1:og/eOQ7lvA/WWhHGFETVWNduJM7Rjsv2RRpx1sdFMLc=
github.com/zclconf/go-cty-yaml v1.0.3/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca/go.mod h1:jxU+3+j+71eXOW14274+SmmuW82qJzl6iZSeqEtTGds=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.11/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo=
go.uber.org/zap v1.26.0/go.mod h1:dtElttAiwGvoJ/vj4IwHBS/gXsEu/pZ50mUIRWuG0so=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 h1:GoHiUyI/Tp2nVkLI2mCxVkOjsbSXD66ic0XW0js0R9g=
golang.org/x/exp v0.0.0-20230905200255-921286631fa9/go.mod h1:S2oDrQGGwySpoQPVqRShND87VCbxmc6bL1Yd2oYrm6k=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa h1:ELnwvuAXPNtPk1TJRuGkI9fDTwym6AYBu0qzT8AcHdI=
golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa/go.mod h1:akd2r19cwCdwSwWeIdzYQGa/EZZyqcOdwWiwj5L5eKQ=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.13.0 h1:I/DsJXRlw/8l/0c24sM9yb0T4z9liZTduXvdAWYiysY=
golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200421231249-e086a090c8fd/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8=
golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98=
golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.31.0 h1:erwDkOK1Msy6offm1mOgvspSkslFnIGsFnxOKoufg3o=
golang.org/x/term v0.31.0/go.mod h1:R4BeIy7D95HzImkxGkTW1UQTtP54tio2RyHz7PwK0aw=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.32.0 h1:Q7N1vhpkQv7ybVzLFtTjvQya2ewbwNDZzUgfXGqtMWU=
golang.org/x/tools v0.32.0/go.mod h1:ZxrU41P/wAbZD8EDa6dDCa6XfpkhJ7HFMjHJXfBDu8s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210108203827-ffc7fda8c3d7/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 h1:bVf09lpb+OJbByTj913DRJioFFAjf/ZGxEz7MajTp2U=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98/go.mod h1:TUfxEVdsvPg18p6AslUXFoLdpED4oBnGwyqk3dV1XzM=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ=
google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM=
google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a h1:51aaUVRocpvUOSQKM6Q7VuoaktNIaMCLuhZB6DKksq4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a/go.mod h1:uRxBH1mhmO8PGhU89cMcHaXKZqO+OfakD8QQO0oYwlQ=
google.golang.org/grpc v1.72.1 h1:HR03wO6eyZ7lknl75XlxABNVLLFc2PAb6mHlYh756mA=
google.golang.org/grpc v1.72.1/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.28.2 h1:9mpl5mOb6vXZvqbQmankOfPIGiudghwCoLl1EYfUZbw=
k8s.io/api v0.28.2/go.mod h1:RVnJBsjU8tcMq7C3iaRSGMeaKt2TWEUXcpIt/90fjEg=
k8s.io/apiextensions-apiserver v0.28.2 h1:J6/QRWIKV2/HwBhHRVITMLYoypCoPY1ftigDM0Kn+QU=
k8s.io/apiextensions-apiserver v0.28.2/go.mod h1:5tnkxLGa9nefefYzWuAlWZ7RZYuN/765Au8cWLA6SRg=
k8s.io/apimachinery v0.28.2 h1:KCOJLrc6gu+wV1BYgwik4AF4vXOlVJPdiqn0yAWWwXQ=
k8s.io/apimachinery v0.28.2/go.mod h1:RdzF87y/ngqk9H4z3EL2Rppv5jj95vGS/HaFXrLDApU=
k8s.io/cli-runtime v0.28.2 h1:64meB2fDj10/ThIMEJLO29a1oujSm0GQmKzh1RtA/uk=
k8s.io/cli-runtime v0.28.2/go.mod h1:bTpGOvpdsPtDKoyfG4EG041WIyFZLV9qq4rPlkyYfDA=
k8s.io/client-go v0.28.2 h1:DNoYI1vGq0slMBN/SWKMZMw0Rq+0EQW6/AK4v9+3VeY=
k8s.io/client-go v0.28.2/go.mod h1:sMkApowspLuc7omj1FOSUxSoqjr+d5Q0Yc0LOFnYFJY=
k8s.io/component-base v0.28.2 h1:Yc1yU+6AQSlpJZyvehm/NkJBII72rzlEsd6MkBQ+G0E=
k8s.io/component-base v0.28.2/go.mod h1:4IuQPQviQCg3du4si8GpMrhAIegxpsgPngPRR/zWpzc=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM=
k8s.io/utils v0.0.0-20230726121419-3b25d923346b h1:sgn3ZU783SCgtaSJjpcVVlRqd6GSnlTLKgpAAttJvpI=
k8s.io/utils v0.0.0-20230726121419-3b25d923346b/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/controller-runtime v0.16.2 h1:mwXAVuEk3EQf478PQwQ48zGOXvW27UJc8NHktQVuIPU=
sigs.k8s.io/controller-runtime v0.16.2/go.mod h1:vpMu3LpI5sYWtujJOa2uPK61nB5rbwlN7BAB8aSLvGU=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 h1:XX3Ajgzov2RKUdc5jW3t5jwY7Bo7dcRm+tFxT+NfgY0=
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3/go.mod h1:9n16EZKMhXBNSiUC5kSdFQJkdH3zbxS/JoO619G1VAY=
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3 h1:W6cLQc5pnqM7vh3b7HvGNfXrJ/xL6BDMS0v1V/HHg5U=
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3/go.mod h1:JWP1Fj0VWGHyw3YUPjXSQnRnrwezrZSrApfX5S0nIag=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
k8s.io/api v0.33.0 h1:yTgZVn1XEe6opVpP1FylmNrIFWuDqe2H0V8CT5gxfIU=
k8s.io/api v0.33.0/go.mod h1:CTO61ECK/KU7haa3qq8sarQ0biLq2ju405IZAd9zsiM=
k8s.io/apiextensions-apiserver v0.33.0 h1:d2qpYL7Mngbsc1taA4IjJPRJ9ilnsXIrndH+r9IimOs=
k8s.io/apiextensions-apiserver v0.33.0/go.mod h1:VeJ8u9dEEN+tbETo+lFkwaaZPg6uFKLGj5vyNEwwSzc=
k8s.io/apimachinery v0.33.0 h1:1a6kHrJxb2hs4t8EE5wuR/WxKDwGN1FKH3JvDtA0CIQ=
k8s.io/apimachinery v0.33.0/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
k8s.io/client-go v0.33.0 h1:UASR0sAYVUzs2kYuKn/ZakZlcs2bEHaizrrHUZg0G98=
k8s.io/client-go v0.33.0/go.mod h1:kGkd+l/gNGg8GYWAPr0xF1rRKvVWvzh9vmZAMXtaKOg=
k8s.io/code-generator v0.33.0 h1:B212FVl6EFqNmlgdOZYWNi77yBv+ed3QgQsMR8YQCw4=
k8s.io/code-generator v0.33.0/go.mod h1:KnJRokGxjvbBQkSJkbVuBbu6z4B0rC7ynkpY5Aw6m9o=
k8s.io/component-base v0.33.0 h1:Ot4PyJI+0JAD9covDhwLp9UNkUja209OzsJ4FzScBNk=
k8s.io/component-base v0.33.0/go.mod h1:aXYZLbw3kihdkOPMDhWbjGCO6sg+luw554KP51t8qCU=
k8s.io/gengo/v2 v2.0.0-20250207200755-1244d31929d7 h1:2OX19X59HxDprNCVrWi6jb7LW1PoqTlYqEq5H2oetog=
k8s.io/gengo/v2 v2.0.0-20250207200755-1244d31929d7/go.mod h1:EJykeLsmFC60UQbYJezXkEsG2FLrt0GPNkU5iK5GWxU=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/utils v0.0.0-20250321185631-1f6e0b77f77e h1:KqK5c/ghOm8xkHYhlodbp6i6+r+ChV2vuAuVRdFbLro=
k8s.io/utils v0.0.0-20250321185631-1f6e0b77f77e/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/controller-runtime v0.19.0 h1:nWVM7aq+Il2ABxwiCizrVDSlmDcshi9llbaFbC0ji/Q=
sigs.k8s.io/controller-runtime v0.19.0/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4=
sigs.k8s.io/controller-tools v0.18.0 h1:rGxGZCZTV2wJreeRgqVoWab/mfcumTMmSwKzoM9xrsE=
sigs.k8s.io/controller-tools v0.18.0/go.mod h1:gLKoiGBriyNh+x1rWtUQnakUYEujErjXs9pf+x/8n1U=
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 h1:gBQPwqORJ8d8/YNZWEjoZs7npUVDpVXUUOFfW6CgAqE=
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=


@@ -7,7 +7,7 @@ package config
import (
"github.com/pkg/errors"
"github.com/crossplane/upjet/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/resource/json"
)
const (


@@ -10,8 +10,9 @@ import (
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/crossplane/upjet/pkg/registry"
tjname "github.com/crossplane/upjet/pkg/types/name"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/registry"
tjname "github.com/crossplane/upjet/v2/pkg/types/name"
)
const (
@@ -32,17 +33,17 @@ var (
DefaultBasePackages = BasePackages{
APIVersion: []string{
// Default package for ProviderConfig APIs
"apis/v1alpha1",
"apis/v1beta1",
"v1alpha1",
"v1beta1",
},
Controller: []string{
// Default package for ProviderConfig controllers
"internal/controller/providerconfig",
"providerconfig",
},
ControllerMap: map[string]string{
// Default package for ProviderConfig controllers
"internal/controller/providerconfig": PackageNameConfig,
"providerconfig": PackageNameConfig,
},
}
@@ -91,6 +92,9 @@ func DefaultResource(name string, terraformSchema *schema.Resource, terraformPlu
UseAsync: true,
SchemaElementOptions: make(SchemaElementOptions),
ServerSideApplyMergeStrategies: make(ServerSideApplyMergeStrategies),
Conversions: []conversion.Conversion{conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil)},
OverrideFieldNames: map[string]string{},
listConversionPaths: make(map[string]string),
}
for _, f := range opts {
f(r)
@@ -124,13 +128,21 @@ func MoveToStatus(sch *schema.Resource, fieldpaths ...string) {
}
}
// MarkAsRequired marks the given fieldpaths as required without manipulating
// the native field schema.
func (r *Resource) MarkAsRequired(fieldpaths ...string) {
r.requiredFields = append(r.requiredFields, fieldpaths...)
}
// MarkAsRequired marks the schema of the given fieldpath as required. It's most
// useful in cases where the external name contains an optional parameter that
// is defaulted by the provider but we need it to exist, or to fix plainly buggy
// schemas.
// Deprecated: Use Resource.MarkAsRequired instead.
// This function will be removed in future versions.
func MarkAsRequired(sch *schema.Resource, fieldpaths ...string) {
for _, fieldpath := range fieldpaths {
if s := GetSchema(sch, fieldpath); s != nil {
for _, fp := range fieldpaths {
if s := GetSchema(sch, fp); s != nil {
s.Computed = false
s.Optional = false
}


@@ -5,6 +5,7 @@
package config
import (
"reflect"
"testing"
"github.com/google/go-cmp/cmp"
@@ -12,10 +13,13 @@ import (
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/crossplane/upjet/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/registry"
)
func TestDefaultResource(t *testing.T) {
identityConversion := conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil)
type args struct {
name string
sch *schema.Resource
@@ -45,6 +49,8 @@ func TestDefaultResource(t *testing.T) {
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"TwoSectionsName": {
@@ -63,6 +69,8 @@ func TestDefaultResource(t *testing.T) {
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"NameWithPrefixAcronym": {
@@ -81,6 +89,8 @@ func TestDefaultResource(t *testing.T) {
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"NameWithSuffixAcronym": {
@@ -99,6 +109,8 @@ func TestDefaultResource(t *testing.T) {
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"NameWithMultipleAcronyms": {
@@ -117,6 +129,8 @@ func TestDefaultResource(t *testing.T) {
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
}
@@ -124,10 +138,10 @@ func TestDefaultResource(t *testing.T) {
// TODO(muvaf): Find a way to compare function pointers.
ignoreUnexported := []cmp.Option{
cmpopts.IgnoreFields(Sensitive{}, "fieldPaths", "AdditionalConnectionDetailsFn"),
cmpopts.IgnoreFields(LateInitializer{}, "ignoredCanonicalFieldPaths"),
cmpopts.IgnoreFields(LateInitializer{}, "ignoredCanonicalFieldPaths", "conditionalIgnoredCanonicalFieldPaths"),
cmpopts.IgnoreFields(ExternalName{}, "SetIdentifierArgumentFn", "GetExternalNameFn", "GetIDFn"),
cmpopts.IgnoreFields(Resource{}, "useTerraformPluginSDKClient"),
cmpopts.IgnoreFields(Resource{}, "useTerraformPluginFrameworkClient"),
cmpopts.IgnoreUnexported(Resource{}),
cmpopts.IgnoreUnexported(reflect.ValueOf(identityConversion).Elem().Interface()),
}
for name, tc := range cases {


@@ -5,8 +5,11 @@
package conversion
import (
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/pkg/resource"
"fmt"
"slices"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
@@ -19,12 +22,31 @@ const (
AllVersions = "*"
)
// Conversion is the interface for the API version converters.
const (
pathForProvider = "spec.forProvider"
pathInitProvider = "spec.initProvider"
pathAtProvider = "status.atProvider"
)
var (
_ PrioritizedManagedConversion = &identityConversion{}
_ PavedConversion = &fieldCopy{}
_ PavedConversion = &singletonListConverter{}
)
// Conversion is the interface for the CRD API version converters.
// Conversion implementations registered for a source, target
// pair are called in chain so Conversion implementations can be modular, e.g.,
// a Conversion implementation registered for a specific source and target
// versions does not have to contain all the needed API conversions between
// these two versions.
// these two versions. All PavedConversions are run in their registration
// order before the ManagedConversions. Conversions are run in three stages:
// 1. PrioritizedManagedConversions are run.
// 2. The source and destination objects are paved and the PavedConversions are
// run in chain without unpaving the unstructured representation between
// conversions.
// 3. The destination paved object is converted back to a managed resource and
// ManagedConversions are run in the order they are registered.
type Conversion interface {
// Applicable should return true if this Conversion is applicable while
// converting the API of the `src` object to the API of the `dst` object.
@@ -63,11 +85,23 @@ type ManagedConversion interface {
ConvertManaged(src, target resource.Managed) (bool, error)
}
// PrioritizedManagedConversion is a ManagedConversion that takes precedence
// over all the other converters. PrioritizedManagedConversions are run,
// in their registration order, before the PavedConversions.
type PrioritizedManagedConversion interface {
ManagedConversion
Prioritized()
}
type baseConversion struct {
sourceVersion string
targetVersion string
}
func (c *baseConversion) String() string {
return fmt.Sprintf("source API version %q, target API version %q", c.sourceVersion, c.targetVersion)
}
func newBaseConversion(sourceVersion, targetVersion string) baseConversion {
return baseConversion{
sourceVersion: sourceVersion,
@@ -141,3 +175,161 @@ func NewCustomConverter(sourceVersion, targetVersion string, converter func(src,
customConverter: converter,
}
}
type singletonListConverter struct {
baseConversion
pathPrefixes []string
crdPaths []string
mode ListConversionMode
convertOptions *ConvertOptions
}
type SingletonListConversionOption func(*singletonListConverter)
// WithConvertOptions sets the ConvertOptions for the singleton list conversion.
func WithConvertOptions(opts *ConvertOptions) SingletonListConversionOption {
return func(s *singletonListConverter) {
s.convertOptions = opts
}
}
// NewSingletonListConversion returns a new Conversion from the specified
// sourceVersion of an API to the specified targetVersion and uses the
// CRD field paths given in crdPaths to convert between the singleton
// lists and embedded objects in the given conversion mode.
func NewSingletonListConversion(sourceVersion, targetVersion string, pathPrefixes []string, crdPaths []string, mode ListConversionMode, opts ...SingletonListConversionOption) Conversion {
s := &singletonListConverter{
baseConversion: newBaseConversion(sourceVersion, targetVersion),
pathPrefixes: pathPrefixes,
crdPaths: crdPaths,
mode: mode,
}
for _, o := range opts {
o(s)
}
return s
}
func (s *singletonListConverter) ConvertPaved(src, target *fieldpath.Paved) (bool, error) {
if !s.Applicable(&unstructured.Unstructured{Object: src.UnstructuredContent()},
&unstructured.Unstructured{Object: target.UnstructuredContent()}) {
return false, nil
}
if len(s.crdPaths) == 0 {
return false, nil
}
for _, p := range s.pathPrefixes {
v, err := src.GetValue(p)
if err != nil {
return true, errors.Wrapf(err, "failed to read the %s value for conversion in mode %q", p, s.mode)
}
m, ok := v.(map[string]any)
if !ok {
return true, errors.Errorf("value at path %s is not a map[string]any", p)
}
if _, err := Convert(m, s.crdPaths, s.mode, s.convertOptions); err != nil {
return true, errors.Wrapf(err, "failed to convert the source map in mode %q with %s", s.mode, s.baseConversion.String())
}
if err := target.SetValue(p, m); err != nil {
return true, errors.Wrapf(err, "failed to set the %s value for conversion in mode %q", p, s.mode)
}
}
return true, nil
}
type identityConversion struct {
baseConversion
excludePaths []string
}
func (i *identityConversion) ConvertManaged(src, target resource.Managed) (bool, error) {
if !i.Applicable(src, target) {
return false, nil
}
srcCopy := src.DeepCopyObject()
srcRaw, err := runtime.DefaultUnstructuredConverter.ToUnstructured(srcCopy)
if err != nil {
return false, errors.Wrap(err, "cannot convert the source managed resource into an unstructured representation")
}
// remove excluded fields
if len(i.excludePaths) > 0 {
pv := fieldpath.Pave(srcRaw)
for _, ex := range i.excludePaths {
exPaths, err := pv.ExpandWildcards(ex)
if err != nil {
return false, errors.Wrapf(err, "cannot expand wildcards in the fieldpath expression %s", ex)
}
for _, p := range exPaths {
if err := pv.DeleteField(p); err != nil {
return false, errors.Wrapf(err, "cannot delete a field in the conversion source object")
}
}
}
}
// copy the remaining fields
gvk := target.GetObjectKind().GroupVersionKind()
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(srcRaw, target); err != nil {
return true, errors.Wrap(err, "cannot convert the map[string]any representation of the source object to the conversion target object")
}
// restore the original GVK for the conversion destination
target.GetObjectKind().SetGroupVersionKind(gvk)
return true, nil
}
func (i *identityConversion) Prioritized() {}
// newIdentityConversion returns a new Conversion from the specified
// sourceVersion of an API to the specified targetVersion, which copies the
// identical paths from the source to the target. excludePaths can be used
// to ignore certain field paths while copying.
func newIdentityConversion(sourceVersion, targetVersion string, excludePaths ...string) Conversion {
return &identityConversion{
baseConversion: newBaseConversion(sourceVersion, targetVersion),
excludePaths: excludePaths,
}
}
// NewIdentityConversionExpandPaths returns a new Conversion from the specified
// sourceVersion of an API to the specified targetVersion, which copies the
// identical paths from the source to the target. excludePaths can be used
// to ignore certain field paths while copying. Exclude paths must be specified
// in standard crossplane-runtime fieldpath library syntax, i.e., with proper
// indices for traversing map and slice types (e.g., a.b[*].c).
// The field paths in excludePaths are sorted in lexical order and are prefixed
// with each of the path prefixes specified with pathPrefixes. So if an
// exclude path "x" is specified with the prefix slice ["a", "b"], then
// paths a.x and b.x will both be skipped while copying fields from a source to
// a target.
func NewIdentityConversionExpandPaths(sourceVersion, targetVersion string, pathPrefixes []string, excludePaths ...string) Conversion {
return newIdentityConversion(sourceVersion, targetVersion, ExpandParameters(pathPrefixes, excludePaths...)...)
}
// ExpandParameters sorts and expands the given list of field path suffixes
// with the given prefixes.
func ExpandParameters(prefixes []string, excludePaths ...string) []string {
slices.Sort(excludePaths)
if len(prefixes) == 0 {
return excludePaths
}
r := make([]string, 0, len(prefixes)*len(excludePaths))
for _, p := range prefixes {
for _, ex := range excludePaths {
r = append(r, fmt.Sprintf("%s.%s", p, ex))
}
}
return r
}
// DefaultPathPrefixes returns the list of the default path prefixes for
// excluding paths in the identity conversion. The returned value is
// ["spec.forProvider", "spec.initProvider", "status.atProvider"].
func DefaultPathPrefixes() []string {
return []string{pathForProvider, pathInitProvider, pathAtProvider}
}


@@ -1,4 +1,4 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
@@ -6,11 +6,16 @@ package conversion
import (
"fmt"
"slices"
"testing"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
jsoniter "github.com/json-iterator/go"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/utils/ptr"
)
@@ -136,6 +141,406 @@ func TestConvertPaved(t *testing.T) {
}
}
func TestIdentityConversion(t *testing.T) {
type args struct {
sourceVersion string
source resource.Managed
targetVersion string
target *mockManaged
pathPrefixes []string
excludePaths []string
}
type want struct {
converted bool
err error
target *mockManaged
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulConversionNoExclusions": {
reason: "Successfully copy identical fields from the source to the target with no exclusions.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
},
},
"SuccessfulConversionExclusionsWithNoPrefixes": {
reason: "Successfully copy identical fields from the source to the target with exclusions without prefixes.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2", "k3"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
}),
},
},
"SuccessfulConversionNestedExclusionsWithNoPrefixes": {
reason: "Successfully copy identical fields from the source to the target with nested exclusions without prefixes.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2", "k3.nk1"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
// key k3 is copied without its nested element (as an empty map)
"k3": map[string]any{},
}),
},
},
"SuccessfulConversionWithListExclusion": {
reason: "Successfully copy identical fields from the source to the target with an exclusion for a root-level list.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": []map[string]any{
{
"nk3": "nv3",
},
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
}),
},
},
"SuccessfulConversionWithNestedListExclusion": {
reason: "Successfully copy identical fields from the source to the target with an exclusion for a nested list.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": []map[string]any{
{
"nk3": []map[string]any{
{
"nk4": "nv4",
},
},
},
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2[*].nk3"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
"k2": []any{map[string]any{}},
}),
},
},
"SuccessfulConversionWithDefaultExclusionPrefixes": {
reason: "Successfully copy identical fields from the source to the target with an exclusion for a nested list.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"k1": "v1",
"k2": "v2",
},
"forProvider": map[string]any{
"k1": "v1",
"k2": "v2",
},
},
"status": map[string]any{
"atProvider": map[string]any{
"k1": "v1",
"k2": "v2",
},
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2"},
pathPrefixes: DefaultPathPrefixes(),
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"k1": "v1",
},
"forProvider": map[string]any{
"k1": "v1",
},
},
"status": map[string]any{
"atProvider": map[string]any{
"k1": "v1",
},
},
}),
},
},
}
for n, tc := range tests {
t.Run(n, func(t *testing.T) {
c := NewIdentityConversionExpandPaths(tc.args.sourceVersion, tc.args.targetVersion, tc.args.pathPrefixes, tc.args.excludePaths...)
converted, err := c.(*identityConversion).ConvertManaged(tc.args.source, tc.args.target)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\nConvertManaged(source, target): -wantErr, +gotErr:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.converted, converted); diff != "" {
t.Errorf("\n%s\nConvertManaged(source, target): -wantConverted, +gotConverted:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.target.UnstructuredContent(), tc.args.target.UnstructuredContent()); diff != "" {
t.Errorf("\n%s\nConvertManaged(source, target): -wantTarget, +gotTarget:\n%s", tc.reason, diff)
}
})
}
}
func TestDefaultPathPrefixes(t *testing.T) {
// no need for a table-driven test here as all the parameter roots
// in the MR schema are asserted.
want := []string{"spec.forProvider", "spec.initProvider", "status.atProvider"}
slices.Sort(want)
got := DefaultPathPrefixes()
slices.Sort(got)
if diff := cmp.Diff(want, got); diff != "" {
t.Fatalf("DefaultPathPrefixes(): -want, +got:\n%s", diff)
}
}
func TestSingletonListConversion(t *testing.T) {
type args struct {
sourceVersion string
sourceMap map[string]any
targetVersion string
targetMap map[string]any
crdPaths []string
mode ListConversionMode
opts []SingletonListConversionOption
}
type want struct {
converted bool
err error
targetMap map[string]any
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulToEmbeddedObjectConversion": {
reason: "Successful conversion from a singleton list to an embedded object.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
crdPaths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
converted: true,
targetMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
},
},
"SuccessfulToSingletonListConversion": {
reason: "Successful conversion from an embedded object to a singleton list.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": map[string]any{
"k": "v",
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
crdPaths: []string{"o"},
mode: ToSingletonList,
},
want: want{
converted: true,
targetMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": []map[string]any{
{
"k": "v",
},
},
},
},
},
},
},
"NoCRDPath": {
reason: "No conversion when the list of specified CRD paths is empty.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": map[string]any{
"k": "v",
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
mode: ToSingletonList,
},
want: want{
converted: false,
targetMap: map[string]any{},
},
},
"SuccessfulToSingletonListConversionWithInjectedKey": {
reason: "Successful conversion from an embedded object to a singleton list with the configured key injected.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": map[string]any{
"k": "v",
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
crdPaths: []string{"o"},
mode: ToSingletonList,
opts: []SingletonListConversionOption{
WithConvertOptions(&ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"o": {
Key: "index",
Value: "0",
},
},
}),
},
},
want: want{
converted: true,
targetMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": []map[string]any{
{
"k": "v",
"index": "0",
},
},
},
},
},
},
},
}
for n, tc := range tests {
t.Run(n, func(t *testing.T) {
c := NewSingletonListConversion(tc.args.sourceVersion, tc.args.targetVersion, []string{pathInitProvider}, tc.args.crdPaths, tc.args.mode, tc.args.opts...)
sourceMap, err := roundTrip(tc.args.sourceMap)
if err != nil {
t.Fatalf("Failed to preprocess tc.args.sourceMap: %v", err)
}
targetMap, err := roundTrip(tc.args.targetMap)
if err != nil {
t.Fatalf("Failed to preprocess tc.args.targetMap: %v", err)
}
converted, err := c.(*singletonListConverter).ConvertPaved(fieldpath.Pave(sourceMap), fieldpath.Pave(targetMap))
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\nConvertPaved(source, target): -wantErr, +gotErr:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.converted, converted); diff != "" {
t.Errorf("\n%s\nConvertPaved(source, target): -wantConverted, +gotConverted:\n%s", tc.reason, diff)
}
m, err := roundTrip(tc.want.targetMap)
if err != nil {
t.Fatalf("Failed to preprocess tc.want.targetMap: %v", err)
}
if diff := cmp.Diff(m, targetMap); diff != "" {
t.Errorf("\n%s\nConvertPaved(source, target): -wantTarget, +gotTarget:\n%s", tc.reason, diff)
}
})
}
}
func getPaved(version, field string, value *string) *fieldpath.Paved {
m := map[string]any{
"apiVersion": fmt.Sprintf("mockgroup/%s", version),
@ -146,3 +551,30 @@ func getPaved(version, field string, value *string) *fieldpath.Paved {
}
return fieldpath.Pave(m)
}
type mockManaged struct {
*fake.Managed
*fieldpath.Paved
}
func (m *mockManaged) DeepCopyObject() runtime.Object {
buff, err := jsoniter.ConfigCompatibleWithStandardLibrary.Marshal(m.Paved.UnstructuredContent())
if err != nil {
panic(err)
}
var u map[string]any
if err := jsoniter.Unmarshal(buff, &u); err != nil {
panic(err)
}
return &mockManaged{
Managed: m.Managed.DeepCopyObject().(*fake.Managed),
Paved: fieldpath.Pave(u),
}
}
func newMockManaged(m map[string]any) *mockManaged {
return &mockManaged{
Managed: &fake.Managed{},
Paved: fieldpath.Pave(m),
}
}

View File

@ -0,0 +1,159 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"reflect"
"slices"
"sort"
"strings"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/pkg/errors"
)
// ListConversionMode denotes the mode of the list-object API conversion, e.g.,
// conversion of embedded objects into singleton lists.
type ListConversionMode int
const (
// ToEmbeddedObject represents a runtime conversion from a singleton list
// to an embedded object, i.e., the runtime conversions needed while
// reading from the Terraform state and updating the CRD
// (for status, late-initialization, etc.)
ToEmbeddedObject ListConversionMode = iota
// ToSingletonList represents a runtime conversion from an embedded object
// to a singleton list, i.e., the runtime conversions needed while passing
// the configuration data to the underlying Terraform layer.
ToSingletonList
)
const (
errFmtMultiItemList = "singleton list, at the field path %s, must have a length of at most 1 but it has a length of %d"
errFmtNonSlice = "value at the field path %s must be []any, not %q"
)
// String returns a string representation of the conversion mode.
func (m ListConversionMode) String() string {
switch m {
case ToSingletonList:
return "toSingletonList"
case ToEmbeddedObject:
return "toEmbeddedObject"
default:
return "unknown"
}
}
// setValue sets the value, in pv, to v at the specified path fp.
// It's implemented on top of the fieldpath library by accessing
// the parent map in fp and directly setting v as a value in the
// parent map. We don't use fieldpath.Paved.SetValue because the
// JSON value validation performed by it potentially changes types.
func setValue(pv *fieldpath.Paved, v any, fp string) error {
segments := strings.Split(fp, ".")
p := fp
var pm any = pv.UnstructuredContent()
var err error
if len(segments) > 1 {
p = strings.Join(segments[:len(segments)-1], ".")
pm, err = pv.GetValue(p)
if err != nil {
return errors.Wrapf(err, "cannot get the parent value at field path %s", p)
}
}
parent, ok := pm.(map[string]any)
if !ok {
return errors.Errorf("parent at field path %s must be a map[string]any", p)
}
parent[segments[len(segments)-1]] = v
return nil
}
type SingletonListInjectKey struct {
Key string
Value string
}
type ConvertOptions struct {
// ListInjectKeys is used to inject a key with a default value into the
// singleton list for a given path.
ListInjectKeys map[string]SingletonListInjectKey
}
// Convert performs conversion between singleton lists and embedded objects
// while passing the CRD parameters to the Terraform layer and while reading
// state from the Terraform layer at runtime. The paths where the conversion
// will be performed are specified using paths and the conversion mode (whether
// an embedded object will be converted into a singleton list or a singleton
// list will be converted into an embedded object) is determined by the mode
// parameter.
func Convert(params map[string]any, paths []string, mode ListConversionMode, opts *ConvertOptions) (map[string]any, error) { //nolint:gocyclo // easier to follow as a unit
switch mode {
case ToSingletonList:
slices.Sort(paths)
case ToEmbeddedObject:
sort.Slice(paths, func(i, j int) bool {
return paths[i] > paths[j]
})
}
pv := fieldpath.Pave(params)
for _, fp := range paths {
exp, err := pv.ExpandWildcards(fp)
if err != nil && !fieldpath.IsNotFound(err) {
return nil, errors.Wrapf(err, "cannot expand wildcards for the field path expression %s", fp)
}
for _, e := range exp {
v, err := pv.GetValue(e)
if err != nil {
return nil, errors.Wrapf(err, "cannot get the value at the field path %s with the conversion mode set to %q", e, mode)
}
switch mode {
case ToSingletonList:
if opts != nil {
// We replace 0th index with "*" to be able to stay consistent
// with the paths parameter in the keys of opts.ListInjectKeys.
if inj, ok := opts.ListInjectKeys[strings.ReplaceAll(e, "0", "*")]; ok && inj.Key != "" && inj.Value != "" {
if m, ok := v.(map[string]any); ok {
m[inj.Key] = inj.Value
}
}
}
if err := setValue(pv, []any{v}, e); err != nil {
return nil, errors.Wrapf(err, "cannot set the singleton list's value at the field path %s", e)
}
case ToEmbeddedObject:
var newVal any = nil
if v != nil {
newVal = map[string]any{}
s, ok := v.([]any)
if !ok {
// then it's not a slice
return nil, errors.Errorf(errFmtNonSlice, e, reflect.TypeOf(v))
}
if len(s) > 1 {
return nil, errors.Errorf(errFmtMultiItemList, e, len(s))
}
if len(s) > 0 {
newVal = s[0]
}
}
if opts != nil {
// We replace 0th index with "*" to be able to stay consistent
// with the paths parameter in the keys of opts.ListInjectKeys.
if inj, ok := opts.ListInjectKeys[strings.ReplaceAll(e, "0", "*")]; ok && inj.Key != "" && inj.Value != "" {
delete(newVal.(map[string]any), inj.Key)
}
}
if err := setValue(pv, newVal, e); err != nil {
return nil, errors.Wrapf(err, "cannot set the embedded object's value at the field path %s", e)
}
}
}
}
return params, nil
}

View File

@ -0,0 +1,474 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"reflect"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
jsoniter "github.com/json-iterator/go"
"github.com/pkg/errors"
)
func TestConvert(t *testing.T) {
type args struct {
params map[string]any
paths []string
mode ListConversionMode
opts *ConvertOptions
}
type want struct {
err error
params map[string]any
}
tests := map[string]struct {
reason string
args args
want want
}{
"NilParamsAndPaths": {
reason: "Conversion on a nil map should not fail.",
args: args{},
},
"EmptyPaths": {
reason: "Empty conversion on a map should be an identity function.",
args: args{
params: map[string]any{"a": "b"},
},
want: want{
params: map[string]any{"a": "b"},
},
},
"SingletonListToEmbeddedObject": {
reason: "Should successfully convert a singleton list at the root level to an embedded object.",
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
"NestedSingletonListsToEmbeddedObjectsPathsInLexicalOrder": {
reason: "Should successfully convert the parent & nested singleton lists to embedded objects. Paths specified in lexical order.",
args: args{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
},
},
"NestedSingletonListsToEmbeddedObjectsPathsInReverseLexicalOrder": {
reason: "Should successfully convert the parent & nested singleton lists to embedded objects. Paths specified in reverse-lexical order.",
args: args{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
paths: []string{"parent[*].child", "parent"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
},
},
"EmbeddedObjectToSingletonList": {
reason: "Should successfully convert an embedded object at the root level to a singleton list.",
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"l"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
},
},
"NestedEmbeddedObjectsToSingletonListInLexicalOrder": {
reason: "Should successfully convert the parent & nested embedded objects to singleton lists. Paths are specified in lexical order.",
args: args{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
},
},
"NestedEmbeddedObjectsToSingletonListInReverseLexicalOrder": {
reason: "Should successfully convert the parent & nested embedded objects to singleton lists. Paths are specified in reverse-lexical order.",
args: args{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
paths: []string{"parent[*].child", "parent"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
},
},
"FailConversionOfAMultiItemList": {
reason: `Conversion of a multi-item list in mode "ToEmbeddedObject" should fail.`,
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k1": "v1",
},
{
"k2": "v2",
},
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
err: errors.Errorf(errFmtMultiItemList, "l", 2),
},
},
"FailConversionOfNonSlice": {
reason: `Conversion of a non-slice value in mode "ToEmbeddedObject" should fail.`,
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
err: errors.Errorf(errFmtNonSlice, "l", reflect.TypeOf(map[string]any{})),
},
},
"ToSingletonListWithNonExistentPath": {
reason: `"ToSingletonList" mode conversions specifying only non-existent paths should be identity functions.`,
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"nonexistent"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
"ToEmbeddedObjectWithNonExistentPath": {
reason: `"ToEmbeddedObject" mode conversions specifying only non-existent paths should be identity functions.`,
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
paths: []string{"nonexistent"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
},
},
"WithInjectedKeySingletonListToEmbeddedObject": {
reason: "Should successfully convert a singleton list at the root level to an embedded object, dropping the injected key.",
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
"index": "0",
},
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"l": {
Key: "index",
Value: "0",
},
},
}},
want: want{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
"WithInjectedKeyEmbeddedObjectToSingletonList": {
reason: "Should successfully convert an embedded object at the root level to a singleton list, injecting the configured key.",
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"l"},
mode: ToSingletonList,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"l": {
Key: "index",
Value: "0",
},
},
},
},
want: want{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
"index": "0",
},
},
},
},
},
"WithInjectedKeyNestedEmbeddedObjectsToSingletonListInLexicalOrder": {
reason: "Should successfully convert the parent & nested embedded objects to singleton lists. Paths are specified in lexical order.",
args: args{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToSingletonList,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"parent": {
Key: "index",
Value: "0",
},
"parent[*].child": {
Key: "another",
Value: "0",
},
},
},
},
want: want{
params: map[string]any{
"parent": []map[string]any{
{
"index": "0",
"child": []map[string]any{
{
"k": "v",
"another": "0",
},
},
},
},
},
},
},
"WithInjectedKeyNestedSingletonListsToEmbeddedObjectsPathsInLexicalOrder": {
reason: "Should successfully convert the parent & nested singleton lists to embedded objects. Paths specified in lexical order.",
args: args{
params: map[string]any{
"parent": []map[string]any{
{
"index": "0",
"child": []map[string]any{
{
"k": "v",
"another": "0",
},
},
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToEmbeddedObject,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"parent": {
Key: "index",
Value: "0",
},
"parent[*].child": {
Key: "another",
Value: "0",
},
},
},
},
want: want{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
},
},
}
for n, tt := range tests {
t.Run(n, func(t *testing.T) {
params, err := roundTrip(tt.args.params)
if err != nil {
t.Fatalf("Failed to preprocess tt.args.params: %v", err)
}
wantParams, err := roundTrip(tt.want.params)
if err != nil {
t.Fatalf("Failed to preprocess tt.want.params: %v", err)
}
got, err := Convert(params, tt.args.paths, tt.args.mode, tt.args.opts)
if diff := cmp.Diff(tt.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\nConvert(tt.args.params, tt.args.paths): -wantErr, +gotErr:\n%s", tt.reason, diff)
}
if diff := cmp.Diff(wantParams, got); diff != "" {
t.Errorf("\n%s\nConvert(tt.args.params, tt.args.paths): -wantConverted, +gotConverted:\n%s", tt.reason, diff)
}
})
}
}
func TestModeString(t *testing.T) {
tests := map[string]struct {
m ListConversionMode
want string
}{
"ToSingletonList": {
m: ToSingletonList,
want: "toSingletonList",
},
"ToEmbeddedObject": {
m: ToEmbeddedObject,
want: "toEmbeddedObject",
},
"Unknown": {
m: ToSingletonList + 1,
want: "unknown",
},
}
for n, tt := range tests {
t.Run(n, func(t *testing.T) {
if diff := cmp.Diff(tt.want, tt.m.String()); diff != "" {
t.Errorf("String(): -want, +got:\n%s", diff)
}
})
}
}
func roundTrip(m map[string]any) (map[string]any, error) {
if len(m) == 0 {
return m, nil
}
buff, err := jsoniter.ConfigCompatibleWithStandardLibrary.Marshal(m)
if err != nil {
return nil, err
}
var r map[string]any
return r, jsoniter.ConfigCompatibleWithStandardLibrary.Unmarshal(buff, &r)
}

View File

@ -12,8 +12,8 @@ import (
"text/template"
"text/template/parse"
"github.com/crossplane/crossplane-runtime/pkg/errors"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
)
const (

View File

@ -8,8 +8,8 @@ import (
"context"
"testing"
"github.com/crossplane/crossplane-runtime/pkg/errors"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
)

View File

@ -15,8 +15,9 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/pkg/errors"
"github.com/crossplane/upjet/pkg/registry"
conversiontfjson "github.com/crossplane/upjet/pkg/types/conversion/tfjson"
"github.com/crossplane/upjet/v2/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/schema/traverser"
conversiontfjson "github.com/crossplane/upjet/v2/pkg/types/conversion/tfjson"
)
// ResourceConfiguratorFn is a function that implements the ResourceConfigurator
@ -155,6 +156,12 @@ type Provider struct {
// resourceConfigurators is a map holding resource configurators where key
// is Terraform resource name.
resourceConfigurators map[string]ResourceConfiguratorChain
// schemaTraversers is a chain of schema traversers to be used with
// this Provider configuration. Schema traversers can be used to inspect or
// modify the Provider configuration based on the underlying Terraform
// resource schemas.
schemaTraversers []traverser.SchemaTraverser
}
// ReferenceInjector injects cross-resource references across the resources
@ -257,19 +264,32 @@ func WithFeaturesPackage(s string) ProviderOption {
}
}
// WithMainTemplate configures the provider family main module file's path.
// This template file will be used to generate the main modules of the
// family's members.
func WithMainTemplate(template string) ProviderOption {
return func(p *Provider) {
p.MainTemplate = template
}
}
// WithSchemaTraversers configures a chain of schema traversers to be used with
// this Provider configuration. Schema traversers can be used to inspect or
// modify the Provider configuration based on the underlying Terraform
// resource schemas.
func WithSchemaTraversers(traversers ...traverser.SchemaTraverser) ProviderOption {
return func(p *Provider) {
p.schemaTraversers = traversers
}
}
// NewProvider builds and returns a new Provider from the provider's
// tfjson schema, which is generated using the Terraform CLI with:
// `terraform providers schema --json`
func NewProvider(schema []byte, prefix string, modulePath string, metadata []byte, opts ...ProviderOption) *Provider { //nolint:gocyclo
ps := tfjson.ProviderSchemas{}
if err := ps.UnmarshalJSON(schema); err != nil {
panic(err)
panic(errors.Wrap(err, "failed to unmarshal the Terraform JSON schema"))
}
if len(ps.Schemas) != 1 {
panic(fmt.Sprintf("there should be exactly 1 provider schema but there are %d", len(ps.Schemas)))
@ -351,6 +371,11 @@ func NewProvider(schema []byte, prefix string, modulePath string, metadata []byt
p.Resources[name] = DefaultResource(name, terraformResource, terraformPluginFrameworkResource, providerMetadata.Resources[name], p.DefaultResourceOptions...)
p.Resources[name].useTerraformPluginSDKClient = isTerraformPluginSDK
p.Resources[name].useTerraformPluginFrameworkClient = isPluginFrameworkResource
// traverse the Terraform resource schema to initialize the upjet Resource
// configurations
if err := TraverseSchemas(name, p.Resources[name], p.schemaTraversers...); err != nil {
panic(errors.Wrap(err, "failed to execute the Terraform schema traverser chain"))
}
}
for i, refInjector := range p.refInjectors {
if err := refInjector.InjectReferences(p.Resources); err != nil {

View File

@ -7,13 +7,16 @@ package config
import (
"context"
"fmt"
"strings"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
rschema "github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-go/tftypes"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"github.com/pkg/errors"
@ -22,8 +25,8 @@ import (
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config/conversion"
"github.com/crossplane/upjet/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/registry"
)
// A ListType is a type of list.
@ -166,13 +169,21 @@ type ExternalName struct {
// References represents reference resolver configurations for the fields of a
// given resource. Key should be the field path of the field to be referenced.
// The key is the Terraform field path of the field to be referenced.
// Example: "vpc_id", or "forwarding_rule.certificate_name" when the field
// is nested in another object.
type References map[string]Reference
// Reference represents the Crossplane options used to generate
// reference resolvers for fields
type Reference struct {
// Type is the type name of the CRD if it is in the same package or
// Type is the Go type name of the CRD if it is in the same package or
// <package-path>.<type-name> if it is in a different package.
// Deprecated: Type is deprecated in favor of TerraformName, which provides
// a more stable and less error-prone API compared to Type. TerraformName
// will automatically handle name & version configurations that will affect
// the generated cross-resource reference. This is crucial especially if the
// provider generates multiple versions for its MR APIs.
Type string
// TerraformName is the name of the Terraform resource
// which will be referenced. The supplied resource name is
@ -187,7 +198,7 @@ type Reference struct {
// <field-name>Ref or <field-name>Refs.
// Optional
RefFieldName string
// SelectorFieldName is the field name for the Selector field. Defaults to
// SelectorFieldName is the Go field name for the Selector field. Defaults to
// <field-name>Selector.
// Optional
SelectorFieldName string
@ -214,10 +225,20 @@ type LateInitializer struct {
// "block_device_mappings.ebs".
IgnoredFields []string
// ConditionalIgnoredFields are the field paths to be skipped during
// late-initialization if they are filled in spec.initProvider.
ConditionalIgnoredFields []string
// ignoredCanonicalFieldPaths are the canonical field paths to be skipped
// during late-initialization. It is populated from the `IgnoredFields`
// field, which holds Terraform paths, by converting those paths to their
// canonical form.
ignoredCanonicalFieldPaths []string
// conditionalIgnoredCanonicalFieldPaths are the canonical field paths to
// be skipped during late-initialization if they are filled in
// spec.initProvider. It is populated from the `ConditionalIgnoredFields`
// field, which holds Terraform paths, by converting those paths to their
// canonical form.
conditionalIgnoredCanonicalFieldPaths []string
}
// GetIgnoredCanonicalFields returns the ignoredCanonicalFields
@ -233,6 +254,19 @@ func (l *LateInitializer) AddIgnoredCanonicalFields(cf string) {
l.ignoredCanonicalFieldPaths = append(l.ignoredCanonicalFieldPaths, cf)
}
// GetConditionalIgnoredCanonicalFields returns the conditionalIgnoredCanonicalFieldPaths
func (l *LateInitializer) GetConditionalIgnoredCanonicalFields() []string {
return l.conditionalIgnoredCanonicalFieldPaths
}
// AddConditionalIgnoredCanonicalFields sets conditional ignored canonical fields
func (l *LateInitializer) AddConditionalIgnoredCanonicalFields(cf string) {
if l.conditionalIgnoredCanonicalFieldPaths == nil {
l.conditionalIgnoredCanonicalFieldPaths = make([]string, 0)
}
l.conditionalIgnoredCanonicalFieldPaths = append(l.conditionalIgnoredCanonicalFieldPaths, cf)
}
// GetFieldPaths returns the fieldPaths map for Sensitive
func (s *Sensitive) GetFieldPaths() map[string]string {
return s.fieldPaths
@ -392,9 +426,24 @@ type Resource struct {
// be `ec2.aws.crossplane.io`
ShortGroup string
// Version is the version CRD will have.
// Version is the API version being generated for the corresponding CRD.
Version string
// PreviousVersions is the list of API versions previously generated for this
// resource for multi-versioned managed resources. upjet will attempt to load
// the type definitions from these previous versions if configured.
PreviousVersions []string
// ControllerReconcileVersion is the CRD API version the associated
// controller will watch & reconcile. If left unspecified, it defaults to
// the value of Version, so by default controllers reconcile the currently
// generated API version of their associated CRs. Set this parameter to
// have a controller reconcile an older API version of the generated CRD
// instead of the one being generated.
ControllerReconcileVersion string
// Kind is the kind of the CRD.
Kind string
@ -435,6 +484,29 @@ type Resource struct {
// SchemaElementOption for configuring options for schema elements.
SchemaElementOptions SchemaElementOptions
// crdStorageVersion is the CRD storage API version.
// Use Resource.CRDStorageVersion to read the configured storage version;
// it defaults to the version currently being generated. This field is not
// exported to enforce that defaulting, which is needed for
// backwards compatibility.
crdStorageVersion string
// crdHubVersion is the conversion hub API version for the generated CRD.
// Use Resource.CRDHubVersion to read the configured hub version; it
// defaults to the version currently being generated. This field is not
// exported to enforce that defaulting, which is needed for
// backwards compatibility.
crdHubVersion string
// listConversionPaths maps the Terraform field paths of embedded objects
// that need to be converted into singleton lists (lists of at most one
// element) at runtime to the corresponding CRD paths. Such fields are
// lists in the Terraform schema, but upjet generates them as nested
// objects; at runtime, they are converted back into lists before the
// configuration is passed to the Terraform stack, and back into embedded
// objects after the state is read from the Terraform stack.
listConversionPaths map[string]string
// TerraformConfigurationInjector allows a managed resource to inject
// configuration values in the Terraform configuration map obtained by
// deserializing its `spec.forProvider` value. Managed resources can
@ -447,14 +519,29 @@ type Resource struct {
// Terraform InstanceDiff is computed during reconciliation.
TerraformCustomDiff CustomDiff
// TerraformPluginFrameworkIsStateEmptyFn allows customizing the logic
// for determining whether a Terraform Plugin Framework state value should
// be considered empty/nil for resource existence checks. If not set, the
// default behavior uses tfStateValue.IsNull().
TerraformPluginFrameworkIsStateEmptyFn TerraformPluginFrameworkIsStateEmptyFn
// ServerSideApplyMergeStrategies configures the server-side apply merge
// strategy for the fields at the given map keys. The map key is
// a Terraform configuration argument path such as a.b.c, without any
// index notation (i.e., array/map components do not need indices).
ServerSideApplyMergeStrategies ServerSideApplyMergeStrategies
// Conversions is the list of CRD API conversion functions to be invoked
// in-chain by the installed conversion webhook for the generated CRD.
// The conversion.Conversion functions registered here are responsible for
// converting between the hub & spoke CRD API versions.
Conversions []conversion.Conversion
// TerraformConversions is the list of conversions to be invoked when passing
// data from the Crossplane layer to the Terraform layer and when reading
// data (state) from the Terraform layer to be used in the Crossplane layer.
TerraformConversions []TerraformConversion
// useTerraformPluginSDKClient indicates that a plugin SDK external client should
// be generated instead of the Terraform CLI-forking client.
useTerraformPluginSDKClient bool
@ -488,7 +575,56 @@ type Resource struct {
// conflict. By convention, also used in upjet, the field name is preceded by
// the value of the generated Kind, for example:
// "TagParameters": "ClusterTagParameters"
// Deprecated: OverrideFieldNames has been deprecated in favor of loading
// the already existing type names from the older versions of the MR APIs
// via the PreviousVersions API.
OverrideFieldNames map[string]string
// requiredFields are the fields that will be marked as required in the
// generated CRD schema, although they are not required in the TF schema.
requiredFields []string
// UpdateLoopPrevention is a mechanism to prevent infinite reconciliation
// loops. This is especially useful in cases where external services
// silently modify resource data without notifying the management layer
// (e.g., sanitized XML fields).
UpdateLoopPrevention UpdateLoopPrevention
}
// UpdateLoopPrevention is an interface that defines the behavior to prevent
// update loops. Implementations of this interface are responsible for analyzing
// diffs and determining whether an update should be blocked or allowed.
type UpdateLoopPrevention interface {
// UpdateLoopPreventionFunc analyzes a diff and decides whether the update
// should be blocked. It returns a result containing a reason for blocking
// the update if a loop is detected, or nil if the update can proceed.
//
// Parameters:
// - diff: The diff object representing changes between the desired and
// current state.
// - mg: The managed resource that is being reconciled.
//
// Returns:
// - *UpdateLoopPreventResult: Contains the reason for blocking the update
// if a loop is detected.
// - error: An error if there are issues analyzing the diff
// (e.g., invalid data).
UpdateLoopPreventionFunc(diff *terraform.InstanceDiff, mg xpresource.Managed) (*UpdateLoopPreventResult, error)
}
// UpdateLoopPreventResult provides the result of an update loop prevention
// check. If a loop is detected, it includes a reason explaining why the update
// was blocked.
type UpdateLoopPreventResult struct {
// Reason provides a human-readable explanation of why the update was
// blocked. This message can be displayed to the user or logged for
// debugging purposes.
Reason string
}
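A minimal sketch of how an UpdateLoopPrevention implementation might decide to block an update, using simplified stand-in types for the Terraform diff (the real `UpdateLoopPreventionFunc` receives a `*terraform.InstanceDiff` and an `xpresource.Managed`; `instanceDiff` and `sanitizedFieldsPrevention` below are hypothetical):

```go
package main

import "fmt"

// instanceDiff is a simplified stand-in for *terraform.InstanceDiff:
// a map from attribute path to its new value.
type instanceDiff map[string]string

// updateLoopPreventResult mirrors UpdateLoopPreventResult.
type updateLoopPreventResult struct {
	Reason string
}

// sanitizedFieldsPrevention blocks updates whose diff touches only
// fields the external service is known to rewrite silently
// (e.g. sanitized XML fields).
type sanitizedFieldsPrevention struct {
	ignoredFields map[string]bool
}

// check mirrors the role of UpdateLoopPreventionFunc: it returns a
// non-nil result when every changed attribute is in the ignored set,
// i.e. applying the update would only produce another spurious diff.
func (p sanitizedFieldsPrevention) check(diff instanceDiff) *updateLoopPreventResult {
	if len(diff) == 0 {
		return nil
	}
	for attr := range diff {
		if !p.ignoredFields[attr] {
			return nil // a real change exists; allow the update
		}
	}
	return &updateLoopPreventResult{
		Reason: "diff contains only server-sanitized fields; blocking update to avoid a reconcile loop",
	}
}

func main() {
	p := sanitizedFieldsPrevention{ignoredFields: map[string]bool{"policy_xml": true}}
	fmt.Println(p.check(instanceDiff{"policy_xml": "<a/>"}) != nil) // true: blocked
	fmt.Println(p.check(instanceDiff{"name": "new"}) != nil)        // false: allowed
}
```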
// RequiredFields returns a slice of the fieldpaths marked as required.
func (r *Resource) RequiredFields() []string {
return r.requiredFields
}
// ShouldUseTerraformPluginSDKClient returns whether to generate an SDKv2-based
@ -514,6 +650,13 @@ type CustomDiff func(diff *terraform.InstanceDiff, state *terraform.InstanceStat
// the JSON tags and tfMap is obtained by using the TF tags.
type ConfigurationInjector func(jsonMap map[string]any, tfMap map[string]any) error
// TerraformPluginFrameworkIsStateEmptyFn is a function that determines whether
// a Terraform Plugin Framework state value should be considered empty/nil for the
// purpose of determining resource existence. This allows providers to implement
// custom logic to handle cases where the standard IsNull() check is insufficient,
// such as when provider interceptors add fields like region to all state values.
type TerraformPluginFrameworkIsStateEmptyFn func(ctx context.Context, tfStateValue tftypes.Value, resourceSchema rschema.Schema) (bool, error)
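A sketch of the emptiness decision such a function might make, with the state modeled as a plain map instead of a `tftypes.Value` (the `injected` set and attribute names are hypothetical; the point is that interceptor-added fields alone must not make the state count as existing):

```go
package main

import "fmt"

// isStateEmpty is a simplified stand-in for
// TerraformPluginFrameworkIsStateEmptyFn: the real function receives a
// tftypes.Value and the resource schema; here the state is a map of
// top-level attribute values, with nil meaning a null value. injected
// lists attributes that provider interceptors add to every state value
// (e.g. "region"), which a plain IsNull() check would misread as a
// sign the resource exists.
func isStateEmpty(state map[string]any, injected map[string]bool) bool {
	for attr, v := range state {
		if injected[attr] {
			continue // ignore interceptor-added fields
		}
		if v != nil {
			return false // a real attribute is set: the resource exists
		}
	}
	return true
}

func main() {
	injected := map[string]bool{"region": true}
	fmt.Println(isStateEmpty(map[string]any{"region": "us-east-1"}, injected))               // true
	fmt.Println(isStateEmpty(map[string]any{"region": "us-east-1", "id": "i-1"}, injected)) // false
}
```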
// SchemaElementOptions represents schema element options for the
// schema elements of a Resource.
type SchemaElementOptions map[string]*SchemaElementOption
@ -532,7 +675,103 @@ func (m SchemaElementOptions) AddToObservation(el string) bool {
return m[el] != nil && m[el].AddToObservation
}
// TFListConversionPaths returns the Resource's runtime Terraform list
// conversion paths in fieldpath syntax.
func (r *Resource) TFListConversionPaths() []string {
l := make([]string, 0, len(r.listConversionPaths))
for k := range r.listConversionPaths {
l = append(l, k)
}
return l
}
// CRDListConversionPaths returns the Resource's runtime CRD list
// conversion paths in fieldpath syntax.
func (r *Resource) CRDListConversionPaths() []string {
l := make([]string, 0, len(r.listConversionPaths))
for _, v := range r.listConversionPaths {
l = append(l, v)
}
return l
}
// CRDStorageVersion returns the CRD storage version if configured. If not,
// returns the Version being generated as the default value.
func (r *Resource) CRDStorageVersion() string {
if r.crdStorageVersion != "" {
return r.crdStorageVersion
}
return r.Version
}
// SetCRDStorageVersion configures the CRD storage version for a Resource.
// If unset, the default storage version is the current Version
// being generated.
func (r *Resource) SetCRDStorageVersion(v string) {
r.crdStorageVersion = v
}
// CRDHubVersion returns the CRD hub version if configured. If not,
// returns the Version being generated as the default value.
func (r *Resource) CRDHubVersion() string {
if r.crdHubVersion != "" {
return r.crdHubVersion
}
return r.Version
}
// SetCRDHubVersion configures the CRD API conversion hub version
// for a Resource.
// If unset, the default hub version is the current Version
// being generated.
func (r *Resource) SetCRDHubVersion(v string) {
r.crdHubVersion = v
}
// AddSingletonListConversion configures the list at the specified Terraform
// field path and the specified CRD field path as an embedded object.
// crdPath is the field path expression for the CRD schema and tfPath is
// the field path expression for the Terraform schema corresponding to the
// singleton list to be converted to an embedded object.
// At runtime, upjet will convert such objects back and forth
// from/to singleton lists while communicating with the Terraform stack.
// The specified fieldpath expression must be a wildcard expression such as
// `conditions[*]` or a 0-indexed expression such as `conditions[0]`. Other
// index values are not allowed as this function deals with singleton lists.
func (r *Resource) AddSingletonListConversion(tfPath, crdPath string) {
	// SchemaElementOptions.SetEmbeddedObject does not expect index
	// segments, and because we are dealing with singleton lists here, we
	// only expect wildcards or the zero index.
nPath := strings.ReplaceAll(tfPath, "[*]", "")
nPath = strings.ReplaceAll(nPath, "[0]", "")
r.SchemaElementOptions.SetEmbeddedObject(nPath)
r.listConversionPaths[tfPath] = crdPath
}
// RemoveSingletonListConversion removes the singleton list conversion
// for the specified Terraform configuration path. Also unsets the path's
// embedding mode. The specified fieldpath expression must be a Terraform
// field path with or without the wildcard segments. Returns true if
// the path has already been registered for singleton list conversion.
func (r *Resource) RemoveSingletonListConversion(tfPath string) bool {
nPath := strings.ReplaceAll(tfPath, "[*]", "")
nPath = strings.ReplaceAll(nPath, "[0]", "")
for p := range r.listConversionPaths {
n := strings.ReplaceAll(p, "[*]", "")
n = strings.ReplaceAll(n, "[0]", "")
if n == nPath {
delete(r.listConversionPaths, p)
if r.SchemaElementOptions[n] != nil {
r.SchemaElementOptions[n].EmbeddedObject = false
}
return true
}
}
return false
}
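The index stripping both functions rely on can be seen in isolation. This standalone sketch mirrors the `strings.ReplaceAll` normalization above, which maps the wildcard and zero-index forms of a singleton-list path onto the same `SchemaElementOptions` key:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize mirrors how AddSingletonListConversion and
// RemoveSingletonListConversion strip index segments before using a
// field path as a SchemaElementOptions key: only "[*]" and "[0]" are
// handled, because the conversion deals with singleton lists.
func normalize(path string) string {
	p := strings.ReplaceAll(path, "[*]", "")
	return strings.ReplaceAll(p, "[0]", "")
}

func main() {
	// Both the wildcard and the zero-indexed expression normalize to
	// the same key, which is why RemoveSingletonListConversion matches
	// paths registered in either form.
	fmt.Println(normalize("parent[*].singleton_list")) // parent.singleton_list
	fmt.Println(normalize("parent[0].singleton_list")) // parent.singleton_list
}
```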
// SetEmbeddedObject sets the EmbeddedObject for the specified key.
// The key is a Terraform field path without the wildcard segments.
func (m SchemaElementOptions) SetEmbeddedObject(el string) {
if m[el] == nil {
m[el] = &SchemaElementOption{}

View File

@ -9,11 +9,11 @@ import (
"fmt"
"testing"
"github.com/crossplane/crossplane-runtime/pkg/errors"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"sigs.k8s.io/controller-runtime/pkg/client"
)
@ -24,7 +24,7 @@ const (
provider = "ACoolProvider"
)
func TestTagger_Initialize(t *testing.T) {
func TestTaggerInitialize(t *testing.T) {
errBoom := errors.New("boom")
type args struct {
@ -112,3 +112,187 @@ func TestSetExternalTagsWithPaved(t *testing.T) {
})
}
}
func TestAddSingletonListConversion(t *testing.T) {
type args struct {
r func() *Resource
tfPath string
crdPath string
}
type want struct {
r func() *Resource
}
cases := map[string]struct {
reason string
args
want
}{
"AddNonWildcardTFPath": {
reason: "A non-wildcard TF path of a singleton list should successfully be configured to be converted into an embedded object.",
args: args{
tfPath: "singleton_list",
crdPath: "singletonList",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("singleton_list", "singletonList")
return r
},
},
want: want{
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.SchemaElementOptions = SchemaElementOptions{}
r.SchemaElementOptions["singleton_list"] = &SchemaElementOption{
EmbeddedObject: true,
}
r.listConversionPaths["singleton_list"] = "singletonList"
return r
},
},
},
"AddWildcardTFPath": {
reason: "A wildcard TF path of a singleton list should successfully be configured to be converted into an embedded object.",
args: args{
tfPath: "parent[*].singleton_list",
crdPath: "parent[*].singletonList",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
want: want{
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.SchemaElementOptions = SchemaElementOptions{}
r.SchemaElementOptions["parent.singleton_list"] = &SchemaElementOption{
EmbeddedObject: true,
}
r.listConversionPaths["parent[*].singleton_list"] = "parent[*].singletonList"
return r
},
},
},
"AddIndexedTFPath": {
reason: "An indexed TF path of a singleton list should successfully be configured to be converted into an embedded object.",
args: args{
tfPath: "parent[0].singleton_list",
crdPath: "parent[0].singletonList",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[0].singleton_list", "parent[0].singletonList")
return r
},
},
want: want{
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.SchemaElementOptions = SchemaElementOptions{}
r.SchemaElementOptions["parent.singleton_list"] = &SchemaElementOption{
EmbeddedObject: true,
}
r.listConversionPaths["parent[0].singleton_list"] = "parent[0].singletonList"
return r
},
},
},
}
for n, tc := range cases {
t.Run(n, func(t *testing.T) {
r := tc.args.r()
r.AddSingletonListConversion(tc.args.tfPath, tc.args.crdPath)
wantR := tc.want.r()
if diff := cmp.Diff(wantR.listConversionPaths, r.listConversionPaths); diff != "" {
t.Errorf("%s\nAddSingletonListConversion(tfPath): -wantConversionPaths, +gotConversionPaths: \n%s", tc.reason, diff)
}
if diff := cmp.Diff(wantR.SchemaElementOptions, r.SchemaElementOptions); diff != "" {
t.Errorf("%s\nAddSingletonListConversion(tfPath): -wantSchemaElementOptions, +gotSchemaElementOptions: \n%s", tc.reason, diff)
}
})
}
}
func TestRemoveSingletonListConversion(t *testing.T) {
type args struct {
r func() *Resource
tfPath string
}
type want struct {
removed bool
r func() *Resource
}
cases := map[string]struct {
reason string
args
want
}{
"RemoveWildcardListConversion": {
reason: "An existing wildcard list conversion can successfully be removed.",
args: args{
tfPath: "parent[*].singleton_list",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
want: want{
removed: true,
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
return r
},
},
},
"RemoveIndexedListConversion": {
reason: "An existing indexed list conversion can successfully be removed.",
args: args{
tfPath: "parent[0].singleton_list",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[0].singleton_list", "parent[0].singletonList")
return r
},
},
want: want{
removed: true,
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
return r
},
},
},
"NonExistingListConversion": {
reason: "A list conversion path that does not exist cannot be removed.",
args: args{
tfPath: "non-existent",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
want: want{
removed: false,
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
},
}
for n, tc := range cases {
t.Run(n, func(t *testing.T) {
r := tc.args.r()
got := r.RemoveSingletonListConversion(tc.args.tfPath)
if diff := cmp.Diff(tc.want.removed, got); diff != "" {
t.Errorf("%s\nRemoveSingletonListConversion(tfPath): -wantRemoved, +gotRemoved: \n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.r().listConversionPaths, r.listConversionPaths); diff != "" {
t.Errorf("%s\nRemoveSingletonListConversion(tfPath): -wantConversionPaths, +gotConversionPaths: \n%s", tc.reason, diff)
}
})
}
}

View File

@ -0,0 +1,77 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/pkg/errors"
"github.com/crossplane/upjet/v2/pkg/schema/traverser"
)
var _ ResourceSetter = &SingletonListEmbedder{}
// ResourceSetter allows the context Resource to be set for a traverser.
type ResourceSetter interface {
SetResource(r *Resource)
}
// ResourceSchema represents a provider's resource schema.
type ResourceSchema map[string]*Resource
// TraverseTFSchemas traverses the Terraform schemas of all the resources of
// the Provider `p` using the specified visitors. Reports any errors
// encountered.
func (s ResourceSchema) TraverseTFSchemas(visitors ...traverser.SchemaTraverser) error {
for name, cfg := range s {
if err := TraverseSchemas(name, cfg, visitors...); err != nil {
return errors.Wrapf(err, "failed to traverse the schema of the Terraform resource with name %q", name)
}
}
return nil
}
// TraverseSchemas visits the specified schema belonging to the Terraform
// resource with the given name and given upjet resource configuration using
// the specified visitors. If any visitors report an error, traversal is
// stopped and the error is reported to the caller.
func TraverseSchemas(tfName string, r *Resource, visitors ...traverser.SchemaTraverser) error {
// set the upjet Resource configuration as context for the visitors that
// satisfy the ResourceSetter interface.
for _, v := range visitors {
if rs, ok := v.(ResourceSetter); ok {
rs.SetResource(r)
}
}
return traverser.Traverse(tfName, r.TerraformResource, visitors...)
}
type resourceContext struct {
r *Resource
}
func (rc *resourceContext) SetResource(r *Resource) {
rc.r = r
}
// SingletonListEmbedder is a schema traverser for embedding singleton lists
// in the Terraform schema as objects.
type SingletonListEmbedder struct {
resourceContext
traverser.NoopTraverser
}
func (l *SingletonListEmbedder) VisitResource(r *traverser.ResourceNode) error {
// this visitor only works on sets and lists with the MaxItems constraint
// of 1.
if r.Schema.Type != schema.TypeList && r.Schema.Type != schema.TypeSet {
return nil
}
if r.Schema.MaxItems != 1 {
return nil
}
l.r.AddSingletonListConversion(traverser.FieldPathWithWildcard(r.TFPath), traverser.FieldPathWithWildcard(r.CRDPath))
return nil
}
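The filter VisitResource applies can be sketched standalone. This hypothetical `shouldEmbed` helper, with a stand-in for the schema node the traverser visits, reproduces the two early returns above:

```go
package main

import "fmt"

// schemaNode is a simplified stand-in for the Terraform schema node the
// traverser visits: only the fields the embedder inspects.
type schemaNode struct {
	isListOrSet bool // schema.TypeList or schema.TypeSet
	maxItems    int  // the MaxItems constraint
}

// shouldEmbed mirrors SingletonListEmbedder.VisitResource's filter:
// only lists and sets with a MaxItems constraint of exactly 1 are
// registered for conversion to embedded objects.
func shouldEmbed(n schemaNode) bool {
	return n.isListOrSet && n.maxItems == 1
}

func main() {
	fmt.Println(shouldEmbed(schemaNode{isListOrSet: true, maxItems: 1}))  // true: singleton list
	fmt.Println(shouldEmbed(schemaNode{isListOrSet: true, maxItems: 2}))  // false: multi-item list
	fmt.Println(shouldEmbed(schemaNode{isListOrSet: false, maxItems: 1})) // false: not a list or set
}
```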

View File

@ -0,0 +1,173 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)
func TestSingletonListEmbedder(t *testing.T) {
type args struct {
resource *schema.Resource
name string
}
type want struct {
err error
schemaOpts SchemaElementOptions
conversionPaths map[string]string
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulRootLevelSingletonListEmbedding": {
reason: "Successfully embed a root-level singleton list in the resource schema.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"singleton_list": {
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{
"singleton_list": {
EmbeddedObject: true,
},
},
conversionPaths: map[string]string{
"singleton_list": "singletonList",
},
},
},
"NoEmbeddingForMultiItemList": {
reason: "Do not embed a list with a MaxItems constraint greater than 1.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"multiitem_list": {
Type: schema.TypeList,
MaxItems: 2,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{},
conversionPaths: map[string]string{},
},
},
"NoEmbeddingForNonList": {
reason: "Do not embed a non-list schema.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"invalid": {
Type: schema.TypeInvalid,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{},
conversionPaths: map[string]string{},
},
},
"SuccessfulNestedSingletonListEmbedding": {
reason: "Successfully embed a nested singleton list in the resource schema.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"parent_list": {
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"child_list": {
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{
"parent_list": {
EmbeddedObject: true,
},
"parent_list.child_list": {
EmbeddedObject: true,
},
},
conversionPaths: map[string]string{
"parent_list": "parentList",
"parent_list[*].child_list": "parentList[*].childList",
},
},
},
}
for n, tt := range tests {
t.Run(n, func(t *testing.T) {
e := &SingletonListEmbedder{}
r := DefaultResource(tt.args.name, tt.args.resource, nil, nil)
s := ResourceSchema{
tt.args.name: r,
}
err := s.TraverseTFSchemas(e)
if diff := cmp.Diff(tt.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\ntraverseSchemas(name, schema, ...): -wantErr, +gotErr:\n%s", tt.reason, diff)
}
if diff := cmp.Diff(tt.want.schemaOpts, r.SchemaElementOptions); diff != "" {
t.Errorf("\n%s\ntraverseSchemas(name, schema, ...): -wantOptions, +gotOptions:\n%s", tt.reason, diff)
}
if diff := cmp.Diff(tt.want.conversionPaths, r.listConversionPaths); diff != "" {
t.Errorf("\n%s\ntraverseSchemas(name, schema, ...): -wantPaths, +gotPaths:\n%s", tt.reason, diff)
}
})
}
}

View File

@ -0,0 +1,72 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"github.com/pkg/errors"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
)
// Mode denotes the mode of the runtime Terraform conversion, e.g.,
// conversion from Crossplane parameters to Terraform arguments, or
// conversion from Terraform state to Crossplane state.
type Mode int
const (
	// ToTerraform is the conversion mode from the Crossplane layer to
	// the Terraform layer.
	ToTerraform Mode = iota
	// FromTerraform is the conversion mode from the Terraform layer to
	// the Crossplane layer.
	FromTerraform
)
// String returns a string representation of the conversion mode.
func (m Mode) String() string {
switch m {
case ToTerraform:
return "toTerraform"
case FromTerraform:
return "fromTerraform"
default:
return "unknown"
}
}
// TerraformConversion is a runtime conversion applied on the parameter
// maps exchanged between the Crossplane and Terraform layers.
type TerraformConversion interface {
Convert(params map[string]any, r *Resource, mode Mode) (map[string]any, error)
}
// ApplyTFConversions applies the configured Terraform conversions on the
// specified params map in the given mode, i.e., from Crossplane layer to the
// Terraform layer or vice versa.
func (r *Resource) ApplyTFConversions(params map[string]any, mode Mode) (map[string]any, error) {
var err error
for _, c := range r.TerraformConversions {
params, err = c.Convert(params, r, mode)
if err != nil {
return nil, err
}
}
return params, nil
}
type singletonListConversion struct{}
// NewTFSingletonConversion initializes a new TerraformConversion to convert
// between singleton lists and embedded objects in the exchanged data
// at runtime between the Crossplane & Terraform layers.
func NewTFSingletonConversion() TerraformConversion {
return singletonListConversion{}
}
func (s singletonListConversion) Convert(params map[string]any, r *Resource, mode Mode) (map[string]any, error) {
var err error
var m map[string]any
switch mode {
case FromTerraform:
m, err = conversion.Convert(params, r.TFListConversionPaths(), conversion.ToEmbeddedObject, nil)
case ToTerraform:
m, err = conversion.Convert(params, r.TFListConversionPaths(), conversion.ToSingletonList, nil)
}
return m, errors.Wrapf(err, "failed to convert between Crossplane and Terraform layers in mode %q", mode)
}
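The round trip this conversion performs can be sketched on a single key. The hypothetical `toEmbedded` and `toList` helpers below mirror, for one top-level attribute, what `conversion.Convert` does across all registered paths: `FromTerraform` unwraps a singleton list into an embedded object, and `ToTerraform` wraps it back:

```go
package main

import "fmt"

// toEmbedded sketches the FromTerraform direction: a one-element list
// under key is replaced by its sole element.
func toEmbedded(params map[string]any, key string) map[string]any {
	if l, ok := params[key].([]any); ok && len(l) == 1 {
		params[key] = l[0]
	}
	return params
}

// toList sketches the ToTerraform direction: a non-list value under
// key is wrapped back into a singleton list.
func toList(params map[string]any, key string) map[string]any {
	if _, isList := params[key].([]any); !isList {
		if v, ok := params[key]; ok {
			params[key] = []any{v}
		}
	}
	return params
}

func main() {
	tf := map[string]any{"singleton_list": []any{map[string]any{"element": "x"}}}
	crd := toEmbedded(tf, "singleton_list") // FromTerraform: unwrap
	fmt.Println(crd["singleton_list"])
	back := toList(crd, "singleton_list") // ToTerraform: re-wrap
	fmt.Println(len(back["singleton_list"].([]any)))
}
```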

View File

@ -7,8 +7,8 @@ package controller
import (
"context"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
@ -16,9 +16,9 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
ctrl "sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/crossplane/upjet/pkg/controller/handler"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
const (
@ -27,6 +27,13 @@ const (
errReconcileRequestFmt = "cannot request the reconciliation of the resource %s/%s after an async %s"
)
// crossplane-runtime error constants
const (
errXPReconcileCreate = "create failed"
errXPReconcileUpdate = "update failed"
errXPReconcileDelete = "delete failed"
)
const (
rateLimiterCallback = "asyncCallback"
)
@ -106,12 +113,11 @@ type APICallbacks struct {
enableStatusUpdates bool
}
func (ac *APICallbacks) callbackFn(name, op string) terraform.CallbackFn {
func (ac *APICallbacks) callbackFn(nn types.NamespacedName, op string) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
nn := types.NamespacedName{Name: name}
tr := ac.newTerraformed()
if kErr := ac.kube.Get(ctx, nn, tr); kErr != nil {
return errors.Wrapf(kErr, errGetFmt, tr.GetObjectKind().GroupVersionKind().String(), name, op)
return errors.Wrapf(kErr, errGetFmt, tr.GetObjectKind().GroupVersionKind().String(), nn, op)
}
// For the no-fork architecture, we will need to be able to report
// reconciliation errors. The proper place is the `Synced`
@ -119,22 +125,36 @@ func (ac *APICallbacks) callbackFn(name, op string) terraform.CallbackFn {
// to do so. So we keep the `LastAsyncOperation` condition.
// TODO: move this to the `Synced` condition.
tr.SetConditions(resource.LastAsyncOperationCondition(err))
if err != nil {
wrapMsg := ""
switch op {
case "create":
wrapMsg = errXPReconcileCreate
case "update":
wrapMsg = errXPReconcileUpdate
case "destroy":
wrapMsg = errXPReconcileDelete
}
tr.SetConditions(xpv1.ReconcileError(errors.Wrap(err, wrapMsg)))
} else {
tr.SetConditions(xpv1.ReconcileSuccess())
}
if ac.enableStatusUpdates {
tr.SetConditions(resource.AsyncOperationFinishedCondition())
}
uErr := errors.Wrapf(ac.kube.Status().Update(ctx, tr), errUpdateStatusFmt, tr.GetObjectKind().GroupVersionKind().String(), name, op)
uErr := errors.Wrapf(ac.kube.Status().Update(ctx, tr), errUpdateStatusFmt, tr.GetObjectKind().GroupVersionKind().String(), nn, op)
if ac.eventHandler != nil {
rateLimiter := handler.NoRateLimiter
switch {
case err != nil:
rateLimiter = rateLimiterCallback
default:
ac.eventHandler.Forget(rateLimiterCallback, name)
ac.eventHandler.Forget(rateLimiterCallback, nn)
}
// TODO: use the errors.Join from
// github.com/crossplane/crossplane-runtime.
if ok := ac.eventHandler.RequestReconcile(rateLimiter, name, nil); !ok {
return errors.Errorf(errReconcileRequestFmt, tr.GetObjectKind().GroupVersionKind().String(), name, op)
if ok := ac.eventHandler.RequestReconcile(rateLimiter, nn, nil); !ok {
return errors.Errorf(errReconcileRequestFmt, tr.GetObjectKind().GroupVersionKind().String(), nn, op)
}
}
return uErr
@ -142,7 +162,7 @@ func (ac *APICallbacks) callbackFn(name, op string) terraform.CallbackFn {
}
// Create makes sure the error is saved in async operation condition.
func (ac *APICallbacks) Create(name string) terraform.CallbackFn {
func (ac *APICallbacks) Create(name types.NamespacedName) terraform.CallbackFn {
// request will be requeued although the managed reconciler already
// requeues with exponential back-off during the creation phase
// because the upjet external client returns ResourceExists &
@ -154,12 +174,12 @@ func (ac *APICallbacks) Create(name string) terraform.CallbackFn {
}
// Update makes sure the error is saved in async operation condition.
func (ac *APICallbacks) Update(name string) terraform.CallbackFn {
func (ac *APICallbacks) Update(name types.NamespacedName) terraform.CallbackFn {
return ac.callbackFn(name, "update")
}
// Destroy makes sure the error is saved in async operation condition.
func (ac *APICallbacks) Destroy(name string) terraform.CallbackFn {
func (ac *APICallbacks) Destroy(name types.NamespacedName) terraform.CallbackFn {
// request will be requeued although the managed reconciler requeues
// with exponential back-off during the deletion phase because
// during the async deletion operation, external client's

View File

@ -8,17 +8,18 @@ import (
"context"
"testing"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/pkg/test"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
ctrl "sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/resource/fake"
tjerrors "github.com/crossplane/upjet/pkg/terraform/errors"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
tjerrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
func TestAPICallbacksCreate(t *testing.T) {
@ -88,14 +89,14 @@ func TestAPICallbacksCreate(t *testing.T) {
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/name", "create"),
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=//name", "create"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Create("name")(tc.args.err, context.TODO())
err := e.Create(types.NamespacedName{Name: "name"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n%s", tc.reason, diff)
}
@ -170,14 +171,14 @@ func TestAPICallbacksUpdate(t *testing.T) {
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/name", "update"),
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=//name", "update"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Update("name")(tc.args.err, context.TODO())
err := e.Update(types.NamespacedName{Name: "name"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n%s", tc.reason, diff)
}
@ -252,14 +253,290 @@ func TestAPICallbacks_Destroy(t *testing.T) {
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/name", "destroy"),
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=//name", "destroy"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Destroy("name")(tc.args.err, context.TODO())
err := e.Destroy(types.NamespacedName{Name: "name"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDestroy(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacksCreate_namespaced(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"CreateOperationFailed": {
reason: "It should update the condition with error if async apply failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewApplyFailed(nil)), got); diff != "" {
t.Errorf("\nCreate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
err: tjerrors.NewApplyFailed(nil),
},
},
"CreateOperationSucceeded": {
reason: "It should update the condition with success if the apply operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nCreate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/foo-ns/name", "create"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Create(types.NamespacedName{Name: "name", Namespace: "foo-ns"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacksUpdate_namespaced(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"UpdateOperationFailed": {
reason: "It should update the condition with error if async apply failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewApplyFailed(nil)), got); diff != "" {
t.Errorf("\nUpdate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
err: tjerrors.NewApplyFailed(nil),
},
},
"ApplyOperationSucceeded": {
reason: "It should update the condition with success if the apply operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nUpdate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/foo-ns/name", "update"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Update(types.NamespacedName{Name: "name", Namespace: "foo-ns"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacks_Destroy_namespaced(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"DestroyOperationFailed": {
reason: "It should update the condition with error if async destroy failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewDestroyFailed(nil)), got); diff != "" {
t.Errorf("\nDestroy(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
err: tjerrors.NewDestroyFailed(nil),
},
},
"DestroyOperationSucceeded": {
reason: "It should update the condition with success if the destroy operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nDestroy(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/foo-ns/name", "destroy"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Destroy(types.NamespacedName{Name: "name", Namespace: "foo-ns"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDestroy(...): -want error, +got error:\n%s", tc.reason, diff)
}

})
}
}

@@ -5,29 +5,58 @@
package conversion
import (
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
"github.com/crossplane/upjet/pkg/config/conversion"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/resource"
)
const (
errFmtPrioritizedManagedConversion = "cannot apply the PrioritizedManagedConversion for the %q object"
errFmtPavedConversion = "cannot apply the PavedConversion for the %q object"
errFmtManagedConversion = "cannot apply the ManagedConversion for the %q object"
errFmtGetGVK = "cannot get the GVK for the %s object of type %T"
)
// RoundTrip round-trips from `src` to `dst` via an unstructured map[string]any
// representation of the `src` object and applies the registered webhook
// conversion functions of this registry.
func (r *registry) RoundTrip(dst, src resource.Terraformed) error { //nolint:gocyclo // considered breaking this according to the converters and I did not like it
if dst.GetObjectKind().GroupVersionKind().Version == "" {
gvk, err := apiutil.GVKForObject(dst, r.scheme)
if err != nil && !runtime.IsNotRegisteredError(err) {
return errors.Wrapf(err, errFmtGetGVK, "destination", dst)
}
if err == nil {
dst.GetObjectKind().SetGroupVersionKind(gvk)
}
}
if src.GetObjectKind().GroupVersionKind().Version == "" {
gvk, err := apiutil.GVKForObject(src, r.scheme)
if err != nil && !runtime.IsNotRegisteredError(err) {
return errors.Wrapf(err, errFmtGetGVK, "source", src)
}
if err == nil {
src.GetObjectKind().SetGroupVersionKind(gvk)
}
}
// first PrioritizedManagedConversions are run in their registration order
for _, c := range r.GetConversions(dst) {
if pc, ok := c.(conversion.PrioritizedManagedConversion); ok {
if _, err := pc.ConvertManaged(src, dst); err != nil {
return errors.Wrapf(err, errFmtPrioritizedManagedConversion, dst.GetTerraformResourceType())
}
}
}
srcMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(src)
if err != nil {
return errors.Wrap(err, "cannot convert the conversion source object into the map[string]any representation")
}
gvk := dst.GetObjectKind().GroupVersionKind()
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(srcMap, dst); err != nil {
return errors.Wrap(err, "cannot convert the map[string]any representation of the source object to the conversion target object")
}
// restore the original GVK for the conversion destination
dst.GetObjectKind().SetGroupVersionKind(gvk)
// now we will try to run the registered webhook conversions
dstMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(dst)
if err != nil {
@@ -35,23 +64,28 @@ func (r *registry) RoundTrip(dst, src resource.Terraformed) error { //nolint:goc
}
srcPaved := fieldpath.Pave(srcMap)
dstPaved := fieldpath.Pave(dstMap)
// then run the PavedConversions
for _, c := range r.GetConversions(dst) {
if pc, ok := c.(conversion.PavedConversion); ok {
if _, err := pc.ConvertPaved(srcPaved, dstPaved); err != nil {
return errors.Wrapf(err, "cannot apply the PavedConversion for the %q object", dst.GetTerraformResourceType())
return errors.Wrapf(err, errFmtPavedConversion, dst.GetTerraformResourceType())
}
}
}
// convert the map[string]any representation of the conversion target back to
// the original type.
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(dstMap, dst); err != nil {
return errors.Wrap(err, "cannot convert the map[string]any representation of the conversion target object to the target object")
return errors.Wrap(err, "cannot convert the map[string]any representation of the conversion target back to the object itself")
}
// finally at the third stage, run the ManagedConverters
for _, c := range r.GetConversions(dst) {
if tc, ok := c.(conversion.ManagedConversion); ok {
if _, ok := tc.(conversion.PrioritizedManagedConversion); ok {
continue // already run in the first stage
}
if _, err := tc.ConvertManaged(src, dst); err != nil {
return errors.Wrapf(err, "cannot apply the TerraformedConversion for the %q object", dst.GetTerraformResourceType())
return errors.Wrapf(err, errFmtManagedConversion, dst.GetTerraformResourceType())
}
}
}

return nil
}

@@ -8,14 +8,18 @@ import (
"fmt"
"testing"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/config/conversion"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
)
const (
@@ -25,6 +29,7 @@ const (
val2 = "val2"
commonKey = "commonKey"
commonVal = "commonVal"
errTest = "test error"
)
func TestRoundTrip(t *testing.T) {
@@ -45,8 +50,9 @@ func TestRoundTrip(t *testing.T) {
"SuccessfulRoundTrip": {
reason: "Source object is successfully copied into the target object.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
conversions: []conversion.Conversion{conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil)},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
@@ -58,6 +64,7 @@ func TestRoundTrip(t *testing.T) {
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key1, val1))),
conversions: []conversion.Conversion{
conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil),
// Because the parameters of the fake.Terraformed are stored in an unstructured
// map, all the fields of source (including key1) are successfully
// copied into dst by registry.RoundTrip.
@ -74,7 +81,83 @@ func TestRoundTrip(t *testing.T) {
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key2, val1))),
},
},
"SuccessfulRoundTripWithNonWildcardConversions": {
reason: "Source object is successfully converted into the target object with a set of non-wildcard conversions.",
args: args{
dst: fake.NewTerraformed(fake.WithTypeMeta(metav1.TypeMeta{})),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key1, val1)), fake.WithTypeMeta(metav1.TypeMeta{})),
conversions: []conversion.Conversion{
conversion.NewIdentityConversionExpandPaths(fake.Version, fake.Version, nil),
// Because the parameters of the fake.Terraformed are stored in an unstructured
// map, all the fields of source (including key1) are successfully
// copied into dst by registry.RoundTrip.
// This conversion deletes the copied key "key1".
conversion.NewCustomConverter(fake.Version, fake.Version, func(_, target xpresource.Managed) error {
tr := target.(*fake.Terraformed)
delete(tr.Parameters, key1)
return nil
}),
conversion.NewFieldRenameConversion(fake.Version, fmt.Sprintf("parameterizable.parameters.%s", key1), fake.Version, fmt.Sprintf("parameterizable.parameters.%s", key2)),
},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key2, val1)), fake.WithTypeMeta(metav1.TypeMeta{
Kind: fake.Kind,
APIVersion: fake.GroupVersion.String(),
})),
},
},
"RoundTripFailedPrioritizedConversion": {
reason: "Should return an error if a PrioritizedConversion fails.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(),
conversions: []conversion.Conversion{failedPrioritizedConversion{}},
},
want: want{
err: errors.Wrapf(errors.New(errTest), errFmtPrioritizedManagedConversion, ""),
},
},
"RoundTripFailedPavedConversion": {
reason: "Should return an error if a PavedConversion fails.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(),
conversions: []conversion.Conversion{failedPavedConversion{}},
},
want: want{
err: errors.Wrapf(errors.New(errTest), errFmtPavedConversion, ""),
},
},
"RoundTripFailedManagedConversion": {
reason: "Should return an error if a ManagedConversion fails.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(),
conversions: []conversion.Conversion{failedManagedConversion{}},
},
want: want{
err: errors.Wrapf(errors.New(errTest), errFmtManagedConversion, ""),
},
},
"RoundTripWithExcludedFields": {
reason: "Source object is successfully copied into the target object with certain fields excluded.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1, key2, val2))),
conversions: []conversion.Conversion{conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, []string{"parameterizable.parameters"}, key2)},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
},
},
}
s := runtime.NewScheme()
if err := fake.AddToScheme(s); err != nil {
t.Fatalf("Failed to register the fake.Terraformed object with the runtime scheme")
}
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
p := &config.Provider{
@@ -84,8 +167,10 @@ func TestRoundTrip(t *testing.T) {
},
},
}
r := &registry{}
if err := r.RegisterConversions(p); err != nil {
r := &registry{
scheme: s,
}
if err := r.RegisterConversions(p, nil); err != nil {
t.Fatalf("\n%s\nRegisterConversions(p): Failed to register the conversions with the registry.\n", tc.reason)
}
err := r.RoundTrip(tc.args.dst, tc.args.src)
@@ -101,3 +186,35 @@ func TestRoundTrip(t *testing.T) {
})
}
}
type failedPrioritizedConversion struct{}
func (failedPrioritizedConversion) Applicable(_, _ runtime.Object) bool {
return true
}
func (failedPrioritizedConversion) ConvertManaged(_, _ xpresource.Managed) (bool, error) {
return false, errors.New(errTest)
}
func (failedPrioritizedConversion) Prioritized() {}
type failedPavedConversion struct{}
func (failedPavedConversion) Applicable(_, _ runtime.Object) bool {
return true
}
func (failedPavedConversion) ConvertPaved(_, _ *fieldpath.Paved) (bool, error) {
return false, errors.New(errTest)
}
type failedManagedConversion struct{}
func (failedManagedConversion) Applicable(_, _ runtime.Object) bool {
return true
}
func (failedManagedConversion) ConvertManaged(_, _ xpresource.Managed) (bool, error) {
return false, errors.New(errTest)
}


@@ -6,10 +6,11 @@ package conversion
import (
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/config/conversion"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/resource"
)
const (
@@ -20,16 +21,19 @@ var instance *registry
// registry represents the conversion hook registry for a provider.
type registry struct {
provider *config.Provider
providerCluster *config.Provider
providerNamespaced *config.Provider
scheme *runtime.Scheme
}
// RegisterConversions registers the API version conversions from the specified
// provider configuration with this registry.
func (r *registry) RegisterConversions(provider *config.Provider) error {
if r.provider != nil {
func (r *registry) RegisterConversions(providerCluster, providerNamespaced *config.Provider) error {
if r.providerCluster != nil || r.providerNamespaced != nil {
return errors.New(errAlreadyRegistered)
}
r.provider = provider
r.providerCluster = providerCluster
r.providerNamespaced = providerNamespaced
return nil
}
@ -37,10 +41,17 @@ func (r *registry) RegisterConversions(provider *config.Provider) error {
// registry for the specified Terraformed resource.
func (r *registry) GetConversions(tr resource.Terraformed) []conversion.Conversion {
t := tr.GetTerraformResourceType()
if r == nil || r.provider == nil || r.provider.Resources[t] == nil {
p := r.providerCluster
if tr.GetNamespace() != "" {
p = r.providerNamespaced
}
if p == nil || p.Resources[t] == nil {
return nil
}
return r.provider.Resources[t].Conversions
return p.Resources[t].Conversions
}
// GetConversions returns the conversion.Conversions registered for the
@ -50,11 +61,16 @@ func GetConversions(tr resource.Terraformed) []conversion.Conversion {
}
// RegisterConversions registers the API version conversions from the specified
// provider configuration.
func RegisterConversions(provider *config.Provider) error {
// provider configuration. The specified scheme should contain the registrations
// for the types whose versions are to be converted. If a registration for a
// Go type is not found in the specified scheme, RoundTrip does not error,
// but only wildcard conversions can be used with the registry for that type.
func RegisterConversions(providerCluster, providerNamespaced *config.Provider, scheme *runtime.Scheme) error {
if instance != nil {
return errors.New(errAlreadyRegistered)
}
instance = &registry{}
return instance.RegisterConversions(provider)
instance = &registry{
scheme: scheme,
}
return instance.RegisterConversions(providerCluster, providerNamespaced)
}
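GetConversions above selects between the cluster-scoped and namespaced provider configurations based on whether the resource carries a namespace. A runnable sketch of that scope-based lookup, with stand-in types (providerConfig here is illustrative, not upjet's config.Provider):

```go
package main

import "fmt"

// providerConfig stands in for upjet's provider configuration type.
type providerConfig struct {
	resources map[string]string // terraform resource type -> conversion set
}

// registry keeps separate provider configurations for cluster-scoped
// and namespaced managed resources, as in the diff above.
type registry struct {
	providerCluster    *providerConfig
	providerNamespaced *providerConfig
}

// getConversions picks the configuration by scope: a non-empty
// namespace selects the namespaced provider, otherwise the
// cluster-scoped one is used.
func (r *registry) getConversions(terraformType, namespace string) (string, bool) {
	p := r.providerCluster
	if namespace != "" {
		p = r.providerNamespaced
	}
	if p == nil {
		return "", false
	}
	c, ok := p.resources[terraformType]
	return c, ok
}

func main() {
	r := &registry{
		providerCluster:    &providerConfig{resources: map[string]string{"aws_s3_bucket": "cluster-conversions"}},
		providerNamespaced: &providerConfig{resources: map[string]string{"aws_s3_bucket": "namespaced-conversions"}},
	}
	c, _ := r.getConversions("aws_s3_bucket", "") // cluster-scoped lookup
	fmt.Println(c)
	c, _ = r.getConversions("aws_s3_bucket", "team-a") // namespaced lookup
	fmt.Println(c)
}
```

Keeping two configurations lets the same Terraform resource type register different conversion sets for its cluster-scoped and namespaced API groups.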


@@ -8,21 +8,22 @@ import (
"context"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/controller/handler"
"github.com/crossplane/upjet/pkg/metrics"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/resource/json"
"github.com/crossplane/upjet/pkg/terraform"
tferrors "github.com/crossplane/upjet/pkg/terraform/errors"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
tferrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
const (
@ -125,7 +126,7 @@ func (c *Connector) Connect(ctx context.Context, mg xpresource.Managed) (managed
providerHandle: ws.ProviderHandle,
eventHandler: c.eventHandler,
kube: c.kube,
logger: c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "gvk", mg.GetObjectKind().GroupVersionKind().String()),
logger: c.logger.WithValues("uid", mg.GetUID(), "namespace", mg.GetNamespace(), "name", mg.GetName(), "gvk", mg.GetObjectKind().GroupVersionKind().String()),
}, nil
}
@ -140,7 +141,7 @@ type external struct {
logger logging.Logger
}
func (e *external) scheduleProvider(name string) (bool, error) {
func (e *external) scheduleProvider(name types.NamespacedName) (bool, error) {
if e.providerScheduler == nil || e.workspace == nil {
return false, nil
}
@ -176,7 +177,11 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
// and serial.
// TODO(muvaf): Look for ways to reduce the cyclomatic complexity without
// increasing the difficulty of understanding the flow.
requeued, err := e.scheduleProvider(mg.GetName())
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalObservation{}, errors.Wrapf(err, "cannot schedule a native provider during observe: %s", mg.GetUID())
}
@ -308,7 +313,11 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
tr.SetConditions(xpv1.Available())
e.logger.Debug("Resource is marked as available.")
if e.eventHandler != nil {
e.eventHandler.RequestReconcile(rateLimiterStatus, mg.GetName(), nil)
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
e.eventHandler.RequestReconcile(rateLimiterStatus, name, nil)
}
return managed.ExternalObservation{
ResourceExists: true,
@ -328,8 +337,16 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
// now we do a Workspace.Refresh
default:
if e.eventHandler != nil {
e.eventHandler.Forget(rateLimiterStatus, mg.GetName())
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
e.eventHandler.Forget(rateLimiterStatus, name)
}
// TODO(cem): Consider skipping diff calculation (terraform plan) to
// avoid potential config validation errors in the import path. See
// https://github.com/crossplane/upjet/pull/461
plan, err := e.workspace.Plan(ctx)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, errPlan)
@ -352,7 +369,11 @@ func addTTR(mg xpresource.Managed) {
}
func (e *external) Create(ctx context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) {
requeued, err := e.scheduleProvider(mg.GetName())
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalCreation{}, errors.Wrapf(err, "cannot schedule a native provider during create: %s", mg.GetUID())
}
@ -361,7 +382,7 @@ func (e *external) Create(ctx context.Context, mg xpresource.Managed) (managed.E
}
defer e.stopProvider()
if e.config.UseAsync {
return managed.ExternalCreation{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Create(mg.GetName())), errStartAsyncApply)
return managed.ExternalCreation{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Create(name)), errStartAsyncApply)
}
tr, ok := mg.(resource.Terraformed)
if !ok {
@ -387,7 +408,11 @@ func (e *external) Create(ctx context.Context, mg xpresource.Managed) (managed.E
}
func (e *external) Update(ctx context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
requeued, err := e.scheduleProvider(mg.GetName())
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrapf(err, "cannot schedule a native provider during update: %s", mg.GetUID())
}
@ -396,7 +421,7 @@ func (e *external) Update(ctx context.Context, mg xpresource.Managed) (managed.E
}
defer e.stopProvider()
if e.config.UseAsync {
return managed.ExternalUpdate{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Update(mg.GetName())), errStartAsyncApply)
return managed.ExternalUpdate{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Update(name)), errStartAsyncApply)
}
tr, ok := mg.(resource.Terraformed)
if !ok {
@ -413,19 +438,27 @@ func (e *external) Update(ctx context.Context, mg xpresource.Managed) (managed.E
return managed.ExternalUpdate{}, errors.Wrap(tr.SetObservation(attr), "cannot set observation")
}
func (e *external) Delete(ctx context.Context, mg xpresource.Managed) error {
requeued, err := e.scheduleProvider(mg.GetName())
func (e *external) Delete(ctx context.Context, mg xpresource.Managed) (managed.ExternalDelete, error) {
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return errors.Wrapf(err, "cannot schedule a native provider during delete: %s", mg.GetUID())
return managed.ExternalDelete{}, errors.Wrapf(err, "cannot schedule a native provider during delete: %s", mg.GetUID())
}
if requeued {
return nil
return managed.ExternalDelete{}, nil
}
defer e.stopProvider()
if e.config.UseAsync {
return errors.Wrap(e.workspace.DestroyAsync(e.callback.Destroy(mg.GetName())), errStartAsyncDestroy)
return managed.ExternalDelete{}, errors.Wrap(e.workspace.DestroyAsync(e.callback.Destroy(name)), errStartAsyncDestroy)
}
return errors.Wrap(e.workspace.Destroy(ctx), errDestroy)
return managed.ExternalDelete{}, errors.Wrap(e.workspace.Destroy(ctx), errDestroy)
}
func (e *external) Disconnect(_ context.Context) error {
return nil
}
func (e *external) Import(ctx context.Context, tr resource.Terraformed) (managed.ExternalObservation, error) {


@@ -6,20 +6,24 @@ package controller
import (
"context"
"fmt"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/controller/handler"
"github.com/crossplane/upjet/pkg/metrics"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/terraform"
tferrors "github.com/crossplane/upjet/pkg/terraform/errors"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
tferrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
// TerraformPluginFrameworkAsyncConnector is a managed reconciler Connecter
@ -39,7 +43,8 @@ func NewTerraformPluginFrameworkAsyncConnector(kube client.Client,
ots *OperationTrackerStore,
sf terraform.SetupFn,
cfg *config.Resource,
opts ...TerraformPluginFrameworkAsyncOption) *TerraformPluginFrameworkAsyncConnector {
opts ...TerraformPluginFrameworkAsyncOption,
) *TerraformPluginFrameworkAsyncConnector {
nfac := &TerraformPluginFrameworkAsyncConnector{
TerraformPluginFrameworkConnector: NewTerraformPluginFrameworkConnector(kube, sf, cfg, ots),
}
@ -116,7 +121,7 @@ func (n *terraformPluginFrameworkAsyncExternalClient) Observe(ctx context.Contex
ResourceUpToDate: true,
}, nil
}
n.opTracker.LastOperation.Flush()
n.opTracker.LastOperation.Clear(true)
o, err := n.terraformPluginFrameworkExternalClient.Observe(ctx, mg)
// clear any previously reported LastAsyncOperation error condition here,
@ -124,81 +129,152 @@ func (n *terraformPluginFrameworkAsyncExternalClient) Observe(ctx context.Contex
// not scheduled to be deleted.
if err == nil && o.ResourceExists && o.ResourceUpToDate && !meta.WasDeleted(mg) {
mg.(resource.Terraformed).SetConditions(resource.LastAsyncOperationCondition(nil))
mg.(resource.Terraformed).SetConditions(xpv1.ReconcileSuccess())
n.opTracker.LastOperation.Clear(false)
}
return o, err
}
func (n *terraformPluginFrameworkAsyncExternalClient) Create(_ context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) {
// panicHandler wraps an error, so that deferred functions that will
// be executed on a panic can access the error more conveniently.
type panicHandler struct {
err error
}
// recoverIfPanic recovers from panics, if any. Upon recovery, the
// error is set to a recovery message. Otherwise, the error is left
// unmodified. Calls to this function should be defferred directly:
// `defer ph.recoverIfPanic()`. Panic recovery won't work if the call
// is wrapped in another function call, such as `defer func() {
// ph.recoverIfPanic() }()`. On recovery, API machinery panic handlers
// run. The implementation follows the outline of panic recovery
// mechanism in controller-runtime:
// https://github.com/kubernetes-sigs/controller-runtime/blob/v0.17.3/pkg/internal/controller/controller.go#L105-L112
func (ph *panicHandler) recoverIfPanic(ctx context.Context) {
if r := recover(); r != nil {
for _, fn := range utilruntime.PanicHandlers {
fn(ctx, r)
}
ph.err = fmt.Errorf("recovered from panic: %v", r)
}
}
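The comment above hinges on a Go rule: recover only stops unwinding when called directly by a deferred function. A runnable sketch contrasting the two call shapes — the panicHandler here mirrors the pattern in the diff, not the exact upjet type:

```go
package main

import "fmt"

// panicHandler captures a recovered panic into an error that deferred
// finishing operations can report.
type panicHandler struct{ err error }

// recoverIfPanic only works when it is itself the deferred function,
// because recover returns nil unless called directly by one.
func (ph *panicHandler) recoverIfPanic() {
	if r := recover(); r != nil {
		ph.err = fmt.Errorf("recovered from panic: %v", r)
	}
}

// withDirectDefer defers recoverIfPanic directly, so the panic is
// recovered and captured into ph.err.
func withDirectDefer() (err error) {
	var ph panicHandler
	defer func() { err = ph.err }() // finishing op: report the captured error
	defer ph.recoverIfPanic()       // deferred directly: recovery works
	panic("boom")
}

// withWrappedDefer wraps the call in another function, so recover is no
// longer called directly by the deferred function and does nothing.
func withWrappedDefer() (err error) {
	defer func() { // outer safety net so the demo itself does not crash
		if r := recover(); r != nil {
			err = fmt.Errorf("panic escaped: %v", r)
		}
	}()
	var ph panicHandler
	defer func() { ph.recoverIfPanic() }() // wrapped: recover sees nothing
	panic("boom")
}

func main() {
	fmt.Println(withDirectDefer())
	fmt.Println(withWrappedDefer())
}
```

In the wrapped variant the panic escapes past recoverIfPanic entirely, which is exactly why the diff's comment warns against that shape.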
func (n *terraformPluginFrameworkAsyncExternalClient) Create(_ context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("create") {
return managed.ExternalCreation{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncCreateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async create ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Create(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async create callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async create starting...")
_, err := n.terraformPluginFrameworkExternalClient.Create(ctx, mg)
err = tferrors.NewAsyncCreateFailed(err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async create ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
if cErr := n.callback.Create(mg.GetName())(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async create callback failed", "error", cErr.Error())
}
_, ph.err = n.terraformPluginFrameworkExternalClient.Create(ctx, mg)
}()
return managed.ExternalCreation{}, nil
return managed.ExternalCreation{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginFrameworkAsyncExternalClient) Update(_ context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
func (n *terraformPluginFrameworkAsyncExternalClient) Update(_ context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("update") {
return managed.ExternalUpdate{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncUpdateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async update ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Update(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async update callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async update starting...")
_, err := n.terraformPluginFrameworkExternalClient.Update(ctx, mg)
err = tferrors.NewAsyncUpdateFailed(err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async update ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
if cErr := n.callback.Update(mg.GetName())(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async update callback failed", "error", cErr.Error())
}
_, ph.err = n.terraformPluginFrameworkExternalClient.Update(ctx, mg)
}()
return managed.ExternalUpdate{}, nil
return managed.ExternalUpdate{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginFrameworkAsyncExternalClient) Delete(_ context.Context, mg xpresource.Managed) error {
func (n *terraformPluginFrameworkAsyncExternalClient) Delete(_ context.Context, mg xpresource.Managed) (managed.ExternalDelete, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
switch {
case n.opTracker.LastOperation.Type == "delete":
n.opTracker.logger.Debug("The previous delete operation is still ongoing")
return nil
return managed.ExternalDelete{}, nil
case !n.opTracker.LastOperation.MarkStart("delete"):
return errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
return managed.ExternalDelete{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncDeleteFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async delete ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Destroy(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async delete callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async delete starting...")
err := tferrors.NewAsyncDeleteFailed(n.terraformPluginFrameworkExternalClient.Delete(ctx, mg))
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async delete ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
if cErr := n.callback.Destroy(mg.GetName())(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async delete callback failed", "error", cErr.Error())
}
_, ph.err = n.terraformPluginFrameworkExternalClient.Delete(ctx, mg)
}()
return managed.ExternalDelete{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginFrameworkAsyncExternalClient) Disconnect(_ context.Context) error {
return nil
}
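The deferred-function ordering described in the comments above (cancel registered first so it runs last; panic recovery registered last so it runs first, letting the finishing operations report the recovered error) can be sketched as a minimal stdlib program. All names here are illustrative, not part of upjet:

```go
package main

import (
	"context"
	"fmt"
)

// run mirrors the async-operation pattern above: cancel is deferred
// first (so it executes last and the context stays live for the
// finishing operations), the finishing operations second, and panic
// recovery last (so it executes first and can record the panic error).
func run() (reported error) {
	ctx, cancel := context.WithCancel(context.Background())
	var opErr error
	defer cancel() // executes last: ctx is still usable by the callbacks above
	defer func() { // finishing operations: observe opErr set by recovery
		reported = opErr
		_ = ctx.Err() // ctx has not been canceled yet at this point
	}()
	defer func() { // executes first: convert a panic into opErr
		if r := recover(); r != nil {
			opErr = fmt.Errorf("recovered: %v", r)
		}
	}()
	panic("boom")
}

func main() {
	fmt.Println(run()) // prints "recovered: boom"
}
```

Reversing the registration order would either cancel the context before the finishing operations use it, or report before the panic error had been recorded.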


@@ -8,19 +8,21 @@ import (
"context"
"time"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/controller/handler"
"github.com/crossplane/upjet/pkg/metrics"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/terraform"
tferrors "github.com/crossplane/upjet/pkg/terraform/errors"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
tferrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
var defaultAsyncTimeout = 1 * time.Hour
@@ -121,7 +123,7 @@ func (n *terraformPluginSDKAsyncExternal) Observe(ctx context.Context, mg xpreso
ResourceUpToDate: true,
}, nil
}
n.opTracker.LastOperation.Flush()
n.opTracker.LastOperation.Clear(true)
o, err := n.terraformPluginSDKExternal.Observe(ctx, mg)
// clear any previously reported LastAsyncOperation error condition here,
@@ -129,81 +131,127 @@ func (n *terraformPluginSDKAsyncExternal) Observe(ctx context.Context, mg xpreso
// not scheduled to be deleted.
if err == nil && o.ResourceExists && o.ResourceUpToDate && !meta.WasDeleted(mg) {
mg.(resource.Terraformed).SetConditions(resource.LastAsyncOperationCondition(nil))
mg.(resource.Terraformed).SetConditions(xpv1.ReconcileSuccess())
n.opTracker.LastOperation.Clear(false)
}
return o, err
}
func (n *terraformPluginSDKAsyncExternal) Create(_ context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) {
func (n *terraformPluginSDKAsyncExternal) Create(_ context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("create") {
return managed.ExternalCreation{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncCreateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async create ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Create(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async create callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async create starting...", "tfID", n.opTracker.GetTfID())
_, err := n.terraformPluginSDKExternal.Create(ctx, mg)
err = tferrors.NewAsyncCreateFailed(err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async create ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
if cErr := n.callback.Create(mg.GetName())(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async create callback failed", "error", cErr.Error())
}
_, ph.err = n.terraformPluginSDKExternal.Create(ctx, mg)
}()
return managed.ExternalCreation{}, nil
return managed.ExternalCreation{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginSDKAsyncExternal) Update(_ context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
func (n *terraformPluginSDKAsyncExternal) Update(_ context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("update") {
return managed.ExternalUpdate{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncUpdateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async update ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Update(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async update callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async update starting...", "tfID", n.opTracker.GetTfID())
_, err := n.terraformPluginSDKExternal.Update(ctx, mg)
err = tferrors.NewAsyncUpdateFailed(err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async update ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
if cErr := n.callback.Update(mg.GetName())(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async update callback failed", "error", cErr.Error())
}
_, ph.err = n.terraformPluginSDKExternal.Update(ctx, mg)
}()
return managed.ExternalUpdate{}, nil
return managed.ExternalUpdate{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginSDKAsyncExternal) Delete(_ context.Context, mg xpresource.Managed) error {
func (n *terraformPluginSDKAsyncExternal) Delete(_ context.Context, mg xpresource.Managed) (managed.ExternalDelete, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
switch {
case n.opTracker.LastOperation.Type == "delete":
n.opTracker.logger.Debug("The previous delete operation is still ongoing", "tfID", n.opTracker.GetTfID())
return nil
return managed.ExternalDelete{}, nil
case !n.opTracker.LastOperation.MarkStart("delete"):
return errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
return managed.ExternalDelete{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncDeleteFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async delete ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Destroy(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async delete callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async delete starting...", "tfID", n.opTracker.GetTfID())
err := tferrors.NewAsyncDeleteFailed(n.terraformPluginSDKExternal.Delete(ctx, mg))
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async delete ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
if cErr := n.callback.Destroy(mg.GetName())(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async delete callback failed", "error", cErr.Error())
}
_, ph.err = n.terraformPluginSDKExternal.Delete(ctx, mg)
}()
return managed.ExternalDelete{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginSDKAsyncExternal) Disconnect(_ context.Context) error {
return nil
}


@@ -8,18 +8,19 @@ import (
"context"
"testing"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
tf "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/resource/fake"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
var (
@@ -227,7 +228,7 @@ func TestAsyncTerraformPluginSDKCreate(t *testing.T) {
cfg: cfgAsync,
obj: objAsync,
fns: CallbackFns{
CreateFn: func(s string) terraform.CallbackFn {
CreateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
return nil
}
@@ -271,7 +272,7 @@ func TestAsyncTerraformPluginSDKUpdate(t *testing.T) {
cfg: cfgAsync,
obj: objAsync,
fns: CallbackFns{
UpdateFn: func(s string) terraform.CallbackFn {
UpdateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
return nil
}
@@ -315,7 +316,7 @@ func TestAsyncTerraformPluginSDKDelete(t *testing.T) {
cfg: cfgAsync,
obj: objAsync,
fns: CallbackFns{
DestroyFn: func(s string) terraform.CallbackFn {
DestroyFn: func(nn types.NamespacedName) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
return nil
}
@@ -327,7 +328,7 @@
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKAsyncExternal := prepareTerraformPluginSDKAsyncExternal(tc.args.r, tc.args.cfg, tc.args.fns)
err := terraformPluginSDKAsyncExternal.Delete(context.TODO(), tc.args.obj)
_, err := terraformPluginSDKAsyncExternal.Delete(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nConnect(...): -want error, +got error:\n", diff)
}


@@ -8,24 +8,25 @@ import (
"context"
"testing"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/logging"
xpmeta "github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/pkg/test"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
xpmeta "github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/resource/fake"
"github.com/crossplane/upjet/pkg/resource/json"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
const (
@@ -98,20 +99,20 @@ func (s StoreFns) Workspace(ctx context.Context, c resource.SecretClient, tr res
}
type CallbackFns struct {
CreateFn func(string) terraform.CallbackFn
UpdateFn func(string) terraform.CallbackFn
DestroyFn func(string) terraform.CallbackFn
CreateFn func(types.NamespacedName) terraform.CallbackFn
UpdateFn func(types.NamespacedName) terraform.CallbackFn
DestroyFn func(types.NamespacedName) terraform.CallbackFn
}
func (c CallbackFns) Create(name string) terraform.CallbackFn {
func (c CallbackFns) Create(name types.NamespacedName) terraform.CallbackFn {
return c.CreateFn(name)
}
func (c CallbackFns) Update(name string) terraform.CallbackFn {
func (c CallbackFns) Update(name types.NamespacedName) terraform.CallbackFn {
return c.UpdateFn(name)
}
func (c CallbackFns) Destroy(name string) terraform.CallbackFn {
func (c CallbackFns) Destroy(name types.NamespacedName) terraform.CallbackFn {
return c.DestroyFn(name)
}
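The switch from `string` to `types.NamespacedName` callback keys above matters once managed resources are namespaced: a name alone no longer uniquely identifies a resource. A stdlib sketch of the collision the change avoids (the `namespacedName` struct is a hypothetical stand-in for `types.NamespacedName`):

```go
package main

import "fmt"

// namespacedName stands in for k8s.io/apimachinery's
// types.NamespacedName: a comparable (namespace, name) pair.
type namespacedName struct {
	Namespace, Name string
}

func main() {
	// Keying per-resource state by name alone collides for
	// same-named MRs in different namespaces...
	byName := map[string]int{}
	byName["db"]++ // MR "db" in namespace team-a
	byName["db"]++ // MR "db" in namespace team-b lands on the same key
	// ...while a NamespacedName key keeps them distinct.
	byNN := map[namespacedName]int{}
	byNN[namespacedName{"team-a", "db"}]++
	byNN[namespacedName{"team-b", "db"}]++
	fmt.Println(len(byName), len(byNN)) // prints "1 2"
}
```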
@@ -645,7 +646,7 @@ func TestCreate(t *testing.T) {
UseAsync: true,
},
c: CallbackFns{
CreateFn: func(s string) terraform.CallbackFn {
CreateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return nil
},
},
@@ -718,7 +719,7 @@ func TestUpdate(t *testing.T) {
UseAsync: true,
},
c: CallbackFns{
UpdateFn: func(s string) terraform.CallbackFn {
UpdateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return nil
},
},
@@ -782,7 +783,7 @@ func TestDelete(t *testing.T) {
UseAsync: true,
},
c: CallbackFns{
DestroyFn: func(_ string) terraform.CallbackFn {
DestroyFn: func(_ types.NamespacedName) terraform.CallbackFn {
return nil
},
},
@@ -816,7 +817,7 @@
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := &external{workspace: tc.w, callback: tc.c, config: tc.cfg}
err := e.Delete(context.TODO(), tc.args.obj)
_, err := e.Delete(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n%s", tc.reason, diff)
}


@@ -13,11 +13,11 @@ import (
"strings"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
fwdiag "github.com/hashicorp/terraform-plugin-framework/diag"
fwprovider "github.com/hashicorp/terraform-plugin-framework/provider"
"github.com/hashicorp/terraform-plugin-framework/providerserver"
@@ -27,13 +27,14 @@ import (
"github.com/hashicorp/terraform-plugin-go/tftypes"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/sets"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/metrics"
"github.com/crossplane/upjet/pkg/resource"
upjson "github.com/crossplane/upjet/pkg/resource/json"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
upjson "github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// TerraformPluginFrameworkConnector is an external client, with credentials and
@@ -108,8 +109,8 @@ type terraformPluginFrameworkExternalClient struct {
// Connect makes sure the underlying client is ready to issue requests to the
// provider API.
func (c *TerraformPluginFrameworkConnector) Connect(ctx context.Context, mg xpresource.Managed) (managed.ExternalClient, error) { //nolint:gocyclo
c.metricRecorder.ObserveReconcileDelay(mg.GetObjectKind().GroupVersionKind(), mg.GetName())
logger := c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "gvk", mg.GetObjectKind().GroupVersionKind().String())
c.metricRecorder.ObserveReconcileDelay(mg.GetObjectKind().GroupVersionKind(), metrics.NameForManaged(mg))
logger := c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "namespace", mg.GetNamespace(), "gvk", mg.GetObjectKind().GroupVersionKind().String())
logger.Debug("Connecting to the service provider")
start := time.Now()
ts, err := c.getTerraformSetup(ctx, c.kube, mg)
@@ -123,7 +124,7 @@ func (c *TerraformPluginFrameworkConnector) Connect(ctx context.Context, mg xpre
externalName := meta.GetExternalName(tr)
params, err := getExtendedParameters(ctx, tr, externalName, c.config, ts, c.isManagementPoliciesEnabled, c.kube)
if err != nil {
return nil, errors.Wrapf(err, "failed to get the extended parameters for resource %q", mg.GetName())
return nil, errors.Wrapf(err, "failed to get the extended parameters for resource %q", client.ObjectKeyFromObject(mg))
}
resourceSchema, err := c.getResourceSchema(ctx)
@@ -201,6 +202,10 @@ func (c *TerraformPluginFrameworkConnector) getResourceSchema(ctx context.Contex
// at the terraform setup layer with the relevant provider meta if needed
// by the provider implementation.
func (c *TerraformPluginFrameworkConnector) configureProvider(ctx context.Context, ts terraform.Setup) (tfprotov5.ProviderServer, error) {
if ts.FrameworkProvider == nil {
return nil, fmt.Errorf("cannot retrieve framework provider")
}
var schemaResp fwprovider.SchemaResponse
ts.FrameworkProvider.Schema(ctx, fwprovider.SchemaRequest{}, &schemaResp)
if schemaResp.Diagnostics.HasError() {
@@ -228,6 +233,22 @@ func (c *TerraformPluginFrameworkConnector) configureProvider(ctx context.Contex
return providerServer, nil
}
// Filter diffs that have unknown plan values, which correspond to
// computed fields, and null plan values, which correspond to
// not-specified fields. Such cases cause unnecessary diff detection
// when only computed attributes or not-specified argument diffs
// exist in the raw diff and no actual diff exists in the
// parametrizable attributes.
func (n *terraformPluginFrameworkExternalClient) filteredDiffExists(rawDiff []tftypes.ValueDiff) bool {
filteredDiff := make([]tftypes.ValueDiff, 0)
for _, diff := range rawDiff {
if diff.Value1 != nil && diff.Value1.IsKnown() && !diff.Value1.IsNull() {
filteredDiff = append(filteredDiff, diff)
}
}
return len(filteredDiff) > 0
}
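The filter above keeps only diffs whose planned value (`Value1`) is present, known, and non-null, so computed ("unknown") and not-specified (null) attributes never count as a real diff. A stdlib-only sketch of the same predicate (the `planned` type is a hypothetical stand-in for the `*tftypes.Value` in a `ValueDiff`):

```go
package main

import "fmt"

// planned stands in for a ValueDiff's planned value: a nil pointer
// means no planned value, known=false means a computed ("unknown")
// value, null=true means a not-specified argument.
type planned struct {
	known bool
	null  bool
}

// filteredDiffExists mirrors the method above: only a diff whose
// planned value is present, known, and non-null is a real diff.
func filteredDiffExists(diffs []*planned) bool {
	for _, d := range diffs {
		if d != nil && d.known && !d.null {
			return true
		}
	}
	return false
}

func main() {
	computed := &planned{known: false}          // unknown: filtered out
	unset := &planned{known: true, null: true}  // null: filtered out
	actual := &planned{known: true, null: false} // a genuine diff
	fmt.Println(filteredDiffExists([]*planned{computed, unset}))  // prints "false"
	fmt.Println(filteredDiffExists([]*planned{computed, actual})) // prints "true"
}
```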
// getDiffPlanResponse calls the underlying native TF provider's PlanResourceChange RPC,
// and returns the planned state and whether a diff exists.
// If plan response contains non-empty RequiresReplace (i.e. the resource needs
@@ -270,18 +291,7 @@ func (n *terraformPluginFrameworkExternalClient) getDiffPlanResponse(ctx context
return nil, false, errors.Wrap(err, "cannot compare prior state and plan")
}
// filter diffs that has unknown plan value which corresponds to
// computed values. These cause unnecessary diff detection when only computed
// attribute diffs exist in the raw diff and no actual diff exists in the
// parametrizable attributes
filteredDiff := make([]tftypes.ValueDiff, 0)
for _, diff := range rawDiff {
if diff.Value1.IsKnown() {
filteredDiff = append(filteredDiff, diff)
}
}
return planResponse, len(filteredDiff) > 0, nil
return planResponse, n.filteredDiffExists(rawDiff), nil
}
func (n *terraformPluginFrameworkExternalClient) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) { //nolint:gocyclo
@@ -313,7 +323,31 @@ func (n *terraformPluginFrameworkExternalClient) Observe(ctx context.Context, mg
}
n.opTracker.SetFrameworkTFState(readResponse.NewState)
resourceExists := !tfStateValue.IsNull()
// Determine if the resource exists based on Terraform state
var resourceExists bool
if !tfStateValue.IsNull() {
// Resource state is not null, assume it exists
resourceExists = true
// If a custom empty state check function is configured, use it to verify existence
if n.config.TerraformPluginFrameworkIsStateEmptyFn != nil {
isEmpty, err := n.config.TerraformPluginFrameworkIsStateEmptyFn(ctx, tfStateValue, n.resourceSchema)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot check if TF State is empty")
}
// Override existence based on custom check result
resourceExists = !isEmpty
// If custom check determines resource doesn't exist, reset state to nil
if !resourceExists {
nilTfValue := tftypes.NewValue(n.resourceValueTerraformType, nil)
nildynamicValue, err := tfprotov5.NewDynamicValue(n.resourceValueTerraformType, nilTfValue)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot create nil dynamic value")
}
n.opTracker.SetFrameworkTFState(&nildynamicValue)
}
}
}
var stateValueMap map[string]any
if resourceExists {
@@ -324,6 +358,9 @@ func (n *terraformPluginFrameworkExternalClient) Observe(ctx context.Context, mg
}
}
// TODO(cem): Consider skipping diff calculation to avoid potential config
// validation errors in the import path. See
// https://github.com/crossplane/upjet/pull/461
planResponse, hasDiff, err := n.getDiffPlanResponse(ctx, tfStateValue)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot calculate diff")
@@ -348,10 +385,16 @@ func (n *terraformPluginFrameworkExternalClient) Observe(ctx context.Context, mg
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot marshal the attributes of the new state for late-initialization")
}
specUpdateRequired, err = mg.(resource.Terraformed).LateInitialize(buff)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot late-initialize the managed resource")
policySet := sets.New[xpv1.ManagementAction](mg.(resource.Terraformed).GetManagementPolicies()...)
policyHasLateInit := policySet.HasAny(xpv1.ManagementActionLateInitialize, xpv1.ManagementActionAll)
if policyHasLateInit {
specUpdateRequired, err = mg.(resource.Terraformed).LateInitialize(buff)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot late-initialize the managed resource")
}
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalObservation{}, errors.Errorf("could not set observation: %v", err)
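The gate above runs late initialization only when the management policies include `ManagementActionLateInitialize` or `ManagementActionAll`. A stdlib sketch of the `sets.HasAny` check (plain strings stand in for `xpv1.ManagementAction` values; `"*"` mirroring `ManagementActionAll` is an assumption about its literal value):

```go
package main

import "fmt"

// hasAny reports whether the management-policy set contains any of the
// given actions, mirroring the sets.New(...).HasAny(...) gate above.
func hasAny(policies []string, actions ...string) bool {
	set := map[string]struct{}{}
	for _, p := range policies {
		set[p] = struct{}{}
	}
	for _, a := range actions {
		if _, ok := set[a]; ok {
			return true
		}
	}
	return false
}

func main() {
	// Late-init is skipped unless LateInitialize or the wildcard policy is set.
	fmt.Println(hasAny([]string{"Observe", "Update"}, "LateInitialize", "*")) // prints "false"
	fmt.Println(hasAny([]string{"*"}, "LateInitialize", "*"))                 // prints "true"
}
```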
@@ -361,7 +404,7 @@ func (n *terraformPluginFrameworkExternalClient) Observe(ctx context.Context, mg
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
if !hasDiff {
n.metricRecorder.SetReconcileTime(mg.GetName())
n.metricRecorder.SetReconcileTime(metrics.NameForManaged(mg))
}
if !specUpdateRequired {
resource.SetUpToDateCondition(mg, !hasDiff)
@@ -508,17 +551,17 @@ func (n *terraformPluginFrameworkExternalClient) Update(ctx context.Context, mg
return managed.ExternalUpdate{}, nil
}
func (n *terraformPluginFrameworkExternalClient) Delete(ctx context.Context, _ xpresource.Managed) error {
func (n *terraformPluginFrameworkExternalClient) Delete(ctx context.Context, _ xpresource.Managed) (managed.ExternalDelete, error) {
n.logger.Debug("Deleting the external resource")
tfConfigDynamicVal, err := protov5DynamicValueFromMap(n.params, n.resourceValueTerraformType)
if err != nil {
return errors.Wrap(err, "cannot construct dynamic value for TF Config")
return managed.ExternalDelete{}, errors.Wrap(err, "cannot construct dynamic value for TF Config")
}
// set an empty planned state, this corresponds to deleting
plannedState, err := tfprotov5.NewDynamicValue(n.resourceValueTerraformType, tftypes.NewValue(n.resourceValueTerraformType, nil))
if err != nil {
return errors.Wrap(err, "cannot set the planned state for deletion")
return managed.ExternalDelete{}, errors.Wrap(err, "cannot set the planned state for deletion")
}
applyRequest := &tfprotov5.ApplyResourceChangeRequest{
@@ -530,32 +573,28 @@
start := time.Now()
applyResponse, err := n.server.ApplyResourceChange(ctx, applyRequest)
if err != nil {
return errors.Wrap(err, "cannot delete resource")
return managed.ExternalDelete{}, errors.Wrap(err, "cannot delete resource")
}
metrics.ExternalAPITime.WithLabelValues("delete").Observe(time.Since(start).Seconds())
if fatalDiags := getFatalDiagnostics(applyResponse.Diagnostics); fatalDiags != nil {
return errors.Wrap(fatalDiags, "resource deletion call returned error diags")
return managed.ExternalDelete{}, errors.Wrap(fatalDiags, "resource deletion call returned error diags")
}
n.opTracker.SetFrameworkTFState(applyResponse.NewState)
newStateAfterApplyVal, err := applyResponse.NewState.Unmarshal(n.resourceValueTerraformType)
if err != nil {
return errors.Wrap(err, "cannot unmarshal state after deletion")
return managed.ExternalDelete{}, errors.Wrap(err, "cannot unmarshal state after deletion")
}
// mark the resource as logically deleted if the TF call clears the state
n.opTracker.SetDeleted(newStateAfterApplyVal.IsNull())
return nil
return managed.ExternalDelete{}, nil
}
func (n *terraformPluginFrameworkExternalClient) setExternalName(mg xpresource.Managed, stateValueMap map[string]interface{}) (bool, error) {
id, ok := stateValueMap["id"]
if !ok || id.(string) == "" {
return false, nil
}
newName, err := n.config.ExternalName.GetExternalNameFn(stateValueMap)
if err != nil {
return false, errors.Wrapf(err, "failed to compute the external-name from the state map of the resource with the ID %s", id)
return false, errors.Wrap(err, "failed to compute the external-name from the state map")
}
oldName := meta.GetExternalName(mg)
// we have to make sure the newly set external-name is recorded
@@ -701,3 +740,7 @@ func protov5DynamicValueFromMap(data map[string]any, terraformType tftypes.Type)
return &dynamicValue, nil
}
func (n *terraformPluginFrameworkExternalClient) Disconnect(_ context.Context) error {
return nil
}


@@ -8,9 +8,9 @@ import (
"context"
"testing"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/hashicorp/terraform-plugin-framework/provider"
@@ -25,9 +25,9 @@ import (
"github.com/pkg/errors"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/resource/fake"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
func newBaseObject() fake.Terraformed {
@@ -502,7 +502,7 @@ func TestTPFDelete(t *testing.T) {
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
tpfExternal := prepareTPFExternalWithTestConfig(tc.testConfiguration)
err := tpfExternal.Delete(context.TODO(), &tc.testConfiguration.obj)
_, err := tpfExternal.Delete(context.TODO(), &tc.testConfiguration.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nConnect(...): -want error, +got error:\n", diff)
}
@@ -532,6 +532,51 @@ type mockTPFProviderServer struct {
ReadDataSourceFn func(ctx context.Context, request *tfprotov5.ReadDataSourceRequest) (*tfprotov5.ReadDataSourceResponse, error)
}
func (m *mockTPFProviderServer) UpgradeResourceIdentity(_ context.Context, _ *tfprotov5.UpgradeResourceIdentityRequest) (*tfprotov5.UpgradeResourceIdentityResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetResourceIdentitySchemas(_ context.Context, _ *tfprotov5.GetResourceIdentitySchemasRequest) (*tfprotov5.GetResourceIdentitySchemasResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) MoveResourceState(_ context.Context, _ *tfprotov5.MoveResourceStateRequest) (*tfprotov5.MoveResourceStateResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) CallFunction(_ context.Context, _ *tfprotov5.CallFunctionRequest) (*tfprotov5.CallFunctionResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetFunctions(_ context.Context, _ *tfprotov5.GetFunctionsRequest) (*tfprotov5.GetFunctionsResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ValidateEphemeralResourceConfig(_ context.Context, _ *tfprotov5.ValidateEphemeralResourceConfigRequest) (*tfprotov5.ValidateEphemeralResourceConfigResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) OpenEphemeralResource(_ context.Context, _ *tfprotov5.OpenEphemeralResourceRequest) (*tfprotov5.OpenEphemeralResourceResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) RenewEphemeralResource(_ context.Context, _ *tfprotov5.RenewEphemeralResourceRequest) (*tfprotov5.RenewEphemeralResourceResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) CloseEphemeralResource(_ context.Context, _ *tfprotov5.CloseEphemeralResourceRequest) (*tfprotov5.CloseEphemeralResourceResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetMetadata(_ context.Context, _ *tfprotov5.GetMetadataRequest) (*tfprotov5.GetMetadataResponse, error) {
// TODO implement me
panic("implement me")
}

View File

@ -10,12 +10,12 @@ import (
"strings"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/hashicorp/go-cty/cty"
tfdiag "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
@ -25,11 +25,11 @@ import (
"k8s.io/apimachinery/pkg/util/sets"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/metrics"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/resource/json"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
type TerraformPluginSDKConnector struct {
@ -111,37 +111,42 @@ type Resource interface {
}
type terraformPluginSDKExternal struct {
ts terraform.Setup
resourceSchema Resource
config *config.Resource
instanceDiff *tf.InstanceDiff
params map[string]any
rawConfig cty.Value
logger logging.Logger
metricRecorder *metrics.MetricRecorder
opTracker *AsyncTracker
ts terraform.Setup
resourceSchema Resource
config *config.Resource
instanceDiff *tf.InstanceDiff
params map[string]any
rawConfig cty.Value
logger logging.Logger
metricRecorder *metrics.MetricRecorder
opTracker *AsyncTracker
isManagementPoliciesEnabled bool
}
func getExtendedParameters(ctx context.Context, tr resource.Terraformed, externalName string, config *config.Resource, ts terraform.Setup, initParamsMerged bool, kube client.Client) (map[string]any, error) {
func getExtendedParameters(ctx context.Context, tr resource.Terraformed, externalName string, cfg *config.Resource, ts terraform.Setup, initParamsMerged bool, kube client.Client) (map[string]any, error) {
params, err := tr.GetMergedParameters(initParamsMerged)
if err != nil {
return nil, errors.Wrap(err, "cannot get merged parameters")
}
params, err = cfg.ApplyTFConversions(params, config.ToTerraform)
if err != nil {
return nil, errors.Wrap(err, "cannot apply tf conversions")
}
if err = resource.GetSensitiveParameters(ctx, &APISecretClient{kube: kube}, tr, params, tr.GetConnectionDetailsMapping()); err != nil {
return nil, errors.Wrap(err, "cannot store sensitive parameters into params")
}
config.ExternalName.SetIdentifierArgumentFn(params, externalName)
if config.TerraformConfigurationInjector != nil {
cfg.ExternalName.SetIdentifierArgumentFn(params, externalName)
if cfg.TerraformConfigurationInjector != nil {
m, err := getJSONMap(tr)
if err != nil {
return nil, errors.Wrap(err, "cannot get JSON map for the managed resource's spec.forProvider value")
}
if err := config.TerraformConfigurationInjector(m, params); err != nil {
if err := cfg.TerraformConfigurationInjector(m, params); err != nil {
return nil, errors.Wrap(err, "cannot invoke the configured TerraformConfigurationInjector")
}
}
tfID, err := config.ExternalName.GetIDFn(ctx, externalName, params, ts.Map())
tfID, err := cfg.ExternalName.GetIDFn(ctx, externalName, params, ts.Map())
if err != nil {
return nil, errors.Wrap(err, "cannot get ID")
}
@ -150,21 +155,21 @@ func getExtendedParameters(ctx context.Context, tr resource.Terraformed, externa
// not all providers may have this attribute
// TODO: tags-tags_all implementation is AWS specific.
// Consider making this logic independent of provider.
if config.TerraformResource != nil {
if _, ok := config.TerraformResource.CoreConfigSchema().Attributes["tags_all"]; ok {
if cfg.TerraformResource != nil {
if _, ok := cfg.TerraformResource.CoreConfigSchema().Attributes["tags_all"]; ok {
params["tags_all"] = params["tags"]
}
}
return params, nil
}
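A side effect of adding the `config.ToTerraform` call above is that the function's `config *config.Resource` parameter had to be renamed to `cfg`: a parameter named after an imported package shadows that package for the whole function body. A minimal sketch of the pitfall, using hypothetical local names rather than the real upjet types:

```go
package main

import "fmt"

// Stand-in for an exported symbol of an imported package named "config".
var ToTerraform = "ToTerraform"

// Resource is a hypothetical stand-in for config.Resource.
type Resource struct{ Name string }

// Bad: a parameter named "config" would shadow the package, making
// package-level identifiers like config.ToTerraform unreachable here.
// func apply(config *Resource) { _ = ToTerraform } // shadowed in real code

// Good: rename the parameter (e.g. to cfg) so the package stays accessible.
func apply(cfg *Resource) string {
	return cfg.Name + "/" + ToTerraform
}

func main() {
	fmt.Println(apply(&Resource{Name: "example"})) // example/ToTerraform
}
```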
func (c *TerraformPluginSDKConnector) processParamsWithStateFunc(schemaMap map[string]*schema.Schema, params map[string]any) map[string]any {
func (c *TerraformPluginSDKConnector) processParamsWithHCLParser(schemaMap map[string]*schema.Schema, params map[string]any) map[string]any {
if params == nil {
return params
}
for key, param := range params {
if sc, ok := schemaMap[key]; ok {
params[key] = c.applyStateFuncToParam(sc, param)
params[key] = c.applyHCLParserToParam(sc, param)
} else {
params[key] = param
}
@ -172,11 +177,11 @@ func (c *TerraformPluginSDKConnector) processParamsWithStateFunc(schemaMap map[s
return params
}
func (c *TerraformPluginSDKConnector) applyStateFuncToParam(sc *schema.Schema, param any) any { //nolint:gocyclo
func (c *TerraformPluginSDKConnector) applyHCLParserToParam(sc *schema.Schema, param any) any { //nolint:gocyclo
if param == nil {
return param
}
switch sc.Type {
switch sc.Type { //nolint:exhaustive
case schema.TypeMap:
if sc.Elem == nil {
return param
@ -185,7 +190,7 @@ func (c *TerraformPluginSDKConnector) applyStateFuncToParam(sc *schema.Schema, p
// TypeMap only supports schema in Elem
if mapSchema, ok := sc.Elem.(*schema.Schema); ok && okParam {
for pk, pv := range pmap {
pmap[pk] = c.applyStateFuncToParam(mapSchema, pv)
pmap[pk] = c.applyHCLParserToParam(mapSchema, pv)
}
return pmap
}
@ -196,13 +201,13 @@ func (c *TerraformPluginSDKConnector) applyStateFuncToParam(sc *schema.Schema, p
pArray, okParam := param.([]any)
if setSchema, ok := sc.Elem.(*schema.Schema); ok && okParam {
for i, p := range pArray {
pArray[i] = c.applyStateFuncToParam(setSchema, p)
pArray[i] = c.applyHCLParserToParam(setSchema, p)
}
return pArray
} else if setResource, ok := sc.Elem.(*schema.Resource); ok {
for i, p := range pArray {
if resParam, okRParam := p.(map[string]any); okRParam {
pArray[i] = c.processParamsWithStateFunc(setResource.Schema, resParam)
pArray[i] = c.processParamsWithHCLParser(setResource.Schema, resParam)
}
}
}
@ -216,16 +221,6 @@ func (c *TerraformPluginSDKConnector) applyStateFuncToParam(sc *schema.Schema, p
param = hclProccessedParam
}
}
if sc.StateFunc != nil {
return sc.StateFunc(param)
}
return param
case schema.TypeBool, schema.TypeInt, schema.TypeFloat:
if sc.StateFunc != nil {
return sc.StateFunc(param)
}
return param
case schema.TypeInvalid:
return param
default:
return param
@ -234,8 +229,8 @@ func (c *TerraformPluginSDKConnector) applyStateFuncToParam(sc *schema.Schema, p
}
func (c *TerraformPluginSDKConnector) Connect(ctx context.Context, mg xpresource.Managed) (managed.ExternalClient, error) { //nolint:gocyclo
c.metricRecorder.ObserveReconcileDelay(mg.GetObjectKind().GroupVersionKind(), mg.GetName())
logger := c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "gvk", mg.GetObjectKind().GroupVersionKind().String())
c.metricRecorder.ObserveReconcileDelay(mg.GetObjectKind().GroupVersionKind(), metrics.NameForManaged(mg))
logger := c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "namespace", mg.GetNamespace(), "gvk", mg.GetObjectKind().GroupVersionKind().String())
logger.Debug("Connecting to the service provider")
start := time.Now()
ts, err := c.getTerraformSetup(ctx, c.kube, mg)
@ -250,9 +245,9 @@ func (c *TerraformPluginSDKConnector) Connect(ctx context.Context, mg xpresource
externalName := meta.GetExternalName(tr)
params, err := getExtendedParameters(ctx, tr, externalName, c.config, ts, c.isManagementPoliciesEnabled, c.kube)
if err != nil {
return nil, errors.Wrapf(err, "failed to get the extended parameters for resource %q", mg.GetName())
return nil, errors.Wrapf(err, "failed to get the extended parameters for resource %q", client.ObjectKeyFromObject(mg))
}
params = c.processParamsWithStateFunc(c.config.TerraformResource.Schema, params)
params = c.processParamsWithHCLParser(c.config.TerraformResource.Schema, params)
schemaBlock := c.config.TerraformResource.CoreConfigSchema()
rawConfig, err := schema.JSONMapToStateValue(params, schemaBlock)
@ -265,6 +260,10 @@ func (c *TerraformPluginSDKConnector) Connect(ctx context.Context, mg xpresource
if err != nil {
return nil, errors.Wrap(err, "failed to get the observation")
}
tfState, err = c.config.ApplyTFConversions(tfState, config.ToTerraform)
if err != nil {
return nil, errors.Wrap(err, "failed to run the API converters on the Terraform state")
}
copyParams := len(tfState) == 0
if err = resource.GetSensitiveParameters(ctx, &APISecretClient{kube: c.kube}, tr, tfState, tr.GetConnectionDetailsMapping()); err != nil {
return nil, errors.Wrap(err, "cannot store sensitive parameters into tfState")
@ -300,14 +299,15 @@ func (c *TerraformPluginSDKConnector) Connect(ctx context.Context, mg xpresource
}
return &terraformPluginSDKExternal{
ts: ts,
resourceSchema: c.config.TerraformResource,
config: c.config,
params: params,
rawConfig: rawConfig,
logger: logger,
metricRecorder: c.metricRecorder,
opTracker: opTracker,
ts: ts,
resourceSchema: c.config.TerraformResource,
config: c.config,
params: params,
rawConfig: rawConfig,
logger: logger,
metricRecorder: c.metricRecorder,
opTracker: opTracker,
isManagementPoliciesEnabled: c.isManagementPoliciesEnabled,
}, nil
}
@ -426,6 +426,11 @@ func (n *terraformPluginSDKExternal) getResourceDataDiff(tr resource.Terraformed
if err != nil {
return nil, errors.Wrap(err, "failed to get *terraform.InstanceDiff")
}
// Sanitize the Identity field in the diff; if left set, it causes a
// continuous diff loop.
if instanceDiff != nil {
instanceDiff.Identity = nil
}
if n.config.TerraformCustomDiff != nil {
instanceDiff, err = n.config.TerraformCustomDiff(instanceDiff, s, resourceConfig)
if err != nil {
@ -466,6 +471,7 @@ func (n *terraformPluginSDKExternal) getResourceDataDiff(tr resource.Terraformed
}
func (n *terraformPluginSDKExternal) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) { //nolint:gocyclo
var err error
n.logger.Debug("Observing the external resource")
if meta.WasDeleted(mg) && n.opTracker.IsDeleted() {
@ -498,21 +504,29 @@ func (n *terraformPluginSDKExternal) Observe(ctx context.Context, mg xpresource.
diffState.Attributes = nil
diffState.ID = ""
}
instanceDiff, err := n.getResourceDataDiff(mg.(resource.Terraformed), ctx, diffState, resourceExists)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot compute the instance diff")
}
if instanceDiff == nil {
instanceDiff = tf.NewInstanceDiff()
}
n.instanceDiff = instanceDiff
noDiff := instanceDiff.Empty()
var connDetails managed.ConnectionDetails
n.instanceDiff = nil
policySet := sets.New[xpv1.ManagementAction](mg.(resource.Terraformed).GetManagementPolicies()...)
observeOnlyPolicy := sets.New(xpv1.ManagementActionObserve)
isObserveOnlyPolicy := policySet.Equal(observeOnlyPolicy)
if !isObserveOnlyPolicy || !n.isManagementPoliciesEnabled {
n.instanceDiff, err = n.getResourceDataDiff(mg.(resource.Terraformed), ctx, diffState, resourceExists)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot compute the instance diff")
}
}
if n.instanceDiff == nil {
n.instanceDiff = tf.NewInstanceDiff()
}
hasDiff := !n.instanceDiff.Empty()
if !resourceExists && mg.GetDeletionTimestamp() != nil {
gvk := mg.GetObjectKind().GroupVersionKind()
metrics.DeletionTime.WithLabelValues(gvk.Group, gvk.Version, gvk.Kind).Observe(time.Since(mg.GetDeletionTimestamp().Time).Seconds())
}
var connDetails managed.ConnectionDetails
specUpdateRequired := false
if resourceExists {
if mg.GetCondition(xpv1.TypeReady).Status == corev1.ConditionUnknown ||
@ -521,12 +535,23 @@ func (n *terraformPluginSDKExternal) Observe(ctx context.Context, mg xpresource.
}
mg.SetConditions(xpv1.Available())
// we get the connection details from the observed state before
// the conversion because the sensitive paths assume the native Terraform
// schema.
connDetails, err = resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
stateValueMap, err = n.config.ApplyTFConversions(stateValueMap, config.FromTerraform)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot convert the singleton lists in the observed state value map into embedded objects")
}
buff, err := json.TFParser.Marshal(stateValueMap)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot marshal the attributes of the new state for late-initialization")
}
policySet := sets.New[xpv1.ManagementAction](mg.(resource.Terraformed).GetManagementPolicies()...)
policyHasLateInit := policySet.HasAny(xpv1.ManagementActionLateInitialize, xpv1.ManagementActionAll)
if policyHasLateInit {
specUpdateRequired, err = mg.(resource.Terraformed).LateInitialize(buff)
@ -539,16 +564,12 @@ func (n *terraformPluginSDKExternal) Observe(ctx context.Context, mg xpresource.
if err != nil {
return managed.ExternalObservation{}, errors.Errorf("could not set observation: %v", err)
}
connDetails, err = resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
if noDiff {
n.metricRecorder.SetReconcileTime(mg.GetName())
if !hasDiff {
n.metricRecorder.SetReconcileTime(metrics.NameForManaged(mg))
}
if !specUpdateRequired {
resource.SetUpToDateCondition(mg, noDiff)
resource.SetUpToDateCondition(mg, !hasDiff)
}
// check for an external-name change
if nameChanged, err := n.setExternalName(mg, stateValueMap); err != nil {
@ -560,7 +581,7 @@ func (n *terraformPluginSDKExternal) Observe(ctx context.Context, mg xpresource.
return managed.ExternalObservation{
ResourceExists: resourceExists,
ResourceUpToDate: noDiff,
ResourceUpToDate: !hasDiff,
ConnectionDetails: connDetails,
ResourceLateInitialized: specUpdateRequired,
}, nil
@ -583,7 +604,7 @@ func (n *terraformPluginSDKExternal) setExternalName(mg xpresource.Managed, stat
return oldName != newName, nil
}
func (n *terraformPluginSDKExternal) Create(ctx context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) {
func (n *terraformPluginSDKExternal) Create(ctx context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) { //nolint:gocyclo // easier to follow as a unit
n.logger.Debug("Creating the external resource")
start := time.Now()
newState, diag := n.resourceSchema.Apply(ctx, n.opTracker.GetTfState(), n.instanceDiff, n.ts.Meta)
@ -633,15 +654,23 @@ func (n *terraformPluginSDKExternal) Create(ctx context.Context, mg xpresource.M
if _, err := n.setExternalName(mg, stateValueMap); err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "failed to set the external-name of the managed resource during create")
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalCreation{}, errors.Errorf("could not set observation: %v", err)
}
// we get the connection details from the observed state before
// the conversion because the sensitive paths assume the native Terraform
// schema.
conn, err := resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot get connection details")
}
stateValueMap, err = n.config.ApplyTFConversions(stateValueMap, config.FromTerraform)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot convert the singleton lists in the state value map of the newly created resource into embedded objects")
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalCreation{}, errors.Errorf("could not set observation: %v", err)
}
return managed.ExternalCreation{ConnectionDetails: conn}, nil
}
@ -666,6 +695,16 @@ func (n *terraformPluginSDKExternal) assertNoForceNew() error {
}
func (n *terraformPluginSDKExternal) Update(ctx context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
if n.config.UpdateLoopPrevention != nil {
preventResult, err := n.config.UpdateLoopPrevention.UpdateLoopPreventionFunc(n.instanceDiff, mg)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrapf(err, "failed to apply the update loop prevention function for %s", n.config.Name)
}
if preventResult != nil {
return managed.ExternalUpdate{}, errors.Errorf("update operation was blocked because of a possible update loop: %s", preventResult.Reason)
}
}
n.logger.Debug("Updating the external resource")
if err := n.assertNoForceNew(); err != nil {
@ -685,6 +724,11 @@ func (n *terraformPluginSDKExternal) Update(ctx context.Context, mg xpresource.M
return managed.ExternalUpdate{}, err
}
stateValueMap, err = n.config.ApplyTFConversions(stateValueMap, config.FromTerraform)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot convert the singleton lists for the updated resource state value map into embedded objects")
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalUpdate{}, errors.Errorf("failed to set observation: %v", err)
@ -692,7 +736,7 @@ func (n *terraformPluginSDKExternal) Update(ctx context.Context, mg xpresource.M
return managed.ExternalUpdate{}, nil
}
func (n *terraformPluginSDKExternal) Delete(ctx context.Context, _ xpresource.Managed) error {
func (n *terraformPluginSDKExternal) Delete(ctx context.Context, _ xpresource.Managed) (managed.ExternalDelete, error) {
n.logger.Debug("Deleting the external resource")
if n.instanceDiff == nil {
n.instanceDiff = tf.NewInstanceDiff()
@ -703,11 +747,15 @@ func (n *terraformPluginSDKExternal) Delete(ctx context.Context, _ xpresource.Ma
newState, diag := n.resourceSchema.Apply(ctx, n.opTracker.GetTfState(), n.instanceDiff, n.ts.Meta)
metrics.ExternalAPITime.WithLabelValues("delete").Observe(time.Since(start).Seconds())
if diag != nil && diag.HasError() {
return errors.Errorf("failed to delete the resource: %v", diag)
return managed.ExternalDelete{}, errors.Errorf("failed to delete the resource: %v", diag)
}
n.opTracker.SetTfState(newState)
// mark the resource as logically deleted if the TF call clears the state
n.opTracker.SetDeleted(newState == nil)
return managed.ExternalDelete{}, nil
}
func (n *terraformPluginSDKExternal) Disconnect(_ context.Context) error {
return nil
}

View File

@ -9,10 +9,10 @@ import (
"testing"
"time"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
@ -21,9 +21,9 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/resource/fake"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
var (
@ -396,7 +396,7 @@ func TestTerraformPluginSDKDelete(t *testing.T) {
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKExternal := prepareTerraformPluginSDKExternal(tc.args.r, tc.args.cfg)
err := terraformPluginSDKExternal.Delete(context.TODO(), &tc.args.obj)
_, err := terraformPluginSDKExternal.Delete(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDelete(...): -want error, +got error:\n", diff)
}

View File

@ -7,7 +7,7 @@ package controller
import (
"context"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
)

View File

@ -9,7 +9,7 @@ import (
"sync"
"time"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/workqueue"
"sigs.k8s.io/controller-runtime/pkg/event"
@ -23,8 +23,8 @@ const NoRateLimiter = ""
// objects and allows upjet components to queue reconcile requests.
type EventHandler struct {
innerHandler handler.EventHandler
queue workqueue.RateLimitingInterface
rateLimiterMap map[string]workqueue.RateLimiter
queue workqueue.TypedRateLimitingInterface[reconcile.Request]
rateLimiterMap map[string]workqueue.TypedRateLimiter[reconcile.Request]
logger logging.Logger
mu *sync.RWMutex
}
@ -44,7 +44,7 @@ func NewEventHandler(opts ...Option) *EventHandler {
eh := &EventHandler{
innerHandler: &handler.EnqueueRequestForObject{},
mu: &sync.RWMutex{},
rateLimiterMap: make(map[string]workqueue.RateLimiter),
rateLimiterMap: make(map[string]workqueue.TypedRateLimiter[reconcile.Request]),
}
for _, o := range opts {
o(eh)
@ -54,7 +54,7 @@ func NewEventHandler(opts ...Option) *EventHandler {
// RequestReconcile requeues a reconciliation request for the specified name.
// Returns true if the reconcile request was successfully queued.
func (e *EventHandler) RequestReconcile(rateLimiterName, name string, failureLimit *int) bool {
func (e *EventHandler) RequestReconcile(rateLimiterName string, name types.NamespacedName, failureLimit *int) bool {
e.mu.Lock()
defer e.mu.Unlock()
if e.queue == nil {
@ -62,15 +62,13 @@ func (e *EventHandler) RequestReconcile(rateLimiterName, name string, failureLim
}
logger := e.logger.WithValues("name", name)
item := reconcile.Request{
NamespacedName: types.NamespacedName{
Name: name,
},
NamespacedName: name,
}
var when time.Duration = 0
if rateLimiterName != NoRateLimiter {
rateLimiter := e.rateLimiterMap[rateLimiterName]
if rateLimiter == nil {
rateLimiter = workqueue.DefaultControllerRateLimiter()
rateLimiter = workqueue.DefaultTypedControllerRateLimiter[reconcile.Request]()
e.rateLimiterMap[rateLimiterName] = rateLimiter
}
if failureLimit != nil && rateLimiter.NumRequeues(item) > *failureLimit {
@ -86,7 +84,7 @@ func (e *EventHandler) RequestReconcile(rateLimiterName, name string, failureLim
// Forget indicates that the reconcile retries is finished for
// the specified name.
func (e *EventHandler) Forget(rateLimiterName, name string) {
func (e *EventHandler) Forget(rateLimiterName string, name types.NamespacedName) {
e.mu.RLock()
defer e.mu.RUnlock()
rateLimiter := e.rateLimiterMap[rateLimiterName]
@ -94,13 +92,11 @@ func (e *EventHandler) Forget(rateLimiterName, name string) {
return
}
rateLimiter.Forget(reconcile.Request{
NamespacedName: types.NamespacedName{
Name: name,
},
NamespacedName: name,
})
}
func (e *EventHandler) setQueue(limitingInterface workqueue.RateLimitingInterface) {
func (e *EventHandler) setQueue(limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.mu.Lock()
defer e.mu.Unlock()
if e.queue == nil {
@ -108,26 +104,26 @@ func (e *EventHandler) setQueue(limitingInterface workqueue.RateLimitingInterfac
}
}
func (e *EventHandler) Create(ctx context.Context, ev event.CreateEvent, limitingInterface workqueue.RateLimitingInterface) {
func (e *EventHandler) Create(ctx context.Context, ev event.CreateEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Create event.", "name", ev.Object.GetName(), "queueLength", limitingInterface.Len())
e.logger.Debug("Calling the inner handler for Create event.", "name", ev.Object.GetName(), "namespace", ev.Object.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Create(ctx, ev, limitingInterface)
}
func (e *EventHandler) Update(ctx context.Context, ev event.UpdateEvent, limitingInterface workqueue.RateLimitingInterface) {
func (e *EventHandler) Update(ctx context.Context, ev event.UpdateEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Update event.", "name", ev.ObjectOld.GetName(), "queueLength", limitingInterface.Len())
e.logger.Debug("Calling the inner handler for Update event.", "name", ev.ObjectOld.GetName(), "namespace", ev.ObjectOld.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Update(ctx, ev, limitingInterface)
}
func (e *EventHandler) Delete(ctx context.Context, ev event.DeleteEvent, limitingInterface workqueue.RateLimitingInterface) {
func (e *EventHandler) Delete(ctx context.Context, ev event.DeleteEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Delete event.", "name", ev.Object.GetName(), "queueLength", limitingInterface.Len())
e.logger.Debug("Calling the inner handler for Delete event.", "name", ev.Object.GetName(), "namespace", ev.Object.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Delete(ctx, ev, limitingInterface)
}
func (e *EventHandler) Generic(ctx context.Context, ev event.GenericEvent, limitingInterface workqueue.RateLimitingInterface) {
func (e *EventHandler) Generic(ctx context.Context, ev event.GenericEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Generic event.", "name", ev.Object.GetName(), "queueLength", limitingInterface.Len())
e.logger.Debug("Calling the inner handler for Generic event.", "name", ev.Object.GetName(), "namespace", ev.Object.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Generic(ctx, ev, limitingInterface)
}
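The event handler migration above replaces client-go's untyped `workqueue.RateLimitingInterface`/`workqueue.RateLimiter` with their generic `Typed...[reconcile.Request]` counterparts, eliminating `interface{}` items and casts. A trimmed, dependency-free analogue of the typed rate limiter idea (the real client-go types have richer backoff semantics):

```go
package main

import (
	"fmt"
	"sync"
)

// Request mimics reconcile.Request's role as the typed queue item.
type Request struct{ Namespace, Name string }

// TypedRateLimiter is a simplified, generic analogue of
// workqueue.TypedRateLimiter[T]: per-item requeue counting keyed by the
// typed item itself, no type assertions needed.
type TypedRateLimiter[T comparable] struct {
	mu       sync.Mutex
	requeues map[T]int
}

func NewTypedRateLimiter[T comparable]() *TypedRateLimiter[T] {
	return &TypedRateLimiter[T]{requeues: map[T]int{}}
}

// NumRequeues reports how many times the item has been requeued,
// mirroring the failureLimit check in RequestReconcile above.
func (r *TypedRateLimiter[T]) NumRequeues(item T) int {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.requeues[item]
}

// When records a requeue; the real API returns a backoff duration.
func (r *TypedRateLimiter[T]) When(item T) int {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.requeues[item]++
	return r.requeues[item]
}

// Forget clears retry state for the item, as EventHandler.Forget does.
func (r *TypedRateLimiter[T]) Forget(item T) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.requeues, item)
}

func main() {
	rl := NewTypedRateLimiter[Request]()
	item := Request{Namespace: "default", Name: "example"}
	rl.When(item)
	rl.When(item)
	fmt.Println(rl.NumRequeues(item)) // 2
	rl.Forget(item)
	fmt.Println(rl.NumRequeues(item)) // 0
}
```

Note how keying by a `Request` value also explains the companion change from bare `name string` keys to `types.NamespacedName`: namespaced MRs need both fields to identify an item.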

View File

@ -7,9 +7,11 @@ package controller
import (
"context"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/terraform"
"k8s.io/apimachinery/pkg/types"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// TODO(muvaf): It's a bit weird that the functions return the struct of a
@ -40,7 +42,7 @@ type Store interface {
// CallbackProvider provides functions that can be called with the result of
// async operations.
type CallbackProvider interface {
Create(name string) terraform.CallbackFn
Update(name string) terraform.CallbackFn
Destroy(name string) terraform.CallbackFn
Create(name types.NamespacedName) terraform.CallbackFn
Update(name types.NamespacedName) terraform.CallbackFn
Destroy(name types.NamespacedName) terraform.CallbackFn
}

View File

@ -8,14 +8,14 @@ import (
"sync"
"sync/atomic"
"github.com/crossplane/crossplane-runtime/pkg/logging"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/hashicorp/terraform-plugin-go/tfprotov5"
tfsdk "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"k8s.io/apimachinery/pkg/types"
"github.com/crossplane/upjet/pkg/resource"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// AsyncTracker holds information for a managed resource to track the
@ -189,7 +189,7 @@ func (ops *OperationTrackerStore) Tracker(tr resource.Terraformed) *AsyncTracker
defer ops.mu.Unlock()
tracker, ok := ops.store[tr.GetUID()]
if !ok {
l := ops.logger.WithValues("trackerUID", tr.GetUID(), "resourceName", tr.GetName(), "gvk", tr.GetObjectKind().GroupVersionKind().String())
l := ops.logger.WithValues("trackerUID", tr.GetUID(), "resourceName", tr.GetName(), "resourceNamespace", tr.GetNamespace(), "gvk", tr.GetObjectKind().GroupVersionKind().String())
ops.store[tr.GetUID()] = NewAsyncTracker(WithAsyncTrackerLogger(l))
tracker = ops.store[tr.GetUID()]
}

View File

@ -5,14 +5,12 @@
package controller
import (
"crypto/tls"
"time"
"github.com/crossplane/crossplane-runtime/pkg/controller"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/crossplane/crossplane-runtime/v2/pkg/controller"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// Options contains configuration options for a given Upjet controller instance.
@ -34,14 +32,6 @@ type Options struct {
// preparing the auth token for Terraform CLI.
SetupFn terraform.SetupFn
// SecretStoreConfigGVK is the GroupVersionKind for the Secret StoreConfig
// resource. Setting this enables External Secret Stores for the controller
// by adding connection.DetailsManager as a ConnectionPublisher.
SecretStoreConfigGVK *schema.GroupVersionKind
// ESSOptions for External Secret Stores.
ESSOptions *ESSOptions
// PollJitter adds the specified jitter to the configured reconcile period
// of the up-to-date resources in managed.Reconciler.
PollJitter time.Duration
@ -50,9 +40,3 @@ type Options struct {
// provider's controllerruntime.Manager.
StartWebhooks bool
}
// ESSOptions for External Secret Stores.
type ESSOptions struct {
TLSConfig *tls.Config
TLSSecretName *string
}

View File

@ -0,0 +1,179 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"bytes"
"fmt"
"io"
"log"
"os"
"path/filepath"
"strings"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
k8sschema "k8s.io/apimachinery/pkg/runtime/schema"
kyaml "k8s.io/apimachinery/pkg/util/yaml"
"sigs.k8s.io/yaml"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
)
// ConvertSingletonListToEmbeddedObject generates example manifests for
// APIs whose singleton lists were converted to embedded objects in their
// new API versions. All manifests under `startPath` are scanned, and the
// license header at the specified path `licenseHeaderPath` is prepended
// to the converted example manifests.
func ConvertSingletonListToEmbeddedObject(pc *config.Provider, startPath, licenseHeaderPath string) error {
resourceRegistry := prepareResourceRegistry(pc)
var license string
var lErr error
if licenseHeaderPath != "" {
license, lErr = getLicenseHeader(licenseHeaderPath)
if lErr != nil {
return errors.Wrap(lErr, "failed to get license header")
}
}
err := filepath.Walk(startPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
return errors.Wrapf(err, "walk failed: %s", startPath)
}
var convertedFileContent string
if !info.IsDir() && strings.HasSuffix(info.Name(), ".yaml") {
log.Printf("Converting: %s\n", path)
content, err := os.ReadFile(filepath.Clean(path))
if err != nil {
return errors.Wrapf(err, "failed to read the %s file", path)
}
examples, err := decodeExamples(string(content))
if err != nil {
return errors.Wrap(err, "failed to decode examples")
}
rootResource := resourceRegistry[fmt.Sprintf("%s/%s", examples[0].GroupVersionKind().Kind, examples[0].GroupVersionKind().Group)]
if rootResource == nil {
log.Printf("Warning: Skipping %s because the corresponding resource could not be found in the provider", path)
return nil
}
newPath := strings.ReplaceAll(path, examples[0].GroupVersionKind().Version, rootResource.Version)
if path == newPath {
return nil
}
annotationValue := strings.ToLower(fmt.Sprintf("%s/%s/%s", rootResource.ShortGroup, rootResource.Version, rootResource.Kind))
for _, e := range examples {
if resource, ok := resourceRegistry[fmt.Sprintf("%s/%s", e.GroupVersionKind().Kind, e.GroupVersionKind().Group)]; ok {
conversionPaths := resource.CRDListConversionPaths()
if conversionPaths != nil && e.GroupVersionKind().Version != resource.Version {
for i, cp := range conversionPaths {
// Here, for the manifests to be converted, only the `forProvider`
// field is converted, assuming the `initProvider` field is empty
// in the spec.
conversionPaths[i] = "spec.forProvider." + cp
}
converted, err := conversion.Convert(e.Object, conversionPaths, conversion.ToEmbeddedObject, nil)
if err != nil {
return errors.Wrapf(err, "failed to convert example to embedded object in manifest %s", path)
}
e.Object = converted
e.SetGroupVersionKind(k8sschema.GroupVersionKind{
Group: e.GroupVersionKind().Group,
Version: resource.Version,
Kind: e.GetKind(),
})
}
annotations := e.GetAnnotations()
if annotations == nil {
annotations = make(map[string]string)
log.Printf("Missing annotations: %s", path)
}
annotations["meta.upbound.io/example-id"] = annotationValue
e.SetAnnotations(annotations)
}
}
convertedFileContent = license + "\n\n"
if err := writeExampleContent(path, convertedFileContent, examples, newPath); err != nil {
return errors.Wrap(err, "failed to write example content")
}
}
return nil
})
if err != nil {
log.Printf("Error walking the path %q: %v\n", startPath, err)
}
return nil
}
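The walk above derives each manifest's new location by substituting the example's old API version segment in the file path with the resource's current version, and skips files whose path is unchanged. A dependency-free sketch of that derivation; the sample path and versions are illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// newManifestPath mirrors the path derivation above: replace the old API
// version segment of the example path with the resource's current version.
func newManifestPath(path, oldVersion, newVersion string) string {
	return strings.ReplaceAll(path, oldVersion, newVersion)
}

func main() {
	p := newManifestPath("examples/ec2/v1beta1/instance.yaml", "v1beta1", "v1beta2")
	// When old and new versions are equal, the path is unchanged and the
	// caller skips the file.
	fmt.Println(p)
}
```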
func writeExampleContent(path string, convertedFileContent string, examples []*unstructured.Unstructured, newPath string) error {
for i, e := range examples {
var convertedData []byte
e := e
convertedData, err := yaml.Marshal(&e)
if err != nil {
return errors.Wrap(err, "failed to marshal example to yaml")
}
if i == len(examples)-1 {
convertedFileContent += string(convertedData)
} else {
convertedFileContent += string(convertedData) + "\n---\n\n"
}
}
dir := filepath.Dir(newPath)
// Create all necessary directories if they do not exist
if err := os.MkdirAll(dir, os.ModePerm); err != nil {
return errors.Wrap(err, "failed to create directory")
}
f, err := os.Create(filepath.Clean(newPath))
if err != nil {
return errors.Wrap(err, "failed to create file")
}
defer f.Close() //nolint:errcheck // close the file handle; example generation tolerates a failed close
if _, err := f.WriteString(convertedFileContent); err != nil {
return errors.Wrap(err, "failed to write to file")
}
log.Printf("Converted: %s\n", path)
return nil
}
func getLicenseHeader(licensePath string) (string, error) {
licenseData, err := os.ReadFile(licensePath)
if err != nil {
return "", errors.Wrapf(err, "failed to read license file: %s", licensePath)
}
return string(licenseData), nil
}
func prepareResourceRegistry(pc *config.Provider) map[string]*config.Resource {
reg := map[string]*config.Resource{}
for _, r := range pc.Resources {
reg[fmt.Sprintf("%s/%s.%s", r.Kind, r.ShortGroup, pc.RootGroup)] = r
}
return reg
}
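prepareResourceRegistry keys each resource by `Kind/ShortGroup.RootGroup`, which lines up with the `Kind/Group` lookups performed on decoded manifests above (a manifest's group is the short group qualified by the provider's root group). A minimal stand-alone sketch of the key construction; the kind, group, and root-group values here are hypothetical:

```go
package main

import "fmt"

// resource mirrors only the fields prepareResourceRegistry reads.
type resource struct {
	Kind       string
	ShortGroup string
}

// registryKey builds the lookup key "Kind/ShortGroup.RootGroup",
// e.g. "VPC/ec2.aws.upbound.io", matching a decoded manifest's
// "Kind/Group" since the manifest group is ShortGroup.RootGroup.
func registryKey(r resource, rootGroup string) string {
	return fmt.Sprintf("%s/%s.%s", r.Kind, r.ShortGroup, rootGroup)
}

func main() {
	fmt.Println(registryKey(resource{Kind: "VPC", ShortGroup: "ec2"}, "aws.upbound.io"))
}
```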
func decodeExamples(content string) ([]*unstructured.Unstructured, error) {
var manifests []*unstructured.Unstructured
decoder := kyaml.NewYAMLOrJSONDecoder(bytes.NewBufferString(content), 1024)
for {
u := &unstructured.Unstructured{}
if err := decoder.Decode(&u); err != nil {
if errors.Is(err, io.EOF) {
break
}
return nil, errors.Wrap(err, "cannot decode manifest")
}
if u != nil {
manifests = append(manifests, u)
}
}
return manifests, nil
}
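decodeExamples relies on apimachinery's YAMLOrJSONDecoder to iterate `---`-separated documents until EOF. As a rough, dependency-free sketch of the same iteration (the real decoder additionally handles JSON input, buffering, and separators at the start of the stream):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDocs is a simplified stand-in for the YAMLOrJSONDecoder loop in
// decodeExamples: walk "---"-separated documents, skipping empty ones.
func splitDocs(content string) []string {
	var docs []string
	for _, d := range strings.Split(content, "\n---\n") {
		if strings.TrimSpace(d) == "" {
			continue // ignore empty documents between separators
		}
		docs = append(docs, strings.TrimSpace(d))
	}
	return docs
}

func main() {
	manifest := "apiVersion: v1\nkind: ConfigMap\n---\napiVersion: v1\nkind: Secret\n"
	fmt.Println(len(splitDocs(manifest)))
}
```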

View File

@ -14,17 +14,17 @@ import (
"sort"
"strings"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
xpmeta "github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
xpmeta "github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/pkg/errors"
"sigs.k8s.io/yaml"
"github.com/crossplane/upjet/pkg/config"
"github.com/crossplane/upjet/pkg/registry/reference"
"github.com/crossplane/upjet/pkg/resource/json"
tjtypes "github.com/crossplane/upjet/pkg/types"
"github.com/crossplane/upjet/pkg/types/name"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/registry/reference"
"github.com/crossplane/upjet/v2/pkg/resource/json"
tjtypes "github.com/crossplane/upjet/v2/pkg/types"
"github.com/crossplane/upjet/v2/pkg/types/name"
)
var (
@ -42,27 +42,53 @@ const (
// Generator generates example manifests for Terraform resources under the
// examples-generated directory.
type Generator struct {
reference.Injector
rootDir string
exampleDir string
configResources map[string]*config.Resource
resources map[string]*reference.PavedWithManifest
exampleNamespace string
localSecretRefs bool
}
type GeneratorOption func(*Generator)
// WithLocalSecretRefs configures the example generator to
// generate examples with local secret references,
// i.e. no namespace specified.
func WithLocalSecretRefs() GeneratorOption {
return func(g *Generator) {
g.localSecretRefs = true
}
}
// WithNamespacedExamples configures the example generator to
// generate examples with the default namespace.
func WithNamespacedExamples() GeneratorOption {
return func(g *Generator) {
g.exampleNamespace = defaultNamespace
}
}
// NewGenerator returns a configured Generator
func NewGenerator(rootDir, modulePath, shortName string, configResources map[string]*config.Resource) *Generator {
return &Generator{
func NewGenerator(exampleDir, apisModulePath, shortName string, configResources map[string]*config.Resource, opts ...GeneratorOption) *Generator {
g := &Generator{
Injector: reference.Injector{
ModulePath: modulePath,
ModulePath: apisModulePath,
ProviderShortName: shortName,
},
rootDir: rootDir,
exampleDir: exampleDir,
configResources: configResources,
resources: make(map[string]*reference.PavedWithManifest),
}
for _, opt := range opts {
opt(g)
}
return g
}
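NewGenerator now accepts functional options (WithLocalSecretRefs, WithNamespacedExamples) instead of growing its parameter list. A self-contained sketch of the pattern using a trimmed stand-in type; the names here are illustrative, not upjet's actual API:

```go
package main

import "fmt"

// generator is a trimmed stand-in for Generator, carrying only the two
// fields the options above configure.
type generator struct {
	exampleNamespace string
	localSecretRefs  bool
}

type generatorOption func(*generator)

// withNamespacedExamples sets the namespace stamped into examples.
func withNamespacedExamples(ns string) generatorOption {
	return func(g *generator) { g.exampleNamespace = ns }
}

// withLocalSecretRefs drops namespaces from generated secret references.
func withLocalSecretRefs() generatorOption {
	return func(g *generator) { g.localSecretRefs = true }
}

// newGenerator applies each option to a zero-valued generator, so callers
// opt in to non-default behavior without positional arguments.
func newGenerator(opts ...generatorOption) *generator {
	g := &generator{}
	for _, opt := range opts {
		opt(g)
	}
	return g
}

func main() {
	g := newGenerator(withNamespacedExamples("default"), withLocalSecretRefs())
	fmt.Println(g.exampleNamespace, g.localSecretRefs)
}
```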
// StoreExamples stores the generated example manifests under examples-generated in
// their respective API groups.
func (eg *Generator) StoreExamples() error { // nolint:gocyclo
func (eg *Generator) StoreExamples() error { //nolint:gocyclo
for rn, pm := range eg.resources {
manifestDir := filepath.Dir(pm.ManifestPath)
if err := os.MkdirAll(manifestDir, 0750); err != nil {
@ -98,7 +124,7 @@ func (eg *Generator) StoreExamples() error { // nolint:gocyclo
// e.g. meta.upbound.io/example-id: ec2/v1beta1/instance
eGroup := fmt.Sprintf("%s/%s/%s", strings.ToLower(r.ShortGroup), r.Version, strings.ToLower(r.Kind))
pmd := paveCRManifest(exampleParams, dr.Config,
reference.NewRefPartsFromResourceName(dn).ExampleName, dr.Group, dr.Version, eGroup)
reference.NewRefPartsFromResourceName(dn).ExampleName, dr.Group, dr.Version, eGroup, eg.exampleNamespace, eg.localSecretRefs)
if err := eg.writeManifest(&buff, pmd, context); err != nil {
return errors.Wrapf(err, "cannot store example manifest for %s dependency: %s", rn, dn)
}
@ -115,10 +141,10 @@ func (eg *Generator) StoreExamples() error { // nolint:gocyclo
return nil
}
func paveCRManifest(exampleParams map[string]any, r *config.Resource, eName, group, version, eGroup string) *reference.PavedWithManifest {
func paveCRManifest(exampleParams map[string]any, r *config.Resource, eName, group, version, eGroup, namespace string, localSecretRefs bool) *reference.PavedWithManifest {
delete(exampleParams, "depends_on")
delete(exampleParams, "lifecycle")
transformFields(r, exampleParams, r.ExternalName.OmittedFields, "")
transformFields(r, exampleParams, r.ExternalName.OmittedFields, "", localSecretRefs)
metadata := map[string]any{
"labels": map[string]string{
labelExampleName: eName,
@ -127,6 +153,9 @@ func paveCRManifest(exampleParams map[string]any, r *config.Resource, eName, gro
annotationExampleGroup: eGroup,
},
}
if namespace != "" {
metadata["namespace"] = namespace
}
example := map[string]any{
"apiVersion": fmt.Sprintf("%s/%s", group, version),
"kind": r.Kind,
@ -184,8 +213,8 @@ func (eg *Generator) Generate(group, version string, r *config.Resource) error {
groupPrefix := strings.ToLower(strings.Split(group, ".")[0])
// e.g. gvk = ec2/v1beta1/instance
gvk := fmt.Sprintf("%s/%s/%s", groupPrefix, version, strings.ToLower(r.Kind))
pm := paveCRManifest(rm.Examples[0].Paved.UnstructuredContent(), r, rm.Examples[0].Name, group, version, gvk)
manifestDir := filepath.Join(eg.rootDir, "examples-generated", groupPrefix, r.Version)
pm := paveCRManifest(rm.Examples[0].Paved.UnstructuredContent(), r, rm.Examples[0].Name, group, version, gvk, eg.exampleNamespace, eg.localSecretRefs)
manifestDir := filepath.Join(eg.exampleDir, groupPrefix, r.Version)
pm.ManifestPath = filepath.Join(manifestDir, fmt.Sprintf("%s.yaml", strings.ToLower(r.Kind)))
eg.resources[fmt.Sprintf("%s.%s", r.Name, reference.Wildcard)] = pm
return nil
@ -206,7 +235,7 @@ func isStatus(r *config.Resource, attr string) bool {
return tjtypes.IsObservation(s)
}
func transformFields(r *config.Resource, params map[string]any, omittedFields []string, namePrefix string) { // nolint:gocyclo
func transformFields(r *config.Resource, params map[string]any, omittedFields []string, namePrefix string, localSecretRefs bool) { //nolint:gocyclo
for n := range params {
hName := getHierarchicalName(namePrefix, n)
if isStatus(r, hName) {
@ -224,7 +253,7 @@ func transformFields(r *config.Resource, params map[string]any, omittedFields []
for n, v := range params {
switch pT := v.(type) {
case map[string]any:
transformFields(r, pT, omittedFields, getHierarchicalName(namePrefix, n))
transformFields(r, pT, omittedFields, getHierarchicalName(namePrefix, n), localSecretRefs)
case []any:
for _, e := range pT {
@ -232,7 +261,7 @@ func transformFields(r *config.Resource, params map[string]any, omittedFields []
if !ok {
continue
}
transformFields(r, eM, omittedFields, getHierarchicalName(namePrefix, n))
transformFields(r, eM, omittedFields, getHierarchicalName(namePrefix, n), localSecretRefs)
}
}
}
@ -250,11 +279,14 @@ func transformFields(r *config.Resource, params map[string]any, omittedFields []
switch {
case sch.Sensitive:
secretName, secretKey := getSecretRef(v)
params[fn.LowerCamelComputed+"SecretRef"] = getRefField(v, map[string]any{
"name": secretName,
"namespace": defaultNamespace,
"key": secretKey,
})
ref := map[string]any{
"name": secretName,
"key": secretKey,
}
if !localSecretRefs {
ref["namespace"] = defaultNamespace
}
params[fn.LowerCamelComputed+"SecretRef"] = getRefField(v, ref)
case r.References[fieldPath] != config.Reference{}:
switch v.(type) {
case []any:
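The localSecretRefs flag above controls whether generated secret references carry a namespace: namespaced MRs use local (namespace-free) references, while the legacy cluster-scoped form keeps the default namespace. A stand-alone sketch of that branch; the secret name and key are placeholders:

```go
package main

import "fmt"

const defaultNamespace = "upbound-system"

// secretRef mirrors the sensitive-field branch above: the namespace field
// is added only when cluster-scoped (legacy) secret references are wanted.
func secretRef(name, key string, localSecretRefs bool) map[string]any {
	ref := map[string]any{
		"name": name,
		"key":  key,
	}
	if !localSecretRefs {
		ref["namespace"] = defaultNamespace
	}
	return ref
}

func main() {
	fmt.Println(len(secretRef("example-creds", "password", true)))  // local: name + key only
	fmt.Println(len(secretRef("example-creds", "password", false))) // legacy: adds namespace
}
```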

View File

@ -6,10 +6,11 @@ package metrics
import (
"context"
"fmt"
"sync"
"time"
"github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/apimachinery/pkg/runtime/schema"
@ -44,6 +45,20 @@ var (
Buckets: []float64{1, 5, 10, 15, 30, 60, 120, 300, 600, 1800, 3600},
}, []string{"operation"})
// ExternalAPICalls is a counter metric of the number of external
// API calls. "service" and "operation" labels could be used to
// classify calls into a two-level hierarchy, in which calls are
// "operations" that belong to a "service". Users should beware of
// performance implications of high cardinality that could occur
// when there are many services and operations. See:
// https://prometheus.io/docs/practices/naming/#labels
ExternalAPICalls = prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: promNSUpjet,
Subsystem: promSysResource,
Name: "external_api_calls_total",
Help: "The number of external API calls.",
}, []string{"service", "operation"})
// DeletionTime is the histogram metric for collecting statistics on the
// intervals between the deletion timestamp and the moment when
// the resource is observed to be missing (actually deleted).
@ -110,6 +125,13 @@ type Observations struct {
observeReconcileDelay bool
}
// NameForManaged returns the metric observation key for a managed
// resource: "namespace/name" when namespaced, the bare name otherwise.
func NameForManaged(mg resource.Managed) string {
if mg.GetNamespace() == "" {
return mg.GetName()
}
return fmt.Sprintf("%s/%s", mg.GetNamespace(), mg.GetName())
}
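NameForManaged makes metric recording keys namespace-aware so that two same-named MRs in different namespaces do not share an observation entry, while cluster-scoped resources keep their bare name. The same logic as a dependency-free sketch:

```go
package main

import "fmt"

// metricKey mirrors NameForManaged: cluster-scoped resources keep their
// bare name; namespaced ones are keyed "namespace/name" so that "db" in
// team-a and "db" in team-b record separate observations.
func metricKey(namespace, name string) string {
	if namespace == "" {
		return name
	}
	return fmt.Sprintf("%s/%s", namespace, name)
}

func main() {
	fmt.Println(metricKey("", "db"))       // cluster-scoped
	fmt.Println(metricKey("team-a", "db")) // namespaced
}
```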
func NewMetricRecorder(gvk schema.GroupVersionKind, c cluster.Cluster, pollInterval time.Duration) *MetricRecorder {
return &MetricRecorder{
gvk: gvk,
@ -160,7 +182,7 @@ func (r *MetricRecorder) Start(ctx context.Context) error {
obj = final.Obj
}
managed := obj.(resource.Managed)
r.observations.Delete(managed.GetName())
r.observations.Delete(NameForManaged(managed))
},
})
if err != nil {
@ -174,5 +196,5 @@ func (r *MetricRecorder) Start(ctx context.Context) error {
}
func init() {
metrics.Registry.MustRegister(CLITime, CLIExecutions, TFProcesses, TTRMeasurements, ExternalAPITime, DeletionTime, ReconcileDelay)
metrics.Registry.MustRegister(CLITime, CLIExecutions, TFProcesses, TTRMeasurements, ExternalAPITime, ExternalAPICalls, DeletionTime, ReconcileDelay)
}

View File

@ -1,347 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"strconv"
v1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/resource/unstructured/claim"
"github.com/crossplane/crossplane-runtime/pkg/resource/unstructured/composite"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
const (
stepPauseManaged step = iota
stepPauseComposites
stepCreateNewManaged
stepNewCompositions
stepEditComposites
stepEditClaims
stepDeletionPolicyOrphan
stepRemoveFinalizers
stepDeleteOldManaged
stepStartManaged
stepStartComposites
// this must be the last step
stepAPIEnd
)
func getAPIMigrationSteps() []step {
steps := make([]step, 0, stepAPIEnd)
for i := step(0); i < stepAPIEnd; i++ {
steps = append(steps, i)
}
return steps
}
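The (now-removed) migration framework enumerates its steps with consecutive iota constants terminated by a sentinel, so "all steps in order" is a loop up to the sentinel rather than a hand-maintained list. A minimal sketch of the idiom with hypothetical step names:

```go
package main

import "fmt"

type step int

// Consecutive iota values with a terminal sentinel; new steps are added
// before the sentinel and automatically appear in allSteps.
const (
	stepPause step = iota
	stepCreate
	stepStart
	stepEnd // must stay last
)

// allSteps mirrors getAPIMigrationSteps: iterate up to the sentinel.
func allSteps() []step {
	steps := make([]step, 0, stepEnd)
	for i := step(0); i < stepEnd; i++ {
		steps = append(steps, i)
	}
	return steps
}

func main() {
	fmt.Println(len(allSteps()))
}
```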
func getAPIMigrationStepsFileSystemMode() []step {
return []step{
stepCreateNewManaged,
stepNewCompositions,
stepEditComposites,
stepEditClaims,
stepStartManaged,
stepStartComposites,
// this must be the last step
stepAPIEnd,
}
}
func (pg *PlanGenerator) addStepsForManagedResource(u *UnstructuredWithMetadata) error {
if u.Metadata.Category != CategoryManaged {
if _, ok, err := toManagedResource(pg.registry.scheme, u.Object); err != nil || !ok {
// not a managed resource or unable to determine
// whether it's a managed resource
return nil //nolint:nilerr
}
}
qName := getQualifiedName(u.Object)
if err := pg.stepPauseManagedResource(u, qName); err != nil {
return err
}
if err := pg.stepOrphanManagedResource(u, qName); err != nil {
return err
}
if err := pg.stepRemoveFinalizersManagedResource(u, qName); err != nil {
return err
}
pg.stepDeleteOldManagedResource(u)
orphaned, err := pg.stepOrhanMR(*u)
if err != nil {
return err
}
if !orphaned {
return nil
}
_, err = pg.stepRevertOrhanMR(*u)
return err
}
func (pg *PlanGenerator) stepStartManagedResource(u *UnstructuredWithMetadata) error {
if !pg.stepEnabled(stepStartManaged) {
return nil
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepStartManaged).Name, getQualifiedName(u.Object))
pg.stepAPI(stepStartManaged).Patch.Files = append(pg.stepAPI(stepStartManaged).Patch.Files, u.Metadata.Path)
return pg.pause(*u, false)
}
func (pg *PlanGenerator) stepPauseManagedResource(u *UnstructuredWithMetadata, qName string) error {
if !pg.stepEnabled(stepPauseManaged) {
return nil
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepPauseManaged).Name, qName)
pg.stepAPI(stepPauseManaged).Patch.Files = append(pg.stepAPI(stepPauseManaged).Patch.Files, u.Metadata.Path)
return pg.pause(*u, true)
}
func (pg *PlanGenerator) stepPauseComposite(u *UnstructuredWithMetadata) error {
if !pg.stepEnabled(stepPauseComposites) {
return nil
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepPauseComposites).Name, getQualifiedName(u.Object))
pg.stepAPI(stepPauseComposites).Patch.Files = append(pg.stepAPI(stepPauseComposites).Patch.Files, u.Metadata.Path)
return pg.pause(*u, true)
}
func (pg *PlanGenerator) stepOrphanManagedResource(u *UnstructuredWithMetadata, qName string) error {
if !pg.stepEnabled(stepDeletionPolicyOrphan) {
return nil
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepDeletionPolicyOrphan).Name, qName)
pg.stepAPI(stepDeletionPolicyOrphan).Patch.Files = append(pg.stepAPI(stepDeletionPolicyOrphan).Patch.Files, u.Metadata.Path)
return errors.Wrap(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(u.Object, map[string]any{
"spec": map[string]any{
"deletionPolicy": string(v1.DeletionOrphan),
},
}),
},
Metadata: u.Metadata,
}), errResourceOrphan)
}
func (pg *PlanGenerator) stepRemoveFinalizersManagedResource(u *UnstructuredWithMetadata, qName string) error {
if !pg.stepEnabled(stepRemoveFinalizers) {
return nil
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepRemoveFinalizers).Name, qName)
pg.stepAPI(stepRemoveFinalizers).Patch.Files = append(pg.stepAPI(stepRemoveFinalizers).Patch.Files, u.Metadata.Path)
return pg.removeFinalizers(*u)
}
func (pg *PlanGenerator) stepDeleteOldManagedResource(u *UnstructuredWithMetadata) {
if !pg.stepEnabled(stepDeleteOldManaged) {
return
}
pg.stepAPI(stepDeleteOldManaged).Delete.Resources = append(pg.stepAPI(stepDeleteOldManaged).Delete.Resources,
Resource{
GroupVersionKind: FromGroupVersionKind(u.Object.GroupVersionKind()),
Name: u.Object.GetName(),
})
}
func (pg *PlanGenerator) stepNewManagedResource(u *UnstructuredWithMetadata) error {
if !pg.stepEnabled(stepCreateNewManaged) {
return nil
}
meta.AddAnnotations(&u.Object, map[string]string{meta.AnnotationKeyReconciliationPaused: "true"})
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepCreateNewManaged).Name, getQualifiedName(u.Object))
pg.stepAPI(stepCreateNewManaged).Apply.Files = append(pg.stepAPI(stepCreateNewManaged).Apply.Files, u.Metadata.Path)
if err := pg.target.Put(*u); err != nil {
return errors.Wrap(err, errResourceOutput)
}
return nil
}
func (pg *PlanGenerator) stepNewComposition(u *UnstructuredWithMetadata) error {
if !pg.stepEnabled(stepNewCompositions) {
return nil
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepNewCompositions).Name, getQualifiedName(u.Object))
pg.stepAPI(stepNewCompositions).Apply.Files = append(pg.stepAPI(stepNewCompositions).Apply.Files, u.Metadata.Path)
if err := pg.target.Put(*u); err != nil {
return errors.Wrap(err, errCompositionOutput)
}
return nil
}
func (pg *PlanGenerator) stepStartComposites(composites []UnstructuredWithMetadata) error {
if !pg.stepEnabled(stepStartComposites) {
return nil
}
for _, u := range composites {
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepStartComposites).Name, getQualifiedName(u.Object))
pg.stepAPI(stepStartComposites).Patch.Files = append(pg.stepAPI(stepStartComposites).Patch.Files, u.Metadata.Path)
if err := pg.pause(u, false); err != nil {
return errors.Wrap(err, errCompositeOutput)
}
}
return nil
}
func (pg *PlanGenerator) pause(u UnstructuredWithMetadata, isPaused bool) error {
return errors.Wrap(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(u.Object, map[string]any{
"metadata": map[string]any{
"annotations": map[string]any{
meta.AnnotationKeyReconciliationPaused: strconv.FormatBool(isPaused),
},
},
}),
},
Metadata: Metadata{
Path: u.Metadata.Path,
},
}), errPause)
}
func (pg *PlanGenerator) removeFinalizers(u UnstructuredWithMetadata) error {
return errors.Wrap(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(u.Object, map[string]any{
"metadata": map[string]any{
"finalizers": []any{},
},
}),
},
Metadata: Metadata{
Path: u.Metadata.Path,
},
}), errResourceRemoveFinalizer)
}
func (pg *PlanGenerator) stepEditComposites(composites []UnstructuredWithMetadata, convertedMap map[corev1.ObjectReference][]UnstructuredWithMetadata, convertedComposition map[string]string) error {
if !pg.stepEnabled(stepEditComposites) {
return nil
}
for _, u := range composites {
cp := composite.Unstructured{Unstructured: u.Object}
refs := cp.GetResourceReferences()
// compute new spec.resourceRefs so that the XR references the new MRs
newRefs := make([]corev1.ObjectReference, 0, len(refs))
for _, ref := range refs {
converted, ok := convertedMap[ref]
if !ok {
newRefs = append(newRefs, ref)
continue
}
for _, o := range converted {
gvk := o.Object.GroupVersionKind()
newRefs = append(newRefs, corev1.ObjectReference{
Kind: gvk.Kind,
Name: o.Object.GetName(),
APIVersion: gvk.GroupVersion().String(),
})
}
}
cp.SetResourceReferences(newRefs)
// compute new spec.compositionRef
if ref := cp.GetCompositionReference(); ref != nil && convertedComposition[ref.Name] != "" {
ref.Name = convertedComposition[ref.Name]
cp.SetCompositionReference(ref)
}
spec := u.Object.Object["spec"].(map[string]any)
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepEditComposites).Name, getQualifiedName(u.Object))
pg.stepAPI(stepEditComposites).Patch.Files = append(pg.stepAPI(stepEditComposites).Patch.Files, u.Metadata.Path)
if err := pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(u.Object, map[string]any{
"spec": map[string]any{
keyResourceRefs: spec[keyResourceRefs],
keyCompositionRef: spec[keyCompositionRef]},
}),
},
Metadata: u.Metadata,
}); err != nil {
return errors.Wrap(err, errCompositeOutput)
}
}
return nil
}
func (pg *PlanGenerator) stepEditClaims(claims []UnstructuredWithMetadata, convertedComposition map[string]string) error {
if !pg.stepEnabled(stepEditClaims) {
return nil
}
for _, u := range claims {
cm := claim.Unstructured{Unstructured: u.Object}
if ref := cm.GetCompositionReference(); ref != nil && convertedComposition[ref.Name] != "" {
ref.Name = convertedComposition[ref.Name]
cm.SetCompositionReference(ref)
}
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", pg.stepAPI(stepEditClaims).Name, getQualifiedName(u.Object))
pg.stepAPI(stepEditClaims).Patch.Files = append(pg.stepAPI(stepEditClaims).Patch.Files, u.Metadata.Path)
if err := pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(u.Object, map[string]any{
"spec": map[string]any{
keyCompositionRef: u.Object.Object["spec"].(map[string]any)[keyCompositionRef],
},
}),
},
Metadata: u.Metadata,
}); err != nil {
return errors.Wrap(err, errClaimOutput)
}
}
return nil
}
// NOTE: to cover different migration scenarios, we may use
// "migration templates" instead of a static plan. But a static plan should be
// fine as a start.
func (pg *PlanGenerator) stepAPI(s step) *Step { //nolint:gocyclo // all steps under a single clause for readability
stepKey := strconv.Itoa(int(s))
if pg.Plan.Spec.stepMap[stepKey] != nil {
return pg.Plan.Spec.stepMap[stepKey]
}
pg.Plan.Spec.stepMap[stepKey] = &Step{}
switch s { //nolint:exhaustive
case stepPauseManaged:
setPatchStep("pause-managed", pg.Plan.Spec.stepMap[stepKey])
case stepPauseComposites:
setPatchStep("pause-composites", pg.Plan.Spec.stepMap[stepKey])
case stepCreateNewManaged:
setApplyStep("create-new-managed", pg.Plan.Spec.stepMap[stepKey])
case stepNewCompositions:
setApplyStep("new-compositions", pg.Plan.Spec.stepMap[stepKey])
case stepEditComposites:
setPatchStep("edit-composites", pg.Plan.Spec.stepMap[stepKey])
case stepEditClaims:
setPatchStep("edit-claims", pg.Plan.Spec.stepMap[stepKey])
case stepDeletionPolicyOrphan:
setPatchStep("deletion-policy-orphan", pg.Plan.Spec.stepMap[stepKey])
case stepRemoveFinalizers:
setPatchStep("remove-finalizers", pg.Plan.Spec.stepMap[stepKey])
case stepDeleteOldManaged:
setDeleteStep("delete-old-managed", pg.Plan.Spec.stepMap[stepKey])
case stepStartManaged:
setPatchStep("start-managed", pg.Plan.Spec.stepMap[stepKey])
case stepStartComposites:
setPatchStep("start-composites", pg.Plan.Spec.stepMap[stepKey])
default:
panic(fmt.Sprintf(errInvalidStepFmt, s))
}
return pg.Plan.Spec.stepMap[stepKey]
}

View File

@ -1,32 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
const (
errEditCategory = "failed to put the edited resource of category %q: %s"
)
func (pg *PlanGenerator) stepEditCategory(source UnstructuredWithMetadata, t *UnstructuredWithMetadata) error {
s := pg.stepConfiguration(stepOrphanMRs)
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
s.Patch.Files = append(s.Patch.Files, t.Metadata.Path)
patchMap, err := computeJSONMergePathDoc(source.Object, t.Object)
if err != nil {
return err
}
return errors.Wrapf(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(t.Object, patchMap),
},
Metadata: t.Metadata,
}), errEditCategory, source.Metadata.Category, source.Object.GetName())
}

View File

@ -1,170 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"strconv"
xpmetav1 "github.com/crossplane/crossplane/apis/pkg/meta/v1"
xpmetav1alpha1 "github.com/crossplane/crossplane/apis/pkg/meta/v1alpha1"
"github.com/pkg/errors"
)
const (
// configuration migration steps follow any existing API migration steps
stepBackupMRs = iota + stepAPIEnd + 1
stepBackupComposites
stepBackupClaims
stepOrphanMRs
stepNewFamilyProvider
stepCheckHealthFamilyProvider
stepNewServiceScopedProvider
stepCheckHealthNewServiceScopedProvider
stepConfigurationPackageDisableDepResolution
stepEditPackageLock
stepDeleteMonolithicProvider
stepActivateFamilyProviderRevision
stepCheckInstallationFamilyProviderRevision
stepActivateServiceScopedProviderRevision
stepCheckInstallationServiceScopedProviderRevision
stepEditConfigurationMetadata
stepBuildConfiguration
stepPushConfiguration
stepEditConfigurationPackage
stepConfigurationPackageEnableDepResolution
stepRevertOrphanMRs
stepConfigurationEnd
)
func getConfigurationMigrationSteps() []step {
steps := make([]step, 0, stepConfigurationEnd-stepAPIEnd-1)
for i := stepAPIEnd + 1; i < stepConfigurationEnd; i++ {
steps = append(steps, i)
}
return steps
}
const (
errConfigurationMetadataOutput = "failed to output configuration YAML document"
)
func (pg *PlanGenerator) convertConfigurationMetadata(o UnstructuredWithMetadata) error {
isConverted := false
conf, err := toConfigurationMetadata(o.Object)
if err != nil {
return err
}
for _, confConv := range pg.registry.configurationMetaConverters {
if confConv.re == nil || confConv.converter == nil || !confConv.re.MatchString(o.Object.GetName()) {
continue
}
switch o.Object.GroupVersionKind().Version {
case "v1alpha1":
err = confConv.converter.ConfigurationMetadataV1Alpha1(conf.(*xpmetav1alpha1.Configuration))
default:
err = confConv.converter.ConfigurationMetadataV1(conf.(*xpmetav1.Configuration))
}
if err != nil {
return errors.Wrapf(err, "failed to call converter on Configuration: %s", conf.GetName())
}
// TODO: if a configuration converter only converts a specific version,
// (or does not convert the given configuration),
// we will have a false positive. Better to compute and check
// a diff here.
isConverted = true
}
if !isConverted {
return nil
}
return pg.stepEditConfigurationMetadata(o, &UnstructuredWithMetadata{
Object: ToSanitizedUnstructured(conf),
Metadata: o.Metadata,
})
}
func (pg *PlanGenerator) stepConfiguration(s step) *Step {
return pg.stepConfigurationWithSubStep(s, false)
}
// configurationSubStep returns the next sub-step index for the given step
// as a string, starting from "0".
func (pg *PlanGenerator) configurationSubStep(s step) string {
ss := -1
subStep := pg.subSteps[s]
if subStep != "" {
s, err := strconv.Atoi(subStep)
if err == nil {
ss = s
}
}
pg.subSteps[s] = strconv.Itoa(ss + 1)
return pg.subSteps[s]
}
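configurationSubStep keeps a per-step counter so repeated sub-steps of the same migration step get distinct map keys like `17.0`, `17.1`. A simplified stand-alone version of that counter:

```go
package main

import (
	"fmt"
	"strconv"
)

// subStepper mirrors configurationSubStep: each call for a given step
// returns the next sub-step index as a string ("0", "1", ...), persisting
// the last value in a map just as PlanGenerator.subSteps does.
type subStepper struct {
	subSteps map[int]string
}

func (p *subStepper) next(s int) string {
	ss := -1
	if v, ok := p.subSteps[s]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			ss = n
		}
	}
	p.subSteps[s] = strconv.Itoa(ss + 1)
	return p.subSteps[s]
}

func main() {
	p := &subStepper{subSteps: map[int]string{}}
	fmt.Println(p.next(17), p.next(17), p.next(3))
}
```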
func (pg *PlanGenerator) stepConfigurationWithSubStep(s step, newSubStep bool) *Step { //nolint:gocyclo // easy to follow all steps here
stepKey := strconv.Itoa(int(s))
if newSubStep {
stepKey = fmt.Sprintf("%s.%s", stepKey, pg.configurationSubStep(s))
}
if pg.Plan.Spec.stepMap[stepKey] != nil {
return pg.Plan.Spec.stepMap[stepKey]
}
pg.Plan.Spec.stepMap[stepKey] = &Step{}
switch s { //nolint:exhaustive
case stepOrphanMRs:
setPatchStep("deletion-policy-orphan", pg.Plan.Spec.stepMap[stepKey])
case stepRevertOrphanMRs:
setPatchStep("deletion-policy-delete", pg.Plan.Spec.stepMap[stepKey])
case stepNewFamilyProvider:
setApplyStep("new-ssop", pg.Plan.Spec.stepMap[stepKey])
case stepNewServiceScopedProvider:
setApplyStep("new-ssop", pg.Plan.Spec.stepMap[stepKey])
case stepConfigurationPackageDisableDepResolution:
setPatchStep("disable-dependency-resolution", pg.Plan.Spec.stepMap[stepKey])
case stepConfigurationPackageEnableDepResolution:
setPatchStep("enable-dependency-resolution", pg.Plan.Spec.stepMap[stepKey])
case stepEditConfigurationPackage:
setPatchStep("edit-configuration-package", pg.Plan.Spec.stepMap[stepKey])
case stepEditPackageLock:
setPatchStep("edit-package-lock", pg.Plan.Spec.stepMap[stepKey])
case stepDeleteMonolithicProvider:
setDeleteStep("delete-monolithic-provider", pg.Plan.Spec.stepMap[stepKey])
case stepActivateFamilyProviderRevision:
setPatchStep("activate-ssop", pg.Plan.Spec.stepMap[stepKey])
case stepActivateServiceScopedProviderRevision:
setPatchStep("activate-ssop", pg.Plan.Spec.stepMap[stepKey])
case stepEditConfigurationMetadata:
setExecStep("edit-configuration-metadata", pg.Plan.Spec.stepMap[stepKey])
case stepBackupMRs:
setExecStep("backup-managed-resources", pg.Plan.Spec.stepMap[stepKey])
case stepBackupComposites:
setExecStep("backup-composite-resources", pg.Plan.Spec.stepMap[stepKey])
case stepBackupClaims:
setExecStep("backup-claim-resources", pg.Plan.Spec.stepMap[stepKey])
case stepCheckHealthFamilyProvider:
setExecStep("wait-for-healthy", pg.Plan.Spec.stepMap[stepKey])
case stepCheckHealthNewServiceScopedProvider:
setExecStep("wait-for-healthy", pg.Plan.Spec.stepMap[stepKey])
case stepCheckInstallationFamilyProviderRevision:
setExecStep("wait-for-installed", pg.Plan.Spec.stepMap[stepKey])
case stepCheckInstallationServiceScopedProviderRevision:
setExecStep("wait-for-installed", pg.Plan.Spec.stepMap[stepKey])
case stepBuildConfiguration:
setExecStep("build-configuration", pg.Plan.Spec.stepMap[stepKey])
case stepPushConfiguration:
setExecStep("push-configuration", pg.Plan.Spec.stepMap[stepKey])
default:
panic(fmt.Sprintf(errInvalidStepFmt, s))
}
return pg.Plan.Spec.stepMap[stepKey]
}
func (pg *PlanGenerator) stepEditConfigurationMetadata(source UnstructuredWithMetadata, target *UnstructuredWithMetadata) error {
s := pg.stepConfiguration(stepEditConfigurationMetadata)
target.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(target.Object))
s.Exec.Args = []string{"-c", fmt.Sprintf("cp %s %s", target.Metadata.Path, source.Metadata.Path)}
return errors.Wrap(pg.target.Put(*target), errConfigurationMetadataOutput)
}


@ -1,143 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
v1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
const (
errSetDeletionPolicyFmt = "failed to put the patch file to set the deletion policy to %q: %s"
errEditConfigurationPackageFmt = `failed to put the edited Configuration package: %s`
)
func (pg *PlanGenerator) convertConfigurationPackage(o UnstructuredWithMetadata) error {
pkg, err := toConfigurationPackageV1(o.Object)
if err != nil {
return err
}
// add step for disabling the dependency resolution
// for the configuration package
s := pg.stepConfiguration(stepConfigurationPackageDisableDepResolution)
p := fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(o.Object))
s.Patch.Files = append(s.Patch.Files, p)
if err := pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(o.Object, map[string]any{
"spec": map[string]any{
"skipDependencyResolution": true,
},
}),
},
Metadata: Metadata{
Path: p,
},
}); err != nil {
return err
}
// add step for enabling the dependency resolution
// for the configuration package
s = pg.stepConfiguration(stepConfigurationPackageEnableDepResolution)
p = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(o.Object))
s.Patch.Files = append(s.Patch.Files, p)
if err := pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(o.Object, map[string]any{
"spec": map[string]any{
"skipDependencyResolution": false,
},
}),
},
Metadata: Metadata{
Path: p,
},
}); err != nil {
return err
}
// add the step for editing the configuration package
for _, pkgConv := range pg.registry.configurationPackageConverters {
if pkgConv.re == nil || pkgConv.converter == nil || !pkgConv.re.MatchString(pkg.Spec.Package) {
continue
}
err := pkgConv.converter.ConfigurationPackageV1(pkg)
if err != nil {
return errors.Wrapf(err, "failed to call converter on Configuration package: %s", pkg.Spec.Package)
}
// TODO: if a converter only converts a specific version,
// (or does not convert the given configuration),
// we will have a false positive. Better to compute and check
// a diff here.
target := &UnstructuredWithMetadata{
Object: ToSanitizedUnstructured(pkg),
Metadata: o.Metadata,
}
if err := pg.stepEditConfigurationPackage(o, target); err != nil {
return err
}
}
return nil
}
func (pg *PlanGenerator) stepEditConfigurationPackage(source UnstructuredWithMetadata, t *UnstructuredWithMetadata) error {
if !pg.stepEnabled(stepEditConfigurationPackage) {
return nil
}
s := pg.stepConfigurationWithSubStep(stepEditConfigurationPackage, true)
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
s.Patch.Files = append(s.Patch.Files, t.Metadata.Path)
patchMap, err := computeJSONMergePathDoc(source.Object, t.Object)
if err != nil {
return err
}
return errors.Wrapf(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(t.Object, patchMap),
},
Metadata: t.Metadata,
}), errEditConfigurationPackageFmt, t.Object.GetName())
}
func (pg *PlanGenerator) stepOrhanMR(u UnstructuredWithMetadata) (bool, error) {
return pg.stepSetDeletionPolicy(u, stepOrphanMRs, v1.DeletionOrphan, true)
}
func (pg *PlanGenerator) stepRevertOrhanMR(u UnstructuredWithMetadata) (bool, error) {
return pg.stepSetDeletionPolicy(u, stepRevertOrphanMRs, v1.DeletionDelete, false)
}
func (pg *PlanGenerator) stepSetDeletionPolicy(u UnstructuredWithMetadata, step step, policy v1.DeletionPolicy, checkCurrentPolicy bool) (bool, error) {
if !pg.stepEnabled(step) {
return false, nil
}
pv := fieldpath.Pave(u.Object.Object)
p, err := pv.GetString("spec.deletionPolicy")
if err != nil && !fieldpath.IsNotFound(err) {
return false, errors.Wrapf(err, "failed to get the current deletion policy from MR: %s", u.Object.GetName())
}
if checkCurrentPolicy && err == nil && v1.DeletionPolicy(p) == policy {
return false, nil
}
s := pg.stepConfiguration(step)
u.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(u.Object))
s.Patch.Files = append(s.Patch.Files, u.Metadata.Path)
return true, errors.Wrapf(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(u.Object, map[string]any{
"spec": map[string]any{
"deletionPolicy": string(policy),
},
}),
},
Metadata: u.Metadata,
}), errSetDeletionPolicyFmt, policy, u.Object.GetName())
}
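The patch document written by stepSetDeletionPolicy carries only the GVK, the object identity, and the new `spec.deletionPolicy`. A minimal, self-contained sketch of that document's shape, with plain maps standing in for unstructured.Unstructured (the helper name is illustrative, not from the source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildDeletionPolicyPatch mirrors the shape produced by addNameGVK plus the
// deletionPolicy override: apiVersion, kind, metadata.name and the new policy.
func buildDeletionPolicyPatch(apiVersion, kind, name, policy string) map[string]any {
	return map[string]any{
		"apiVersion": apiVersion,
		"kind":       kind,
		"metadata":   map[string]any{"name": name},
		"spec":       map[string]any{"deletionPolicy": policy},
	}
}

func main() {
	p := buildDeletionPolicyPatch("ec2.aws.upbound.io/v1beta1", "VPC", "sample-vpc", "Orphan")
	b, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(b))
}
```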


@ -1,304 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
xpmeta "github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane/apis/apiextensions/v1"
xpmetav1 "github.com/crossplane/crossplane/apis/pkg/meta/v1"
xpmetav1alpha1 "github.com/crossplane/crossplane/apis/pkg/meta/v1alpha1"
xppkgv1 "github.com/crossplane/crossplane/apis/pkg/v1"
xppkgv1beta1 "github.com/crossplane/crossplane/apis/pkg/v1beta1"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/json"
k8sjson "sigs.k8s.io/json"
)
const (
errFromUnstructured = "failed to convert from unstructured.Unstructured to the managed resource type"
errFromUnstructuredConfMeta = "failed to convert from unstructured.Unstructured to Crossplane Configuration metadata"
errFromUnstructuredConfPackage = "failed to convert from unstructured.Unstructured to Crossplane Configuration package"
errFromUnstructuredProvider = "failed to convert from unstructured.Unstructured to Crossplane Provider package"
errFromUnstructuredLock = "failed to convert from unstructured.Unstructured to Crossplane package lock"
errToUnstructured = "failed to convert from the managed resource type to unstructured.Unstructured"
errRawExtensionUnmarshal = "failed to unmarshal runtime.RawExtension"
errFmtPavedDelete = "failed to delete fieldpath %q from paved"
metadataAnnotationPaveKey = "metadata.annotations['%s']"
)
// CopyInto copies values of fields from the migration `source` object
// into the migration `target` object and fills in the target object's
// TypeMeta using the supplied `targetGVK`. While copying fields from
// migration source to migration target, the fields at the paths
// specified with `skipFieldPaths` array are skipped. This is a utility
// that can be used in the migration resource converter implementations.
// If a certain field with the same name in both the `source` and the `target`
// objects has different types in `source` and `target`, then it must be
// included in the `skipFieldPaths` and it must manually be handled in the
// conversion function.
func CopyInto(source any, target any, targetGVK schema.GroupVersionKind, skipFieldPaths ...string) (any, error) {
u := ToSanitizedUnstructured(source)
paved := fieldpath.Pave(u.Object)
skipFieldPaths = append(skipFieldPaths, "apiVersion", "kind",
fmt.Sprintf(metadataAnnotationPaveKey, xpmeta.AnnotationKeyExternalCreatePending),
fmt.Sprintf(metadataAnnotationPaveKey, xpmeta.AnnotationKeyExternalCreateSucceeded),
fmt.Sprintf(metadataAnnotationPaveKey, xpmeta.AnnotationKeyExternalCreateFailed),
fmt.Sprintf(metadataAnnotationPaveKey, corev1.LastAppliedConfigAnnotation),
)
for _, p := range skipFieldPaths {
if err := paved.DeleteField(p); err != nil {
return nil, errors.Wrapf(err, errFmtPavedDelete, p)
}
}
u.SetGroupVersionKind(targetGVK)
return target, errors.Wrap(runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, target), errFromUnstructured)
}
// sanitizeResource removes certain fields from the unstructured object.
// Certain fields, such as `metadata.creationTimestamp`, are serialized
// even when they have zero values, so this function removes them. We also
// unconditionally remove `status` so that the controller will populate it
// back.
func sanitizeResource(m map[string]any) map[string]any {
delete(m, "status")
if _, ok := m["metadata"]; !ok {
return m
}
metadata := m["metadata"].(map[string]any)
if v := metadata["creationTimestamp"]; v == nil {
delete(metadata, "creationTimestamp")
}
if len(metadata) == 0 {
delete(m, "metadata")
}
removeNilValuedKeys(m)
return m
}
// removeNilValuedKeys recursively removes nil values from the specified map
// so that the serialized manifests do not contain the corresponding
// superfluous YAML nulls.
func removeNilValuedKeys(m map[string]interface{}) {
for k, v := range m {
if v == nil {
delete(m, k)
continue
}
switch c := v.(type) {
case map[string]any:
removeNilValuedKeys(c)
case []any:
for _, e := range c {
if cm, ok := e.(map[string]interface{}); ok {
removeNilValuedKeys(cm)
}
}
}
}
}
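removeNilValuedKeys recurses into nested maps and into maps inside slices. Since the function is pure Go over plain maps, it can be exercised in isolation; the following self-contained sketch reproduces the same logic with a small demonstration:

```go
package main

import "fmt"

// removeNilValuedKeys recursively deletes nil-valued keys so the serialized
// manifest carries no superfluous YAML nulls (same recursion as the source).
func removeNilValuedKeys(m map[string]any) {
	for k, v := range m {
		if v == nil {
			delete(m, k) // deleting during range is safe in Go
			continue
		}
		switch c := v.(type) {
		case map[string]any:
			removeNilValuedKeys(c)
		case []any:
			for _, e := range c {
				if cm, ok := e.(map[string]any); ok {
					removeNilValuedKeys(cm)
				}
			}
		}
	}
}

func main() {
	m := map[string]any{
		"keep":   1,
		"drop":   nil,
		"nested": map[string]any{"alsoDrop": nil, "alsoKeep": "x"},
		"list":   []any{map[string]any{"inner": nil}},
	}
	removeNilValuedKeys(m)
	fmt.Println(m)
}
```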
// ToSanitizedUnstructured converts the specified managed resource to an
// unstructured.Unstructured. Before the converted object is
// returned, it's sanitized by removing certain fields
// (like status, metadata.creationTimestamp).
func ToSanitizedUnstructured(mg any) unstructured.Unstructured {
m, err := runtime.DefaultUnstructuredConverter.ToUnstructured(mg)
if err != nil {
panic(errors.Wrap(err, errToUnstructured))
}
return unstructured.Unstructured{
Object: sanitizeResource(m),
}
}
// FromRawExtension attempts to convert a runtime.RawExtension into
// an unstructured.Unstructured.
func FromRawExtension(r runtime.RawExtension) (unstructured.Unstructured, error) {
var m map[string]interface{}
if err := json.Unmarshal(r.Raw, &m); err != nil {
return unstructured.Unstructured{}, errors.Wrap(err, errRawExtensionUnmarshal)
}
return unstructured.Unstructured{
Object: m,
}, nil
}
// FromGroupVersionKind converts a schema.GroupVersionKind into
// a migration.GroupVersionKind.
func FromGroupVersionKind(gvk schema.GroupVersionKind) GroupVersionKind {
return GroupVersionKind{
Group: gvk.Group,
Version: gvk.Version,
Kind: gvk.Kind,
}
}
// ToComposition converts the specified unstructured.Unstructured to
// a Crossplane Composition.
// Workaround for:
// https://github.com/kubernetes-sigs/structured-merge-diff/issues/230
func ToComposition(u unstructured.Unstructured) (*xpv1.Composition, error) {
buff, err := json.Marshal(u.Object)
if err != nil {
return nil, errors.Wrap(err, "failed to marshal map to JSON")
}
c := &xpv1.Composition{}
return c, errors.Wrap(k8sjson.UnmarshalCaseSensitivePreserveInts(buff, c), "failed to unmarshal into a v1.Composition")
}
func addGVK(u unstructured.Unstructured, target map[string]any) map[string]any {
if target == nil {
target = make(map[string]any)
}
target["apiVersion"] = u.GetAPIVersion()
target["kind"] = u.GetKind()
return target
}
func addNameGVK(u unstructured.Unstructured, target map[string]any) map[string]any {
target = addGVK(u, target)
m := target["metadata"]
if m == nil {
m = make(map[string]any)
}
metadata := m.(map[string]any)
metadata["name"] = u.GetName()
if len(u.GetNamespace()) != 0 {
metadata["namespace"] = u.GetNamespace()
}
target["metadata"] = m
return target
}
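addGVK and addNameGVK build the skeleton of every patch file: type information plus the object identity, with the namespace included only when the object is namespaced. A sketch of the same merge with plain strings in place of the unstructured.Unstructured accessors (the helper name is illustrative):

```go
package main

import "fmt"

// patchSkeleton mimics addNameGVK: it stamps apiVersion, kind and
// metadata.name onto the target map, adding metadata.namespace only when
// a namespace is set, and preserves any fields already in target.
func patchSkeleton(apiVersion, kind, name, namespace string, target map[string]any) map[string]any {
	if target == nil {
		target = make(map[string]any)
	}
	target["apiVersion"] = apiVersion
	target["kind"] = kind
	md, _ := target["metadata"].(map[string]any)
	if md == nil {
		md = make(map[string]any)
	}
	md["name"] = name
	if namespace != "" {
		md["namespace"] = namespace
	}
	target["metadata"] = md
	return target
}

func main() {
	p := patchSkeleton("pkg.crossplane.io/v1", "Configuration", "platform-ref-aws", "",
		map[string]any{"spec": map[string]any{"skipDependencyResolution": true}})
	fmt.Println(p)
}
```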
func toManagedResource(c runtime.ObjectCreater, u unstructured.Unstructured) (resource.Managed, bool, error) {
gvk := u.GroupVersionKind()
if gvk == xpv1.CompositionGroupVersionKind {
return nil, false, nil
}
obj, err := c.New(gvk)
if err != nil {
return nil, false, errors.Wrapf(err, errFmtNewObject, gvk)
}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, obj); err != nil {
return nil, false, errors.Wrap(err, errFromUnstructured)
}
mg, ok := obj.(resource.Managed)
return mg, ok, nil
}
func toConfigurationPackageV1(u unstructured.Unstructured) (*xppkgv1.Configuration, error) {
conf := &xppkgv1.Configuration{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, conf); err != nil {
return nil, errors.Wrap(err, errFromUnstructuredConfPackage)
}
return conf, nil
}
func toConfigurationMetadataV1(u unstructured.Unstructured) (*xpmetav1.Configuration, error) {
conf := &xpmetav1.Configuration{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, conf); err != nil {
return nil, errors.Wrap(err, errFromUnstructuredConfMeta)
}
return conf, nil
}
func toConfigurationMetadataV1Alpha1(u unstructured.Unstructured) (*xpmetav1alpha1.Configuration, error) {
conf := &xpmetav1alpha1.Configuration{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, conf); err != nil {
return nil, errors.Wrap(err, errFromUnstructuredConfMeta)
}
return conf, nil
}
func toConfigurationMetadata(u unstructured.Unstructured) (metav1.Object, error) {
var conf metav1.Object
var err error
switch u.GroupVersionKind().Version {
case "v1alpha1":
conf, err = toConfigurationMetadataV1Alpha1(u)
default:
conf, err = toConfigurationMetadataV1(u)
}
return conf, err
}
func toProviderPackage(u unstructured.Unstructured) (*xppkgv1.Provider, error) {
pkg := &xppkgv1.Provider{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, pkg); err != nil {
return nil, errors.Wrap(err, errFromUnstructuredProvider)
}
return pkg, nil
}
func getCategory(u unstructured.Unstructured) Category {
switch u.GroupVersionKind() {
case xpv1.CompositionGroupVersionKind:
return CategoryComposition
default:
return categoryUnknown
}
}
func toPackageLock(u unstructured.Unstructured) (*xppkgv1beta1.Lock, error) {
lock := &xppkgv1beta1.Lock{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, lock); err != nil {
return nil, errors.Wrap(err, errFromUnstructuredLock)
}
return lock, nil
}
// ConvertComposedTemplatePatchesMap converts the patches of the given
// composed template using the specified conversionMap, where each key is a
// source field path and each value is the corresponding target field path.
func ConvertComposedTemplatePatchesMap(sourceTemplate xpv1.ComposedTemplate, conversionMap map[string]string) []xpv1.Patch {
var patchesToAdd []xpv1.Patch
for _, p := range sourceTemplate.Patches {
switch p.Type { //nolint:exhaustive
case xpv1.PatchTypeFromCompositeFieldPath, xpv1.PatchTypeCombineFromComposite, xpv1.PatchTypeFromEnvironmentFieldPath, xpv1.PatchTypeCombineFromEnvironment, "":
{
if p.ToFieldPath != nil {
if to, ok := conversionMap[*p.ToFieldPath]; ok {
patchesToAdd = append(patchesToAdd, xpv1.Patch{
Type: p.Type,
FromFieldPath: p.FromFieldPath,
ToFieldPath: &to,
Transforms: p.Transforms,
Policy: p.Policy,
Combine: p.Combine,
})
}
}
}
case xpv1.PatchTypeToCompositeFieldPath, xpv1.PatchTypeCombineToComposite, xpv1.PatchTypeToEnvironmentFieldPath, xpv1.PatchTypeCombineToEnvironment:
{
if p.FromFieldPath != nil {
if to, ok := conversionMap[*p.FromFieldPath]; ok {
patchesToAdd = append(patchesToAdd, xpv1.Patch{
Type: p.Type,
FromFieldPath: &to,
ToFieldPath: p.ToFieldPath,
Transforms: p.Transforms,
Policy: p.Policy,
Combine: p.Combine,
})
}
}
}
}
}
return patchesToAdd
}
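The conversion above rewrites only the side of each patch that touches the composed resource: toFieldPath for patches flowing from the composite, fromFieldPath for patches flowing back to it. A trimmed, self-contained sketch of that renaming, with a minimal local patch type standing in for xpv1.Patch and only one patch type per direction:

```go
package main

import "fmt"

type patch struct {
	Type          string
	FromFieldPath *string
	ToFieldPath   *string
}

// convertPatches renames field paths via conversionMap, rewriting
// ToFieldPath for from-composite patches and FromFieldPath for
// to-composite patches; unmatched paths are dropped, as in the source.
func convertPatches(patches []patch, conversionMap map[string]string) []patch {
	var out []patch
	for _, p := range patches {
		switch p.Type {
		case "FromCompositeFieldPath", "":
			if p.ToFieldPath != nil {
				if to, ok := conversionMap[*p.ToFieldPath]; ok {
					out = append(out, patch{Type: p.Type, FromFieldPath: p.FromFieldPath, ToFieldPath: &to})
				}
			}
		case "ToCompositeFieldPath":
			if p.FromFieldPath != nil {
				if to, ok := conversionMap[*p.FromFieldPath]; ok {
					out = append(out, patch{Type: p.Type, FromFieldPath: &to, ToFieldPath: p.ToFieldPath})
				}
			}
		}
	}
	return out
}

func main() {
	old := "spec.forProvider.tags"
	res := convertPatches(
		[]patch{{Type: "FromCompositeFieldPath", FromFieldPath: &old, ToFieldPath: &old}},
		map[string]string{"spec.forProvider.tags": "spec.forProvider.tagSet"},
	)
	fmt.Println(*res[0].ToFieldPath) // prints "spec.forProvider.tagSet"
}
```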


@ -1,21 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import "fmt"
type errUnsupportedStepType struct {
planStep Step
}
func (e errUnsupportedStepType) Error() string {
return fmt.Sprintf("executor does not support steps of type %q in step: %s", e.planStep.Type, e.planStep.Name)
}
func NewUnsupportedStepTypeError(s Step) error {
return errUnsupportedStepType{
planStep: s,
}
}


@ -1,80 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"github.com/pkg/errors"
)
func (pg *PlanGenerator) stepBackupAllResources() {
pg.stepBackupManagedResources()
pg.stepBackupCompositeResources()
pg.stepBackupClaims()
}
func (pg *PlanGenerator) stepBackupManagedResources() {
s := pg.stepConfiguration(stepBackupMRs)
s.Exec.Args = []string{"-c", "kubectl get managed -o yaml > backup/managed-resources.yaml"}
}
func (pg *PlanGenerator) stepBackupCompositeResources() {
s := pg.stepConfiguration(stepBackupComposites)
s.Exec.Args = []string{"-c", "kubectl get composite -o yaml > backup/composite-resources.yaml"}
}
func (pg *PlanGenerator) stepBackupClaims() {
s := pg.stepConfiguration(stepBackupClaims)
s.Exec.Args = []string{"-c", "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"}
}
func (pg *PlanGenerator) stepCheckHealthOfNewProvider(source UnstructuredWithMetadata, targets []*UnstructuredWithMetadata) error {
for _, t := range targets {
var s *Step
isFamilyConfig, err := checkContainsFamilyConfigProvider(targets)
if err != nil {
return errors.Wrap(err, "could not decide whether the provider is a family config provider")
}
if isFamilyConfig {
s = pg.stepConfigurationWithSubStep(stepCheckHealthFamilyProvider, true)
} else {
s = pg.stepConfigurationWithSubStep(stepCheckHealthNewServiceScopedProvider, true)
}
s.Exec.Args = []string{"-c", fmt.Sprintf("kubectl wait provider.pkg %s --for condition=Healthy", t.Object.GetName())}
t.Object.Object = addGVK(source.Object, t.Object.Object)
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
}
return nil
}
func (pg *PlanGenerator) stepCheckInstallationOfNewProvider(source UnstructuredWithMetadata, targets []*UnstructuredWithMetadata) error {
for _, t := range targets {
var s *Step
isFamilyConfig, err := checkContainsFamilyConfigProvider(targets)
if err != nil {
return errors.Wrap(err, "could not decide whether the provider is a family config provider")
}
if isFamilyConfig {
s = pg.stepConfigurationWithSubStep(stepCheckInstallationFamilyProviderRevision, true)
} else {
s = pg.stepConfigurationWithSubStep(stepCheckInstallationServiceScopedProviderRevision, true)
}
s.Exec.Args = []string{"-c", fmt.Sprintf("kubectl wait provider.pkg %s --for condition=Installed", t.Object.GetName())}
t.Object.Object = addGVK(source.Object, t.Object.Object)
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
}
return nil
}
func (pg *PlanGenerator) stepBuildConfiguration() {
s := pg.stepConfiguration(stepBuildConfiguration)
s.Exec.Args = []string{"-c", "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"}
}
func (pg *PlanGenerator) stepPushConfiguration() {
s := pg.stepConfiguration(stepPushConfiguration)
s.Exec.Args = []string{"-c", "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"}
}
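The {{PKG_ROOT}}-style tokens in these exec steps are placeholders that a caller substitutes before the plan is executed. A minimal sketch of such a substitution pass (the helper and the variable values are illustrative, not part of the source):

```go
package main

import (
	"fmt"
	"strings"
)

// substitute replaces {{NAME}} tokens in a command string with the
// supplied values, leaving unknown tokens untouched.
func substitute(cmd string, vars map[string]string) string {
	for k, v := range vars {
		cmd = strings.ReplaceAll(cmd, "{{"+k+"}}", v)
	}
	return cmd
}

func main() {
	cmd := "up xpkg build --package-root={{PKG_ROOT}} -o {{PKG_PATH}}"
	fmt.Println(substitute(cmd, map[string]string{
		"PKG_ROOT": "package",
		"PKG_PATH": "out/conf.xpkg",
	}))
}
```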


@ -1,621 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
// Code generated by MockGen. DO NOT EDIT.
// Source: github.com/crossplane/crossplane-runtime/pkg/resource (interfaces: Managed)
// Package mocks is a generated GoMock package.
package mocks
import (
reflect "reflect"
v1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
gomock "github.com/golang/mock/gomock"
v10 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
)
// MockManaged is a mock of Managed interface.
type MockManaged struct {
ctrl *gomock.Controller
recorder *MockManagedMockRecorder
}
// MockManagedMockRecorder is the mock recorder for MockManaged.
type MockManagedMockRecorder struct {
mock *MockManaged
}
// NewMockManaged creates a new mock instance.
func NewMockManaged(ctrl *gomock.Controller) *MockManaged {
mock := &MockManaged{ctrl: ctrl}
mock.recorder = &MockManagedMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockManaged) EXPECT() *MockManagedMockRecorder {
return m.recorder
}
// DeepCopyObject mocks base method.
func (m *MockManaged) DeepCopyObject() runtime.Object {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "DeepCopyObject")
ret0, _ := ret[0].(runtime.Object)
return ret0
}
// DeepCopyObject indicates an expected call of DeepCopyObject.
func (mr *MockManagedMockRecorder) DeepCopyObject() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DeepCopyObject", reflect.TypeOf((*MockManaged)(nil).DeepCopyObject))
}
// GetAnnotations mocks base method.
func (m *MockManaged) GetAnnotations() map[string]string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAnnotations")
ret0, _ := ret[0].(map[string]string)
return ret0
}
// GetAnnotations indicates an expected call of GetAnnotations.
func (mr *MockManagedMockRecorder) GetAnnotations() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAnnotations", reflect.TypeOf((*MockManaged)(nil).GetAnnotations))
}
// GetCondition mocks base method.
func (m *MockManaged) GetCondition(arg0 v1.ConditionType) v1.Condition {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCondition", arg0)
ret0, _ := ret[0].(v1.Condition)
return ret0
}
// GetCondition indicates an expected call of GetCondition.
func (mr *MockManagedMockRecorder) GetCondition(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCondition", reflect.TypeOf((*MockManaged)(nil).GetCondition), arg0)
}
// GetCreationTimestamp mocks base method.
func (m *MockManaged) GetCreationTimestamp() v10.Time {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCreationTimestamp")
ret0, _ := ret[0].(v10.Time)
return ret0
}
// GetCreationTimestamp indicates an expected call of GetCreationTimestamp.
func (mr *MockManagedMockRecorder) GetCreationTimestamp() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCreationTimestamp", reflect.TypeOf((*MockManaged)(nil).GetCreationTimestamp))
}
// GetDeletionGracePeriodSeconds mocks base method.
func (m *MockManaged) GetDeletionGracePeriodSeconds() *int64 {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetDeletionGracePeriodSeconds")
ret0, _ := ret[0].(*int64)
return ret0
}
// GetDeletionGracePeriodSeconds indicates an expected call of GetDeletionGracePeriodSeconds.
func (mr *MockManagedMockRecorder) GetDeletionGracePeriodSeconds() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetDeletionGracePeriodSeconds", reflect.TypeOf((*MockManaged)(nil).GetDeletionGracePeriodSeconds))
}
// GetDeletionPolicy mocks base method.
func (m *MockManaged) GetDeletionPolicy() v1.DeletionPolicy {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetDeletionPolicy")
ret0, _ := ret[0].(v1.DeletionPolicy)
return ret0
}
// GetDeletionPolicy indicates an expected call of GetDeletionPolicy.
func (mr *MockManagedMockRecorder) GetDeletionPolicy() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetDeletionPolicy", reflect.TypeOf((*MockManaged)(nil).GetDeletionPolicy))
}
// GetDeletionTimestamp mocks base method.
func (m *MockManaged) GetDeletionTimestamp() *v10.Time {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetDeletionTimestamp")
ret0, _ := ret[0].(*v10.Time)
return ret0
}
// GetDeletionTimestamp indicates an expected call of GetDeletionTimestamp.
func (mr *MockManagedMockRecorder) GetDeletionTimestamp() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetDeletionTimestamp", reflect.TypeOf((*MockManaged)(nil).GetDeletionTimestamp))
}
// GetFinalizers mocks base method.
func (m *MockManaged) GetFinalizers() []string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetFinalizers")
ret0, _ := ret[0].([]string)
return ret0
}
// GetFinalizers indicates an expected call of GetFinalizers.
func (mr *MockManagedMockRecorder) GetFinalizers() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetFinalizers", reflect.TypeOf((*MockManaged)(nil).GetFinalizers))
}
// GetGenerateName mocks base method.
func (m *MockManaged) GetGenerateName() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetGenerateName")
ret0, _ := ret[0].(string)
return ret0
}
// GetGenerateName indicates an expected call of GetGenerateName.
func (mr *MockManagedMockRecorder) GetGenerateName() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetGenerateName", reflect.TypeOf((*MockManaged)(nil).GetGenerateName))
}
// GetGeneration mocks base method.
func (m *MockManaged) GetGeneration() int64 {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetGeneration")
ret0, _ := ret[0].(int64)
return ret0
}
// GetGeneration indicates an expected call of GetGeneration.
func (mr *MockManagedMockRecorder) GetGeneration() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetGeneration", reflect.TypeOf((*MockManaged)(nil).GetGeneration))
}
// GetLabels mocks base method.
func (m *MockManaged) GetLabels() map[string]string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetLabels")
ret0, _ := ret[0].(map[string]string)
return ret0
}
// GetLabels indicates an expected call of GetLabels.
func (mr *MockManagedMockRecorder) GetLabels() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetLabels", reflect.TypeOf((*MockManaged)(nil).GetLabels))
}
// GetManagedFields mocks base method.
func (m *MockManaged) GetManagedFields() []v10.ManagedFieldsEntry {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetManagedFields")
ret0, _ := ret[0].([]v10.ManagedFieldsEntry)
return ret0
}
// GetManagedFields indicates an expected call of GetManagedFields.
func (mr *MockManagedMockRecorder) GetManagedFields() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetManagedFields", reflect.TypeOf((*MockManaged)(nil).GetManagedFields))
}
// GetManagementPolicies mocks base method.
func (m *MockManaged) GetManagementPolicies() v1.ManagementPolicies {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetManagementPolicies")
ret0, _ := ret[0].(v1.ManagementPolicies)
return ret0
}
// GetManagementPolicies indicates an expected call of GetManagementPolicies.
func (mr *MockManagedMockRecorder) GetManagementPolicies() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetManagementPolicies", reflect.TypeOf((*MockManaged)(nil).GetManagementPolicies))
}
// GetName mocks base method.
func (m *MockManaged) GetName() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetName")
ret0, _ := ret[0].(string)
return ret0
}
// GetName indicates an expected call of GetName.
func (mr *MockManagedMockRecorder) GetName() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetName", reflect.TypeOf((*MockManaged)(nil).GetName))
}
// GetNamespace mocks base method.
func (m *MockManaged) GetNamespace() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetNamespace")
ret0, _ := ret[0].(string)
return ret0
}
// GetNamespace indicates an expected call of GetNamespace.
func (mr *MockManagedMockRecorder) GetNamespace() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetNamespace", reflect.TypeOf((*MockManaged)(nil).GetNamespace))
}
// GetObjectKind mocks base method.
func (m *MockManaged) GetObjectKind() schema.ObjectKind {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetObjectKind")
ret0, _ := ret[0].(schema.ObjectKind)
return ret0
}
// GetObjectKind indicates an expected call of GetObjectKind.
func (mr *MockManagedMockRecorder) GetObjectKind() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetObjectKind", reflect.TypeOf((*MockManaged)(nil).GetObjectKind))
}
// GetOwnerReferences mocks base method.
func (m *MockManaged) GetOwnerReferences() []v10.OwnerReference {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetOwnerReferences")
ret0, _ := ret[0].([]v10.OwnerReference)
return ret0
}
// GetOwnerReferences indicates an expected call of GetOwnerReferences.
func (mr *MockManagedMockRecorder) GetOwnerReferences() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetOwnerReferences", reflect.TypeOf((*MockManaged)(nil).GetOwnerReferences))
}
// GetProviderConfigReference mocks base method.
func (m *MockManaged) GetProviderConfigReference() *v1.Reference {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetProviderConfigReference")
ret0, _ := ret[0].(*v1.Reference)
return ret0
}
// GetProviderConfigReference indicates an expected call of GetProviderConfigReference.
func (mr *MockManagedMockRecorder) GetProviderConfigReference() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetProviderConfigReference", reflect.TypeOf((*MockManaged)(nil).GetProviderConfigReference))
}
// GetPublishConnectionDetailsTo mocks base method.
func (m *MockManaged) GetPublishConnectionDetailsTo() *v1.PublishConnectionDetailsTo {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPublishConnectionDetailsTo")
ret0, _ := ret[0].(*v1.PublishConnectionDetailsTo)
return ret0
}
// GetPublishConnectionDetailsTo indicates an expected call of GetPublishConnectionDetailsTo.
func (mr *MockManagedMockRecorder) GetPublishConnectionDetailsTo() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPublishConnectionDetailsTo", reflect.TypeOf((*MockManaged)(nil).GetPublishConnectionDetailsTo))
}
// GetResourceVersion mocks base method.
func (m *MockManaged) GetResourceVersion() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetResourceVersion")
ret0, _ := ret[0].(string)
return ret0
}
// GetResourceVersion indicates an expected call of GetResourceVersion.
func (mr *MockManagedMockRecorder) GetResourceVersion() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetResourceVersion", reflect.TypeOf((*MockManaged)(nil).GetResourceVersion))
}
// GetSelfLink mocks base method.
func (m *MockManaged) GetSelfLink() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetSelfLink")
ret0, _ := ret[0].(string)
return ret0
}
// GetSelfLink indicates an expected call of GetSelfLink.
func (mr *MockManagedMockRecorder) GetSelfLink() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetSelfLink", reflect.TypeOf((*MockManaged)(nil).GetSelfLink))
}
// GetUID mocks base method.
func (m *MockManaged) GetUID() types.UID {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetUID")
ret0, _ := ret[0].(types.UID)
return ret0
}
// GetUID indicates an expected call of GetUID.
func (mr *MockManagedMockRecorder) GetUID() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetUID", reflect.TypeOf((*MockManaged)(nil).GetUID))
}
// GetWriteConnectionSecretToReference mocks base method.
func (m *MockManaged) GetWriteConnectionSecretToReference() *v1.SecretReference {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetWriteConnectionSecretToReference")
ret0, _ := ret[0].(*v1.SecretReference)
return ret0
}
// GetWriteConnectionSecretToReference indicates an expected call of GetWriteConnectionSecretToReference.
func (mr *MockManagedMockRecorder) GetWriteConnectionSecretToReference() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetWriteConnectionSecretToReference", reflect.TypeOf((*MockManaged)(nil).GetWriteConnectionSecretToReference))
}
// SetAnnotations mocks base method.
func (m *MockManaged) SetAnnotations(arg0 map[string]string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetAnnotations", arg0)
}
// SetAnnotations indicates an expected call of SetAnnotations.
func (mr *MockManagedMockRecorder) SetAnnotations(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetAnnotations", reflect.TypeOf((*MockManaged)(nil).SetAnnotations), arg0)
}
// SetConditions mocks base method.
func (m *MockManaged) SetConditions(arg0 ...v1.Condition) {
m.ctrl.T.Helper()
varargs := []interface{}{}
for _, a := range arg0 {
varargs = append(varargs, a)
}
m.ctrl.Call(m, "SetConditions", varargs...)
}
// SetConditions indicates an expected call of SetConditions.
func (mr *MockManagedMockRecorder) SetConditions(arg0 ...interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetConditions", reflect.TypeOf((*MockManaged)(nil).SetConditions), arg0...)
}
// SetCreationTimestamp mocks base method.
func (m *MockManaged) SetCreationTimestamp(arg0 v10.Time) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetCreationTimestamp", arg0)
}
// SetCreationTimestamp indicates an expected call of SetCreationTimestamp.
func (mr *MockManagedMockRecorder) SetCreationTimestamp(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetCreationTimestamp", reflect.TypeOf((*MockManaged)(nil).SetCreationTimestamp), arg0)
}
// SetDeletionGracePeriodSeconds mocks base method.
func (m *MockManaged) SetDeletionGracePeriodSeconds(arg0 *int64) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetDeletionGracePeriodSeconds", arg0)
}
// SetDeletionGracePeriodSeconds indicates an expected call of SetDeletionGracePeriodSeconds.
func (mr *MockManagedMockRecorder) SetDeletionGracePeriodSeconds(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetDeletionGracePeriodSeconds", reflect.TypeOf((*MockManaged)(nil).SetDeletionGracePeriodSeconds), arg0)
}
// SetDeletionPolicy mocks base method.
func (m *MockManaged) SetDeletionPolicy(arg0 v1.DeletionPolicy) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetDeletionPolicy", arg0)
}
// SetDeletionPolicy indicates an expected call of SetDeletionPolicy.
func (mr *MockManagedMockRecorder) SetDeletionPolicy(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetDeletionPolicy", reflect.TypeOf((*MockManaged)(nil).SetDeletionPolicy), arg0)
}
// SetDeletionTimestamp mocks base method.
func (m *MockManaged) SetDeletionTimestamp(arg0 *v10.Time) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetDeletionTimestamp", arg0)
}
// SetDeletionTimestamp indicates an expected call of SetDeletionTimestamp.
func (mr *MockManagedMockRecorder) SetDeletionTimestamp(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetDeletionTimestamp", reflect.TypeOf((*MockManaged)(nil).SetDeletionTimestamp), arg0)
}
// SetFinalizers mocks base method.
func (m *MockManaged) SetFinalizers(arg0 []string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetFinalizers", arg0)
}
// SetFinalizers indicates an expected call of SetFinalizers.
func (mr *MockManagedMockRecorder) SetFinalizers(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetFinalizers", reflect.TypeOf((*MockManaged)(nil).SetFinalizers), arg0)
}
// SetGenerateName mocks base method.
func (m *MockManaged) SetGenerateName(arg0 string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetGenerateName", arg0)
}
// SetGenerateName indicates an expected call of SetGenerateName.
func (mr *MockManagedMockRecorder) SetGenerateName(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetGenerateName", reflect.TypeOf((*MockManaged)(nil).SetGenerateName), arg0)
}
// SetGeneration mocks base method.
func (m *MockManaged) SetGeneration(arg0 int64) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetGeneration", arg0)
}
// SetGeneration indicates an expected call of SetGeneration.
func (mr *MockManagedMockRecorder) SetGeneration(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetGeneration", reflect.TypeOf((*MockManaged)(nil).SetGeneration), arg0)
}
// SetLabels mocks base method.
func (m *MockManaged) SetLabels(arg0 map[string]string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetLabels", arg0)
}
// SetLabels indicates an expected call of SetLabels.
func (mr *MockManagedMockRecorder) SetLabels(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetLabels", reflect.TypeOf((*MockManaged)(nil).SetLabels), arg0)
}
// SetManagedFields mocks base method.
func (m *MockManaged) SetManagedFields(arg0 []v10.ManagedFieldsEntry) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetManagedFields", arg0)
}
// SetManagedFields indicates an expected call of SetManagedFields.
func (mr *MockManagedMockRecorder) SetManagedFields(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetManagedFields", reflect.TypeOf((*MockManaged)(nil).SetManagedFields), arg0)
}
// SetManagementPolicies mocks base method.
func (m *MockManaged) SetManagementPolicies(arg0 v1.ManagementPolicies) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetManagementPolicies", arg0)
}
// SetManagementPolicies indicates an expected call of SetManagementPolicies.
func (mr *MockManagedMockRecorder) SetManagementPolicies(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetManagementPolicies", reflect.TypeOf((*MockManaged)(nil).SetManagementPolicies), arg0)
}
// SetName mocks base method.
func (m *MockManaged) SetName(arg0 string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetName", arg0)
}
// SetName indicates an expected call of SetName.
func (mr *MockManagedMockRecorder) SetName(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetName", reflect.TypeOf((*MockManaged)(nil).SetName), arg0)
}
// SetNamespace mocks base method.
func (m *MockManaged) SetNamespace(arg0 string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetNamespace", arg0)
}
// SetNamespace indicates an expected call of SetNamespace.
func (mr *MockManagedMockRecorder) SetNamespace(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetNamespace", reflect.TypeOf((*MockManaged)(nil).SetNamespace), arg0)
}
// SetOwnerReferences mocks base method.
func (m *MockManaged) SetOwnerReferences(arg0 []v10.OwnerReference) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetOwnerReferences", arg0)
}
// SetOwnerReferences indicates an expected call of SetOwnerReferences.
func (mr *MockManagedMockRecorder) SetOwnerReferences(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetOwnerReferences", reflect.TypeOf((*MockManaged)(nil).SetOwnerReferences), arg0)
}
// SetProviderConfigReference mocks base method.
func (m *MockManaged) SetProviderConfigReference(arg0 *v1.Reference) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetProviderConfigReference", arg0)
}
// SetProviderConfigReference indicates an expected call of SetProviderConfigReference.
func (mr *MockManagedMockRecorder) SetProviderConfigReference(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetProviderConfigReference", reflect.TypeOf((*MockManaged)(nil).SetProviderConfigReference), arg0)
}
// SetPublishConnectionDetailsTo mocks base method.
func (m *MockManaged) SetPublishConnectionDetailsTo(arg0 *v1.PublishConnectionDetailsTo) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetPublishConnectionDetailsTo", arg0)
}
// SetPublishConnectionDetailsTo indicates an expected call of SetPublishConnectionDetailsTo.
func (mr *MockManagedMockRecorder) SetPublishConnectionDetailsTo(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetPublishConnectionDetailsTo", reflect.TypeOf((*MockManaged)(nil).SetPublishConnectionDetailsTo), arg0)
}
// SetResourceVersion mocks base method.
func (m *MockManaged) SetResourceVersion(arg0 string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetResourceVersion", arg0)
}
// SetResourceVersion indicates an expected call of SetResourceVersion.
func (mr *MockManagedMockRecorder) SetResourceVersion(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetResourceVersion", reflect.TypeOf((*MockManaged)(nil).SetResourceVersion), arg0)
}
// SetSelfLink mocks base method.
func (m *MockManaged) SetSelfLink(arg0 string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetSelfLink", arg0)
}
// SetSelfLink indicates an expected call of SetSelfLink.
func (mr *MockManagedMockRecorder) SetSelfLink(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetSelfLink", reflect.TypeOf((*MockManaged)(nil).SetSelfLink), arg0)
}
// SetUID mocks base method.
func (m *MockManaged) SetUID(arg0 types.UID) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetUID", arg0)
}
// SetUID indicates an expected call of SetUID.
func (mr *MockManagedMockRecorder) SetUID(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetUID", reflect.TypeOf((*MockManaged)(nil).SetUID), arg0)
}
// SetWriteConnectionSecretToReference mocks base method.
func (m *MockManaged) SetWriteConnectionSecretToReference(arg0 *v1.SecretReference) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetWriteConnectionSecretToReference", arg0)
}
// SetWriteConnectionSecretToReference indicates an expected call of SetWriteConnectionSecretToReference.
func (mr *MockManagedMockRecorder) SetWriteConnectionSecretToReference(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetWriteConnectionSecretToReference", reflect.TypeOf((*MockManaged)(nil).SetWriteConnectionSecretToReference), arg0)
}
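The generated methods above all follow gomock's two-part pattern: the mock method records the invocation through the controller, and the matching recorder method registers an expectation for it. A minimal stdlib-only sketch of that record-and-verify idea, with illustrative names rather than gomock's actual API:

```go
package main

import "fmt"

// call records one expected or observed method invocation.
type call struct {
	method string
	args   []any
}

// MockManaged is a hand-rolled stand-in for a gomock-generated mock:
// expectations are registered up front, then calls are checked against them.
type MockManaged struct {
	expected []call
	observed []call
}

// Expect registers an expected call, like a recorder method would.
func (m *MockManaged) Expect(method string, args ...any) {
	m.expected = append(m.expected, call{method, args})
}

// SetName mimics a mocked setter: it records the call instead of acting.
func (m *MockManaged) SetName(name string) {
	m.observed = append(m.observed, call{"SetName", []any{name}})
}

// Verify checks that observed calls match expectations in order
// (method names only, for brevity), like ctrl.Finish().
func (m *MockManaged) Verify() error {
	if len(m.observed) != len(m.expected) {
		return fmt.Errorf("expected %d calls, observed %d", len(m.expected), len(m.observed))
	}
	for i, e := range m.expected {
		if e.method != m.observed[i].method {
			return fmt.Errorf("call %d: expected %s, observed %s", i, e.method, m.observed[i].method)
		}
	}
	return nil
}

func main() {
	m := &MockManaged{}
	m.Expect("SetName", "sample-vpc")
	m.SetName("sample-vpc")
	fmt.Println(m.Verify() == nil)
}
```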


@@ -1,130 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
//go:generate go run github.com/golang/mock/mockgen -copyright_file ../../../hack/boilerplate.txt -destination=./mocks/mock.go -package mocks github.com/crossplane/crossplane-runtime/pkg/resource Managed
package fake
import (
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/crossplane/upjet/pkg/migration/fake/mocks"
)
const (
MigrationSourceGroup = "fakesourceapi"
MigrationSourceVersion = "v1alpha1"
MigrationSourceKind = "VPC"
MigrationTargetGroup = "faketargetapi"
MigrationTargetVersion = "v1alpha1"
MigrationTargetKind = "VPC"
)
var (
MigrationSourceGVK = schema.GroupVersionKind{
Group: MigrationSourceGroup,
Version: MigrationSourceVersion,
Kind: MigrationSourceKind,
}
MigrationTargetGVK = schema.GroupVersionKind{
Group: MigrationTargetGroup,
Version: MigrationTargetVersion,
Kind: MigrationTargetKind,
}
)
type MigrationSourceObject struct {
mocks.MockManaged
// cannot inline v1.TypeMeta here as mocks.MockManaged is also inlined
APIVersion string `json:"apiVersion,omitempty"`
Kind string `json:"kind,omitempty"`
// cannot inline v1.ObjectMeta here as mocks.MockManaged is also inlined
ObjectMeta ObjectMeta `json:"metadata,omitempty"`
Spec SourceSpec `json:"spec"`
Status Status `json:"status,omitempty"`
}
type SourceSpec struct {
xpv1.ResourceSpec `json:",inline"`
ForProvider SourceSpecParameters `json:"forProvider"`
}
type EmbeddedParameter struct {
Param *string `json:"param,omitempty"`
}
type SourceSpecParameters struct {
Region *string `json:"region,omitempty"`
CIDRBlock string `json:"cidrBlock"`
Tags []Tag `json:"tags,omitempty"`
TestParam *EmbeddedParameter `json:",inline"`
}
type Tag struct {
Key string `json:"key"`
Value string `json:"value"`
}
type Status struct {
xpv1.ResourceStatus `json:",inline"`
AtProvider Observation `json:"atProvider,omitempty"`
}
type Observation struct{}
func (m *MigrationSourceObject) GetName() string {
return m.ObjectMeta.Name
}
type MigrationTargetObject struct {
mocks.MockManaged
// cannot inline v1.TypeMeta here as mocks.MockManaged is also inlined
APIVersion string `json:"apiVersion,omitempty"`
Kind string `json:"kind,omitempty"`
// cannot inline v1.ObjectMeta here as mocks.MockManaged is also inlined
ObjectMeta ObjectMeta `json:"metadata,omitempty"`
Spec TargetSpec `json:"spec"`
Status Status `json:"status,omitempty"`
}
type ObjectMeta struct {
Name string `json:"name,omitempty"`
GenerateName string `json:"generateName,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
}
type TargetSpec struct {
xpv1.ResourceSpec `json:",inline"`
ForProvider TargetSpecParameters `json:"forProvider"`
}
type TargetSpecParameters struct {
Region *string `json:"region,omitempty"`
CIDRBlock string `json:"cidrBlock"`
Tags map[string]string `json:"tags,omitempty"`
TestParam EmbeddedParameter `json:",inline"`
}
type targetObjectKind struct{}
func (t *targetObjectKind) SetGroupVersionKind(_ schema.GroupVersionKind) {}
func (t *targetObjectKind) GroupVersionKind() schema.GroupVersionKind {
return MigrationTargetGVK
}
func (m *MigrationTargetObject) GetObjectKind() schema.ObjectKind {
return &targetObjectKind{}
}
func (m *MigrationTargetObject) GetName() string {
return m.ObjectMeta.Name
}
func (m *MigrationTargetObject) GetGenerateName() string {
return m.ObjectMeta.GenerateName
}
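Note the deliberate schema mismatch between the fake source and target types: SourceSpecParameters carries `Tags []Tag` while TargetSpecParameters carries `Tags map[string]string`, so a ResourceConverter has to reshape the field. A stdlib-only sketch of that kind of conversion (convertTags is a hypothetical helper, not part of the framework):

```go
package main

import "fmt"

// Tag mirrors the source schema's list-style tags.
type Tag struct {
	Key   string
	Value string
}

// convertTags reshapes the source provider's []Tag form into the target
// provider's map[string]string form, as a resource converter would.
func convertTags(in []Tag) map[string]string {
	out := make(map[string]string, len(in))
	for _, t := range in {
		out[t.Key] = t.Value
	}
	return out
}

func main() {
	src := []Tag{{Key: "env", Value: "dev"}, {Key: "team", Value: "platform"}}
	fmt.Println(convertTags(src)["env"]) // prints "dev"
}
```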


@@ -1,178 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"bytes"
"fmt"
"os"
"path/filepath"
"github.com/pkg/errors"
"github.com/spf13/afero"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/yaml"
sigsyaml "sigs.k8s.io/yaml"
)
var (
_ Source = &FileSystemSource{}
_ Target = &FileSystemTarget{}
)
// FileSystemSource is a Source implementation that reads resources from the filesystem
type FileSystemSource struct {
index int
items []UnstructuredWithMetadata
afero afero.Afero
}
// FileSystemSourceOption allows you to configure FileSystemSource
type FileSystemSourceOption func(*FileSystemSource)
// FsWithFileSystem configures the filesystem to use. Used mostly for testing.
func FsWithFileSystem(f afero.Fs) FileSystemSourceOption {
return func(fs *FileSystemSource) {
fs.afero = afero.Afero{Fs: f}
}
}
// NewFileSystemSource returns a FileSystemSource
func NewFileSystemSource(dir string, opts ...FileSystemSourceOption) (*FileSystemSource, error) {
fs := &FileSystemSource{
afero: afero.Afero{Fs: afero.NewOsFs()},
}
for _, f := range opts {
f(fs)
}
if err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return errors.Wrap(err, fmt.Sprintf("cannot read %s", path))
}
if info.IsDir() {
return nil
}
data, err := fs.afero.ReadFile(path)
if err != nil {
return errors.Wrap(err, "cannot read source file")
}
decoder := yaml.NewYAMLOrJSONDecoder(bytes.NewReader(data), 1024)
u := &unstructured.Unstructured{}
if err := decoder.Decode(&u); err != nil {
return errors.Wrap(err, "cannot decode read data")
}
fs.items = append(fs.items, UnstructuredWithMetadata{
Object: *u,
Metadata: Metadata{
Path: path,
Category: getCategory(*u),
},
})
return nil
}); err != nil {
return nil, errors.Wrap(err, "cannot read source directory")
}
return fs, nil
}
// HasNext reports whether the source has more items.
func (fs *FileSystemSource) HasNext() (bool, error) {
return fs.index < len(fs.items), nil
}
// Next returns the next item in the slice.
func (fs *FileSystemSource) Next() (UnstructuredWithMetadata, error) {
if hasNext, _ := fs.HasNext(); hasNext {
item := fs.items[fs.index]
fs.index++
return item, nil
}
return UnstructuredWithMetadata{}, errors.New("no more elements")
}
// Reset resets the source so that resources can be reread from the beginning.
func (fs *FileSystemSource) Reset() error {
fs.index = 0
return nil
}
// FileSystemTarget is a Target implementation that writes, patches, and deletes resources on the filesystem
type FileSystemTarget struct {
afero afero.Afero
parent string
}
// FileSystemTargetOption allows you to configure FileSystemTarget
type FileSystemTargetOption func(*FileSystemTarget)
// FtWithFileSystem configures the filesystem to use. Used mostly for testing.
func FtWithFileSystem(f afero.Fs) FileSystemTargetOption {
return func(ft *FileSystemTarget) {
ft.afero = afero.Afero{Fs: f}
}
}
// WithParentDirectory configures the parent directory for the FileSystemTarget
func WithParentDirectory(parent string) FileSystemTargetOption {
return func(ft *FileSystemTarget) {
ft.parent = parent
}
}
// NewFileSystemTarget returns a FileSystemTarget
func NewFileSystemTarget(opts ...FileSystemTargetOption) *FileSystemTarget {
ft := &FileSystemTarget{
afero: afero.Afero{Fs: afero.NewOsFs()},
}
for _, f := range opts {
f(ft)
}
return ft
}
// Put writes the input object to the filesystem.
func (ft *FileSystemTarget) Put(o UnstructuredWithMetadata) error {
b, err := sigsyaml.Marshal(o.Object.Object)
if err != nil {
return errors.Wrap(err, "cannot marshal object")
}
if err := os.MkdirAll(filepath.Join(ft.parent, filepath.Dir(o.Metadata.Path)), 0o750); err != nil {
return errors.Wrapf(err, "cannot mkdirall: %s", filepath.Dir(o.Metadata.Path))
}
if o.Metadata.Parents != "" {
f, err := ft.afero.OpenFile(filepath.Join(ft.parent, o.Metadata.Path), os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
if err != nil {
return errors.Wrap(err, "cannot open file")
}
defer f.Close() //nolint:errcheck
if _, err = fmt.Fprintf(f, "\n---\n\n%s", string(b)); err != nil {
return errors.Wrap(err, "cannot write file")
}
} else {
f, err := ft.afero.Create(filepath.Join(ft.parent, o.Metadata.Path))
if err != nil {
return errors.Wrap(err, "cannot create file")
}
if _, err := f.Write(b); err != nil {
return errors.Wrap(err, "cannot write file")
}
}
return nil
}
// Delete deletes a file from filesystem
func (ft *FileSystemTarget) Delete(o UnstructuredWithMetadata) error {
return ft.afero.Remove(o.Metadata.Path)
}


@@ -1,260 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/pkg/errors"
"github.com/spf13/afero"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
var (
unstructuredAwsVpc = map[string]interface{}{
"apiVersion": "ec2.aws.crossplane.io/v1beta1",
"kind": "VPC",
"metadata": map[string]interface{}{
"name": "sample-vpc",
},
"spec": map[string]interface{}{
"forProvider": map[string]interface{}{
"region": "us-west-1",
"cidrBlock": "172.16.0.0/16",
},
},
}
unstructuredResourceGroup = map[string]interface{}{
"apiVersion": "azure.crossplane.io/v1beta1",
"kind": "ResourceGroup",
"metadata": map[string]interface{}{
"name": "example-resources",
},
"spec": map[string]interface{}{
"forProvider": map[string]interface{}{
"location": "West Europe",
},
},
}
)
func TestNewFileSystemSource(t *testing.T) {
type args struct {
dir string
a func() afero.Afero
}
type want struct {
fs *FileSystemSource
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
dir: "testdata",
a: func() afero.Afero {
fss := afero.Afero{Fs: afero.NewMemMapFs()}
_ = fss.WriteFile("testdata/source/awsvpc.yaml",
[]byte("apiVersion: ec2.aws.crossplane.io/v1beta1\nkind: VPC\nmetadata:\n name: sample-vpc\nspec:\n forProvider:\n cidrBlock: 172.16.0.0/16\n region: us-west-1\n"),
0600)
_ = fss.WriteFile("testdata/source/resourcegroup.yaml",
[]byte("apiVersion: azure.crossplane.io/v1beta1\nkind: ResourceGroup\nmetadata:\n name: example-resources\nspec:\n forProvider:\n location: West Europe\n"),
0600)
return fss
},
},
want: want{
fs: &FileSystemSource{
index: 0,
items: []UnstructuredWithMetadata{
{
Object: unstructured.Unstructured{
Object: unstructuredAwsVpc,
},
Metadata: Metadata{
Path: "testdata/source/awsvpc.yaml",
},
},
{
Object: unstructured.Unstructured{
Object: unstructuredResourceGroup,
},
Metadata: Metadata{
Path: "testdata/source/resourcegroup.yaml",
},
},
},
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
files := tc.args.a()
fs, err := NewFileSystemSource("testdata/source", FsWithFileSystem(files))
if err != nil {
t.Fatalf("Failed to initialize a new FileSystemSource: %v", err)
}
if diff := cmp.Diff(tc.want.err, err); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
if diff := cmp.Diff(tc.want.fs.items, fs.items); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
})
}
}
func TestFileSystemTarget_Put(t *testing.T) {
type args struct {
o UnstructuredWithMetadata
a func() afero.Afero
}
type want struct {
data string
err error
}
cases := map[string]struct {
args
want
}{
"Write": {
args: args{
o: UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "ec2.aws.upbound.io/v1beta1",
"kind": "VPC",
"metadata": map[string]interface{}{
"name": "sample-vpc",
},
"spec": map[string]interface{}{
"forProvider": map[string]interface{}{
"region": "us-west-1",
"cidrBlock": "172.16.0.0/16",
},
},
},
},
Metadata: Metadata{
Path: "testdata/source/awsvpc.yaml",
},
},
a: func() afero.Afero {
return afero.Afero{Fs: afero.NewMemMapFs()}
},
},
want: want{
data: "apiVersion: ec2.aws.upbound.io/v1beta1\nkind: VPC\nmetadata:\n name: sample-vpc\nspec:\n forProvider:\n cidrBlock: 172.16.0.0/16\n region: us-west-1\n",
err: nil,
},
},
"Append": {
args: args{
o: UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "azure.crossplane.io/v1beta1",
"kind": "ResourceGroup",
"metadata": map[string]interface{}{
"name": "example-resources",
},
"spec": map[string]interface{}{
"forProvider": map[string]interface{}{
"location": "West Europe",
},
},
},
},
Metadata: Metadata{
Path: "testdata/source/awsvpc.yaml",
Parents: "parent metadata",
},
},
a: func() afero.Afero {
fss := afero.Afero{Fs: afero.NewMemMapFs()}
_ = fss.WriteFile("testdata/source/awsvpc.yaml",
[]byte("apiVersion: ec2.aws.upbound.io/v1beta1\nkind: VPC\nmetadata:\n name: sample-vpc\nspec:\n forProvider:\n cidrBlock: 172.16.0.0/16\n region: us-west-1\n"),
0600)
return fss
},
},
want: want{
data: "apiVersion: ec2.aws.upbound.io/v1beta1\nkind: VPC\nmetadata:\n name: sample-vpc\nspec:\n forProvider:\n cidrBlock: 172.16.0.0/16\n region: us-west-1\n\n---\n\napiVersion: azure.crossplane.io/v1beta1\nkind: ResourceGroup\nmetadata:\n name: example-resources\nspec:\n forProvider:\n location: West Europe\n",
err: nil,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
files := tc.args.a()
ft := NewFileSystemTarget(FtWithFileSystem(files))
if err := ft.Put(tc.args.o); err != nil {
t.Error(err)
}
b, err := ft.afero.ReadFile("testdata/source/awsvpc.yaml")
if diff := cmp.Diff(tc.want.err, err); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
if diff := cmp.Diff(tc.want.data, string(b)); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
})
}
}
func TestFileSystemTarget_Delete(t *testing.T) {
type args struct {
o UnstructuredWithMetadata
a func() afero.Afero
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
o: UnstructuredWithMetadata{
Metadata: Metadata{
Path: "testdata/source/awsvpc.yaml",
},
},
a: func() afero.Afero {
fss := afero.Afero{Fs: afero.NewMemMapFs()}
_ = fss.WriteFile("testdata/source/awsvpc.yaml",
[]byte("apiVersion: ec2.aws.upbound.io/v1beta1\nkind: VPC\nmetadata:\n name: sample-vpc\nspec:\n forProvider:\n cidrBlock: 172.16.0.0/16\n region: us-west-1\n"),
0600)
return fss
},
},
want: want{
err: errors.New(fmt.Sprintf("%s: %s", "open testdata/source/awsvpc.yaml", afero.ErrFileNotFound)),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
files := tc.args.a()
ft := NewFileSystemTarget(FtWithFileSystem(files))
if err := ft.Delete(tc.args.o); err != nil {
t.Error(err)
}
_, err := ft.afero.ReadFile("testdata/source/awsvpc.yaml")
if diff := cmp.Diff(tc.want.err.Error(), err.Error()); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
})
}
}


@@ -1,111 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"os"
"github.com/crossplane/crossplane-runtime/pkg/logging"
"github.com/pkg/errors"
"k8s.io/utils/exec"
)
const (
errForkExecutorNotSupported = "step type should be Exec or step's manualExecution should be non-empty"
errStepFailedFmt = "failed to execute the step %q"
)
var _ Executor = &forkExecutor{}
// forkExecutor executes Exec steps or steps with the `manualExecution` hints
// by forking processes.
type forkExecutor struct {
executor exec.Interface
logger logging.Logger
cwd string
}
// ForkExecutorOption allows you to configure forkExecutor objects.
type ForkExecutorOption func(executor *forkExecutor)
// WithLogger sets the logger of forkExecutor.
func WithLogger(l logging.Logger) ForkExecutorOption {
return func(e *forkExecutor) {
e.logger = l
}
}
// WithExecutor sets the executor of ForkExecutor.
func WithExecutor(e exec.Interface) ForkExecutorOption {
return func(fe *forkExecutor) {
fe.executor = e
}
}
// WithWorkingDir sets the current working directory for the executor.
func WithWorkingDir(dir string) ForkExecutorOption {
return func(e *forkExecutor) {
e.cwd = dir
}
}
// NewForkExecutor returns a new fork executor using a process forker.
func NewForkExecutor(opts ...ForkExecutorOption) Executor {
fe := &forkExecutor{
executor: exec.New(),
logger: logging.NewNopLogger(),
}
for _, f := range opts {
f(fe)
}
return fe
}
func (f forkExecutor) Init(_ map[string]any) error {
return nil
}
func (f forkExecutor) Step(s Step, ctx map[string]any) error {
var cmd exec.Cmd
switch {
case s.Type == StepTypeExec:
f.logger.Debug("Command to be executed", "command", s.Exec.Command, "args", s.Exec.Args)
return errors.Wrapf(f.exec(ctx, f.executor.Command(s.Exec.Command, s.Exec.Args...)), errStepFailedFmt, s.Name)
// TODO: we should have separate executors to handle the other step types
case len(s.ManualExecution) != 0:
for _, c := range s.ManualExecution {
f.logger.Debug("Command to be executed", "command", "sh", "args", []string{"-c", c})
cmd = f.executor.Command("sh", "-c", c)
if err := f.exec(ctx, cmd); err != nil {
return errors.Wrapf(err, errStepFailedFmt, s.Name)
}
}
return nil
default:
return errors.Wrap(NewUnsupportedStepTypeError(s), errForkExecutorNotSupported)
}
}
func (f forkExecutor) exec(ctx map[string]any, cmd exec.Cmd) error {
cmd.SetEnv(os.Environ())
if f.cwd != "" {
cmd.SetDir(f.cwd)
}
buff, err := cmd.CombinedOutput()
logMsg := "Successfully executed command"
if err != nil {
logMsg = "Command execution failed"
}
f.logger.Debug(logMsg, "output", string(buff))
if ctx != nil {
ctx[KeyContextDiagnostics] = buff
}
return errors.Wrap(err, "failed to execute command")
}
func (f forkExecutor) Destroy() error {
return nil
}
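forkExecutor is configured through functional options (WithLogger, WithExecutor, WithWorkingDir), and injecting exec.Interface is what lets the tests below substitute a fake. A stdlib-only sketch of the same pattern, with a plain function type standing in for exec.Interface (all names here are illustrative):

```go
package main

import "fmt"

// runner abstracts command execution so tests can inject a fake,
// mirroring forkExecutor's exec.Interface field.
type runner func(name string, args ...string) (string, error)

type executor struct {
	run runner
	cwd string
}

// Option configures an executor, as ForkExecutorOption does above.
type Option func(*executor)

// WithRunner swaps in a custom runner, like WithExecutor.
func WithRunner(r runner) Option { return func(e *executor) { e.run = r } }

// WithWorkingDir sets the working directory for executed commands.
func WithWorkingDir(d string) Option { return func(e *executor) { e.cwd = d } }

// New applies each option over sensible defaults, like NewForkExecutor.
func New(opts ...Option) *executor {
	e := &executor{run: func(string, ...string) (string, error) { return "", nil }}
	for _, f := range opts {
		f(e)
	}
	return e
}

func main() {
	fake := func(name string, args ...string) (string, error) {
		return "ran " + name, nil
	}
	e := New(WithRunner(fake), WithWorkingDir("/tmp"))
	out, _ := e.run("sh", "-c", "echo hi")
	fmt.Println(out, e.cwd) // ran sh /tmp
}
```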


@@ -1,110 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"testing"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/pkg/errors"
k8sExec "k8s.io/utils/exec"
testingexec "k8s.io/utils/exec/testing"
)
var (
backupManagedStep = Step{
Name: "backup-managed-resources",
Type: StepTypeExec,
Exec: &ExecStep{
Command: "sh",
Args: []string{"-c", "kubectl get managed -o yaml"},
},
}
wrongCommand = Step{
Name: "wrong-command",
Type: StepTypeExec,
Exec: &ExecStep{
Command: "sh",
Args: []string{"-c", "nosuchcommand"},
},
}
wrongStepType = Step{
Name: "wrong-step-type",
Type: StepTypeDelete,
}
)
var errorWrongCommand = errors.New("exit status 127")
func newFakeExec(err error) *testingexec.FakeExec {
return &testingexec.FakeExec{
CommandScript: []testingexec.FakeCommandAction{
func(_ string, _ ...string) k8sExec.Cmd {
return &testingexec.FakeCmd{
CombinedOutputScript: []testingexec.FakeAction{
func() ([]byte, []byte, error) {
return nil, nil, err
},
},
}
},
},
}
}
func TestForkExecutorStep(t *testing.T) {
type args struct {
step Step
fakeExec *testingexec.FakeExec
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Success": {
args: args{
step: backupManagedStep,
fakeExec: &testingexec.FakeExec{DisableScripts: true},
},
want: want{
nil,
},
},
"Failure": {
args: args{
step: wrongCommand,
fakeExec: newFakeExec(errorWrongCommand),
},
want: want{
errors.Wrap(errorWrongCommand, `failed to execute the step "wrong-command": failed to execute command`),
},
},
"WrongStepType": {
args: args{
step: wrongStepType,
},
want: want{
errors.Wrap(NewUnsupportedStepTypeError(wrongStepType), `step type should be Exec or step's manualExecution should be non-empty`),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
fe := NewForkExecutor(WithExecutor(tc.fakeExec))
err := fe.Step(tc.step, nil)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nStep(...): -want error, +got error:\n%s", name, diff)
}
})
}
}


@@ -1,172 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane/apis/apiextensions/v1"
xpmetav1 "github.com/crossplane/crossplane/apis/pkg/meta/v1"
xpmetav1alpha1 "github.com/crossplane/crossplane/apis/pkg/meta/v1alpha1"
xppkgv1 "github.com/crossplane/crossplane/apis/pkg/v1"
xppkgv1beta1 "github.com/crossplane/crossplane/apis/pkg/v1beta1"
)
// ResourceConverter converts a managed resource from
// the migration source provider's schema to the migration target
// provider's schema.
type ResourceConverter interface {
// Resource takes a managed resource and returns zero or more managed
// resources to be created.
Resource(mg resource.Managed) ([]resource.Managed, error)
}
// ComposedTemplateConverter converts a Composition's ComposedTemplate
// from the migration source provider's schema to the migration target
// provider's schema. Conversion of the `Base` must be handled by
// a ResourceConverter.
type ComposedTemplateConverter interface {
// ComposedTemplate receives a migration source v1.ComposedTemplate
// that has been converted, by a resource converter, to the
// v1.ComposedTemplates with the new shapes specified in the
// `convertedTemplates` argument.
// Conversion of the v1.ComposedTemplate.Bases is handled
// via ResourceConverter.Resource and ComposedTemplate must only
// convert the other fields (`Patches`, `ConnectionDetails`,
// `PatchSet`s, etc.)
// Returns any errors encountered.
ComposedTemplate(sourceTemplate xpv1.ComposedTemplate, convertedTemplates ...*xpv1.ComposedTemplate) error
}
// CompositionConverter converts a managed resource and a Composition's
// ComposedTemplate that composes a managed resource of the same kind
// from the migration source provider's schema to the migration target
// provider's schema.
type CompositionConverter interface {
ResourceConverter
ComposedTemplateConverter
}
// PatchSetConverter converts patch sets of Compositions.
// Any registered PatchSetConverters
// will be called before any resource or ComposedTemplate conversion is done.
// The rationale is to convert the Composition-wide patch sets before
// any resource-specific conversions so that migration targets can
// automatically inherit converted patch sets if their schemas match them.
// Registered PatchSetConverters will be called in the order
// they are registered.
type PatchSetConverter interface {
// PatchSets converts the `spec.patchSets` of a Composition
// from the migration source provider's schema to the migration target
// provider's schema.
PatchSets(psMap map[string]*xpv1.PatchSet) error
}
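A PatchSetConverter typically mutates the supplied map entries in place. The following is a minimal sketch, using simplified stand-ins for the xpv1 types and a made-up prefix-renaming conversion; it is not the framework's API, only an illustration of the contract:

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-ins for xpv1.Patch and xpv1.PatchSet.
type Patch struct{ FromFieldPath string }
type PatchSet struct {
	Name    string
	Patches []Patch
}

// prefixRenamer is a hypothetical converter that mirrors the
// PatchSetConverter.PatchSets contract: it edits the map values in place.
type prefixRenamer struct{ old, new string }

func (c prefixRenamer) PatchSets(psMap map[string]*PatchSet) error {
	for _, ps := range psMap {
		for i := range ps.Patches {
			ps.Patches[i].FromFieldPath = strings.Replace(ps.Patches[i].FromFieldPath, c.old, c.new, 1)
		}
	}
	return nil
}

func main() {
	psMap := map[string]*PatchSet{
		"common": {Name: "common", Patches: []Patch{{FromFieldPath: "spec.parameters.region"}}},
	}
	c := prefixRenamer{old: "spec.parameters", new: "spec.forProvider"}
	if err := c.PatchSets(psMap); err != nil {
		panic(err)
	}
	fmt.Println(psMap["common"].Patches[0].FromFieldPath)
}
```

Because patch set converters run before any resource or ComposedTemplate conversion, migration targets whose schemas match inherit the rewritten patch sets automatically.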
// ConfigurationMetadataConverter converts a Crossplane Configuration's metadata.
type ConfigurationMetadataConverter interface {
// ConfigurationMetadataV1 takes a Crossplane Configuration v1 metadata,
// converts it, and stores the converted metadata in its argument.
// Returns any errors encountered during the conversion.
ConfigurationMetadataV1(configuration *xpmetav1.Configuration) error
// ConfigurationMetadataV1Alpha1 takes a Crossplane Configuration v1alpha1
// metadata, converts it, and stores the converted metadata in its
// argument. Returns any errors encountered during the conversion.
ConfigurationMetadataV1Alpha1(configuration *xpmetav1alpha1.Configuration) error
}
// ConfigurationPackageConverter converts a Crossplane configuration package.
type ConfigurationPackageConverter interface {
// ConfigurationPackageV1 takes a Crossplane Configuration v1 package,
// converts it, and stores the converted package in its argument.
// Returns any errors encountered during the conversion.
ConfigurationPackageV1(pkg *xppkgv1.Configuration) error
}
// ProviderPackageConverter converts a Crossplane provider package.
type ProviderPackageConverter interface {
// ProviderPackageV1 takes a Crossplane Provider v1 package,
// converts it possibly to multiple packages and returns the
// converted provider packages.
// Returns any errors encountered during the conversion.
ProviderPackageV1(pkg xppkgv1.Provider) ([]xppkgv1.Provider, error)
}
// PackageLockConverter converts a Crossplane package lock.
type PackageLockConverter interface {
// PackageLockV1Beta1 takes a Crossplane v1beta1 package lock,
// converts it, and stores the converted lock in its argument.
// Returns any errors encountered during the conversion.
PackageLockV1Beta1(lock *xppkgv1beta1.Lock) error
}
// Source is a source for reading resource manifests.
type Source interface {
// HasNext returns `true` if the Source implementation has a next manifest
// available to return with a call to Next. Any errors encountered while
// determining whether a next manifest exists will also be reported.
HasNext() (bool, error)
// Next returns the next resource manifest available or
// any errors encountered while reading the next resource manifest.
Next() (UnstructuredWithMetadata, error)
// Reset resets the Source so that it can read the manifests
// from the beginning. There is no guarantee that the Source
// will return the same set of manifests, or that it will return
// them in the same order, after a reset.
Reset() error
}
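A minimal in-memory Source sketch (with a stand-in Item type in place of UnstructuredWithMetadata) shows the intended HasNext/Next/Reset flow:

```go
package main

import (
	"errors"
	"fmt"
)

// Item is a stand-in for UnstructuredWithMetadata, which wraps an
// unstructured object together with migration metadata.
type Item struct{ Name string }

// SliceSource is a hypothetical Source backed by a slice, mirroring the
// HasNext/Next/Reset contract of the Source interface.
type SliceSource struct {
	items []Item
	index int
}

func (s *SliceSource) HasNext() (bool, error) { return s.index < len(s.items), nil }

func (s *SliceSource) Next() (Item, error) {
	if ok, _ := s.HasNext(); !ok {
		return Item{}, errors.New("no more elements")
	}
	item := s.items[s.index]
	s.index++
	return item, nil
}

// Reset rewinds this source; note the interface does not guarantee the
// same manifests or ordering after a reset, this implementation just
// happens to be deterministic.
func (s *SliceSource) Reset() error { s.index = 0; return nil }

// drain consumes the source until HasNext reports false.
func drain(s *SliceSource) []string {
	var names []string
	for {
		ok, _ := s.HasNext()
		if !ok {
			break
		}
		i, _ := s.Next()
		names = append(names, i.Name)
	}
	return names
}

func main() {
	s := &SliceSource{items: []Item{{Name: "vpc"}, {Name: "subnet"}}}
	fmt.Println(drain(s))
	_ = s.Reset()
	fmt.Println(drain(s))
}
```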
// Target is a target where resource manifests can be manipulated
// (e.g., added, deleted, patched, etc.)
type Target interface {
// Put writes a resource manifest to this Target.
Put(o UnstructuredWithMetadata) error
// Delete deletes a resource manifest from this Target.
Delete(o UnstructuredWithMetadata) error
}
// Executor is a migration plan executor.
type Executor interface {
// Init initializes an executor using the supplied executor specific
// configuration data.
Init(config map[string]any) error
// Step asks the executor to execute the next step, passing any available
// context from the previous step, and returns any new context to be
// passed to the next step, if one exists.
Step(s Step, ctx map[string]any) error
// Destroy is called when all the steps have been executed,
// or a step has returned an error, and we would like to stop
// executing the plan.
Destroy() error
}
// UnstructuredPreProcessor allows manifests read by the Source
// to be pre-processed before the converters are run.
// It's not possible to do any conversions via the pre-processors;
// they only allow migrators to extract information from
// the manifests read by the Source before any converters are run.
type UnstructuredPreProcessor interface {
// PreProcess is called for a manifest read by the Source
// before any converters are run.
PreProcess(u UnstructuredWithMetadata) error
}
// ManagedPreProcessor allows manifests read by the Source
// to be pre-processed before the converters are run.
// These pre-processors will run only for GVKs that have a
// ResourceConverter registered.
type ManagedPreProcessor interface {
// ResourcePreProcessor is called for a manifest read by the Source
// before any converters are run.
ResourcePreProcessor(mg resource.Managed) error
}
// CategoricalConverter is a converter that converts resources of a given
// Category. Because it receives an unstructured argument, it should be
// used for implementing generic conversion functions acting on a specific
// category, such as setting a deletion policy on all the managed resources
// observed by the migration Source.
type CategoricalConverter interface {
Convert(u *UnstructuredWithMetadata) error
}
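The deletion-policy use case mentioned above can be sketched against the unstructured (map-based) form a CategoricalConverter receives. The `spec.deletionPolicy` path follows Crossplane managed resources; the policy value and manifest are illustrative examples:

```go
package main

import "fmt"

// setDeletionPolicy illustrates the kind of generic, category-wide edit a
// CategoricalConverter is meant for: it works on the unstructured (map)
// form of a manifest rather than a typed API object.
func setDeletionPolicy(u map[string]any, policy string) {
	spec, ok := u["spec"].(map[string]any)
	if !ok {
		spec = map[string]any{}
		u["spec"] = spec
	}
	spec["deletionPolicy"] = policy
}

func main() {
	// an example managed resource in unstructured form
	mr := map[string]any{
		"apiVersion": "ec2.aws.upbound.io/v1beta1",
		"kind":       "VPC",
		"spec":       map[string]any{"forProvider": map[string]any{}},
	}
	setDeletionPolicy(mr, "Orphan")
	fmt.Println(mr["spec"].(map[string]any)["deletionPolicy"])
}
```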


@ -1,331 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"context"
"fmt"
"path/filepath"
"regexp"
"strings"
"time"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/cli-runtime/pkg/resource"
"k8s.io/client-go/discovery"
"k8s.io/client-go/discovery/cached/disk"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
"k8s.io/client-go/restmapper"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
const (
errKubernetesSourceInit = "failed to initialize the migration Kubernetes source"
errCategoryGetFmt = "failed to get resources of category %q"
)
var (
_ Source = &KubernetesSource{}
defaultCacheDir = filepath.Join(homedir.HomeDir(), ".kube", "cache")
)
// KubernetesSource is a source implementation to read resources from Kubernetes
// cluster.
type KubernetesSource struct {
registry *Registry
categories []Category
index int
items []UnstructuredWithMetadata
dynamicClient dynamic.Interface
cachedDiscoveryClient discovery.CachedDiscoveryInterface
restMapper meta.RESTMapper
categoryExpander restmapper.CategoryExpander
cacheDir string
restConfig *rest.Config
}
// KubernetesSourceOption sets an option for a KubernetesSource.
type KubernetesSourceOption func(source *KubernetesSource)
// WithCacheDir sets the cache directory for the disk cached discovery client
// used by a KubernetesSource.
func WithCacheDir(cacheDir string) KubernetesSourceOption {
return func(s *KubernetesSource) {
s.cacheDir = cacheDir
}
}
// WithRegistry configures a KubernetesSource to use the specified registry
// for determining the GVKs of resources which will be read from the
// Kubernetes API server.
func WithRegistry(r *Registry) KubernetesSourceOption {
return func(s *KubernetesSource) {
s.registry = r
}
}
// WithCategories configures a KubernetesSource so that it will fetch
// all resources belonging to the specified categories.
func WithCategories(c []Category) KubernetesSourceOption {
return func(s *KubernetesSource) {
s.categories = c
}
}
// NewKubernetesSourceFromKubeConfig initializes a new KubernetesSource using
// the specified kube config file and KubernetesSourceOptions.
func NewKubernetesSourceFromKubeConfig(kubeconfigPath string, opts ...KubernetesSourceOption) (*KubernetesSource, error) {
ks := &KubernetesSource{}
for _, o := range opts {
o(ks)
}
var err error
ks.restConfig, err = clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
return nil, errors.Wrap(err, "cannot create rest config object")
}
ks.restConfig.ContentConfig = resource.UnstructuredPlusDefaultContentConfig()
ks.dynamicClient, err = InitializeDynamicClient(kubeconfigPath)
if err != nil {
return nil, errors.Wrapf(err, "failed to initialize a Kubernetes dynamic client from kubeconfig: %s", kubeconfigPath)
}
ks.cachedDiscoveryClient, err = InitializeDiscoveryClient(kubeconfigPath, ks.cacheDir)
if err != nil {
return nil, errors.Wrapf(err, "failed to initialize a Kubernetes discovery client from kubeconfig: %s", kubeconfigPath)
}
return ks, errors.Wrap(ks.init(), errKubernetesSourceInit)
}
// NewKubernetesSource returns a KubernetesSource.
// The dynamic client is used to query resources.
// Elements of gvks (a slice of GroupVersionKind) are passed to the
// dynamic client in a loop to get lists of resources.
// An example element of the gvks slice:
// Group: "ec2.aws.upbound.io",
// Version: "v1beta1",
// Kind: "VPC",
func NewKubernetesSource(dynamicClient dynamic.Interface, discoveryClient discovery.CachedDiscoveryInterface, opts ...KubernetesSourceOption) (*KubernetesSource, error) {
ks := &KubernetesSource{
dynamicClient: dynamicClient,
cachedDiscoveryClient: discoveryClient,
}
for _, o := range opts {
o(ks)
}
return ks, errors.Wrap(ks.init(), errKubernetesSourceInit)
}
func (ks *KubernetesSource) init() error {
ks.restMapper = restmapper.NewDeferredDiscoveryRESTMapper(ks.cachedDiscoveryClient)
ks.categoryExpander = restmapper.NewDiscoveryCategoryExpander(ks.cachedDiscoveryClient)
for _, c := range ks.categories {
if err := ks.getCategoryResources(c); err != nil {
return errors.Wrapf(err, "cannot get resources of the category: %s", c)
}
}
if ks.registry == nil {
return nil
}
if err := ks.getGVKResources(ks.registry.claimTypes, CategoryClaim); err != nil {
return errors.Wrap(err, "cannot get claims")
}
if err := ks.getGVKResources(ks.registry.compositeTypes, CategoryComposite); err != nil {
return errors.Wrap(err, "cannot get composites")
}
if err := ks.getGVKResources(ks.registry.GetCompositionGVKs(), CategoryComposition); err != nil {
return errors.Wrap(err, "cannot get compositions")
}
if err := ks.getGVKResources(ks.registry.GetCrossplanePackageGVKs(), CategoryCrossplanePackage); err != nil {
return errors.Wrap(err, "cannot get Crossplane packages")
}
return errors.Wrap(ks.getGVKResources(ks.registry.GetManagedResourceGVKs(), CategoryManaged), "cannot get managed resources")
}
func (ks *KubernetesSource) getMappingFor(gr schema.GroupResource) (*meta.RESTMapping, error) {
r := fmt.Sprintf("%s.%s", gr.Resource, gr.Group)
fullySpecifiedGVR, groupResource := schema.ParseResourceArg(r)
gvk := schema.GroupVersionKind{}
if fullySpecifiedGVR != nil {
gvk, _ = ks.restMapper.KindFor(*fullySpecifiedGVR)
}
if gvk.Empty() {
gvk, _ = ks.restMapper.KindFor(groupResource.WithVersion(""))
}
if !gvk.Empty() {
return ks.restMapper.RESTMapping(gvk.GroupKind(), gvk.Version)
}
fullySpecifiedGVK, groupKind := schema.ParseKindArg(r)
if fullySpecifiedGVK == nil {
gvk := groupKind.WithVersion("")
fullySpecifiedGVK = &gvk
}
if !fullySpecifiedGVK.Empty() {
if mapping, err := ks.restMapper.RESTMapping(fullySpecifiedGVK.GroupKind(), fullySpecifiedGVK.Version); err == nil {
return mapping, nil
}
}
mapping, err := ks.restMapper.RESTMapping(groupKind, gvk.Version)
if err != nil {
if meta.IsNoMatchError(err) {
return nil, errors.Errorf("the server doesn't have a resource type %q", groupResource.Resource)
}
return nil, err
}
return mapping, nil
}
// parts of this implementation are taken from the implementation of
// the "kubectl get" command:
// https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/kubectl/pkg/cmd/get
func (ks *KubernetesSource) getCategoryResources(c Category) error {
if ks.restConfig == nil {
return errors.New("rest.Config not initialized")
}
grs, _ := ks.categoryExpander.Expand(c.String())
for _, gr := range grs {
mapping, err := ks.getMappingFor(gr)
if err != nil {
return errors.Wrapf(err, errCategoryGetFmt, c.String())
}
gv := mapping.GroupVersionKind.GroupVersion()
ks.restConfig.GroupVersion = &gv
if len(gv.Group) == 0 {
ks.restConfig.APIPath = "/api"
} else {
ks.restConfig.APIPath = "/apis"
}
client, err := rest.RESTClientFor(ks.restConfig)
if err != nil {
return errors.Wrapf(err, errCategoryGetFmt, c.String())
}
helper := resource.NewHelper(client, mapping)
list, err := helper.List("", mapping.GroupVersionKind.GroupVersion().String(), &metav1.ListOptions{})
if err != nil {
return errors.Wrapf(err, errCategoryGetFmt, c.String())
}
ul, ok := list.(*unstructured.UnstructuredList)
if !ok {
return errors.New("expecting list to be of type *unstructured.UnstructuredList")
}
for _, u := range ul.Items {
ks.items = append(ks.items, UnstructuredWithMetadata{
Object: u,
Metadata: Metadata{
Path: string(u.GetUID()),
Category: c,
},
})
}
}
return nil
}
func (ks *KubernetesSource) getGVKResources(gvks []schema.GroupVersionKind, category Category) error {
processed := map[schema.GroupVersionKind]struct{}{}
for _, gvk := range gvks {
if _, ok := processed[gvk]; ok {
continue
}
m, err := ks.restMapper.RESTMapping(gvk.GroupKind(), gvk.Version)
if err != nil {
return errors.Wrapf(err, "cannot get REST mappings for GVK: %s", gvk.String())
}
if err := ks.getResourcesFor(m.Resource, category); err != nil {
return errors.Wrapf(err, "cannot get resources for GVK: %s", gvk.String())
}
processed[gvk] = struct{}{}
}
return nil
}
func (ks *KubernetesSource) getResourcesFor(gvr schema.GroupVersionResource, category Category) error {
ri := ks.dynamicClient.Resource(gvr)
unstructuredList, err := ri.List(context.TODO(), metav1.ListOptions{})
if err != nil {
return errors.Wrapf(err, "cannot list resources of GVR: %s", gvr.String())
}
for _, u := range unstructuredList.Items {
ks.items = append(ks.items, UnstructuredWithMetadata{
Object: u,
Metadata: Metadata{
Path: string(u.GetUID()),
Category: category,
},
})
}
return nil
}
// HasNext reports whether there is a next item.
func (ks *KubernetesSource) HasNext() (bool, error) {
return ks.index < len(ks.items), nil
}
// Next returns the next item of the items slice.
func (ks *KubernetesSource) Next() (UnstructuredWithMetadata, error) {
if hasNext, _ := ks.HasNext(); hasNext {
item := ks.items[ks.index]
ks.index++
return item, nil
}
return UnstructuredWithMetadata{}, errors.New("no more elements")
}
// Reset resets the source so that resources can be reread from the beginning.
func (ks *KubernetesSource) Reset() error {
ks.index = 0
return nil
}
// InitializeDynamicClient returns a dynamic client for the given kubeconfig.
func InitializeDynamicClient(kubeconfigPath string) (dynamic.Interface, error) {
config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
return nil, errors.Wrap(err, "cannot create rest config object")
}
dynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
return nil, errors.Wrap(err, "cannot initialize dynamic client")
}
return dynamicClient, nil
}
// InitializeDiscoveryClient returns a disk-cached discovery client,
// storing its cache under the given cache directory.
func InitializeDiscoveryClient(kubeconfigPath, cacheDir string) (*disk.CachedDiscoveryClient, error) {
config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
if err != nil {
return nil, errors.Wrap(err, "cannot create rest config object")
}
if cacheDir == "" {
cacheDir = defaultCacheDir
}
httpCacheDir := filepath.Join(cacheDir, "http")
discoveryCacheDir := computeDiscoverCacheDir(filepath.Join(cacheDir, "discovery"), config.Host)
return disk.NewCachedDiscoveryClientForConfig(config, discoveryCacheDir, httpCacheDir, 10*time.Minute)
}
// overlyCautiousIllegalFileCharacters matches characters that *might* not be supported. Windows is really restrictive, so this is really restrictive
var overlyCautiousIllegalFileCharacters = regexp.MustCompile(`[^(\w/.)]`)
// computeDiscoverCacheDir takes the parentDir and the host and comes up with a "usually non-colliding" name.
func computeDiscoverCacheDir(parentDir, host string) string {
// strip the optional scheme from host if it's there:
schemelessHost := strings.Replace(strings.Replace(host, "https://", "", 1), "http://", "", 1)
// now do a simple collapse of non-AZ09 characters. Collisions are possible but unlikely. Even if we do collide the problem is short lived
safeHost := overlyCautiousIllegalFileCharacters.ReplaceAllString(schemelessHost, "_")
return filepath.Join(parentDir, safeHost)
}
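The sanitization above can be exercised standalone. The sketch below reproduces the scheme-stripping and character collapsing from computeDiscoverCacheDir; the host value is an example API server address:

```go
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
	"strings"
)

// Same overly cautious pattern as above: collapse anything that might
// be illegal in a file name on some platform.
var illegalFileChars = regexp.MustCompile(`[^(\w/.)]`)

// cacheDirFor mirrors computeDiscoverCacheDir: strip the optional
// scheme, then replace potentially unsafe characters with underscores.
func cacheDirFor(parentDir, host string) string {
	schemeless := strings.Replace(strings.Replace(host, "https://", "", 1), "http://", "", 1)
	return filepath.Join(parentDir, illegalFileChars.ReplaceAllString(schemeless, "_"))
}

func main() {
	fmt.Println(cacheDirFor("/tmp/discovery", "https://1.2.3.4:6443"))
}
```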


@ -1,130 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"testing"
"github.com/google/go-cmp/cmp"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/discovery/cached/memory"
"k8s.io/client-go/dynamic/fake"
fakeclientset "k8s.io/client-go/kubernetes/fake"
)
func TestNewKubernetesSource(t *testing.T) {
type args struct {
gvks []schema.GroupVersionKind
}
type want struct {
ks *KubernetesSource
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
gvks: []schema.GroupVersionKind{
{
Group: "ec2.aws.crossplane.io",
Version: "v1beta1",
Kind: "VPC",
},
{
Group: "azure.crossplane.io",
Version: "v1beta1",
Kind: "ResourceGroup",
},
},
},
want: want{
ks: &KubernetesSource{
items: []UnstructuredWithMetadata{
{
Object: unstructured.Unstructured{
Object: unstructuredAwsVpc,
},
Metadata: Metadata{
Category: CategoryManaged,
},
},
{
Object: unstructured.Unstructured{
Object: unstructuredResourceGroup,
},
Metadata: Metadata{
Category: CategoryManaged,
},
},
},
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
s := runtime.NewScheme()
r := NewRegistry(s)
// register a dummy converter for the test GVKs
r.resourceConverters = map[schema.GroupVersionKind]ResourceConverter{}
for _, gvk := range tc.args.gvks {
r.resourceConverters[gvk] = nil
}
dynamicClient := fake.NewSimpleDynamicClientWithCustomListKinds(s,
map[schema.GroupVersionResource]string{
{
Group: "ec2.aws.crossplane.io",
Version: "v1beta1",
Resource: "vpcs",
}: "VPCList",
{
Group: "azure.crossplane.io",
Version: "v1beta1",
Resource: "resourcegroups",
}: "ResourceGroupList",
},
&unstructured.Unstructured{Object: unstructuredAwsVpc},
&unstructured.Unstructured{Object: unstructuredResourceGroup})
client := fakeclientset.NewSimpleClientset(
&unstructured.Unstructured{Object: unstructuredAwsVpc},
&unstructured.Unstructured{Object: unstructuredResourceGroup},
)
client.Fake.Resources = []*metav1.APIResourceList{
{
GroupVersion: "ec2.aws.crossplane.io/v1beta1",
APIResources: []metav1.APIResource{
{
Name: "vpcs",
Kind: "VPC",
},
},
},
{
GroupVersion: "azure.crossplane.io/v1beta1",
APIResources: []metav1.APIResource{
{
Name: "resourcegroups",
Kind: "ResourceGroup",
},
},
},
}
ks, err := NewKubernetesSource(dynamicClient, memory.NewMemCacheClient(client.Discovery()), WithRegistry(r))
if diff := cmp.Diff(tc.want.err, err); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
if diff := cmp.Diff(tc.want.ks.items, ks.items); diff != "" {
t.Errorf("\nNext(...): -want, +got:\n%s", diff)
}
})
}
}


@ -1,60 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
func (pg *PlanGenerator) convertPackageLock(o UnstructuredWithMetadata) error {
lock, err := toPackageLock(o.Object)
if err != nil {
return err
}
isConverted := false
for _, lockConv := range pg.registry.packageLockConverters {
if lockConv.re == nil || lockConv.converter == nil || !lockConv.re.MatchString(lock.GetName()) {
continue
}
if err := lockConv.converter.PackageLockV1Beta1(lock); err != nil {
return errors.Wrapf(err, "failed to call converter on package lock: %s", lock.GetName())
}
// TODO: if a lock converter does not convert the given lock,
// we will have a false positive. Better to compute and check
// a diff here.
isConverted = true
}
if !isConverted {
return nil
}
target := &UnstructuredWithMetadata{
Object: ToSanitizedUnstructured(lock),
Metadata: o.Metadata,
}
if err := pg.stepEditPackageLock(o, target); err != nil {
return err
}
return nil
}
func (pg *PlanGenerator) stepEditPackageLock(source UnstructuredWithMetadata, t *UnstructuredWithMetadata) error {
// add step for editing the package lock
s := pg.stepConfiguration(stepEditPackageLock)
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
s.Patch.Files = append(s.Patch.Files, t.Metadata.Path)
patchMap, err := computeJSONMergePathDoc(source.Object, t.Object)
if err != nil {
return err
}
return errors.Wrapf(pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(t.Object, patchMap),
},
Metadata: t.Metadata,
}), errEditConfigurationPackageFmt, t.Object.GetName())
}
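The patch file written above holds a JSON merge-patch (RFC 7386) style document between the source and target objects. A deliberately shallow, hypothetical sketch of that idea (naiveMergePatch is not the framework's computeJSONMergePathDoc; the fmt.Sprint comparison is a shortcut standing in for deep equality):

```go
package main

import "fmt"

// naiveMergePatch computes a shallow merge-patch document: keys whose
// values changed or were added carry the new value, and keys removed
// from the target map to nil, which merge patch interprets as deletion.
func naiveMergePatch(source, target map[string]any) map[string]any {
	patch := map[string]any{}
	for k, tv := range target {
		if sv, ok := source[k]; !ok || fmt.Sprint(sv) != fmt.Sprint(tv) {
			patch[k] = tv
		}
	}
	for k := range source {
		if _, ok := target[k]; !ok {
			patch[k] = nil // nil deletes the key under merge-patch semantics
		}
	}
	return patch
}

func main() {
	source := map[string]any{"a": 1, "b": 2}
	target := map[string]any{"a": 1, "b": 3, "c": 4}
	patch := naiveMergePatch(source, target)
	fmt.Println(patch["b"], patch["c"])
}
```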


@ -1,346 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"log"
"reflect"
"regexp"
"strings"
xpv1 "github.com/crossplane/crossplane/apis/apiextensions/v1"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
var (
regexIndex = regexp.MustCompile(`(.+)\[(.+)]`)
regexJSONTag = regexp.MustCompile(`([^,]+)?(,.+)?`)
)
const (
jsonTagInlined = ",inline"
)
// isConverted looks up the specified name in the list of already converted
// patch sets.
func isConverted(convertedPS []string, psName string) bool {
for _, n := range convertedPS {
if psName == n {
return true
}
}
return false
}
// removeInvalidPatches removes the (inherited) patches from
// a (split) migration target composed template. The migration target composed
// templates inherit patches from migration source templates by default, and
// this function is responsible for removing patches (including references to
// patch sets) that do not conform to the target composed template's schema.
func (pg *PlanGenerator) removeInvalidPatches(gvkSource, gvkTarget schema.GroupVersionKind, patchSets []xpv1.PatchSet, targetTemplate *xpv1.ComposedTemplate, convertedPS []string) error { //nolint:gocyclo // complexity (11) just above the threshold (10)
c := pg.registry.scheme
source, err := c.New(gvkSource)
if err != nil {
return errors.Wrapf(err, "failed to instantiate a new source object with GVK: %s", gvkSource.String())
}
target, err := c.New(gvkTarget)
if err != nil {
return errors.Wrapf(err, "failed to instantiate a new target object with GVK: %s", gvkTarget.String())
}
newPatches := make([]xpv1.Patch, 0, len(targetTemplate.Patches))
var patches []xpv1.Patch
for _, p := range targetTemplate.Patches {
s := source
switch p.Type { //nolint:exhaustive
case xpv1.PatchTypePatchSet:
ps := getNamedPatchSet(p.PatchSetName, patchSets)
if ps == nil {
// something is wrong with the patchset ref,
// we will just remove the ref
continue
}
if isConverted(convertedPS, ps.Name) {
// then do not use the source schema as the patch set
// is already converted
s = target
}
// assert that each of the patches in the set
// conforms to the target schema
patches = ps.Patches
default:
patches = []xpv1.Patch{p}
}
keep := true
for _, p := range patches {
ok, err := assertPatchSchemaConformance(p, s, target)
if err != nil {
err := errors.Wrap(err, "failed to check whether the patch conforms to the target schema")
if pg.ErrorOnInvalidPatchSchema {
return err
}
log.Printf("Excluding the patch from the migration target because conformance checking has failed with: %v\n", err)
// if we could not check the patch's schema conformance
// and the plan generator is configured not to error,
// assume the patch does not conform to the schema
ok = false
}
if !ok {
keep = false
break
}
}
if keep {
newPatches = append(newPatches, p)
}
}
targetTemplate.Patches = newPatches
return nil
}
// assertPatchSchemaConformance asserts that the specified patch actually
// conforms to the specified target schema. We also assert that the patch
// conforms to the migration source schema, which prevents an invalid
// patch from being preserved after the conversion.
func assertPatchSchemaConformance(p xpv1.Patch, source, target any) (bool, error) {
var targetPath *string
// because this is defaulting logic and what we default can be overridden
// later in the conversion, the type switch is not exhaustive
// TODO: consider processing other patch types
switch p.Type { //nolint:exhaustive
case xpv1.PatchTypeFromCompositeFieldPath, "": // the default type
targetPath = p.ToFieldPath
case xpv1.PatchTypeToCompositeFieldPath:
targetPath = p.FromFieldPath
}
if targetPath == nil {
return false, nil
}
ok, err := assertNameAndTypeAtPath(reflect.TypeOf(source), reflect.TypeOf(target), splitPathComponents(*targetPath))
return ok, errors.Wrapf(err, "failed to assert patch schema for path: %s", *targetPath)
}
// splitPathComponents splits a fieldpath expression into its path components,
// e.g., `m[a.b.c].a.b.c` is split into `m[a.b.c]`, `a`, `b`, `c`.
func splitPathComponents(path string) []string {
components := strings.Split(path, ".")
result := make([]string, 0, len(components))
indexedExpression := false
for _, c := range components {
switch {
case indexedExpression:
result[len(result)-1] = fmt.Sprintf("%s.%s", result[len(result)-1], c)
if strings.Contains(c, "]") {
indexedExpression = false
}
default:
result = append(result, c)
if strings.Contains(c, "[") && !strings.Contains(c, "]") {
indexedExpression = true
}
}
}
return result
}
func isRawExtension(source, target reflect.Type) bool {
reType := reflect.TypeOf(runtime.RawExtension{})
rePtrType := reflect.TypeOf(&runtime.RawExtension{})
return (source == reType && target == reType) || (source == rePtrType && target == rePtrType)
}
// assertNameAndTypeAtPath asserts that the migration source and target
// templates both have the same kind for the type at the specified path.
// Also validates that the specified path is valid for the source.
func assertNameAndTypeAtPath(source, target reflect.Type, pathComponents []string) (bool, error) { //nolint:gocyclo
if len(pathComponents) < 1 {
return compareKinds(source, target), nil
}
// if both source and target are runtime.RawExtensions,
// then stop traversing the type hierarchy.
if isRawExtension(source, target) {
return true, nil
}
pathComponent := pathComponents[0]
if len(pathComponent) == 0 {
return false, errors.Errorf("failed to compare source and target structs. Invalid path: %s", strings.Join(pathComponents, "."))
}
m := regexIndex.FindStringSubmatch(pathComponent)
if m != nil {
// then a map component or a slicing component
pathComponent = m[1]
}
// assert the source and the target types
fSource, err := getFieldWithSerializedName(source, pathComponent)
if err != nil {
return false, errors.Wrapf(err, "failed to assert source struct field kind at path: %s", strings.Join(pathComponents, "."))
}
if fSource == nil {
// then source field could not be found
return false, errors.Errorf("struct field %q does not exist for the source type %q at path: %s", pathComponent, source.String(), strings.Join(pathComponents, "."))
}
// now assert that this field actually exists for the target type
// with the same type
fTarget, err := getFieldWithSerializedName(target, pathComponent)
if err != nil {
return false, errors.Wrapf(err, "failed to assert target struct field kind at path: %s", strings.Join(pathComponents, "."))
}
if fTarget == nil || !fTarget.IsExported() || !compareKinds(fSource.Type, fTarget.Type) {
return false, nil
}
nextSource, nextTarget := fSource.Type, fTarget.Type
if m != nil {
// parents are of map or slice type
nextSource = nextSource.Elem()
nextTarget = nextTarget.Elem()
}
return assertNameAndTypeAtPath(nextSource, nextTarget, pathComponents[1:])
}
// compareKinds compares the kinds of the specified types
// dereferencing (following) pointer types.
func compareKinds(s, t reflect.Type) bool {
if s.Kind() == reflect.Pointer {
s = s.Elem()
}
if t.Kind() == reflect.Pointer {
t = t.Elem()
}
return s.Kind() == t.Kind()
}
// getFieldWithSerializedName returns the field of a struct (if it exists)
// with the specified serialized (JSON) name. Returns a nil (and a nil error)
// if a field with the specified serialized name is not found
// in the specified type.
func getFieldWithSerializedName(t reflect.Type, name string) (*reflect.StructField, error) { //nolint:gocyclo
if t.Kind() == reflect.Pointer {
t = t.Elem()
}
if t.Kind() != reflect.Struct {
return nil, errors.Errorf("type is not a struct: %s", t.Name())
}
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
serializedName := f.Name
inlined := false
if fTag, ok := f.Tag.Lookup("json"); ok {
if m := regexJSONTag.FindStringSubmatch(fTag); m != nil && len(m[1]) > 0 {
serializedName = m[1]
}
if strings.HasSuffix(fTag, jsonTagInlined) {
inlined = true
}
}
if name == serializedName {
return &f, nil
}
if inlined {
inlinedType := f.Type
if inlinedType.Kind() == reflect.Pointer {
inlinedType = inlinedType.Elem()
}
if inlinedType.Kind() == reflect.Struct {
sf, err := getFieldWithSerializedName(inlinedType, name)
if err != nil {
return nil, errors.Wrapf(err, "failed to search for field %q in inlined type: %s", name, inlinedType.String())
}
if sf != nil {
return sf, nil
}
}
}
}
return nil, nil // not found
}
// getNamedPatchSet returns the patch set with the specified name
// from the specified patch set slice. Returns nil if a patch set
// with the given name is not found.
func getNamedPatchSet(name *string, patchSets []xpv1.PatchSet) *xpv1.PatchSet {
if name == nil {
// if name is not specified, do not attempt to find a named patchset
return nil
}
for _, ps := range patchSets {
if *name == ps.Name {
return &ps
}
}
return nil
}
// getConvertedPatchSetNames returns the names of patch sets that have been
// converted by a PatchSetConverter.
func getConvertedPatchSetNames(newPatchSets, oldPatchSets []xpv1.PatchSet) []string {
converted := make([]string, 0, len(newPatchSets))
for _, n := range newPatchSets {
found := false
for _, o := range oldPatchSets {
if o.Name != n.Name {
continue
}
found = true
if !reflect.DeepEqual(o, n) {
converted = append(converted, n.Name)
}
break
}
if !found {
converted = append(converted, n.Name)
}
}
return converted
}
// convertToMap converts the given slice of patch sets to a map of
// patch sets keyed by their names.
func convertToMap(ps []xpv1.PatchSet) map[string]*xpv1.PatchSet {
m := make(map[string]*xpv1.PatchSet, len(ps))
for _, p := range ps {
// Crossplane dereferences the last patch set with the same name,
// so override with the last patch set with the same name.
m[p.Name] = p.DeepCopy()
}
return m
}
// convertFromMap converts the specified map of patch sets back to a slice.
// If filterDeleted is set, previously existing patch sets in the Composition
// which have been removed from the map are also removed from the resulting
// slice, and eventually from the Composition. PatchSetConverters are
// allowed to remove patch sets, whereas Composition converters are
// not, as Composition converters have a local view of the patch sets and
// don't know about the other composed templates that may be sharing
// patch sets with them.
func convertFromMap(psMap map[string]*xpv1.PatchSet, oldPS []xpv1.PatchSet, filterDeleted bool) []xpv1.PatchSet {
result := make([]xpv1.PatchSet, 0, len(psMap))
for _, ps := range oldPS {
if filterDeleted && psMap[ps.Name] == nil {
// then patch set has been deleted
continue
}
if psMap[ps.Name] == nil {
result = append(result, ps)
continue
}
result = append(result, *psMap[ps.Name])
delete(psMap, ps.Name)
}
// add the new patch sets
for _, ps := range psMap {
if ps == nil {
continue
}
result = append(result, *ps)
}
return result
}
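Together, convertToMap and convertFromMap give converters a name-keyed view of the patch sets and then restore a slice that preserves the Composition's original ordering. A simplified round trip with a stand-in PatchSet type (the real functions deep-copy the Crossplane type):

```go
package main

import "fmt"

// PatchSet is a stand-in for xpv1.PatchSet.
type PatchSet struct{ Name string }

// toMap mirrors convertToMap: the last patch set with a given name wins.
func toMap(ps []PatchSet) map[string]*PatchSet {
	m := make(map[string]*PatchSet, len(ps))
	for i := range ps {
		p := ps[i] // copy, standing in for DeepCopy
		m[p.Name] = &p
	}
	return m
}

// fromMap mirrors convertFromMap: preserve the old ordering, drop
// deleted entries only when filterDeleted is set, then append any
// patch sets newly added by a converter.
func fromMap(m map[string]*PatchSet, old []PatchSet, filterDeleted bool) []PatchSet {
	result := make([]PatchSet, 0, len(m))
	for _, ps := range old {
		if m[ps.Name] == nil {
			if !filterDeleted {
				result = append(result, ps) // keep the original copy
			}
			continue
		}
		result = append(result, *m[ps.Name])
		delete(m, ps.Name)
	}
	for _, ps := range m {
		if ps != nil {
			result = append(result, *ps)
		}
	}
	return result
}

func main() {
	old := []PatchSet{{Name: "a"}, {Name: "b"}}
	m := toMap(old)
	delete(m, "b") // simulate a PatchSetConverter deleting a patch set
	fmt.Println(fromMap(m, old, true))
}
```

This asymmetry is why only PatchSetConverters may delete patch sets: Composition converters see only a local view and cannot know which other composed templates still share a patch set.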


@ -1,79 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"testing"
"github.com/google/go-cmp/cmp"
)
func TestSplitPathComponents(t *testing.T) {
tests := map[string]struct {
want []string
}{
`m['a.b.c']`: {
want: []string{`m['a.b.c']`},
},
`m["a.b.c"]`: {
want: []string{`m["a.b.c"]`},
},
`m[a.b.c]`: {
want: []string{`m[a.b.c]`},
},
`m[a.b.c.d.e]`: {
want: []string{`m[a.b.c.d.e]`},
},
`m['a.b.c.d.e']`: {
want: []string{`m['a.b.c.d.e']`},
},
`m['a.b']`: {
want: []string{`m['a.b']`},
},
`m['a']`: {
want: []string{`m['a']`},
},
`m['a'].b`: {
want: []string{`m['a']`, `b`},
},
`a`: {
want: []string{`a`},
},
`a.b`: {
want: []string{`a`, `b`},
},
`a.b.c`: {
want: []string{`a`, `b`, `c`},
},
`a.b.c.m['a.b.c']`: {
want: []string{`a`, `b`, `c`, `m['a.b.c']`},
},
`a.b.m['a.b.c'].c`: {
want: []string{`a`, `b`, `m['a.b.c']`, `c`},
},
`m['a.b.c'].a.b.c`: {
want: []string{`m['a.b.c']`, `a`, `b`, `c`},
},
`m[a.b.c].a.b.c`: {
want: []string{`m[a.b.c]`, `a`, `b`, `c`},
},
`m[0]`: {
want: []string{`m[0]`},
},
`a.b.c.m[0]`: {
want: []string{`a`, `b`, `c`, `m[0]`},
},
`m[0].a.b.c`: {
want: []string{`m[0]`, `a`, `b`, `c`},
},
}
for name, tt := range tests {
t.Run(name, func(t *testing.T) {
if diff := cmp.Diff(tt.want, splitPathComponents(name)); diff != "" {
t.Errorf("splitPathComponents(%s): -want, +got:\n%s\n", name, diff)
}
})
}
}
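The table above encodes the splitting rule: a path is split on `.`, except that bracketed map keys such as `m['a.b.c']` stay intact. A standalone sketch of that rule (the real `splitPathComponents` may handle further edge cases):

```go
package main

import "fmt"

// splitPath splits a fieldpath expression on '.', but never inside
// [...] brackets, so map keys like m['a.b.c'] survive as one component.
func splitPath(path string) []string {
	var parts []string
	depth := 0 // bracket nesting depth
	start := 0
	for i, r := range path {
		switch r {
		case '[':
			depth++
		case ']':
			depth--
		case '.':
			if depth == 0 {
				parts = append(parts, path[start:i])
				start = i + 1
			}
		}
	}
	return append(parts, path[start:])
}

func main() {
	fmt.Println(splitPath(`a.b.m['a.b.c'].c`)) // [a b m['a.b.c'] c]
}
```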


@@ -1,129 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import "github.com/pkg/errors"
const (
// KeyContextDiagnostics is the executor step context key for
// storing any extra diagnostics information from
// the executor.
KeyContextDiagnostics = "diagnostics"
)
// PlanExecutor drives the execution of a plan's steps and
// uses the configured `executors` to execute those steps.
type PlanExecutor struct {
executors []Executor
plan Plan
callback ExecutorCallback
}
// Action represents an action to be taken by the PlanExecutor.
// An Action is dictated by a ExecutorCallback implementation
// to the PlanExecutor for each step.
type Action int
const (
// ActionContinue tells the PlanExecutor to continue with the execution
// of a Step.
ActionContinue Action = iota
// ActionSkip tells the PlanExecutor to skip the execution
// of the current Step.
ActionSkip
// ActionCancel tells the PlanExecutor to stop executing
// the Steps of a Plan.
ActionCancel
// ActionRepeat tells the PlanExecutor to repeat the execution
// of the current Step.
ActionRepeat
)
// CallbackResult is the type of a value returned from one of the callback
// methods of ExecutorCallback implementations.
type CallbackResult struct {
Action Action
}
// PlanExecutorOption is a mutator function for setting an option of a
// PlanExecutor.
type PlanExecutorOption func(executor *PlanExecutor)
// WithExecutorCallback configures an ExecutorCallback for a PlanExecutor
// to be notified as the Plan's Steps are executed.
func WithExecutorCallback(cb ExecutorCallback) PlanExecutorOption {
return func(pe *PlanExecutor) {
pe.callback = cb
}
}
// ExecutorCallback is the interface for the callback implementations
// to be notified while executing each Step of a migration Plan.
type ExecutorCallback interface {
// StepToExecute is called just before a migration Plan's Step is executed.
// Can be used to cancel the execution of the Plan, or to continue/skip
// the Step's execution.
StepToExecute(s Step, index int) CallbackResult
// StepSucceeded is called after a migration Plan's Step is
// successfully executed.
// Can be used to cancel the execution of the Plan, or to
// continue/skip/repeat the Step's execution.
StepSucceeded(s Step, index int, diagnostics any) CallbackResult
// StepFailed is called after a migration Plan's Step has
// failed to execute.
// Can be used to cancel the execution of the Plan, or to
// continue/skip/repeat the Step's execution.
StepFailed(s Step, index int, diagnostics any, err error) CallbackResult
}
// NewPlanExecutor returns a new plan executor for executing the steps
// of a migration plan.
func NewPlanExecutor(plan Plan, executors []Executor, opts ...PlanExecutorOption) *PlanExecutor {
pe := &PlanExecutor{
executors: executors,
plan: plan,
}
for _, o := range opts {
o(pe)
}
return pe
}
func (pe *PlanExecutor) Execute() error { //nolint:gocyclo // easier to follow this way
ctx := make(map[string]any)
for i := 0; i < len(pe.plan.Spec.Steps); i++ {
var r CallbackResult
if pe.callback != nil {
r = pe.callback.StepToExecute(pe.plan.Spec.Steps[i], i)
switch r.Action {
case ActionCancel:
return nil
case ActionSkip:
continue
case ActionContinue, ActionRepeat:
}
}
err := pe.executors[0].Step(pe.plan.Spec.Steps[i], ctx)
diag := ctx[KeyContextDiagnostics]
if err != nil {
if pe.callback != nil {
r = pe.callback.StepFailed(pe.plan.Spec.Steps[i], i, diag, err)
}
} else if pe.callback != nil {
r = pe.callback.StepSucceeded(pe.plan.Spec.Steps[i], i, diag)
}
switch r.Action {
case ActionCancel:
return errors.Wrapf(err, "failed to execute step %q at index %d", pe.plan.Spec.Steps[i].Name, i)
case ActionContinue, ActionSkip:
continue
case ActionRepeat:
i--
}
}
return nil
}
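Execute consults the callback before each step and again after it succeeds or fails; ActionRepeat rewinds the loop index so the same step runs again, and ActionCancel stops the plan. A self-contained sketch of that control flow, with simplified types in place of the real Step and ExecutorCallback:

```go
package main

import "fmt"

type Action int

const (
	Continue Action = iota
	Skip
	Cancel
	Repeat
)

// run mirrors PlanExecutor.Execute: before(i) may skip or cancel a step,
// after(i, err) may repeat or cancel; Repeat decrements the index so the
// same step executes again on the next iteration.
func run(steps []string, exec func(string) error, before func(int) Action, after func(int, error) Action) []string {
	var trace []string
	for i := 0; i < len(steps); i++ {
		switch before(i) {
		case Cancel:
			return trace
		case Skip:
			continue
		}
		err := exec(steps[i])
		trace = append(trace, steps[i])
		switch after(i, err) {
		case Cancel:
			return trace
		case Repeat:
			i-- // re-run the current step
		}
	}
	return trace
}

func main() {
	retried := false
	trace := run(
		[]string{"pause", "edit", "start"},
		func(string) error { return nil },
		func(i int) Action {
			if i == 0 {
				return Skip // skip the "pause" step entirely
			}
			return Continue
		},
		func(i int, _ error) Action {
			if i == 1 && !retried {
				retried = true
				return Repeat // run "edit" a second time
			}
			return Continue
		},
	)
	fmt.Println(trace) // [edit edit start]
}
```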


@@ -1,536 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"reflect"
"time"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane/apis/apiextensions/v1"
xpmetav1 "github.com/crossplane/crossplane/apis/pkg/meta/v1"
xpmetav1alpha1 "github.com/crossplane/crossplane/apis/pkg/meta/v1alpha1"
xppkgv1 "github.com/crossplane/crossplane/apis/pkg/v1"
xppkgv1beta1 "github.com/crossplane/crossplane/apis/pkg/v1beta1"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/rand"
)
const (
errSourceHasNext = "failed to generate migration plan: Could not check next object from source"
errSourceNext = "failed to generate migration plan: Could not get next object from source"
errPreProcessFmt = "failed to pre-process the manifest of category %q"
errSourceReset = "failed to generate migration plan: Could not reset the source"
errUnstructuredConvert = "failed to convert from unstructured object to v1.Composition"
errUnstructuredMarshal = "failed to marshal unstructured object to JSON"
errResourceMigrate = "failed to migrate resource"
errCompositePause = "failed to pause composite resource"
errCompositesEdit = "failed to edit composite resources"
errCompositesStart = "failed to start composite resources"
errCompositionMigrateFmt = "failed to migrate the composition: %s"
errConfigurationMetadataMigrateFmt = "failed to migrate the configuration metadata: %s"
errConfigurationPackageMigrateFmt = "failed to migrate the configuration package: %s"
errProviderMigrateFmt = "failed to migrate the Provider package: %s"
errLockMigrateFmt = "failed to migrate the package lock: %s"
errComposedTemplateBase = "failed to migrate the base of a composed template"
errComposedTemplateMigrate = "failed to migrate the composed templates of the composition"
errResourceOutput = "failed to output migrated resource"
errResourceOrphan = "failed to orphan managed resource"
errResourceRemoveFinalizer = "failed to remove finalizers of managed resource"
errCompositionOutput = "failed to output migrated composition"
errCompositeOutput = "failed to output migrated composite"
errClaimOutput = "failed to output migrated claim"
errClaimsEdit = "failed to edit claims"
errPlanGeneration = "failed to generate the migration plan"
errPause = "failed to store a paused manifest"
errMissingGVK = "managed resource is missing its GVK. Resource converters must set GVKs on any managed resources they newly generate."
)
const (
versionV010 = "0.1.0"
keyCompositionRef = "compositionRef"
keyResourceRefs = "resourceRefs"
)
// PlanGeneratorOption configures a PlanGenerator
type PlanGeneratorOption func(generator *PlanGenerator)
// WithErrorOnInvalidPatchSchema returns a PlanGeneratorOption for configuring
// whether the PlanGenerator should error and stop the migration plan
// generation in case an error is encountered while checking a patch
// statement's conformance to the migration source or target.
func WithErrorOnInvalidPatchSchema(e bool) PlanGeneratorOption {
return func(pg *PlanGenerator) {
pg.ErrorOnInvalidPatchSchema = e
}
}
// WithSkipGVKs configures the set of GVKs to skip for conversion
// during a migration.
func WithSkipGVKs(gvk ...schema.GroupVersionKind) PlanGeneratorOption {
return func(pg *PlanGenerator) {
pg.SkipGVKs = gvk
}
}
// WithMultipleSources can be used to configure multiple sources for a
// PlanGenerator.
func WithMultipleSources(source ...Source) PlanGeneratorOption {
return func(pg *PlanGenerator) {
pg.source = &sources{backends: source}
}
}
// WithEnableConfigurationMigrationSteps enables only
// the configuration migration steps.
// TODO: to be replaced with a higher abstraction encapsulating
// migration scenarios.
func WithEnableConfigurationMigrationSteps() PlanGeneratorOption {
return func(pg *PlanGenerator) {
pg.enabledSteps = getConfigurationMigrationSteps()
}
}
// WithEnableOnlyFileSystemAPISteps enables only the file system
// and API migration steps.
func WithEnableOnlyFileSystemAPISteps() PlanGeneratorOption {
return func(pg *PlanGenerator) {
pg.enabledSteps = getAPIMigrationStepsFileSystemMode()
}
}
type sources struct {
backends []Source
i int
}
func (s *sources) HasNext() (bool, error) {
if s.i >= len(s.backends) {
return false, nil
}
ok, err := s.backends[s.i].HasNext()
if err != nil || ok {
return ok, err
}
s.i++
return s.HasNext()
}
func (s *sources) Next() (UnstructuredWithMetadata, error) {
return s.backends[s.i].Next()
}
func (s *sources) Reset() error {
for _, src := range s.backends {
if err := src.Reset(); err != nil {
return err
}
}
s.i = 0
return nil
}
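The `sources` type flattens several `Source` backends into one stream: `HasNext` advances to the next backend only once the current one is exhausted, so `Next` always reads from the first backend that still has items. The same pattern with plain slice-backed iterators (hypothetical minimal types, not the real `Source` interface):

```go
package main

import "fmt"

// sliceSource is a trivial backend yielding strings from a slice.
type sliceSource struct {
	items []string
	i     int
}

func (s *sliceSource) HasNext() bool { return s.i < len(s.items) }
func (s *sliceSource) Next() string  { v := s.items[s.i]; s.i++; return v }

// chain mirrors the sources type: it drains each backend in order,
// skipping exhausted (or empty) backends transparently.
type chain struct {
	backends []*sliceSource
	i        int
}

func (c *chain) HasNext() bool {
	for c.i < len(c.backends) {
		if c.backends[c.i].HasNext() {
			return true
		}
		c.i++ // current backend exhausted; move to the next one
	}
	return false
}

func (c *chain) Next() string { return c.backends[c.i].Next() }

func main() {
	c := &chain{backends: []*sliceSource{
		{items: []string{"mr-1"}},
		{items: nil}, // empty backends are skipped
		{items: []string{"mr-2", "mr-3"}},
	}}
	var all []string
	for c.HasNext() {
		all = append(all, c.Next())
	}
	fmt.Println(all) // [mr-1 mr-2 mr-3]
}
```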
// PlanGenerator generates a migration.Plan reading the manifests available
// from `source`, converting managed resources and compositions using the
// available `migration.Converter`s registered in the `registry` and
// writing the output manifests to the specified `target`.
type PlanGenerator struct {
source Source
target Target
registry *Registry
subSteps map[step]string
enabledSteps []step
// Plan is the migration.Plan whose steps are expected
// to complete a migration when they're executed in order.
Plan Plan
// ErrorOnInvalidPatchSchema errors and stops plan generation in case
// an error is encountered while checking the conformance of a patch
// statement against the migration source or the migration target.
ErrorOnInvalidPatchSchema bool
// GVKs of managed resources that
// should be skipped for conversion during the migration, if no
// converters are registered for them. If any of the GVK components
// is left empty, it will be a wildcard component.
// Exact matching with an empty group name is not possible.
SkipGVKs []schema.GroupVersionKind
}
// NewPlanGenerator constructs a new PlanGenerator using the specified
// Source and Target and the default converter Registry.
func NewPlanGenerator(registry *Registry, source Source, target Target, opts ...PlanGeneratorOption) PlanGenerator {
pg := &PlanGenerator{
source: &sources{backends: []Source{source}},
target: target,
registry: registry,
subSteps: map[step]string{},
enabledSteps: getAPIMigrationSteps(),
}
for _, o := range opts {
o(pg)
}
return *pg
}
// GeneratePlan generates a migration plan for the manifests available from
// the configured Source, writing them to the configured Target using the
// configured converter Registry. The generated Plan is available in the
// PlanGenerator.Plan variable if the generation is successful
// (i.e., no errors are reported).
func (pg *PlanGenerator) GeneratePlan() error {
pg.Plan.Spec.stepMap = make(map[string]*Step)
pg.Plan.Version = versionV010
defer pg.commitSteps()
if err := pg.preProcess(); err != nil {
return err
}
if err := pg.source.Reset(); err != nil {
return errors.Wrap(err, errSourceReset)
}
return errors.Wrap(pg.convert(), errPlanGeneration)
}
func (pg *PlanGenerator) preProcess() error {
if len(pg.registry.unstructuredPreProcessors) == 0 {
return nil
}
for hasNext, err := pg.source.HasNext(); ; hasNext, err = pg.source.HasNext() {
if err != nil {
return errors.Wrap(err, errSourceHasNext)
}
if !hasNext {
break
}
o, err := pg.source.Next()
if err != nil {
return errors.Wrap(err, errSourceNext)
}
for _, pp := range pg.registry.unstructuredPreProcessors[o.Metadata.Category] {
if err := pp.PreProcess(o); err != nil {
return errors.Wrapf(err, errPreProcessFmt, o.Metadata.Category)
}
}
}
return nil
}
func (pg *PlanGenerator) convertPatchSets(o UnstructuredWithMetadata) ([]string, error) {
var converted []string
for _, psConv := range pg.registry.patchSetConverters {
if psConv.re == nil || psConv.converter == nil {
continue
}
if !psConv.re.MatchString(o.Object.GetName()) {
continue
}
c, err := ToComposition(o.Object)
if err != nil {
return nil, errors.Wrap(err, errUnstructuredConvert)
}
oldPatchSets := make([]xpv1.PatchSet, len(c.Spec.PatchSets))
for i, ps := range c.Spec.PatchSets {
oldPatchSets[i] = *ps.DeepCopy()
}
psMap := convertToMap(c.Spec.PatchSets)
if err := psConv.converter.PatchSets(psMap); err != nil {
return nil, errors.Wrapf(err, "failed to call PatchSet converter on Composition: %s", c.GetName())
}
newPatchSets := convertFromMap(psMap, oldPatchSets, true)
converted = append(converted, getConvertedPatchSetNames(newPatchSets, oldPatchSets)...)
pv := fieldpath.Pave(o.Object.Object)
if err := pv.SetValue("spec.patchSets", newPatchSets); err != nil {
return nil, errors.Wrapf(err, "failed to set converted patch sets on Composition: %s", c.GetName())
}
}
return converted, nil
}
func (pg *PlanGenerator) categoricalConvert(u *UnstructuredWithMetadata) error {
if u.Metadata.Category == categoryUnknown {
return nil
}
source := *u
source.Object = *u.Object.DeepCopy()
converters := pg.registry.categoricalConverters[u.Metadata.Category]
if converters == nil {
return nil
}
// TODO: if a categorical converter does not convert the given object,
// we will have a false positive. Better to compute and check
// a diff here.
for _, converter := range converters {
if err := converter.Convert(u); err != nil {
return errors.Wrapf(err, "failed to convert unstructured object of category: %s", u.Metadata.Category)
}
}
return pg.stepEditCategory(source, u)
}
func (pg *PlanGenerator) convert() error { //nolint: gocyclo
convertedMR := make(map[corev1.ObjectReference][]UnstructuredWithMetadata)
convertedComposition := make(map[string]string)
var composites []UnstructuredWithMetadata
var claims []UnstructuredWithMetadata
for hasNext, err := pg.source.HasNext(); ; hasNext, err = pg.source.HasNext() {
if err != nil {
return errors.Wrap(err, errSourceHasNext)
}
if !hasNext {
break
}
o, err := pg.source.Next()
if err != nil {
return errors.Wrap(err, errSourceNext)
}
if err := pg.categoricalConvert(&o); err != nil {
return err
}
switch gvk := o.Object.GroupVersionKind(); gvk {
case xppkgv1.ConfigurationGroupVersionKind:
if err := pg.convertConfigurationPackage(o); err != nil {
return errors.Wrapf(err, errConfigurationPackageMigrateFmt, o.Object.GetName())
}
case xpmetav1.ConfigurationGroupVersionKind, xpmetav1alpha1.ConfigurationGroupVersionKind:
if err := pg.convertConfigurationMetadata(o); err != nil {
return errors.Wrapf(err, errConfigurationMetadataMigrateFmt, o.Object.GetName())
}
pg.stepBackupAllResources()
pg.stepBuildConfiguration()
pg.stepPushConfiguration()
case xpv1.CompositionGroupVersionKind:
target, converted, err := pg.convertComposition(o)
if err != nil {
return errors.Wrapf(err, errCompositionMigrateFmt, o.Object.GetName())
}
if converted {
migratedName := fmt.Sprintf("%s-migrated", o.Object.GetName())
convertedComposition[o.Object.GetName()] = migratedName
target.Object.SetName(migratedName)
if err := pg.stepNewComposition(target); err != nil {
return errors.Wrapf(err, errCompositionMigrateFmt, o.Object.GetName())
}
}
case xppkgv1.ProviderGroupVersionKind:
isConverted, err := pg.convertProviderPackage(o)
if err != nil {
return errors.Wrapf(err, errProviderMigrateFmt, o.Object.GetName())
}
if isConverted {
if err := pg.stepDeleteMonolith(o); err != nil {
return err
}
}
case xppkgv1beta1.LockGroupVersionKind:
if err := pg.convertPackageLock(o); err != nil {
return errors.Wrapf(err, errLockMigrateFmt, o.Object.GetName())
}
default:
if o.Metadata.Category == CategoryComposite {
if err := pg.stepPauseComposite(&o); err != nil {
return errors.Wrap(err, errCompositePause)
}
composites = append(composites, o)
continue
}
if o.Metadata.Category == CategoryClaim {
claims = append(claims, o)
continue
}
targets, converted, err := pg.convertResource(o, false)
if err != nil {
return errors.Wrap(err, errResourceMigrate)
}
if converted {
convertedMR[corev1.ObjectReference{
Kind: gvk.Kind,
Name: o.Object.GetName(),
APIVersion: gvk.GroupVersion().String(),
}] = targets
for _, tu := range targets {
tu := tu
if err := pg.stepNewManagedResource(&tu); err != nil {
return errors.Wrap(err, errResourceMigrate)
}
if err := pg.stepStartManagedResource(&tu); err != nil {
return errors.Wrap(err, errResourceMigrate)
}
}
} else if _, ok, _ := toManagedResource(pg.registry.scheme, o.Object); ok {
if err := pg.stepStartManagedResource(&o); err != nil {
return errors.Wrap(err, errResourceMigrate)
}
}
}
if err := pg.addStepsForManagedResource(&o); err != nil {
return err
}
}
if err := pg.stepEditComposites(composites, convertedMR, convertedComposition); err != nil {
return errors.Wrap(err, errCompositesEdit)
}
if err := pg.stepStartComposites(composites); err != nil {
return errors.Wrap(err, errCompositesStart)
}
if err := pg.stepEditClaims(claims, convertedComposition); err != nil {
return errors.Wrap(err, errClaimsEdit)
}
return nil
}
func (pg *PlanGenerator) convertResource(o UnstructuredWithMetadata, compositionContext bool) ([]UnstructuredWithMetadata, bool, error) {
gvk := o.Object.GroupVersionKind()
conv := pg.registry.resourceConverters[gvk]
if conv == nil {
return []UnstructuredWithMetadata{o}, false, nil
}
// we have already ensured that the GVK belongs to a managed resource type
mg, _, err := toManagedResource(pg.registry.scheme, o.Object)
if err != nil {
return nil, false, errors.Wrap(err, errResourceMigrate)
}
if pg.registry.resourcePreProcessors != nil {
for _, pp := range pg.registry.resourcePreProcessors {
if err = pp.ResourcePreProcessor(mg); err != nil {
return nil, false, errors.Wrap(err, errResourceMigrate)
}
}
}
resources, err := conv.Resource(mg)
if err != nil {
return nil, false, errors.Wrap(err, errResourceMigrate)
}
if err := assertGVK(resources); err != nil {
return nil, true, errors.Wrap(err, errResourceMigrate)
}
if !compositionContext {
assertMetadataName(mg.GetName(), resources)
}
converted := make([]UnstructuredWithMetadata, 0, len(resources))
for _, mg := range resources {
converted = append(converted, UnstructuredWithMetadata{
Object: ToSanitizedUnstructured(mg),
Metadata: o.Metadata,
})
}
return converted, true, nil
}
func assertGVK(resources []resource.Managed) error {
for _, r := range resources {
if reflect.ValueOf(r.GetObjectKind().GroupVersionKind()).IsZero() {
return errors.New(errMissingGVK)
}
}
return nil
}
func assertMetadataName(parentName string, resources []resource.Managed) {
for i, r := range resources {
if len(r.GetName()) != 0 || len(r.GetGenerateName()) != 0 {
continue
}
resources[i].SetGenerateName(fmt.Sprintf("%s-", parentName))
}
}
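assertMetadataName gives any converted resource that has neither a name nor a generate-name a generate-name derived from its parent, so the API server can still assign a unique name on creation. A standalone sketch of that defaulting rule, with a simplified struct in place of `resource.Managed`:

```go
package main

import "fmt"

// obj stands in for a managed resource's object metadata.
type obj struct{ Name, GenerateName string }

// defaultNames mirrors assertMetadataName: only resources with neither
// a name nor a generate-name get "<parent>-" as their generate-name.
func defaultNames(parent string, objs []obj) {
	for i := range objs {
		if objs[i].Name != "" || objs[i].GenerateName != "" {
			continue
		}
		objs[i].GenerateName = fmt.Sprintf("%s-", parent)
	}
}

func main() {
	objs := []obj{{Name: "explicit"}, {}}
	defaultNames("sample-vpc", objs)
	fmt.Println(objs[1].GenerateName) // sample-vpc-
}
```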
func (pg *PlanGenerator) convertComposition(o UnstructuredWithMetadata) (*UnstructuredWithMetadata, bool, error) { //nolint:gocyclo
convertedPS, err := pg.convertPatchSets(o)
if err != nil {
return nil, false, errors.Wrap(err, "failed to convert patch sets")
}
comp, err := ToComposition(o.Object)
if err != nil {
return nil, false, errors.Wrap(err, errUnstructuredConvert)
}
var targetResources []*xpv1.ComposedTemplate
isConverted := false
for _, cmp := range comp.Spec.Resources {
u, err := FromRawExtension(cmp.Base)
if err != nil {
return nil, false, errors.Wrapf(err, errCompositionMigrateFmt, o.Object.GetName())
}
gvk := u.GroupVersionKind()
converted, ok, err := pg.convertResource(UnstructuredWithMetadata{
Object: u,
Metadata: o.Metadata,
}, true)
if err != nil {
return nil, false, errors.Wrap(err, errComposedTemplateBase)
}
isConverted = isConverted || ok
cmps := make([]*xpv1.ComposedTemplate, 0, len(converted))
sourceNameUsed := false
for _, u := range converted {
buff, err := u.Object.MarshalJSON()
if err != nil {
return nil, false, errors.Wrap(err, errUnstructuredMarshal)
}
c := cmp.DeepCopy()
c.Base = runtime.RawExtension{
Raw: buff,
}
if err := pg.setDefaultsOnTargetTemplate(cmp.Name, &sourceNameUsed, gvk, u.Object.GroupVersionKind(), c, comp.Spec.PatchSets, convertedPS); err != nil {
return nil, false, errors.Wrap(err, errComposedTemplateMigrate)
}
cmps = append(cmps, c)
}
conv := pg.registry.templateConverters[gvk]
if conv != nil {
if err := conv.ComposedTemplate(cmp, cmps...); err != nil {
return nil, false, errors.Wrap(err, errComposedTemplateMigrate)
}
}
targetResources = append(targetResources, cmps...)
}
comp.Spec.Resources = make([]xpv1.ComposedTemplate, 0, len(targetResources))
for _, cmp := range targetResources {
comp.Spec.Resources = append(comp.Spec.Resources, *cmp)
}
return &UnstructuredWithMetadata{
Object: ToSanitizedUnstructured(&comp),
Metadata: o.Metadata,
}, isConverted, nil
}
func (pg *PlanGenerator) isGVKSkipped(sourceGVK schema.GroupVersionKind) bool {
for _, gvk := range pg.SkipGVKs {
if (len(gvk.Group) == 0 || gvk.Group == sourceGVK.Group) &&
(len(gvk.Version) == 0 || gvk.Version == sourceGVK.Version) &&
(len(gvk.Kind) == 0 || gvk.Kind == sourceGVK.Kind) {
return true
}
}
return false
}
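isGVKSkipped treats every empty component of a SkipGVKs entry as a wildcard: an entry with only Kind set matches that kind in any group and version, which is also why an exact match on the empty core group is impossible. A standalone sketch of the matching rule, with a simplified GVK struct:

```go
package main

import "fmt"

// GVK stands in for schema.GroupVersionKind.
type GVK struct{ Group, Version, Kind string }

// matches mirrors isGVKSkipped's comparison: each empty component of
// the skip entry acts as a wildcard.
func matches(skip, src GVK) bool {
	return (skip.Group == "" || skip.Group == src.Group) &&
		(skip.Version == "" || skip.Version == src.Version) &&
		(skip.Kind == "" || skip.Kind == src.Kind)
}

func main() {
	src := GVK{Group: "fakesourceapi", Version: "v1alpha1", Kind: "VPC"}
	fmt.Println(matches(GVK{Kind: "VPC"}, src))                 // true: group and version wildcarded
	fmt.Println(matches(GVK{Group: "other", Kind: "VPC"}, src)) // false: group mismatch
}
```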
func (pg *PlanGenerator) setDefaultsOnTargetTemplate(sourceName *string, sourceNameUsed *bool, gvkSource, gvkTarget schema.GroupVersionKind, target *xpv1.ComposedTemplate, patchSets []xpv1.PatchSet, convertedPS []string) error {
if pg.isGVKSkipped(gvkSource) {
return nil
}
// remove invalid patches that do not conform to the migration target's schema
if err := pg.removeInvalidPatches(gvkSource, gvkTarget, patchSets, target, convertedPS); err != nil {
return errors.Wrap(err, "failed to set the defaults on the migration target composed template")
}
if *sourceNameUsed || gvkSource.Kind != gvkTarget.Kind {
if sourceName != nil && len(*sourceName) > 0 {
targetName := fmt.Sprintf("%s-%s", *sourceName, rand.String(5))
target.Name = &targetName
}
} else {
*sourceNameUsed = true
}
return nil
}
func init() {
rand.Seed(time.Now().UnixNano())
}


@@ -1,645 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"bytes"
"os"
"path/filepath"
"regexp"
"testing"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/test"
v1 "github.com/crossplane/crossplane/apis/apiextensions/v1"
xpmetav1 "github.com/crossplane/crossplane/apis/pkg/meta/v1"
xpmetav1alpha1 "github.com/crossplane/crossplane/apis/pkg/meta/v1alpha1"
xppkgv1 "github.com/crossplane/crossplane/apis/pkg/v1"
xppkgv1beta1 "github.com/crossplane/crossplane/apis/pkg/v1beta1"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/yaml"
k8syaml "sigs.k8s.io/yaml"
"github.com/crossplane/upjet/pkg/migration/fake"
)
func TestGeneratePlan(t *testing.T) {
type fields struct {
source Source
target *testTarget
registry *Registry
opts []PlanGeneratorOption
}
type want struct {
err error
migrationPlanPath string
// names of resource files to be loaded
migratedResourceNames []string
preProcessResults map[Category][]string
}
tests := map[string]struct {
fields fields
want want
}{
"EmptyPlan": {
fields: fields{
source: newTestSource(map[string]Metadata{}),
target: newTestTarget(),
registry: getRegistry(),
},
want: want{},
},
"PreProcess": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/composition.yaml": {Category: CategoryComposition},
}),
target: newTestTarget(),
registry: getRegistry(withPreProcessor(CategoryComposition, &preProcessor{})),
},
want: want{
preProcessResults: map[Category][]string{
CategoryComposition: {"example.compositions.apiextensions.crossplane.io_v1"},
},
},
},
"PlanWithManagedResourceAndClaim": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/sourcevpc.yaml": {Category: CategoryManaged},
"testdata/plan/claim.yaml": {Category: CategoryClaim},
"testdata/plan/composition.yaml": {},
"testdata/plan/xrd.yaml": {},
"testdata/plan/xr.yaml": {Category: CategoryComposite}}),
target: newTestTarget(),
registry: getRegistry(
withPreProcessor(CategoryManaged, &preProcessor{}),
withDelegatingConverter(fake.MigrationSourceGVK, delegatingConverter{
rFn: func(mg xpresource.Managed) ([]xpresource.Managed, error) {
s := mg.(*fake.MigrationSourceObject)
t := &fake.MigrationTargetObject{}
if _, err := CopyInto(s, t, fake.MigrationTargetGVK, "spec.forProvider.tags", "mockManaged"); err != nil {
return nil, err
}
t.Spec.ForProvider.Tags = make(map[string]string, len(s.Spec.ForProvider.Tags))
for _, tag := range s.Spec.ForProvider.Tags {
v := tag.Value
t.Spec.ForProvider.Tags[tag.Key] = v
}
return []xpresource.Managed{
t,
}, nil
},
cmpFn: func(_ v1.ComposedTemplate, convertedTemplates ...*v1.ComposedTemplate) error {
// convert patches in the migration target composed templates
for i := range convertedTemplates {
convertedTemplates[i].Patches = append([]v1.Patch{
{FromFieldPath: ptrFromString("spec.parameters.tagValue"),
ToFieldPath: ptrFromString(`spec.forProvider.tags["key1"]`),
}, {
FromFieldPath: ptrFromString("spec.parameters.tagValue"),
ToFieldPath: ptrFromString(`spec.forProvider.tags["key2"]`),
},
}, convertedTemplates[i].Patches...)
}
return nil
},
}),
withPatchSetConverter(patchSetConverter{
re: AllCompositions,
converter: &testConverter{},
})),
},
want: want{
migrationPlanPath: "testdata/plan/generated/migration_plan.yaml",
migratedResourceNames: []string{
"pause-managed/sample-vpc.vpcs.fakesourceapi.yaml",
"edit-claims/my-resource.myresources.test.com.yaml",
"start-managed/sample-vpc.vpcs.faketargetapi.yaml",
"pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml",
"edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml",
"deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml",
"remove-finalizers/sample-vpc.vpcs.fakesourceapi.yaml",
"new-compositions/example-migrated.compositions.apiextensions.crossplane.io.yaml",
"start-composites/my-resource-dwjgh.xmyresources.test.com.yaml",
"create-new-managed/sample-vpc.vpcs.faketargetapi.yaml",
},
preProcessResults: map[Category][]string{
CategoryManaged: {"sample-vpc.vpcs.fakesourceapi_v1alpha1"},
},
},
},
"PlanWithManagedResourceAndClaimForFileSystemMode": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/sourcevpc.yaml": {Category: CategoryManaged},
"testdata/plan/claim.yaml": {Category: CategoryClaim},
"testdata/plan/composition.yaml": {},
"testdata/plan/xrd.yaml": {},
"testdata/plan/xr.yaml": {Category: CategoryComposite}}),
target: newTestTarget(),
registry: getRegistry(
withPreProcessor(CategoryManaged, &preProcessor{}),
withDelegatingConverter(fake.MigrationSourceGVK, delegatingConverter{
rFn: func(mg xpresource.Managed) ([]xpresource.Managed, error) {
s := mg.(*fake.MigrationSourceObject)
t := &fake.MigrationTargetObject{}
if _, err := CopyInto(s, t, fake.MigrationTargetGVK, "spec.forProvider.tags", "mockManaged"); err != nil {
return nil, err
}
t.Spec.ForProvider.Tags = make(map[string]string, len(s.Spec.ForProvider.Tags))
for _, tag := range s.Spec.ForProvider.Tags {
v := tag.Value
t.Spec.ForProvider.Tags[tag.Key] = v
}
return []xpresource.Managed{
t,
}, nil
},
cmpFn: func(_ v1.ComposedTemplate, convertedTemplates ...*v1.ComposedTemplate) error {
// convert patches in the migration target composed templates
for i := range convertedTemplates {
convertedTemplates[i].Patches = append([]v1.Patch{
{FromFieldPath: ptrFromString("spec.parameters.tagValue"),
ToFieldPath: ptrFromString(`spec.forProvider.tags["key1"]`),
}, {
FromFieldPath: ptrFromString("spec.parameters.tagValue"),
ToFieldPath: ptrFromString(`spec.forProvider.tags["key2"]`),
},
}, convertedTemplates[i].Patches...)
}
return nil
},
}),
withPatchSetConverter(patchSetConverter{
re: AllCompositions,
converter: &testConverter{},
})),
opts: []PlanGeneratorOption{WithEnableOnlyFileSystemAPISteps()},
},
want: want{
migrationPlanPath: "testdata/plan/generated/migration_plan_filesystem.yaml",
migratedResourceNames: []string{
"edit-claims/my-resource.myresources.test.com.yaml",
"start-managed/sample-vpc.vpcs.faketargetapi.yaml",
"edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml",
"new-compositions/example-migrated.compositions.apiextensions.crossplane.io.yaml",
"start-composites/my-resource-dwjgh.xmyresources.test.com.yaml",
"create-new-managed/sample-vpc.vpcs.faketargetapi.yaml",
},
preProcessResults: map[Category][]string{
CategoryManaged: {"sample-vpc.vpcs.fakesourceapi_v1alpha1"},
},
},
},
"PlanWithConfigurationMetaV1": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/configurationv1.yaml": {}}),
target: newTestTarget(),
registry: getRegistry(
withConfigurationMetadataConverter(configurationMetadataConverter{
re: AllConfigurations,
converter: &configurationMetaTestConverter{},
})),
opts: []PlanGeneratorOption{WithEnableConfigurationMigrationSteps()},
},
want: want{
migrationPlanPath: "testdata/plan/generated/configurationv1_migration_plan.yaml",
migratedResourceNames: []string{
"edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1.yaml",
},
},
},
"PlanWithConfigurationMetaV1Alpha1": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/configurationv1alpha1.yaml": {}}),
target: newTestTarget(),
registry: getRegistry(
withConfigurationMetadataConverter(configurationMetadataConverter{
re: AllConfigurations,
converter: &configurationMetaTestConverter{},
})),
opts: []PlanGeneratorOption{WithEnableConfigurationMigrationSteps()},
},
want: want{
migrationPlanPath: "testdata/plan/generated/configurationv1alpha1_migration_plan.yaml",
migratedResourceNames: []string{
"edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1alpha1.yaml",
},
},
},
"PlanWithProviderPackageV1": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/providerv1.yaml": {}}),
target: newTestTarget(),
registry: getRegistry(
withProviderPackageConverter(providerPackageConverter{
re: regexp.MustCompile(`xpkg.upbound.io/upbound/provider-aws:.+`),
converter: &monolithProviderToFamilyConfigConverter{},
}),
withProviderPackageConverter(providerPackageConverter{
re: regexp.MustCompile(`xpkg.upbound.io/upbound/provider-aws:.+`),
converter: &monolithicProviderToSSOPConverter{},
})),
opts: []PlanGeneratorOption{WithEnableConfigurationMigrationSteps()},
},
want: want{
migrationPlanPath: "testdata/plan/generated/providerv1_migration_plan.yaml",
migratedResourceNames: []string{
"new-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml",
"new-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml",
"new-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml",
"activate-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml",
"activate-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml",
"activate-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml",
},
},
},
"PlanForConfigurationPackageMigration": {
fields: fields{
source: newTestSource(map[string]Metadata{
"testdata/plan/providerv1.yaml": {},
"testdata/plan/configurationv1.yaml": {},
"testdata/plan/configurationpkgv1.yaml": {},
"testdata/plan/lockv1beta1.yaml": {},
"testdata/plan/sourcevpc.yaml": {Category: CategoryManaged},
"testdata/plan/sourcevpc2.yaml": {Category: CategoryManaged},
}),
target: newTestTarget(),
registry: getRegistry(
withConfigurationMetadataConverter(configurationMetadataConverter{
re: AllConfigurations,
converter: &configurationMetaTestConverter{},
}),
withConfigurationPackageConverter(configurationPackageConverter{
re: regexp.MustCompile(`xpkg.upbound.io/upbound/provider-ref-aws:.+`),
converter: &configurationPackageTestConverter{},
}),
withProviderPackageConverter(providerPackageConverter{
re: regexp.MustCompile(`xpkg.upbound.io/upbound/provider-aws:.+`),
converter: &monolithProviderToFamilyConfigConverter{},
}),
withProviderPackageConverter(providerPackageConverter{
re: regexp.MustCompile(`xpkg.upbound.io/upbound/provider-aws:.+`),
converter: &monolithicProviderToSSOPConverter{},
}),
withPackageLockConverter(packageLockConverter{
re: CrossplaneLockName,
converter: &lockConverter{},
}),
),
opts: []PlanGeneratorOption{WithEnableConfigurationMigrationSteps()},
},
want: want{
migrationPlanPath: "testdata/plan/generated/configurationv1_pkg_migration_plan.yaml",
migratedResourceNames: []string{
"disable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml",
"edit-configuration-package/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml",
"enable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml",
"edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1.yaml",
"new-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml",
"new-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml",
"new-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml",
"activate-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml",
"activate-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml",
"activate-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml",
"edit-package-lock/lock.locks.pkg.crossplane.io_v1beta1.yaml",
"deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml",
"deletion-policy-delete/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml",
},
},
},
}
for name, tt := range tests {
t.Run(name, func(t *testing.T) {
pg := NewPlanGenerator(tt.fields.registry, tt.fields.source, tt.fields.target, tt.fields.opts...)
err := pg.GeneratePlan()
// compare error state
if diff := cmp.Diff(tt.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("GeneratePlan(): -wantError, +gotError: %s", diff)
}
if err != nil {
return
}
// compare preprocessor results
for c, results := range tt.want.preProcessResults {
pps := tt.fields.registry.unstructuredPreProcessors[c]
if len(pps) != 1 {
t.Fatalf("expected exactly one pre-processor registered for category %s, got %d", c, len(pps))
}
pp := pps[0].(*preProcessor)
if diff := cmp.Diff(results, pp.results); diff != "" {
t.Errorf("GeneratePlan(): -wantPreProcessorResults, +gotPreProcessorResults: %s", diff)
}
}
// compare generated plan with the expected plan
p, err := loadPlan(tt.want.migrationPlanPath)
if err != nil {
t.Fatalf("Failed to load plan file from path %s: %v", tt.want.migrationPlanPath, err)
}
if diff := cmp.Diff(p, &pg.Plan, cmpopts.IgnoreUnexported(Spec{})); diff != "" {
t.Errorf("GeneratePlan(): -wantPlan, +gotPlan: %s", diff)
}
// compare generated migration files with the expected ones
for _, name := range tt.want.migratedResourceNames {
path := filepath.Join("testdata/plan/generated", name)
buff, err := os.ReadFile(path)
if err != nil {
t.Fatalf("Failed to read a generated migration resource from path %s: %v", path, err)
}
u := unstructured.Unstructured{}
if err := k8syaml.Unmarshal(buff, &u); err != nil {
t.Fatalf("Failed to unmarshal a generated migration resource from path %s: %v", path, err)
}
gU, ok := tt.fields.target.targetManifests[name]
if !ok {
t.Errorf("GeneratePlan(): Expected generated migration resource file not found: %s", name)
continue
}
removeNilValuedKeys(u.Object)
if diff := cmp.Diff(u, gU.Object); diff != "" {
t.Errorf("GeneratePlan(): -wantMigratedResource, +gotMigratedResource with name %q: %s", name, diff)
}
delete(tt.fields.target.targetManifests, name)
}
// check for unexpected generated migration files
for name := range tt.fields.target.targetManifests {
t.Errorf("GeneratePlan(): Unexpected generated migration file: %s", name)
}
})
}
}
type testSource struct {
sourceManifests map[string]Metadata
paths []string
index int
}
func newTestSource(sourceManifests map[string]Metadata) *testSource {
result := &testSource{sourceManifests: sourceManifests}
result.paths = make([]string, 0, len(result.sourceManifests))
for k := range result.sourceManifests {
result.paths = append(result.paths, k)
}
return result
}
func (f *testSource) HasNext() (bool, error) {
return f.index < len(f.paths), nil
}
func (f *testSource) Next() (UnstructuredWithMetadata, error) {
um := UnstructuredWithMetadata{
Metadata: f.sourceManifests[f.paths[f.index]],
Object: unstructured.Unstructured{},
}
um.Metadata.Path = f.paths[f.index]
buff, err := os.ReadFile(f.paths[f.index])
if err != nil {
return um, err
}
decoder := yaml.NewYAMLOrJSONDecoder(bytes.NewReader(buff), 1024)
if err := decoder.Decode(&um.Object); err != nil {
return um, err
}
f.index++
return um, nil
}
func (f *testSource) Reset() error {
f.index = 0
return nil
}
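The HasNext/Next/Reset trio above is a cursor-based iterator over the source manifest paths. The same iteration contract can be sketched stdlib-only (the type and helper names here are illustrative, not the real Source interface, and file I/O is omitted):

```go
package main

import "fmt"

// sliceSource mimics testSource's cursor-based iteration over paths.
type sliceSource struct {
	paths []string
	index int
}

// HasNext reports whether the cursor still points at an unread path.
func (s *sliceSource) HasNext() bool { return s.index < len(s.paths) }

// Next returns the current path and advances the cursor.
func (s *sliceSource) Next() string {
	p := s.paths[s.index]
	s.index++
	return p
}

// Reset rewinds the cursor so iteration can start over.
func (s *sliceSource) Reset() { s.index = 0 }

// drain visits every path exactly once via the iterator contract.
func drain(s *sliceSource) []string {
	var out []string
	for s.HasNext() {
		out = append(out, s.Next())
	}
	return out
}

func main() {
	s := &sliceSource{paths: []string{"a.yaml", "b.yaml"}}
	fmt.Println(drain(s))
	s.Reset()
	fmt.Println(s.HasNext())
}
```

The real testSource additionally stamps each returned object's Metadata.Path and decodes the file contents, but the termination and reset logic is the same.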
type testTarget struct {
targetManifests map[string]UnstructuredWithMetadata
}
func newTestTarget() *testTarget {
return &testTarget{
targetManifests: make(map[string]UnstructuredWithMetadata),
}
}
func (f *testTarget) Put(o UnstructuredWithMetadata) error {
f.targetManifests[o.Metadata.Path] = o
return nil
}
func (f *testTarget) Delete(o UnstructuredWithMetadata) error {
delete(f.targetManifests, o.Metadata.Path)
return nil
}
// can be utilized to populate test artifacts
/*func (f *testTarget) dumpFiles(parentDir string) error {
for f, u := range f.targetManifests {
path := filepath.Join(parentDir, f)
buff, err := k8syaml.Marshal(u.Object.Object)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
return err
}
if err := os.WriteFile(path, buff, 0o600); err != nil {
return err
}
}
return nil
}*/
type testConverter struct{}
func (f *testConverter) PatchSets(psMap map[string]*v1.PatchSet) error {
psMap["ps1"].Patches[0].ToFieldPath = ptrFromString(`spec.forProvider.tags["key3"]`)
psMap["ps6"].Patches[0].ToFieldPath = ptrFromString(`spec.forProvider.tags["key4"]`)
return nil
}
func ptrFromString(s string) *string {
return &s
}
type registryOption func(*Registry)
func withDelegatingConverter(gvk schema.GroupVersionKind, d delegatingConverter) registryOption {
return func(r *Registry) {
r.RegisterAPIConversionFunctions(gvk, d.rFn, d.cmpFn, nil)
}
}
func withPatchSetConverter(c patchSetConverter) registryOption {
return func(r *Registry) {
r.RegisterPatchSetConverter(c.re, c.converter)
}
}
func withConfigurationMetadataConverter(c configurationMetadataConverter) registryOption {
return func(r *Registry) {
r.RegisterConfigurationMetadataConverter(c.re, c.converter)
}
}
func withConfigurationPackageConverter(c configurationPackageConverter) registryOption {
return func(r *Registry) {
r.RegisterConfigurationPackageConverter(c.re, c.converter)
}
}
func withProviderPackageConverter(c providerPackageConverter) registryOption {
return func(r *Registry) {
r.RegisterProviderPackageConverter(c.re, c.converter)
}
}
func withPackageLockConverter(c packageLockConverter) registryOption {
return func(r *Registry) {
r.RegisterPackageLockConverter(c.re, c.converter)
}
}
func withPreProcessor(c Category, pp UnstructuredPreProcessor) registryOption {
return func(r *Registry) {
r.RegisterPreProcessor(c, pp)
}
}
func getRegistry(opts ...registryOption) *Registry {
scheme := runtime.NewScheme()
scheme.AddKnownTypeWithName(fake.MigrationSourceGVK, &fake.MigrationSourceObject{})
scheme.AddKnownTypeWithName(fake.MigrationTargetGVK, &fake.MigrationTargetObject{})
r := NewRegistry(scheme)
for _, o := range opts {
o(r)
}
return r
}
func loadPlan(planPath string) (*Plan, error) {
if planPath == "" {
return emptyPlan(), nil
}
buff, err := os.ReadFile(planPath)
if err != nil {
return nil, err
}
p := &Plan{}
return p, k8syaml.Unmarshal(buff, p)
}
func emptyPlan() *Plan {
return &Plan{
Version: versionV010,
}
}
type configurationPackageTestConverter struct{}
func (c *configurationPackageTestConverter) ConfigurationPackageV1(pkg *xppkgv1.Configuration) error {
pkg.Spec.Package = "xpkg.upbound.io/upbound/provider-ref-aws:v0.2.0-ssop"
return nil
}
type configurationMetaTestConverter struct{}
func (cc *configurationMetaTestConverter) ConfigurationMetadataV1(c *xpmetav1.Configuration) error {
c.Spec.DependsOn = []xpmetav1.Dependency{
{
Provider: ptrFromString("xpkg.upbound.io/upbound/provider-aws-eks"),
Version: ">=v0.17.0",
},
}
return nil
}
func (cc *configurationMetaTestConverter) ConfigurationMetadataV1Alpha1(c *xpmetav1alpha1.Configuration) error {
c.Spec.DependsOn = []xpmetav1alpha1.Dependency{
{
Provider: ptrFromString("xpkg.upbound.io/upbound/provider-aws-eks"),
Version: ">=v0.17.0",
},
}
return nil
}
type monolithProviderToFamilyConfigConverter struct{}
func (c *monolithProviderToFamilyConfigConverter) ProviderPackageV1(_ xppkgv1.Provider) ([]xppkgv1.Provider, error) {
ap := xppkgv1.ManualActivation
return []xppkgv1.Provider{
{
ObjectMeta: metav1.ObjectMeta{
Name: "provider-family-aws",
},
Spec: xppkgv1.ProviderSpec{
PackageSpec: xppkgv1.PackageSpec{
Package: "xpkg.upbound.io/upbound/provider-family-aws:v0.37.0",
RevisionActivationPolicy: &ap,
},
},
},
}, nil
}
type monolithicProviderToSSOPConverter struct{}
func (c *monolithicProviderToSSOPConverter) ProviderPackageV1(_ xppkgv1.Provider) ([]xppkgv1.Provider, error) {
ap := xppkgv1.ManualActivation
return []xppkgv1.Provider{
{
ObjectMeta: metav1.ObjectMeta{
Name: "provider-aws-ec2",
},
Spec: xppkgv1.ProviderSpec{
PackageSpec: xppkgv1.PackageSpec{
Package: "xpkg.upbound.io/upbound/provider-aws-ec2:v0.37.0",
RevisionActivationPolicy: &ap,
},
},
},
{
ObjectMeta: metav1.ObjectMeta{
Name: "provider-aws-eks",
},
Spec: xppkgv1.ProviderSpec{
PackageSpec: xppkgv1.PackageSpec{
Package: "xpkg.upbound.io/upbound/provider-aws-eks:v0.37.0",
RevisionActivationPolicy: &ap,
},
},
},
}, nil
}
type lockConverter struct{}
func (p *lockConverter) PackageLockV1Beta1(lock *xppkgv1beta1.Lock) error {
lock.Packages = append(lock.Packages, xppkgv1beta1.LockPackage{
Name: "test-provider",
Type: xppkgv1beta1.ProviderPackageType,
Source: "xpkg.upbound.io/upbound/test-provider",
Version: "vX.Y.Z",
})
return nil
}
type preProcessor struct {
results []string
}
func (pp *preProcessor) PreProcess(u UnstructuredWithMetadata) error {
pp.results = append(pp.results, getVersionedName(u.Object))
return nil
}


@ -1,177 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"sort"
"strconv"
"strings"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/util/json"
"k8s.io/apimachinery/pkg/util/jsonmergepatch"
"k8s.io/apimachinery/pkg/util/rand"
)
type step int
const (
errMarshalSourceForPatch = "failed to marshal source object for computing JSON merge patch"
errMarshalTargetForPatch = "failed to marshal target object for computing JSON merge patch"
errMergePatch = "failed to compute the JSON merge patch document"
errMergePatchMap = "failed to unmarshal the JSON merge patch document into map"
errInvalidStepFmt = "invalid step ID: %d"
)
func setApplyStep(name string, s *Step) {
s.Name = name
s.Type = StepTypeApply
s.Apply = &ApplyStep{}
}
func setPatchStep(name string, s *Step) {
s.Name = name
s.Type = StepTypePatch
s.Patch = &PatchStep{}
s.Patch.Type = PatchTypeMerge
}
func setDeleteStep(name string, s *Step) {
s.Name = name
s.Type = StepTypeDelete
deletePolicy := FinalizerPolicyRemove
s.Delete = &DeleteStep{
Options: &DeleteOptions{
FinalizerPolicy: &deletePolicy,
},
}
}
func setExecStep(name string, s *Step) {
s.Name = name
s.Type = StepTypeExec
s.Exec = &ExecStep{
Command: "sh",
}
}
func (pg *PlanGenerator) commitSteps() { //nolint: gocyclo
if len(pg.Plan.Spec.stepMap) == 0 {
return
}
pg.Plan.Spec.Steps = make([]Step, 0, len(pg.Plan.Spec.stepMap))
keys := make([]string, 0, len(pg.Plan.Spec.stepMap))
for s := range pg.Plan.Spec.stepMap {
keys = append(keys, s)
}
// The keys slice consists of the step keys of the enabled migration steps.
// Step keys are string representations of integer or float64 numbers that
// encode the step execution order (a greater number executes later), so
// they must be sorted by their numeric values. Sorting the strings
// directly would misorder them, e.g. "1" < "10" < "2".
// Sorting panics if a non-numeric step key is encountered.
sort.SliceStable(keys, func(i, j int) bool {
fi, err := strconv.ParseFloat(keys[i], 64)
if err != nil {
panic(err)
}
fj, err := strconv.ParseFloat(keys[j], 64)
if err != nil {
panic(err)
}
return fi < fj
})
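The comment above is easy to verify: sorting the step keys as plain strings yields "1" < "10" < "2", while parsing them as numbers restores the intended execution order. A stdlib-only illustration of the same ParseFloat-based comparison (error handling simplified to ignore, since the inputs here are known-numeric):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sortStepKeys sorts step keys by their numeric value, mirroring the
// ParseFloat-based comparison used by commitSteps.
func sortStepKeys(keys []string) []string {
	out := append([]string(nil), keys...)
	sort.SliceStable(out, func(i, j int) bool {
		fi, _ := strconv.ParseFloat(out[i], 64)
		fj, _ := strconv.ParseFloat(out[j], 64)
		return fi < fj
	})
	return out
}

func main() {
	keys := []string{"10", "2", "1", "2.5"}
	lex := append([]string(nil), keys...)
	sort.Strings(lex)
	fmt.Println(lex)                // lexicographic: [1 10 2 2.5]
	fmt.Println(sortStepKeys(keys)) // numeric: [1 2 2.5 10]
}
```

Float keys such as "2.5" are why commitSteps parses with ParseFloat rather than Atoi: sub-steps can be interleaved between integer-keyed steps without renumbering.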
addManualExecution := true
switch t := pg.source.(type) {
case *sources:
for _, source := range t.backends {
if _, ok := source.(*FileSystemSource); ok {
addManualExecution = false
break
}
}
case *FileSystemSource:
addManualExecution = false
}
if addManualExecution {
for _, s := range keys {
AddManualExecution(pg.Plan.Spec.stepMap[s])
pg.Plan.Spec.Steps = append(pg.Plan.Spec.Steps, *pg.Plan.Spec.stepMap[s])
}
} else {
for _, s := range keys {
pg.Plan.Spec.Steps = append(pg.Plan.Spec.Steps, *pg.Plan.Spec.stepMap[s])
}
}
}
// AddManualExecution sets the manual execution hint for
// the specified step.
func AddManualExecution(s *Step) {
switch s.Type {
case StepTypeExec:
s.ManualExecution = []string{fmt.Sprintf("%s %s %q", s.Exec.Command, s.Exec.Args[0], strings.Join(s.Exec.Args[1:], " "))}
case StepTypePatch:
for _, f := range s.Patch.Files {
s.ManualExecution = append(s.ManualExecution, fmt.Sprintf("kubectl patch --type='%s' -f %s --patch-file %s", s.Patch.Type, f, f))
}
case StepTypeApply:
for _, f := range s.Apply.Files {
s.ManualExecution = append(s.ManualExecution, fmt.Sprintf("kubectl apply -f %s", f))
}
case StepTypeDelete:
for _, r := range s.Delete.Resources {
s.ManualExecution = append(s.ManualExecution, fmt.Sprintf("kubectl delete %s %s", strings.Join([]string{r.Kind, r.Group}, "."), r.Name))
}
}
}
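AddManualExecution renders one shell-ready hint per file or resource, one plain fmt.Sprintf per step type. A reduced, stdlib-only sketch of the apply and delete formatting (the helper names here are illustrative; the real function operates on the Step type's nested fields):

```go
package main

import (
	"fmt"
	"strings"
)

// applyHint mirrors the StepTypeApply branch: one hint per file.
func applyHint(file string) string {
	return fmt.Sprintf("kubectl apply -f %s", file)
}

// deleteHint mirrors the StepTypeDelete branch: the resource is
// addressed as "<kind>.<group> <name>".
func deleteHint(kind, group, name string) string {
	return fmt.Sprintf("kubectl delete %s %s",
		strings.Join([]string{kind, group}, "."), name)
}

func main() {
	fmt.Println(applyHint("new-ssop/provider-family-aws.yaml"))
	fmt.Println(deleteHint("vpc", "fakesourceapi", "sample-vpc"))
}
```

Note that the hints are only attached when no FileSystemSource backend is in play; commitSteps skips them otherwise, since the referenced files would not exist on the operator's machine.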
func (pg *PlanGenerator) stepEnabled(s step) bool {
for _, i := range pg.enabledSteps {
if i == s {
return true
}
}
return false
}
func computeJSONMergePathDoc(source, target unstructured.Unstructured) (map[string]any, error) {
sourceBuff, err := source.MarshalJSON()
if err != nil {
return nil, errors.Wrap(err, errMarshalSourceForPatch)
}
targetBuff, err := target.MarshalJSON()
if err != nil {
return nil, errors.Wrap(err, errMarshalTargetForPatch)
}
patch, err := jsonmergepatch.CreateThreeWayJSONMergePatch(sourceBuff, targetBuff, sourceBuff)
if err != nil {
return nil, errors.Wrap(err, errMergePatch)
}
var result map[string]any
return result, errors.Wrap(json.Unmarshal(patch, &result), errMergePatchMap)
}
func getQualifiedName(u unstructured.Unstructured) string {
namePrefix := u.GetName()
if len(namePrefix) == 0 {
namePrefix = fmt.Sprintf("%s%s", u.GetGenerateName(), rand.String(5))
}
gvk := u.GroupVersionKind()
return fmt.Sprintf("%s.%ss.%s", namePrefix, strings.ToLower(gvk.Kind), gvk.Group)
}
func getVersionedName(u unstructured.Unstructured) string {
v := u.GroupVersionKind().Version
qName := getQualifiedName(u)
if v == "" {
return qName
}
return fmt.Sprintf("%s_%s", qName, v)
}
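The fixture file names in the tests above (e.g. sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml) come straight from these two helpers. A stdlib-only restatement of the naming scheme, assuming a non-empty name (the generateName-plus-random-suffix branch is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// versionedName reproduces getVersionedName's naming scheme:
// <name>.<lowercase(kind)>s.<group>, with "_<version>" appended
// when the version is non-empty.
func versionedName(name, kind, group, version string) string {
	qName := fmt.Sprintf("%s.%ss.%s", name, strings.ToLower(kind), group)
	if version == "" {
		return qName
	}
	return fmt.Sprintf("%s_%s", qName, version)
}

func main() {
	fmt.Println(versionedName("sample-vpc", "VPC", "fakesourceapi", "v1alpha1"))
	// sample-vpc.vpcs.fakesourceapi_v1alpha1
}
```

The naive pluralization (lowercased kind plus "s") is an approximation rather than a real RESTMapper lookup, which is acceptable here because the names are only used as migration file paths, not API calls.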


@ -1,146 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"fmt"
"strings"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
const (
errPutSSOPPackageFmt = "failed to put the SSOP package: %s"
errActivateSSOP = "failed to put the activated SSOP package: %s"
)
func (pg *PlanGenerator) convertProviderPackage(o UnstructuredWithMetadata) (bool, error) { //nolint:gocyclo
pkg, err := toProviderPackage(o.Object)
if err != nil {
return false, err
}
isConverted := false
for _, pkgConv := range pg.registry.providerPackageConverters {
if pkgConv.re == nil || pkgConv.converter == nil || !pkgConv.re.MatchString(pkg.Spec.Package) {
continue
}
targetPkgs, err := pkgConv.converter.ProviderPackageV1(*pkg)
if err != nil {
return false, errors.Wrapf(err, "failed to call converter on Provider package: %s", pkg.Spec.Package)
}
if len(targetPkgs) == 0 {
continue
}
// TODO: if a provider package converter only converts a specific version
// (or does not convert the given package),
// we will have a false positive. Better to compute and check
// a diff here.
isConverted = true
converted := make([]*UnstructuredWithMetadata, 0, len(targetPkgs))
for _, p := range targetPkgs {
p := p
converted = append(converted, &UnstructuredWithMetadata{
Object: ToSanitizedUnstructured(&p),
Metadata: o.Metadata,
})
}
if err := pg.stepNewSSOPs(o, converted); err != nil {
return false, err
}
if err := pg.stepActivateSSOPs(converted); err != nil {
return false, err
}
if err := pg.stepCheckHealthOfNewProvider(o, converted); err != nil {
return false, err
}
if err := pg.stepCheckInstallationOfNewProvider(o, converted); err != nil {
return false, err
}
}
return isConverted, nil
}
func (pg *PlanGenerator) stepDeleteMonolith(source UnstructuredWithMetadata) error {
// delete the monolithic provider package
s := pg.stepConfigurationWithSubStep(stepDeleteMonolithicProvider, true)
source.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(source.Object))
s.Delete.Resources = []Resource{
{
GroupVersionKind: FromGroupVersionKind(source.Object.GroupVersionKind()),
Name: source.Object.GetName(),
},
}
return nil
}
// add steps for the new SSOPs
func (pg *PlanGenerator) stepNewSSOPs(source UnstructuredWithMetadata, targets []*UnstructuredWithMetadata) error {
var s *Step
isFamilyConfig, err := checkContainsFamilyConfigProvider(targets)
if err != nil {
return errors.Wrap(err, "could not determine whether the targets contain the provider family config")
}
if isFamilyConfig {
s = pg.stepConfigurationWithSubStep(stepNewFamilyProvider, true)
} else {
s = pg.stepConfigurationWithSubStep(stepNewServiceScopedProvider, true)
}
for _, t := range targets {
t.Object.Object = addGVK(source.Object, t.Object.Object)
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
s.Apply.Files = append(s.Apply.Files, t.Metadata.Path)
if err := pg.target.Put(*t); err != nil {
return errors.Wrapf(err, errPutSSOPPackageFmt, t.Metadata.Path)
}
}
return nil
}
// add steps for activating SSOPs
func (pg *PlanGenerator) stepActivateSSOPs(targets []*UnstructuredWithMetadata) error {
var s *Step
isFamilyConfig, err := checkContainsFamilyConfigProvider(targets)
if err != nil {
return errors.Wrap(err, "could not determine whether the targets contain the provider family config")
}
if isFamilyConfig {
s = pg.stepConfigurationWithSubStep(stepActivateFamilyProviderRevision, true)
} else {
s = pg.stepConfigurationWithSubStep(stepActivateServiceScopedProviderRevision, true)
}
for _, t := range targets {
t.Metadata.Path = fmt.Sprintf("%s/%s.yaml", s.Name, getVersionedName(t.Object))
s.Patch.Files = append(s.Patch.Files, t.Metadata.Path)
if err := pg.target.Put(UnstructuredWithMetadata{
Object: unstructured.Unstructured{
Object: addNameGVK(t.Object, map[string]any{
"spec": map[string]any{
"revisionActivationPolicy": "Automatic",
},
}),
},
Metadata: t.Metadata,
}); err != nil {
return errors.Wrapf(err, errActivateSSOP, t.Metadata.Path)
}
}
return nil
}
func checkContainsFamilyConfigProvider(targets []*UnstructuredWithMetadata) (bool, error) {
for _, t := range targets {
paved := fieldpath.Pave(t.Object.Object)
pkg, err := paved.GetString("spec.package")
if err != nil {
return false, errors.Wrap(err, "could not get package of provider")
}
if strings.Contains(pkg, "provider-family") {
return true, nil
}
}
return false, nil
}
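checkContainsFamilyConfigProvider reduces to a substring test on each target's spec.package reference. The same logic on a plain slice of package references instead of unstructured objects (a simplification: the real function paves the object and can fail if spec.package is absent):

```go
package main

import (
	"fmt"
	"strings"
)

// containsFamilyConfig reports whether any package reference belongs to
// a provider family config package, mirroring the substring check in
// checkContainsFamilyConfigProvider.
func containsFamilyConfig(pkgRefs []string) bool {
	for _, pkg := range pkgRefs {
		if strings.Contains(pkg, "provider-family") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsFamilyConfig([]string{
		"xpkg.upbound.io/upbound/provider-family-aws:v0.37.0",
	}))
	fmt.Println(containsFamilyConfig([]string{
		"xpkg.upbound.io/upbound/provider-aws-ec2:v0.37.0",
		"xpkg.upbound.io/upbound/provider-aws-eks:v0.37.0",
	}))
}
```

This distinction drives which plan step the converted packages land in: family config providers go into the new-family-provider step, while service-scoped providers go into the new-service-scoped-provider step.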


@ -1,542 +0,0 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package migration
import (
"regexp"
"github.com/crossplane/crossplane-runtime/pkg/resource"
xpv1 "github.com/crossplane/crossplane/apis/apiextensions/v1"
xpmetav1 "github.com/crossplane/crossplane/apis/pkg/meta/v1"
xpmetav1alpha1 "github.com/crossplane/crossplane/apis/pkg/meta/v1alpha1"
xppkgv1 "github.com/crossplane/crossplane/apis/pkg/v1"
xppkgv1beta1 "github.com/crossplane/crossplane/apis/pkg/v1beta1"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
var (
// AllCompositions matches all v1.Composition names.
AllCompositions = regexp.MustCompile(`.*`)
// AllConfigurations matches all metav1.Configuration names.
AllConfigurations = regexp.MustCompile(`.*`)
// CrossplaneLockName is the Crossplane package lock's `metadata.name`
CrossplaneLockName = regexp.MustCompile(`^lock$`)
)
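Note the anchoring difference among these patterns: AllConfigurations (`.*`) matches any name, while CrossplaneLockName (`^lock$`) matches only the exact name "lock". A quick stdlib check (package-level variables here shadow the exported names for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// allConfigurations matches every name: unanchored `.*`.
	allConfigurations = regexp.MustCompile(`.*`)
	// crossplaneLockName matches only the exact string "lock":
	// both ends are anchored.
	crossplaneLockName = regexp.MustCompile(`^lock$`)
)

func main() {
	fmt.Println(allConfigurations.MatchString("platform-ref-aws")) // true
	fmt.Println(crossplaneLockName.MatchString("lock"))            // true
	fmt.Println(crossplaneLockName.MatchString("lockfile"))        // false
}
```

The same care applies to the converter registrations seen in the tests above: a pattern like `xpkg.upbound.io/upbound/provider-aws:.+` is unanchored, so MatchString finds it anywhere in the package reference.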
const (
errAddToScheme = "failed to register types with the registry's scheme"
errFmtNewObject = "failed to instantiate a new runtime.Object using runtime.Scheme for: %s"
errFmtNotManagedResource = "specified GVK does not belong to a managed resource: %s"
)
type patchSetConverter struct {
// re is the regular expression against which a Composition's name
// will be matched to determine whether the conversion function
// will be invoked.
re *regexp.Regexp
// converter is the PatchSetConverter to be run on the Composition's
// patch sets.
converter PatchSetConverter
}
type configurationMetadataConverter struct {
// re is the regular expression against which a Configuration's name
// will be matched to determine whether the conversion function
// will be invoked.
re *regexp.Regexp
// converter is the ConfigurationMetadataConverter to be run on the Configuration's
// metadata.
converter ConfigurationMetadataConverter
}
type configurationPackageConverter struct {
// re is the regular expression against which a Configuration package's
// reference will be matched to determine whether the conversion function
// will be invoked.
re *regexp.Regexp
// converter is the ConfigurationPackageConverter to be run on the
// Configuration package.
converter ConfigurationPackageConverter
}
type providerPackageConverter struct {
// re is the regular expression against which a Provider package's
// reference will be matched to determine whether the conversion function
// will be invoked.
re *regexp.Regexp
// converter is the ProviderPackageConverter to be run on the
// Provider package.
converter ProviderPackageConverter
}
type packageLockConverter struct {
// re is the regular expression against which a package Lock's name
// will be matched to determine whether the conversion function
// will be invoked.
re *regexp.Regexp
// converter is the PackageLockConverter to be run on the package Lock.
converter PackageLockConverter
}
// Registry is a registry of `migration.Converter`s keyed with
// the associated `schema.GroupVersionKind`s and an associated
// runtime.Scheme with which the corresponding types are registered.
type Registry struct {
unstructuredPreProcessors map[Category][]UnstructuredPreProcessor
resourcePreProcessors []ManagedPreProcessor
resourceConverters map[schema.GroupVersionKind]ResourceConverter
templateConverters map[schema.GroupVersionKind]ComposedTemplateConverter
patchSetConverters []patchSetConverter
configurationMetaConverters []configurationMetadataConverter
configurationPackageConverters []configurationPackageConverter
providerPackageConverters []providerPackageConverter
packageLockConverters []packageLockConverter
categoricalConverters map[Category][]CategoricalConverter
scheme *runtime.Scheme
claimTypes []schema.GroupVersionKind
compositeTypes []schema.GroupVersionKind
}
// NewRegistry returns a new Registry initialized with
// the specified runtime.Scheme.
func NewRegistry(scheme *runtime.Scheme) *Registry {
return &Registry{
resourceConverters: make(map[schema.GroupVersionKind]ResourceConverter),
templateConverters: make(map[schema.GroupVersionKind]ComposedTemplateConverter),
categoricalConverters: make(map[Category][]CategoricalConverter),
unstructuredPreProcessors: make(map[Category][]UnstructuredPreProcessor),
scheme: scheme,
}
}
// make sure a converter is being registered for a managed resource,
// and it's registered with our runtime scheme.
// This will be needed, during runtime, for properly converting resources.
func (r *Registry) assertManagedResource(gvk schema.GroupVersionKind) {
obj, err := r.scheme.New(gvk)
if err != nil {
panic(errors.Wrapf(err, errFmtNewObject, gvk))
}
if _, ok := obj.(resource.Managed); !ok {
panic(errors.Errorf(errFmtNotManagedResource, gvk))
}
}
// RegisterResourceConverter registers the specified ResourceConverter
// for the specified GVK with this Registry.
func (r *Registry) RegisterResourceConverter(gvk schema.GroupVersionKind, conv ResourceConverter) {
r.assertManagedResource(gvk)
r.resourceConverters[gvk] = conv
}
// RegisterTemplateConverter registers the specified ComposedTemplateConverter
// for the specified GVK with this Registry.
func (r *Registry) RegisterTemplateConverter(gvk schema.GroupVersionKind, conv ComposedTemplateConverter) {
r.assertManagedResource(gvk)
r.templateConverters[gvk] = conv
}
// RegisterCompositionConverter is a convenience method for registering both
// a ResourceConverter and a ComposedTemplateConverter that act on the same
// managed resource kind and are implemented by the same type.
func (r *Registry) RegisterCompositionConverter(gvk schema.GroupVersionKind, conv CompositionConverter) {
r.RegisterResourceConverter(gvk, conv)
r.RegisterTemplateConverter(gvk, conv)
}
// RegisterPatchSetConverter registers the given PatchSetConverter for
// the compositions whose names match the given regular expression.
func (r *Registry) RegisterPatchSetConverter(re *regexp.Regexp, psConv PatchSetConverter) {
r.patchSetConverters = append(r.patchSetConverters, patchSetConverter{
re: re,
converter: psConv,
})
}
// RegisterConfigurationMetadataConverter registers the given ConfigurationMetadataConverter
// for the configurations whose names match the given regular expression.
func (r *Registry) RegisterConfigurationMetadataConverter(re *regexp.Regexp, confConv ConfigurationMetadataConverter) {
r.configurationMetaConverters = append(r.configurationMetaConverters, configurationMetadataConverter{
re: re,
converter: confConv,
})
}
// RegisterConfigurationMetadataV1ConversionFunction registers the specified
// ConfigurationMetadataV1ConversionFn for the v1 configurations whose names match
// the given regular expression.
func (r *Registry) RegisterConfigurationMetadataV1ConversionFunction(re *regexp.Regexp, confConversionFn ConfigurationMetadataV1ConversionFn) {
r.RegisterConfigurationMetadataConverter(re, &delegatingConverter{
confMetaV1Fn: confConversionFn,
})
}
// RegisterConfigurationMetadataV1Alpha1ConversionFunction registers the specified
// ConfigurationMetadataV1Alpha1ConversionFn for the v1alpha1 configurations
// whose names match the given regular expression.
func (r *Registry) RegisterConfigurationMetadataV1Alpha1ConversionFunction(re *regexp.Regexp, confConversionFn ConfigurationMetadataV1Alpha1ConversionFn) {
r.RegisterConfigurationMetadataConverter(re, &delegatingConverter{
confMetaV1Alpha1Fn: confConversionFn,
})
}
// RegisterConfigurationPackageConverter registers the specified
// ConfigurationPackageConverter for the Configuration v1 packages whose
// references match the given regular expression.
func (r *Registry) RegisterConfigurationPackageConverter(re *regexp.Regexp, pkgConv ConfigurationPackageConverter) {
r.configurationPackageConverters = append(r.configurationPackageConverters, configurationPackageConverter{
re: re,
converter: pkgConv,
})
}
// RegisterConfigurationPackageV1ConversionFunction registers the specified
// ConfigurationPackageV1ConversionFn for the Configuration v1 packages whose
// references match the given regular expression.
func (r *Registry) RegisterConfigurationPackageV1ConversionFunction(re *regexp.Regexp, confConversionFn ConfigurationPackageV1ConversionFn) {
r.RegisterConfigurationPackageConverter(re, &delegatingConverter{
confPackageV1Fn: confConversionFn,
})
}
// RegisterProviderPackageConverter registers the given ProviderPackageConverter
// for the provider packages whose references match the given regular expression.
func (r *Registry) RegisterProviderPackageConverter(re *regexp.Regexp, pkgConv ProviderPackageConverter) {
r.providerPackageConverters = append(r.providerPackageConverters, providerPackageConverter{
re: re,
converter: pkgConv,
})
}
// RegisterProviderPackageV1ConversionFunction registers the specified
// ProviderPackageV1ConversionFn for the provider v1 packages whose
// references match the given regular expression.
func (r *Registry) RegisterProviderPackageV1ConversionFunction(re *regexp.Regexp, pkgConversionFn ProviderPackageV1ConversionFn) {
r.RegisterProviderPackageConverter(re, &delegatingConverter{
providerPackageV1Fn: pkgConversionFn,
})
}
// RegisterPackageLockConverter registers the given PackageLockConverter.
func (r *Registry) RegisterPackageLockConverter(re *regexp.Regexp, lockConv PackageLockConverter) {
r.packageLockConverters = append(r.packageLockConverters, packageLockConverter{
re: re,
converter: lockConv,
})
}
// RegisterCategoricalConverter registers the specified CategoricalConverter
// for the specified Category of resources.
func (r *Registry) RegisterCategoricalConverter(c Category, converter CategoricalConverter) {
r.categoricalConverters[c] = append(r.categoricalConverters[c], converter)
}
// RegisterCategoricalConverterFunction registers the specified
// CategoricalConverterFunctionFn for the specified Category.
func (r *Registry) RegisterCategoricalConverterFunction(c Category, converterFn CategoricalConverterFunctionFn) {
r.RegisterCategoricalConverter(c, &delegatingConverter{
categoricalConverterFn: converterFn,
})
}
// RegisterPackageLockV1Beta1ConversionFunction registers the specified
// PackageLockV1Beta1ConversionFn for the v1beta1 package locks whose names
// match the given regular expression.
func (r *Registry) RegisterPackageLockV1Beta1ConversionFunction(re *regexp.Regexp, lockConversionFn PackageLockV1Beta1ConversionFn) {
r.RegisterPackageLockConverter(re, &delegatingConverter{
packageLockV1Beta1Fn: lockConversionFn,
})
}
// AddToScheme registers types with this Registry's runtime.Scheme
func (r *Registry) AddToScheme(sb func(scheme *runtime.Scheme) error) error {
return errors.Wrap(sb(r.scheme), errAddToScheme)
}
// AddCompositionTypes registers the Composition types with
// the registry's scheme. Only the v1 API of Compositions
// is currently supported.
func (r *Registry) AddCompositionTypes() error {
return r.AddToScheme(xpv1.AddToScheme)
}
// AddCrossplanePackageTypes registers the
// {Provider,Configuration,Lock, etc.}.pkg types with
// the registry's scheme.
func (r *Registry) AddCrossplanePackageTypes() error {
if err := r.AddToScheme(xppkgv1beta1.AddToScheme); err != nil {
return err
}
return r.AddToScheme(xppkgv1.AddToScheme)
}
// AddClaimType registers a new composite resource claim type
// with the given GVK
func (r *Registry) AddClaimType(gvk schema.GroupVersionKind) {
r.claimTypes = append(r.claimTypes, gvk)
}
// AddCompositeType registers a new composite resource type with the given GVK
func (r *Registry) AddCompositeType(gvk schema.GroupVersionKind) {
r.compositeTypes = append(r.compositeTypes, gvk)
}
// GetManagedResourceGVKs returns a list of all registered managed resource
// GVKs
func (r *Registry) GetManagedResourceGVKs() []schema.GroupVersionKind {
gvks := make([]schema.GroupVersionKind, 0, len(r.resourceConverters)+len(r.templateConverters))
for gvk := range r.resourceConverters {
gvks = append(gvks, gvk)
}
for gvk := range r.templateConverters {
gvks = append(gvks, gvk)
}
return gvks
}
// GetCompositionGVKs returns the registered Composition GVKs.
func (r *Registry) GetCompositionGVKs() []schema.GroupVersionKind {
// Composition types are registered with this registry's scheme
if _, ok := r.scheme.AllKnownTypes()[xpv1.CompositionGroupVersionKind]; ok {
return []schema.GroupVersionKind{xpv1.CompositionGroupVersionKind}
}
return nil
}
// GetCrossplanePackageGVKs returns the registered Crossplane package GVKs.
func (r *Registry) GetCrossplanePackageGVKs() []schema.GroupVersionKind {
if r.scheme.AllKnownTypes()[xppkgv1.ProviderGroupVersionKind] == nil ||
r.scheme.AllKnownTypes()[xppkgv1.ConfigurationGroupVersionKind] == nil ||
r.scheme.AllKnownTypes()[xppkgv1beta1.LockGroupVersionKind] == nil {
return nil
}
return []schema.GroupVersionKind{
xppkgv1.ProviderGroupVersionKind,
xppkgv1.ConfigurationGroupVersionKind,
xppkgv1beta1.LockGroupVersionKind,
}
}
// GetAllRegisteredGVKs returns a list of all registered GVKs,
// including v1.CompositionGroupVersionKind,
// metav1.ConfigurationGroupVersionKind,
// metav1alpha1.ConfigurationGroupVersionKind,
// pkg.ConfigurationGroupVersionKind,
// pkg.ProviderGroupVersionKind,
// and pkg.LockGroupVersionKind.
func (r *Registry) GetAllRegisteredGVKs() []schema.GroupVersionKind {
gvks := make([]schema.GroupVersionKind, 0, len(r.claimTypes)+len(r.compositeTypes)+len(r.resourceConverters)+len(r.templateConverters)+6) // 6 well-known GVKs are appended below
gvks = append(gvks, r.claimTypes...)
gvks = append(gvks, r.compositeTypes...)
gvks = append(gvks, r.GetManagedResourceGVKs()...)
gvks = append(gvks, xpv1.CompositionGroupVersionKind, xpmetav1.ConfigurationGroupVersionKind, xpmetav1alpha1.ConfigurationGroupVersionKind,
xppkgv1.ConfigurationGroupVersionKind, xppkgv1.ProviderGroupVersionKind, xppkgv1beta1.LockGroupVersionKind)
return gvks
}
// ResourceConversionFn is a function that converts the specified migration
// source managed resource to one or more migration target managed resources.
type ResourceConversionFn func(mg resource.Managed) ([]resource.Managed, error)
// ComposedTemplateConversionFn is a function that converts from the specified
// migration source v1.ComposedTemplate to one or more migration
// target v1.ComposedTemplates.
type ComposedTemplateConversionFn func(sourceTemplate xpv1.ComposedTemplate, convertedTemplates ...*xpv1.ComposedTemplate) error
// PatchSetsConversionFn is a function that converts
// the `spec.patchSets` of a Composition from the migration source provider's
// schema to the migration target provider's schema.
type PatchSetsConversionFn func(psMap map[string]*xpv1.PatchSet) error
// ConfigurationMetadataV1ConversionFn is a function that converts the specified
// migration source Configuration v1 metadata to the migration target
// Configuration metadata.
type ConfigurationMetadataV1ConversionFn func(configuration *xpmetav1.Configuration) error
// ConfigurationMetadataV1Alpha1ConversionFn is a function that converts the specified
// migration source Configuration v1alpha1 metadata to the migration target
// Configuration metadata.
type ConfigurationMetadataV1Alpha1ConversionFn func(configuration *xpmetav1alpha1.Configuration) error
// PackageLockV1Beta1ConversionFn is a function that converts the specified
// migration source package v1beta1 lock to the migration target
// package lock.
type PackageLockV1Beta1ConversionFn func(pkg *xppkgv1beta1.Lock) error
// ConfigurationPackageV1ConversionFn is a function that converts the specified
// migration source Configuration v1 package to the migration target
// Configuration package(s).
type ConfigurationPackageV1ConversionFn func(pkg *xppkgv1.Configuration) error
// ProviderPackageV1ConversionFn is a function that converts the specified
// migration source provider v1 package to the migration target
// Provider package(s).
type ProviderPackageV1ConversionFn func(pkg xppkgv1.Provider) ([]xppkgv1.Provider, error)
// CategoricalConverterFunctionFn is a function that converts resources of a
// Category. Because it receives an unstructured argument, it should be
// used for implementing generic conversion functions acting on a specific
// category.
type CategoricalConverterFunctionFn func(u *UnstructuredWithMetadata) error
type delegatingConverter struct {
rFn ResourceConversionFn
cmpFn ComposedTemplateConversionFn
psFn PatchSetsConversionFn
confMetaV1Fn ConfigurationMetadataV1ConversionFn
confMetaV1Alpha1Fn ConfigurationMetadataV1Alpha1ConversionFn
confPackageV1Fn ConfigurationPackageV1ConversionFn
providerPackageV1Fn ProviderPackageV1ConversionFn
packageLockV1Beta1Fn PackageLockV1Beta1ConversionFn
categoricalConverterFn CategoricalConverterFunctionFn
}
func (d *delegatingConverter) Convert(u *UnstructuredWithMetadata) error {
if d.categoricalConverterFn == nil {
return nil
}
return d.categoricalConverterFn(u)
}
func (d *delegatingConverter) ConfigurationPackageV1(pkg *xppkgv1.Configuration) error {
if d.confPackageV1Fn == nil {
return nil
}
return d.confPackageV1Fn(pkg)
}
func (d *delegatingConverter) PackageLockV1Beta1(lock *xppkgv1beta1.Lock) error {
if d.packageLockV1Beta1Fn == nil {
return nil
}
return d.packageLockV1Beta1Fn(lock)
}
func (d *delegatingConverter) ProviderPackageV1(pkg xppkgv1.Provider) ([]xppkgv1.Provider, error) {
if d.providerPackageV1Fn == nil {
return []xppkgv1.Provider{pkg}, nil
}
return d.providerPackageV1Fn(pkg)
}
func (d *delegatingConverter) ConfigurationMetadataV1(c *xpmetav1.Configuration) error {
if d.confMetaV1Fn == nil {
return nil
}
return d.confMetaV1Fn(c)
}
func (d *delegatingConverter) ConfigurationMetadataV1Alpha1(c *xpmetav1alpha1.Configuration) error {
if d.confMetaV1Alpha1Fn == nil {
return nil
}
return d.confMetaV1Alpha1Fn(c)
}
func (d *delegatingConverter) PatchSets(psMap map[string]*xpv1.PatchSet) error {
if d.psFn == nil {
return nil
}
return d.psFn(psMap)
}
// Resource takes a managed resource and returns zero or more managed
// resources to be created by calling the configured ResourceConversionFn.
func (d *delegatingConverter) Resource(mg resource.Managed) ([]resource.Managed, error) {
if d.rFn == nil {
return []resource.Managed{mg}, nil
}
return d.rFn(mg)
}
// ComposedTemplate converts from the specified migration source
// v1.ComposedTemplate to the migration target schema by calling the configured
// ComposedTemplateConversionFn.
func (d *delegatingConverter) ComposedTemplate(sourceTemplate xpv1.ComposedTemplate, convertedTemplates ...*xpv1.ComposedTemplate) error {
if d.cmpFn == nil {
return nil
}
return d.cmpFn(sourceTemplate, convertedTemplates...)
}
// DefaultCompositionConverter returns a generic ComposedTemplateConversionFn.
// conversionMap is a field-path map driving the conversion: each key is a
// source field path and its value is the corresponding target field path,
// e.g., "spec.forProvider.assumeRolePolicyDocument": "spec.forProvider.assumeRolePolicy".
// fns are functions that manipulate the source template's patch sets.
func DefaultCompositionConverter(conversionMap map[string]string, fns ...func(sourceTemplate xpv1.ComposedTemplate) ([]xpv1.Patch, error)) ComposedTemplateConversionFn {
return func(sourceTemplate xpv1.ComposedTemplate, convertedTemplates ...*xpv1.ComposedTemplate) error {
var patchesToAdd []xpv1.Patch
for _, fn := range fns {
patches, err := fn(sourceTemplate)
if err != nil {
return errors.Wrap(err, "cannot run the patch sets converter function")
}
patchesToAdd = append(patchesToAdd, patches...)
}
patchesToAdd = append(patchesToAdd, ConvertComposedTemplatePatchesMap(sourceTemplate, conversionMap)...)
for i := range convertedTemplates {
convertedTemplates[i].Patches = append(convertedTemplates[i].Patches, patchesToAdd...)
}
return nil
}
}
// RegisterAPIConversionFunctions registers the supplied ResourceConversionFn and
// ComposedTemplateConversionFn for the specified GVK, and the supplied
// PatchSetsConversionFn for all the discovered Compositions.
// The specified GVK must belong to a Crossplane managed resource type and
// the type must already have been registered with this registry's scheme
// by calling Registry.AddToScheme.
func (r *Registry) RegisterAPIConversionFunctions(gvk schema.GroupVersionKind, rFn ResourceConversionFn, cmpFn ComposedTemplateConversionFn, psFn PatchSetsConversionFn) {
d := &delegatingConverter{
rFn: rFn,
cmpFn: cmpFn,
psFn: psFn,
}
r.RegisterPatchSetConverter(AllCompositions, d)
r.RegisterCompositionConverter(gvk, d)
}
// RegisterConversionFunctions registers the supplied ResourceConversionFn and
// ComposedTemplateConversionFn for the specified GVK, and the supplied
// PatchSetsConversionFn for all the discovered Compositions.
// The specified GVK must belong to a Crossplane managed resource type and
// the type must already have been registered with this registry's scheme
// by calling Registry.AddToScheme.
// Deprecated: Use RegisterAPIConversionFunctions instead.
func (r *Registry) RegisterConversionFunctions(gvk schema.GroupVersionKind, rFn ResourceConversionFn, cmpFn ComposedTemplateConversionFn, psFn PatchSetsConversionFn) {
r.RegisterAPIConversionFunctions(gvk, rFn, cmpFn, psFn)
}
// RegisterPreProcessor registers the supplied UnstructuredPreProcessor
// to be run for the specified migration Category.
func (r *Registry) RegisterPreProcessor(category Category, pp UnstructuredPreProcessor) {
r.unstructuredPreProcessors[category] = append(r.unstructuredPreProcessors[category], pp)
}
// PreProcessor is a function type to convert pre-processor functions to
// UnstructuredPreProcessor.
type PreProcessor func(u UnstructuredWithMetadata) error
// PreProcess calls the underlying pre-processor function. A nil
// PreProcessor is a no-op.
func (pp PreProcessor) PreProcess(u UnstructuredWithMetadata) error {
if pp == nil {
return nil
}
return pp(u)
}
// RegisterResourcePreProcessor registers the supplied ManagedPreProcessor
// to be run on managed resources before conversion.
func (r *Registry) RegisterResourcePreProcessor(pp ManagedPreProcessor) {
r.resourcePreProcessors = append(r.resourcePreProcessors, pp)
}
// ResourcePreProcessor is a function type to convert pre-processor
// functions to ManagedPreProcessor.
type ResourcePreProcessor func(mg resource.Managed) error
func (pp ResourcePreProcessor) ResourcePreProcessor(mg resource.Managed) error {
if pp == nil {
return nil
}
return pp(mg)
}


@@ -1,15 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: test.com/v1alpha1
kind: MyResource
metadata:
name: my-resource
namespace: upbound-system
spec:
compositionRef:
name: example
parameters:
tagValue: demo-test
region: us-west-1


@@ -1,78 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
labels:
purpose: example
name: example
spec:
compositeTypeRef:
apiVersion: test.com/v1alpha1
kind: XMyResource
patchSets:
- name: not-referenced
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: spec.forProvider.myTag
- name: ps1
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: spec.forProvider.tags[2].value
- name: ps2
patches:
- fromFieldPath: "spec.parameters.region"
toFieldPath: spec.forProvider.region
- name: ps3
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: metadata.labels[a.b.c.d/tag-value]
- name: ps4
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: metadata.labels['a.b.c.d.e/tag-value']
- name: ps5
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: metadata.labels["a.b.c.d.e.f/tag-value"]
- name: ps6
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: spec.forProvider.tags[3].value
resources:
- base:
apiVersion: fakesourceapi/v1alpha1
kind: VPC
spec:
forProvider:
cidrBlock: "192.168.0.0/16"
region: "us-west-1"
tags:
- key: key1
value: val1
- key: key2
value: val2
- key: key3
value: val3
name: vpc
patches:
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: spec.forProvider.tags[0].value
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: spec.forProvider.tags[1].value
- type: PatchSet
patchSetName: ps1
- type: PatchSet
patchSetName: ps2
- type: PatchSet
patchSetName: ps3
- type: PatchSet
patchSetName: ps4
- type: PatchSet
patchSetName: ps5
- type: PatchSet
patchSetName: ps6
- fromFieldPath: "spec.parameters.tagValue"
toFieldPath: spec.forProvider.param


@@ -1,10 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
name: platform-ref-aws
spec:
package: xpkg.upbound.io/upbound/provider-ref-aws:v0.1.0


@@ -1,40 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
name: platform-ref-aws
annotations:
meta.crossplane.io/maintainer: Upbound <support@upbound.io>
meta.crossplane.io/source: github.com/upbound/platform-ref-aws
meta.crossplane.io/license: Apache-2.0
meta.crossplane.io/description: |
This reference platform Configuration for Kubernetes and Data Services
is a starting point to build, run, and operate your own internal cloud
platform and offer a self-service console and API to your internal teams.
meta.crossplane.io/readme: |
This reference platform `Configuration` for Kubernetes and Data Services
is a starting point to build, run, and operate your own internal cloud
platform and offer a self-service console and API to your internal teams.
It provides platform APIs to provision fully configured EKS clusters,
with secure networking, and stateful cloud services (RDS) designed to
securely connect to the nodes in each EKS cluster -- all composed using
cloud service primitives from the [Upbound Official AWS
Provider](https://marketplace.upbound.io/providers/upbound/provider-aws). App
deployments can securely connect to the infrastructure they need using
secrets distributed directly to the app namespace.
To learn more checkout the [GitHub
repo](https://github.com/upbound/platform-ref-aws/) that you can copy and
customize to meet the exact needs of your organization!
spec:
crossplane:
version: ">=v1.7.0-0"
dependsOn:
- provider: xpkg.upbound.io/upbound/provider-aws
version: ">=v0.15.0"
- provider: xpkg.upbound.io/crossplane-contrib/provider-helm
version: ">=v0.12.0"


@@ -1,40 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: meta.pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
name: platform-ref-aws
annotations:
meta.crossplane.io/maintainer: Upbound <support@upbound.io>
meta.crossplane.io/source: github.com/upbound/platform-ref-aws
meta.crossplane.io/license: Apache-2.0
meta.crossplane.io/description: |
This reference platform Configuration for Kubernetes and Data Services
is a starting point to build, run, and operate your own internal cloud
platform and offer a self-service console and API to your internal teams.
meta.crossplane.io/readme: |
This reference platform `Configuration` for Kubernetes and Data Services
is a starting point to build, run, and operate your own internal cloud
platform and offer a self-service console and API to your internal teams.
It provides platform APIs to provision fully configured EKS clusters,
with secure networking, and stateful cloud services (RDS) designed to
securely connect to the nodes in each EKS cluster -- all composed using
cloud service primitives from the [Upbound Official AWS
Provider](https://marketplace.upbound.io/providers/upbound/provider-aws). App
deployments can securely connect to the infrastructure they need using
secrets distributed directly to the app namespace.
To learn more checkout the [GitHub
repo](https://github.com/upbound/platform-ref-aws/) that you can copy and
customize to meet the exact needs of your organization!
spec:
crossplane:
version: ">=v1.7.0-0"
dependsOn:
- provider: xpkg.upbound.io/upbound/provider-aws
version: ">=v0.15.0"
- provider: xpkg.upbound.io/crossplane-contrib/provider-helm
version: ">=v0.12.0"


@@ -1,10 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
name: provider-aws-ec2
spec:
revisionActivationPolicy: Automatic


@@ -1,10 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
name: provider-aws-eks
spec:
revisionActivationPolicy: Automatic


@@ -1,10 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
name: provider-family-aws
spec:
revisionActivationPolicy: Automatic


@@ -1,67 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
spec:
steps:
- exec:
command: sh
args:
- "-c"
- "kubectl get managed -o yaml > backup/managed-resources.yaml"
name: backup-managed-resources
manualExecution:
- sh -c "kubectl get managed -o yaml > backup/managed-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl get composite -o yaml > backup/composite-resources.yaml"
name: backup-composite-resources
manualExecution:
- sh -c "kubectl get composite -o yaml > backup/composite-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"
name: backup-claim-resources
manualExecution:
- sh -c "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "cp edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1.yaml testdata/plan/configurationv1.yaml"
name: edit-configuration-metadata
manualExecution:
- sh -c "cp edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1.yaml testdata/plan/configurationv1.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"
name: build-configuration
manualExecution:
- sh -c "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"
type: Exec
- exec:
command: sh
args:
- "-c"
- "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"
name: push-configuration
manualExecution:
- sh -c "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"
type: Exec
version: 0.1.0


@@ -1,232 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
spec:
steps:
- exec:
command: sh
args:
- "-c"
- "kubectl get managed -o yaml > backup/managed-resources.yaml"
name: backup-managed-resources
manualExecution:
- sh -c "kubectl get managed -o yaml > backup/managed-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl get composite -o yaml > backup/composite-resources.yaml"
name: backup-composite-resources
manualExecution:
- sh -c "kubectl get composite -o yaml > backup/composite-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"
name: backup-claim-resources
manualExecution:
- sh -c "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"
type: Exec
- patch:
type: merge
files:
- deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml
name: deletion-policy-orphan
manualExecution:
- "kubectl patch --type='merge' -f deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml --patch-file deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml"
type: Patch
- apply:
files:
- new-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml
name: new-ssop
manualExecution:
- "kubectl apply -f new-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml"
type: Apply
- exec:
command: sh
args:
- "-c"
- "kubectl wait provider.pkg provider-family-aws --for condition=Healthy"
name: wait-for-healthy
manualExecution:
- sh -c "kubectl wait provider.pkg provider-family-aws --for condition=Healthy"
type: Exec
- apply:
files:
- new-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml
- new-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml
name: new-ssop
manualExecution:
- "kubectl apply -f new-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml"
- "kubectl apply -f new-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml"
type: Apply
- exec:
command: sh
args:
- "-c"
- "kubectl wait provider.pkg provider-aws-ec2 --for condition=Healthy"
name: wait-for-healthy
manualExecution:
- sh -c "kubectl wait provider.pkg provider-aws-ec2 --for condition=Healthy"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl wait provider.pkg provider-aws-eks --for condition=Healthy"
name: wait-for-healthy
manualExecution:
- sh -c "kubectl wait provider.pkg provider-aws-eks --for condition=Healthy"
type: Exec
- patch:
type: merge
files:
- disable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml
name: disable-dependency-resolution
manualExecution:
- "kubectl patch --type='merge' -f disable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml --patch-file disable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml"
type: Patch
- patch:
type: merge
files:
- edit-package-lock/lock.locks.pkg.crossplane.io_v1beta1.yaml
name: edit-package-lock
manualExecution:
- "kubectl patch --type='merge' -f edit-package-lock/lock.locks.pkg.crossplane.io_v1beta1.yaml --patch-file edit-package-lock/lock.locks.pkg.crossplane.io_v1beta1.yaml"
type: Patch
- delete:
options:
finalizerPolicy: Remove
resources:
- group: pkg.crossplane.io
kind: Provider
name: provider-aws
version: v1
name: delete-monolithic-provider
manualExecution:
- "kubectl delete Provider.pkg.crossplane.io provider-aws"
type: Delete
- patch:
type: merge
files:
- activate-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml
name: activate-ssop
manualExecution:
- "kubectl patch --type='merge' -f activate-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml --patch-file activate-ssop/provider-family-aws.providers.pkg.crossplane.io_v1.yaml"
type: Patch
- exec:
command: sh
args:
- "-c"
- "kubectl wait provider.pkg provider-family-aws --for condition=Installed"
name: wait-for-installed
manualExecution:
- sh -c "kubectl wait provider.pkg provider-family-aws --for condition=Installed"
type: Exec
- patch:
type: merge
files:
- activate-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml
- activate-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml
name: activate-ssop
manualExecution:
- "kubectl patch --type='merge' -f activate-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml --patch-file activate-ssop/provider-aws-ec2.providers.pkg.crossplane.io_v1.yaml"
- "kubectl patch --type='merge' -f activate-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml --patch-file activate-ssop/provider-aws-eks.providers.pkg.crossplane.io_v1.yaml"
type: Patch
- exec:
command: sh
args:
- "-c"
- "kubectl wait provider.pkg provider-aws-ec2 --for condition=Installed"
name: wait-for-installed
manualExecution:
- sh -c "kubectl wait provider.pkg provider-aws-ec2 --for condition=Installed"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl wait provider.pkg provider-aws-eks --for condition=Installed"
name: wait-for-installed
manualExecution:
- sh -c "kubectl wait provider.pkg provider-aws-eks --for condition=Installed"
type: Exec
- exec:
command: sh
args:
- "-c"
- "cp edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1.yaml testdata/plan/configurationv1.yaml"
name: edit-configuration-metadata
manualExecution:
- sh -c "cp edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1.yaml testdata/plan/configurationv1.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"
name: build-configuration
manualExecution:
- sh -c "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"
type: Exec
- exec:
command: sh
args:
- "-c"
- "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"
name: push-configuration
manualExecution:
- sh -c "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"
type: Exec
- patch:
type: merge
files:
- edit-configuration-package/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml
name: edit-configuration-package
manualExecution:
- "kubectl patch --type='merge' -f edit-configuration-package/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml --patch-file edit-configuration-package/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml"
type: Patch
- patch:
type: merge
files:
- enable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml
name: enable-dependency-resolution
manualExecution:
- "kubectl patch --type='merge' -f enable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml --patch-file enable-dependency-resolution/platform-ref-aws.configurations.pkg.crossplane.io_v1.yaml"
type: Patch
- patch:
type: merge
files:
- deletion-policy-delete/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml
name: deletion-policy-delete
manualExecution:
- "kubectl patch --type='merge' -f deletion-policy-delete/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml --patch-file deletion-policy-delete/sample-vpc.vpcs.fakesourceapi_v1alpha1.yaml"
type: Patch
version: 0.1.0


@@ -1,67 +0,0 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
spec:
steps:
- exec:
command: sh
args:
- "-c"
- "kubectl get managed -o yaml > backup/managed-resources.yaml"
name: backup-managed-resources
manualExecution:
- sh -c "kubectl get managed -o yaml > backup/managed-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl get composite -o yaml > backup/composite-resources.yaml"
name: backup-composite-resources
manualExecution:
- sh -c "kubectl get composite -o yaml > backup/composite-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"
name: backup-claim-resources
manualExecution:
- sh -c "kubectl get claim --all-namespaces -o yaml > backup/claim-resources.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "cp edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1alpha1.yaml testdata/plan/configurationv1alpha1.yaml"
name: edit-configuration-metadata
manualExecution:
- sh -c "cp edit-configuration-metadata/platform-ref-aws.configurations.meta.pkg.crossplane.io_v1alpha1.yaml testdata/plan/configurationv1alpha1.yaml"
type: Exec
- exec:
command: sh
args:
- "-c"
- "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"
name: build-configuration
manualExecution:
- sh -c "up xpkg build --package-root={{PKG_ROOT}} --examples-root={{EXAMPLES_ROOT}} -o {{PKG_PATH}}"
type: Exec
- exec:
command: sh
args:
- "-c"
- "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"
name: push-configuration
manualExecution:
- sh -c "up xpkg push {{TARGET_CONFIGURATION_PACKAGE}} -f {{PKG_PATH}}"
type: Exec
version: 0.1.0

Some files were not shown because too many files have changed in this diff.