Compare commits


674 Commits

Author SHA1 Message Date
Sergen Yalçın 4c6bfc216d
Merge pull request #519 from erhancagirici/xp-v2-module-update
bump upjet go module to v2
2025-08-04 14:49:37 +03:00
Erhan Cagirici 8ff62171a7
Merge pull request #518 from crossplane/xp-v2
Generate crossplane v2 providers
2025-08-01 17:31:26 +03:00
Erhan Cagirici d1183ac46f bump upjet go module to v2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-08-01 01:50:50 +03:00
Erhan Cagirici b4eb48bcba go mod: bump crossplane-runtime to v2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 21:05:12 +03:00
Erhan Cagirici cf5fef8c8a fix: use astutils for import manipulation in resolver
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 21:05:12 +03:00
Erhan Cagirici 4b39d500a1 rename crossplane-runtime import paths for v2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 21:05:12 +03:00
Erhan Cagirici 2c719f0b96 examples: generate namespaced examples
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 3157e7da7e regenerate unittest mocks
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici b3b8ef0938 remove connection publisher options from controller template
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 88e0308220 generate gated controller setup methods
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 72f730ba32 update unit tests for namespaced MRs
update sensitive tests
update types builder tests
add tests for FileProducer connection secret ref resolution
add namespaced tests for api callbacks

Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 7e1dd50d8f remove migration framework
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici c4427804fd make MR metric recording keys namespaced
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 0636518853 add MR namespace to various logs
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici a30bbb511d generate local cross-resource references for namespaced crds
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici b64c7f1799 remove ESS configuration
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici b20c833a70 handle local secret references for sensitive observations
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 96746e6b15 runtime resolution of local secret refs in sensitive parameters of namespaced MRs
- inject namespace to the local secret ref if MR is namespaced

- cross-namespace secret refs are effectively not allowed for namespaced MRs

Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
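The resolution rule described in this commit can be sketched as follows. All types and the helper are illustrative stand-ins (not upjet's or crossplane-runtime's actual API): a local secret ref carries only a name, and the MR's own namespace is injected at resolution time, which rules out cross-namespace secret references for namespaced MRs by construction.

```go
package main

import "fmt"

// LocalSecretRef is a namespace-less reference, as found in a
// namespaced MR's sensitive parameters (hypothetical, simplified type).
type LocalSecretRef struct {
	Name string
}

// SecretRef is the fully qualified reference used to actually fetch
// the secret (hypothetical, simplified type).
type SecretRef struct {
	Name      string
	Namespace string
}

// resolveLocalSecretRef injects the MR's namespace into the local ref,
// so the secret is always looked up in the MR's own namespace.
func resolveLocalSecretRef(ref LocalSecretRef, mrNamespace string) SecretRef {
	return SecretRef{Name: ref.Name, Namespace: mrNamespace}
}

func main() {
	r := resolveLocalSecretRef(LocalSecretRef{Name: "db-creds"}, "team-a")
	fmt.Printf("%s/%s\n", r.Namespace, r.Name)
}
```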
Erhan Cagirici 4b64e6ecb8 generate local secret refs for sensitive fields in namespaced MRs
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Erhan Cagirici 6ea3bc96f2 generate namespace-friendly Go structs for namespaced MR types
- namespaced MR Go structs now inline v2-style ManagedResourceSpec in the type template
as a result:
- writeConnectionSecretToRef becomes a local secret ref in namespaced MRs
- providerConfigRef becomes a typed reference with kind included
- deletionPolicy gets removed in namespaced MRs
- publishConnectionDetailsTo gets removed from all MRs

Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2025-07-31 14:27:49 +03:00
Jared Watts 3d1ab3b9fb refactor: pipeline Run should return early after handling cluster only scenario
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts 9d7ed5871c test: update unit tests for namespaced usage
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts a0e93d3827 make external client and friends namespace aware
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts a590156eb9 enable namespaced conversions to be registered too
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts 76770f82b2 Pipeline Run accepts both cluster scoped provider config and optional namespaced config
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Jared Watts 8e9fcfd9c3 feat: make namespaced resource generation opt-in
Also take into account if there is a main.go template file when
generating controller setup and provider main.go files. There
will be no template in the monolith case, meaning we should generate
a consolidated controller setup and shouldn't generate main.go files
for each group.

Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-07-31 14:27:49 +03:00
Nic Cope 387fb83493 Generate namespaced Go types and controllers
This is mostly a case of finding hard coded paths and assumptions and making
them configurable.

Signed-off-by: Nic Cope <nicc@rk0n.org>
2025-07-31 14:27:49 +03:00
Nic Cope 40bae8497f POC: Make api and controller paths configurable
This PR is a no-op when I run it on provider-upjet-aws. It generates
exactly the same code as before this PR.

My goal is to allow invoking the main apis/controllers loop twice.
Once for cluster scoped and once for namespaced resources. I'll do
that in a follow-up PR.

Signed-off-by: Nic Cope <nicc@rk0n.org>
2025-07-31 14:27:49 +03:00
Sergen Yalçın 96241b0ae5
Merge pull request #512 from sergenyalcin/ignore-identity-in-diff
Sanitize Identity field in InstanceDiff
2025-07-31 13:17:36 +03:00
Sergen Yalçın ffb68a034a
Fix unit tests
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-31 12:32:12 +03:00
Sergen Yalçın 99c75ab7cb
Sanitize Identity field in Diff
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-31 12:32:09 +03:00
Sergen Yalçın 766236448e
Merge pull request #515 from sergenyalcin/custom-state-check
Custom state check configuration for TF plugin framework resources
2025-07-28 15:18:18 +03:00
Sergen Yalçın 9505d31da7
Change function name to TerraformPluginFrameworkIsStateEmptyFn
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-28 15:11:59 +03:00
Sergen Yalçın 5ac5cb0b35
Custom nil check for state
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-07-25 17:18:40 +03:00
Erhan Cagirici 0af42ca259
Merge pull request #507 from erhancagirici/remove-id-check-in-externalname-fw
remove id validation from setExternalName for resources without id field
2025-06-19 13:09:11 +03:00
Fatih Türken f794e5eddf remove id validation from setExternalName for resources without id field
Signed-off-by: Fatih Türken <turkenf@gmail.com>
Co-authored-by: Erhan Cagirici <erhan@upbound.io>
2025-06-19 11:28:35 +03:00
Sergen Yalçın c4332e6ed1
Merge pull request #506 from sergenyalcin/fix-sensitive-parameter-generation
Fix incorrectly generated connection string map
2025-06-18 16:55:31 +03:00
Sergen Yalçın dd08349e54
Fix linter
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-06-18 16:43:38 +03:00
Sergen Yalçın c42638efc0
Fix incorrectly generated connection string map
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-06-18 15:56:56 +03:00
Sergen Yalçın 9098842035
Merge pull request #504 from sergenyalcin/fix-wildcard-expand-during-conversion
Fix wildcard expand behavior when the field path is not found during conversion
2025-06-17 18:23:52 +03:00
Sergen Yalçın c275d5ec5c
Fix wildcard expand behavior when the field path is not found during conversion
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-06-17 18:16:04 +03:00
Erhan Cagirici f6111127e7
Merge pull request #500 from nikimanoledaki/nm/debug-provider
Validate that `ts.FrameworkProvider` is not nil to avoid panic
2025-06-04 12:31:12 +03:00
nikimanoledaki 60517ef9af
Validate that ts.FrameworkProvider is not nil to avoid panic
Fetching the ts.FrameworkProvider.Schema field panics if the FrameworkProvider
struct is not set / nil. Validate that the FrameworkProvider is not nil before
continuing. Return an error message if it is nil.

Signed-off-by: nikimanoledaki <niki.manoledaki@grafana.com>
2025-06-03 18:01:46 +02:00
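The guard described in this commit can be sketched as follows. This is a minimal illustration with stand-in types (`TerraformSetup`, `FrameworkProvider`, and the `frameworkSchema` helper are hypothetical, not upjet's actual API): instead of dereferencing a possibly-nil provider and panicking, the accessor reports an error.

```go
package main

import (
	"errors"
	"fmt"
)

// FrameworkProvider is a stand-in for the terraform-plugin-framework
// provider; only the field relevant to the sketch is modeled.
type FrameworkProvider struct {
	Schema map[string]string
}

// TerraformSetup is a stand-in for the setup struct mentioned in the
// commit message (ts.FrameworkProvider).
type TerraformSetup struct {
	FrameworkProvider *FrameworkProvider
}

// frameworkSchema returns an error instead of panicking when the
// FrameworkProvider was never configured (i.e., is nil).
func frameworkSchema(ts TerraformSetup) (map[string]string, error) {
	if ts.FrameworkProvider == nil {
		return nil, errors.New("cannot retrieve schema: FrameworkProvider is nil")
	}
	return ts.FrameworkProvider.Schema, nil
}

func main() {
	// An unconfigured setup now yields an error rather than a panic.
	_, err := frameworkSchema(TerraformSetup{})
	fmt.Println(err)
}
```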
Sergen Yalçın 55edf18c68
Merge pull request #493 from sergenyalcin/bump-crossplane-runtime
Bump crossplane-runtime dependency
2025-05-09 15:48:45 +03:00
Sergen Yalçın fe88167010
Bump crossplane-runtime dependency
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-05-09 15:39:59 +03:00
Jean du Plessis 6fdbab083c
Merge pull request #491 from jbw976/changelogs 2025-05-09 10:43:48 +02:00
Jared Watts b8639959bc
feat(changelogs): add support for change logs in controller templates
Signed-off-by: Jared Watts <jbw976@gmail.com>
2025-04-30 16:28:57 +01:00
Sergen Yalçın aea6e7c546
Merge pull request #488 from grafana/duologic/update_deps
chore(deps): update dependencies
2025-04-28 12:51:37 +03:00
Sergen Yalçın 16e038b6f7
Merge pull request #440 from digna-ionos/main
call ApplyTFConversions in Update function from terraform plugin sdk external client
2025-04-28 12:36:11 +03:00
Duologic 362869c714 chore: update crossplane/crossplane-tools
Signed-off-by: Duologic <jeroen@simplistic.be>
2025-04-25 18:36:14 +02:00
Duologic 32c12301c3 chore: update crossplane/crossplane
Signed-off-by: Duologic <jeroen@simplistic.be>
2025-04-24 15:27:48 +02:00
Duologic 1ab808957f chore(deps): update dependencies
Update crossplane-runtime and k8s.io dependencies

Fix linting errors as well.

Signed-off-by: Duologic <jeroen@simplistic.be>
2025-04-23 00:33:04 +02:00
Sergen Yalçın ae37e28e15
Merge pull request #458 from mergenci/bump-crossplane-runtime-v1.18.0
Bump crossplane-runtime version to v1.18.0
2025-04-22 13:54:37 +03:00
Sergen Yalçın c745c8cbe9
Remove the patch types that are no longer supported from migration framework
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-22 13:50:13 +03:00
Sergen Yalçın f890ff1448
Bump crossplane/crossplane to v1.18.0
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-22 13:19:17 +03:00
Cem Mergenci b3885e63ee
Replace deprecated API with typed versions.
References:
https://github.com/kubernetes/kubernetes/pull/124263
https://github.com/kubernetes-sigs/controller-runtime/pull/2799

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-04-22 11:23:04 +03:00
Cem Mergenci a514684a84
Pass context to panic handlers.
References:
https://github.com/kubernetes/kubernetes/pull/121970
126f5cee56

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-04-22 11:23:04 +03:00
Cem Mergenci 635a4f8d39
Bump crossplane-runtime version to v1.18.0.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-04-22 11:23:00 +03:00
Sergen Yalçın 7a4c1dd211
Merge pull request #485 from sergenyalcin/support-count-usage-in-registry
Add support for parsing registry examples that use count, and bump Go version and dependencies
2025-04-16 22:18:08 +03:00
Sergen Yalçın e02e0871aa
Use commit hash instead of version for actions/cache
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-16 10:17:40 +03:00
Sergen Yalçın 32c53e5a72
Update the actions/cache version
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-15 19:44:09 +03:00
Sergen Yalçın 203e41eb7e
Bump go version and some dependencies
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-15 19:24:22 +03:00
Sergen Yalçın 67e73bb368
Add support for parsing registry examples that use count
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-04-15 19:21:35 +03:00
Sergen Yalçın 72675757bb
Merge pull request #466 from sergenyalcin/update-loop-prevention
Add a new configuration option for preventing the possible update loops
2025-02-11 15:46:00 +03:00
Sergen Yalçın 1a6d69bbd2
Merge pull request #465 from turkenh/fix-conversion
Expose conversion option to inject key/values in the conversion to list
2025-02-06 19:21:50 +03:00
Sergen Yalçın 3d9beef672
Add a new configuration option for Update Loop Prevention
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-02-06 17:19:27 +03:00
Hasan Turken 163784981f
Expose conversion option to inject key/values in the conversion to list
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2025-02-06 14:29:33 +03:00
Cem Mergenci, PhD ce71033d45
Merge pull request #461 from mergenci/remove-diff-in-observe-only
Remove diff calculation in observe-only reconciliation
2025-01-30 16:40:24 +03:00
Cem Mergenci 43311a8459 Add generics expression for compatibility with local environments.
We discovered that GoLand failed to build without the generics
expression, whereas VS Code warned that it was unnecessary.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 16:33:18 +03:00
Cem Mergenci 525abba0fa Add TODOs for other external clients.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 09:49:43 +03:00
Cem Mergenci 2f967cf07c Rename `noDiff` to `hasDiff`.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 02:17:12 +03:00
Cem Mergenci 9c244cdd10 Remove diff calculation in observe-only reconciliation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-30 02:17:12 +03:00
Sergen Yalçın 40ef4d8de7
Merge pull request #462 from sergenyalcin/parametrize-registry
Parametrize the registry name of the provider
2025-01-23 17:56:48 +03:00
Sergen Yalçın 5dbd74a606
Parametrize the registry name of the provider
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2025-01-23 16:13:22 +03:00
Cem Mergenci, PhD db86f70a16
Merge pull request #437 from smcavallo/crossplane-runtime_v1.17.0
Upgrade crossplane-runtime to v1.17.0
2025-01-08 17:22:16 +03:00
smcavallo bd4838eb82 update toolchain and move disconnect method
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
smcavallo 33de4d42f6 upgrade go to 1.22.7
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
smcavallo b6820b0a55 Implement `Disconnect` on `ExternalClient` interface
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
smcavallo 9f51ffe663 Upgrade crossplane-runtime to v1.17.0
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2025-01-08 14:41:00 +03:00
Cem Mergenci, PhD beb604beb6
Merge pull request #451 from erhancagirici/ci-chores
Update GH action dependencies and linter config
2025-01-08 14:37:50 +03:00
Cem Mergenci 04d0b9f42e Remove unintentional indentation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2025-01-08 14:01:43 +03:00
Cem Mergenci, PhD a10b5b985a
Merge pull request #450 from fernandezcuesta/patch-1
fix: typo in example
2024-12-29 17:14:11 +03:00
Alper Rifat Ulucinar 4d4ed3d890
Merge pull request #454 from ulucinar/fix-kssource-test
Fix sporadic TestNewKubernetesSource failures
2024-11-23 15:57:23 +03:00
Alper Rifat Ulucinar 3982f4caac
Fix sporadic TestNewKubernetesSource failures
- Compare sorted expected and observed slices.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-11-22 18:16:22 +03:00
Cem Mergenci, PhD ba35c31702
Merge pull request #424 from ulucinar/fix-conversion-typemeta
Fix empty TypeMeta while running API conversions
2024-11-15 20:35:42 +03:00
Erhan Cagirici be5e036794
fix unintentional modification of slice in GetSensitiveParameters (#449)
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-15 14:06:40 +03:00
Erhan Cagirici deff69065f update GH action dependency versions
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-13 20:48:50 +03:00
Erhan Cagirici 57535ad9fa update linter config & remove deprecations
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-13 20:46:47 +03:00
Erhan Cagirici fcb3112de2 switch build submodule to crossplane/build repo
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-11-13 20:45:16 +03:00
J. Fernández 8642d46957
fix: typo in example
Signed-off-by: Jesús Fernández <7312236+fernandezcuesta@users.noreply.github.com>
2024-11-13 18:24:34 +01:00
Alper Rifat Ulucinar a08ecd7fe3 Fix empty TypeMeta while running API conversions
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-11-04 23:50:01 +03:00
Jean du Plessis a18bd41b7a
Merge pull request #289 from sergenyalcin/migration-framework 2024-10-20 23:11:40 +02:00
Jean du Plessis 7eaaf8a403
Merge pull request #413 from yordis/chore-2 2024-10-20 23:10:29 +02:00
Jean du Plessis 5cdf36996e
Merge pull request #441 from rickard-von-essen/bug/ref-subnet 2024-10-07 17:47:51 +02:00
Rickard von Essen 1ae4c81e89
Add license header
Signed-off-by: Rickard von Essen <rickard.von.essen@gmail.com>
2024-10-07 09:14:42 +02:00
Rickard von Essen 6dae02b730
Fix scraping Refs from attributes containing lists
This correctly parses references contained in lists and adds them to the map of
references.

Example 1):

```hcl
    require_attestations_by = [google_binary_authorization_attestor.attestor.name]
```

Correctly generates a ref and building `provider-upjet-gcp` with this change
produces the expected `examples-generated/binaryauthorization/v1beta2/policy.yaml`
with this diff compared to without this change.

```
@@ -14,8 +14,8 @@ spec:
     - cluster: us-central1-a.prod-cluster
       enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
       evaluationMode: REQUIRE_ATTESTATION
-      requireAttestationsBy:
-      - ${google_binary_authorization_attestor.attestor.name}
+      requireAttestationsByRefs:
+      - name: attestor
     defaultAdmissionRule:
     - enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
       evaluationMode: ALWAYS_ALLOW
```

1) https://github.com/hashicorp/terraform-provider-google/blob/v5.39.0/website/docs/r/binary_authorization_policy.html.markdown?plain=1#L49

Signed-off-by: Rickard von Essen <rickard.von.essen@gmail.com>
2024-10-05 13:00:12 +02:00
Rickard von Essen ed6ae5a806
bug: Surface bug - scraping lists of references does not work
Scraping does not handle `TupleConsExpr` when parsing example HCL code from
Terraform documentation.

Example 1):

```hcl
    require_attestations_by = [google_binary_authorization_attestor.attestor.name]
```

Does not add:

```
cluster_admission_rules.require_attestations_by: google_binary_authorization_attestor.attestor.name
```

As it should since the reference is contained in a _list_.

1) https://github.com/hashicorp/terraform-provider-google/blob/v5.39.0/website/docs/r/binary_authorization_policy.html.markdown?plain=1#L49

Signed-off-by: Rickard von Essen <rickard.von.essen@gmail.com>
2024-10-05 13:00:00 +02:00
digna-ionos 750f770b0f call ApplyTFConversions in the Update function of the Terraform plugin SDK external client, since skipping it was causing unmarshalling errors.
Signed-off-by: digna-ionos <darius-andrei.igna@ionos.com>
2024-09-12 19:45:39 +03:00
Fatih Türken 3afbb7796d
Merge pull request #435 from turkenf/fix-nil-err
Fix the issue of hiding errors
2024-09-11 21:49:56 +03:00
Fatih Türken af44144929 Update recoverIfPanic() func comments
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-09-11 12:50:23 +03:00
Fatih Türken 11cd2f50f0 Fix the issue of hiding errors
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-09-09 18:57:36 +03:00
Jean du Plessis 34c30b90d6
Merge pull request #432 from mergenci/rename-referenced-branches 2024-09-03 15:03:49 +02:00
Cem Mergenci 7293dbb39e Rename referenced master branches to main.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-09-03 14:56:12 +02:00
Jean du Plessis 42ad41bb93
Merge pull request #429 from displague/patch-1 2024-08-28 15:26:00 +02:00
Marques Johansson 4149a0c367 docs: add link in README to new adding-new-resource guide
Follow-up to #405

Signed-off-by: Marques Johansson <mjohansson@equinix.com>
2024-08-28 13:04:54 +00:00
Cem Mergenci, PhD 2e361ad3b6
Merge pull request #428 from mergenci/async-panic-handler
Recover from panics in async external clients
2024-08-22 17:16:23 +03:00
Cem Mergenci 3dc4f0f69c Streamline async panic handler implementation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-08-22 17:09:59 +03:00
Cem Mergenci 78890e711d Recover from panics in async external clients.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-08-21 17:08:11 +03:00
Sergen Yalçın 1644827c94
Merge pull request #425 from tchinmai7/main 2024-08-02 12:29:52 +03:00
Tarun Chinmai Sekar 926e623f08 Refactor to a method
Signed-off-by: Tarun Chinmai Sekar <schinmai@akamai.com>
2024-08-01 08:35:23 -07:00
Tarun Chinmai Sekar 67bd255fdb Check for nil before calling IsKnown()
Signed-off-by: Tarun Chinmai Sekar <schinmai@akamai.com>
2024-07-25 15:13:35 -07:00
Jean du Plessis e295e17c7b
Merge pull request #405 from turkenf/add-resource-guide 2024-07-02 11:39:23 +02:00
Fatih Türken 79e8359981 Add section about running uptest locally and resolve review comments
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-06-26 15:54:52 +03:00
Fatih Türken de9edaebe5 Add new guide about adding a new resource
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-06-26 11:14:52 +03:00
Alper Rifat Ulucinar 37c7f4e91d
Merge pull request #411 from ulucinar/fix-embedded-conversion
Add config.Resource.RemoveSingletonListConversion
2024-06-12 12:39:27 +00:00
Alper Rifat Ulucinar 361331e820
Add unit tests for traverser.maxItemsSync
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-06-12 15:35:16 +03:00
Alper Rifat Ulucinar 58f4ba3fb8
Merge pull request #403 from turkenf/update-resource-config-doc
Update reference usage example and fix broken links in configuring a resource doc
2024-06-10 10:33:11 +00:00
Alper Rifat Ulucinar 6c305d0fb2
Merge pull request #410 from ulucinar/fix-ex-conv
Fix singleton list example conversion if there's no annotation
2024-06-07 09:29:28 +00:00
Alper Rifat Ulucinar 7ab5e2085d
Merge pull request #417 from ulucinar/fix-416
Do not prefix JSON fieldpaths starting with status.atProvider in resource.GetSensitiveParameters
2024-06-06 11:22:33 +00:00
Alper Rifat Ulucinar 91d382de43
Do not prefix JSON fieldpaths starting with status.atProvider in resource.GetSensitiveParameters
- If the MR API has a spec.forProvider.status field and there are sensitive attributes, then
  fieldpath.Paved.ExpandWildcards complains instead of expanding as an empty slice, which
  breaks the reconciliation.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-06-06 12:38:35 +03:00
Fatih Türken d10aa6e84a Update reference usage example and fix broken links in configuring a resource doc
Signed-off-by: Fatih Türken <turkenf@gmail.com>
2024-06-06 10:39:07 +03:00
Yordis Prieto d37f2e3157
chore: improve references docs
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-06-01 18:02:14 -04:00
Alper Rifat Ulucinar f4f87bab85
Add traverser.maxItemsSync schema traverser for syncing the MaxItems
constraints between the JSON & Go schemas.

- We've observed that some MaxItems constraints in the JSON schemas are not set
  where the corresponding MaxItems constraints in the Go schemas are set to 1.
- This inconsistency results in some singleton lists not being properly converted
  in the MR API.
- This traverser can mitigate such inconsistencies.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-31 02:52:40 +03:00
Alper Rifat Ulucinar 5318cd959a
Add traverser.AccessSchema to access the Terraform schema elements
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 18:35:33 +03:00
Alper Rifat Ulucinar 40733472e6
Merge pull request #389 from gravufo/replace-kingpin-v2
Replace gopkg.in/alecthomas/kingpin.v2 by github.com/alecthomas/kingpin/v2
2024-05-30 14:00:05 +00:00
Alper Rifat Ulucinar cc76abb788
Add config.Provider.TraverseTFSchemas to traverse the Terraform schemas of
all the resources of a Provider.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 15:45:22 +03:00
Alper Rifat Ulucinar 5265292696
Export config.TraverseSchemas
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 15:27:41 +03:00
Christian Artin 14ac95fd84 Run go mod tidy
Signed-off-by: Christian Artin <gravufo@gmail.com>
2024-05-30 07:18:25 -04:00
Christian Artin 16b23263b1 Upgrade github.com/alecthomas/kingpin/v2
Signed-off-by: Christian Artin <gravufo@gmail.com>
2024-05-30 07:17:03 -04:00
Christian Artin 75bea9acf5 Replace gopkg.in/alecthomas/kingpin.v2 by github.com/alecthomas/kingpin/v2
Signed-off-by: Christian Artin <gravufo@gmail.com>
2024-05-30 07:17:03 -04:00
Alper Rifat Ulucinar 72ab08c2fe
Add config.Resource.RemoveSingletonListConversion to be able to remove
already configured singleton list conversions.

- The main use case is to prevent singleton list conversions for
  configuration arguments with single nested blocks.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-30 10:54:05 +03:00
Alper Rifat Ulucinar 7e3bb74eb1
Report in the error message the example manifest path if an object conversion fails
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-29 21:49:58 +03:00
Alper Rifat Ulucinar c537babf6d
Do not panic if the example manifest being converted with a singleton list
does not have any annotations set.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-29 21:34:17 +03:00
Sergen Yalçın a444cc1abc
Merge pull request #407 from sergenyalcin/cond-late-init
Add a new late-init configuration to skip already filled fields in spec.initProvider
2024-05-24 11:48:02 +03:00
Alper Rifat Ulucinar 89a7d0afb5
Merge pull request #406 from ulucinar/sensitive-initProvider
Generate Secret References for Sensitive Parameters under the spec.initProvider API tree
2024-05-24 08:13:29 +00:00
Sergen Yalçın be0cde1fe9
Add conditionalIgnoredCanonicalFieldPaths to IgnoreFields to fix the unit tests
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-22 16:33:51 +03:00
Alper Rifat Ulucinar a6de4b1306
Add unit tests for spec.initProvider secret references
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-22 16:17:44 +03:00
Sergen Yalçın eadba75a4f
Add a new late-init API to skip already filled fields in spec.initProvider
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-22 16:14:35 +03:00
Alper Rifat Ulucinar ec75edfd1d
Add support for resolving secret references from spec.initProvider
- If both spec.forProvider and spec.initProvider tree reference
  secrets for the same target field, spec.forProvider overrides
  spec.initProvider.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-22 15:06:06 +03:00
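The precedence rule in this commit's bullet can be sketched as follows. The types and helper are hypothetical stand-ins, not the actual crossplane-runtime reference types: when both trees reference a secret for the same target field, the spec.forProvider reference wins and spec.initProvider serves as the fallback.

```go
package main

import "fmt"

// secretKeyRef is a simplified stand-in for a secret key selector.
type secretKeyRef struct {
	Name string
	Key  string
}

// pickSecretRef returns the effective reference for a single target
// field: forProvider overrides initProvider when both are set.
func pickSecretRef(forProvider, initProvider *secretKeyRef) *secretKeyRef {
	if forProvider != nil {
		return forProvider
	}
	return initProvider
}

func main() {
	fp := &secretKeyRef{Name: "fp-secret", Key: "password"}
	ip := &secretKeyRef{Name: "ip-secret", Key: "password"}

	fmt.Println(pickSecretRef(fp, ip).Name)  // forProvider takes precedence
	fmt.Println(pickSecretRef(nil, ip).Name) // initProvider used as fallback
}
```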
Alper Rifat Ulucinar f577a5483e
Generate the corresponding Kubernetes secret references for the sensitive Terraform
configuration arguments also under the spec.initProvider API tree.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-20 20:29:47 +03:00
Alper Rifat Ulucinar 92d1af84d2
Merge pull request #402 from ulucinar/available-versions
Add config.Resource.PreviousVersions to specify the previous versions of an MR API
2024-05-15 19:33:17 +00:00
Sergen Yalçın 942508c537
Merge pull request #397 from sergenyalcin/example-converter
Add example converter for conversion of singleton lists to embedded objects
2024-05-14 17:10:30 +03:00
Sergen Yalçın e3647026b8
Address review comments
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-14 17:04:42 +03:00
Alper Rifat Ulucinar b5622a10e1
Deprecate config.Resource.OverrideFieldNames in favor of
config.Resource.PreviousVersions.

- The only known use case for config.Resource.OverrideFieldNames is
  to resolve type conflicts between the older versions of the CRDs
  and the ones being generated. The "PreviousVersions" API allows
  loading of the existing types from the filesystem so that upjet
  code generation pipeline's type conflict resolution mechanisms
  can prevent such name collisions.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-14 14:15:12 +03:00
Alper Rifat Ulucinar 50e8284f1f
Add config.Resource.PreviousVersions to be able to specify the known previous
versions of an MR API.

- upjet code generation pipelines can then utilize this information to load
  the already existing type names for these previous versions and prevent
  collisions for the generated CRD types.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-14 14:12:12 +03:00
Alper Rifat Ulucinar c33a66dc58
Merge pull request #400 from ulucinar/tf-conversion
Allow Specification of the CRD API Version a Controller Watches & Reconciles
2024-05-14 11:07:23 +00:00
Sergen Yalçın 0b73a9fdeb
Add an example manifest converter for conversion of singleton lists to embedded object
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-05-12 13:23:29 +03:00
Alper Rifat Ulucinar 94634891cc
Deprecate config.Reference.Type in favor of config.Reference.TerraformName
- TerraformName will automatically handle name & version configurations that will affect
  the generated cross-resource reference. This is crucial especially if the
  provider generates multiple versions for its MR APIs.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-10 18:00:48 +03:00
Alper Rifat Ulucinar c606b4cbc0
Add config.NewTFSingletonConversion that returns a new TerraformConversion to convert between
singleton lists and embedded objects when parameters pass Crossplane and Terraform boundaries.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 19:23:09 +03:00
Alper Rifat Ulucinar b5dbcc5e33
conversion.singletonListConverter now operates on a configured set of path prefixes
- Previously, it only converted parameters under spec.forProvider
- This missed CRD API conversions (of singleton lists) under
  spec.initProvider & status.atProvider

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
(cherry picked from commit 08b3f3260ce86457271dda7401dfd9a69a10f656)
2024-05-09 16:55:40 +03:00
Alper Rifat Ulucinar 83b3b8e41d
Add config.TerraformConversion to abstract the runtime parameter conversions between
the Crossplane & Terraform layers.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 14:19:12 +03:00
Alper Rifat Ulucinar f5b0d82844
Rename conversion.Mode to conversion.ListConversionMode
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 11:52:24 +03:00
Alper Rifat Ulucinar cc7324eb98
Add config.Resource.ControllerReconcileVersion to be able to control the specific CR API version
the associated controller will watch & reconcile.

- For backwards-compatibility, ControllerReconcileVersion defaults to Version if unspecified.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-09 11:42:25 +03:00
Alper Rifat Ulucinar 03a207b641
Merge pull request #387 from ulucinar/embed-singleton
Generate singleton lists as embedded objects
2024-05-08 13:47:14 +00:00
Jean du Plessis b5d344b3cd
Merge pull request #399 from yordis/chore-improve-docs 2024-05-08 08:57:38 +02:00
Alper Rifat Ulucinar 44c139ef80
Add tests for config.SingletonListEmbedder
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:54 +03:00
Alper Rifat Ulucinar 26df74c447
Add unit tests for runtime API conversions
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:54 +03:00
Alper Rifat Ulucinar e82d6242c2
Call hub & spoke generation pipelines after the CRD generation pipeline
- Prevent execution of these pipelines multiple times for each available
  version of an API group.
- Improve conversion.RoundTrip paved conversion error message

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:54 +03:00
Alper Rifat Ulucinar cf00700ffe
Fix connection details state value map conversion
- Fix runtime conversion for expanded field paths of length
  greater than 1.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 5a6fd9541b
Initialize config.Resource.OverrideFieldNames with an empty map
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 136fabd8ab
Fix slice value length assertion in conversion.Convert
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar a87bc73b65
Add the conversion.identityConversion API converter and allow excluding field paths
when conversion.RoundTrip copies same-named fields from src to dst.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 914df98fcc
Fix the error messages in the template implementations of conversion.Converter
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 8031da83b4
Add config.Resource.crd{Storage,Hub}Version to be able to configure
the storage & hub API versions independently.

- The default value for both the storage & hub versions is
  the version being generated, i.e., Resource.Version.
- Replace pipeline.ConversionHubGenerator & pipeline.ConversionSpokeGenerator
  with a common pipeline.ConversionNodeGenerator implementation
- The hub generator can now also inspect the generated files to regenerate
  the hub versions according to the latest resource configuration, and
  we have removed the assumption that the hub version is always the
  latest version generated.
- Fix the duplicated GVKs issue in zz_register.go.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:53 +03:00
Alper Rifat Ulucinar 9e917b3d85
Add conversion.singletonConverter conversion function for CRD API conversions between
singleton list & embedded object API versions.

- Export conversion.Convert to be reused in embedded singleton list webhook API conversions.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
Alper Rifat Ulucinar 668de344b5
Add a config.Resource configuration option to be able to mark
the generated CRD API version as the storage version.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
Alper Rifat Ulucinar fb97ee5f28
Make singleton-list-to-embedded-object API conversions optional
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
Alper Rifat Ulucinar 7e87a7fc47
Generate singleton lists as embedded objects
- Terraform configuration blocks, even if they have a MaxItems
  constraint of 1, are (almost) always generated as lists. We
  now generate the lists with a MaxItems constraint of 1 as
  embedded objects in our MR APIs.
- This also helps when updating or patching via SSA the
  (previously list) objects. The merging strategy implemented
  by SSA requires configuration for associative lists and
  converting the singleton lists into embedded objects removes
  the configuration need.
- A schema traverser is introduced, which can decouple the Terraform
  schema traversal logic from the actions (such as code generation,
  inspection, or singleton-list-to-embedded-object conversion)
  taken while traversing the schema.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-05-08 02:53:52 +03:00
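The singleton-list-to-embedded-object conversion described in the commit above can be sketched as follows. This is a minimal illustration of the idea only; the function names are hypothetical and this is not upjet's actual conversion.Convert implementation.

```go
package main

import "fmt"

// toEmbeddedObject collapses a MaxItems==1 list field in a map-based
// resource state into an embedded object. Hypothetical sketch.
func toEmbeddedObject(state map[string]any, field string) {
	if l, ok := state[field].([]any); ok && len(l) == 1 {
		state[field] = l[0]
	}
}

// toSingletonList is the inverse direction, wrapping the embedded
// object back into a single-element list for the Terraform layer.
func toSingletonList(state map[string]any, field string) {
	if v, ok := state[field]; ok {
		if _, isList := v.([]any); !isList {
			state[field] = []any{v}
		}
	}
}

func main() {
	s := map[string]any{
		"network_config": []any{map[string]any{"cidr": "10.0.0.0/16"}},
	}
	toEmbeddedObject(s, "network_config")
	fmt.Println(s) // the field is now an embedded object
	toSingletonList(s, "network_config")
	fmt.Println(s) // and back to a singleton list for Terraform
}
```

Because the embedded object is no longer a list, SSA no longer needs associative-list merge configuration for it, which is the motivation the commit gives.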
Yordis Prieto 6a230300a8
chore: improve docs about selector field name
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-05-07 16:22:47 -07:00
Jean du Plessis 0fb9c98a22
Merge pull request #394 from jeanduplessis/update-notices 2024-04-29 13:28:24 +02:00
Jean du Plessis ddffe7b362
Updates the MPL code used in NOTICE
Signed-off-by: Jean du Plessis <jean@upbound.io>
2024-04-29 14:25:35 +03:00
Jean du Plessis 4f6628bd74
Merge pull request #393 from yordis/yordis/chore-2 2024-04-27 01:43:52 +02:00
Yordis Prieto ff539fc32d
chore: clarify documentation around reference type name
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-04-26 19:11:01 -04:00
Jean du Plessis e0436d3a1c
Merge pull request #392 from yordis/yordis/chore-1 2024-04-26 18:23:21 +02:00
Yordis Prieto 113d9e4190
chore: improve doc about configuring a resource external name
Signed-off-by: Yordis Prieto <yordis.prieto@gmail.com>
2024-04-26 10:29:36 -04:00
Alper Rifat Ulucinar c3efc56108
Post release commit after v1.3.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-04-25 15:55:55 +03:00
Alper Rifat Ulucinar 577bfa78fb
Merge pull request #391 from ulucinar/fix-sync-state
Cache the error from the last asynchronous reconciliation
2024-04-25 11:40:10 +00:00
Cem Mergenci, PhD c3cccedcbc
Merge pull request #390 from mergenci/mr-metrics
Introduce MR metrics
2024-04-25 14:28:53 +03:00
Cem Mergenci 461dcc3739 Introduce MR metrics.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-04-25 14:09:13 +03:00
Alper Rifat Ulucinar 77cc776e62
Cache the error from the last asynchronous reconciliation to return it in
the next asynchronous reconciliation for the Terraform plugin SDK &
framework based external clients.

- Set the "Synced" status condition to "False" in the async CallbackFn
  to immediately update it when the async operation fails.
- Set the "Synced" status condition to "True" when the async operation
  succeeds, or when the external client's Observe call reveals an
  up-to-date external resource which is not scheduled for deletion.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-04-25 00:10:12 +03:00
Cem Mergenci, PhD 4c67d8ebd3
Merge pull request #385 from mergenci/external-api-calls-metric
Add external API calls metric
2024-03-28 15:33:50 +03:00
Cem Mergenci 5f977ad584 Add external API calls metric.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-03-28 15:26:55 +03:00
Sergen Yalçın 50919febc5
Merge pull request #381 from sergenyalcin/add-required-configuration-option
Add a new configuration option for required field generation
2024-03-19 15:47:50 +03:00
Sergen Yalçın 845dbf6b1b
Add doc for RequiredFields function
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-19 15:26:10 +03:00
Sergen Yalçın b73a85f49b
Add requiredFields to ignoreUnexported for fixing unit tests
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-19 12:49:33 +03:00
Sergen Yalçın f25329f346
- Move the config.ExternalName.RequiredFields to config.Resource.requiredFields
- Deprecate config.MarkAsRequired in favor of a new configuration function on *config.Resource that still accepts a slice to mark multiple fields as required without intervening in the native field schema.

Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-19 12:42:36 +03:00
Sergen Yalçın bdfbe67ab3
Add a new `Required` configuration option
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-18 18:17:54 +03:00
Sergen Yalçın 2ef7077f6d
Merge pull request #376 from sergenyalcin/add-header-to-setup
Add the `Header` Go template variable to setup.go.tmpl
2024-03-14 19:27:45 +03:00
Sergen Yalçın 85d8bf7b54
Add Header Go template variable to setup.go.tmpl
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-14 18:32:31 +03:00
Sergen Yalçın 84abd051e6
Merge pull request #373 from sergenyalcin/move-license-statements-tmpl
Move license statements to separate files (for tmpl files) to prevent license statement duplication
2024-03-14 14:14:22 +03:00
Sergen Yalçın 2d71c5b36d
Remove blank line on top
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-14 14:06:50 +03:00
Sergen Yalçın a648048b9d
Move license statements to separate files to prevent license statement duplication
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-14 00:02:10 +03:00
Sergen Yalçın 363f66c52d
Merge pull request #358 from sergenyalcin/fix-statefunc-call
Stop applying StateFuncs to parameters
2024-03-06 13:50:15 +03:00
Jean du Plessis 05703b568d
Merge pull request #363 from jaylevin/patch-1 2024-03-06 09:32:41 +02:00
Sergen Yalçın 956c7d489b
Remove the unnecessary case and add nolint
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-05 21:06:55 +03:00
Alper Rifat Ulucinar e2a229705f
Merge pull request #362 from bobh66/fix_make
Remove the empty img.build make target & the image.mk include
2024-03-05 18:41:17 +03:00
Bob Haddleton 712feee32d Remove img.build make target
Signed-off-by: Bob Haddleton <bob.haddleton@nokia.com>
2024-03-05 08:11:07 -06:00
Alper Rifat Ulucinar f043e2e5dd
Merge pull request #360 from bobh66/swap_synced_ready
Swap SYNCED and READY columns in output
2024-03-05 16:21:25 +03:00
Jordan Levin 6d2bf28827
Fix go code spacing in configuration-a-resource.md 2024-03-04 14:17:33 -08:00
Bob Haddleton f0b7317e96 Swap SYNCED and READY columns in output
Signed-off-by: Bob Haddleton <bob.haddleton@nokia.com>
2024-03-01 09:19:07 -06:00
Sergen Yalçın 93af08a988
Remove StateFunc calls
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-03-01 12:04:28 +03:00
Jean du Plessis 902acfe539
Merge pull request #357 from tomasmota/main 2024-02-27 15:04:00 +02:00
tomasmota 16d3101d6d
small docs corrections
Signed-off-by: tomasmota <tomasrebelomota@gmail.com>
2024-02-27 13:35:59 +01:00
Jean du Plessis 6f23c91b94
Merge pull request #356 from tomasmota/main 2024-02-27 13:39:45 +02:00
tomasmota b424aafa9c
fix link in docs
Signed-off-by: tomasmota <tomasrebelomota@gmail.com>
2024-02-27 12:20:32 +01:00
Sergen Yalçın c13945f264
Merge pull request #355 from sergenyalcin/fix-sensitive-generation
Fix slice type sensitive fieldpath generation
2024-02-26 20:10:52 +03:00
Sergen Yalçın b63f33874c
Change cases from string to type of FieldType
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-02-26 20:05:02 +03:00
Sergen Yalçın 8bf78e8106
Fix non-primitive type sensitive field generation
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-02-23 18:27:14 +03:00
Alper Rifat Ulucinar 2eb7f71f91
Merge pull request #354 from ulucinar/json-license
Add .license files for the JSON test artifacts
2024-02-23 17:53:01 +03:00
Alper Rifat Ulucinar 22a7a64531
Add .license files for the JSON test artifacts
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-23 17:32:11 +03:00
Cem Mergenci, PhD eaa1e9a7ea
Merge pull request #350 from mergenci/external-client-check-lateinitialize-management-policy
Check LateInitialize management policy in Plugin Framework external client
2024-02-21 16:06:43 +03:00
Cem Mergenci af31423729 Check LateInitialize management policy in Plugin Framework client.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-02-21 15:57:15 +03:00
Jean du Plessis 3ca1fe281f
Merge pull request #352 from jeanduplessis/new-maintainers 2024-02-21 11:02:38 +02:00
Jean du Plessis 5c6932fd1d
Merge pull request #310 from danielsinai/patch-1 2024-02-21 10:46:56 +02:00
Jean du Plessis 9823bcb918
Adds erhancagirici and mergenci as maintainers
Signed-off-by: Jean du Plessis <jean@upbound.io>
2024-02-21 10:02:07 +02:00
Alper Rifat Ulucinar 2f05dbe00e
Post release commit after v1.2.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-15 21:46:17 +03:00
Sergen Yalçın af53160682
Merge pull request #341 from erhancagirici/plugin-fw-requires-replace-at-update-time
move RequiresReplace check of plan to Update time instead of Observe & suppress diffs with only computed attributes
2024-02-15 13:38:30 +03:00
Erhan Cagirici 86997cdcf5 check plan RequiresReplace at update time instead of observe & ignore diffs with only computed values
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-02-15 11:35:33 +03:00
Alper Rifat Ulucinar 796fd66e79
Merge pull request #347 from ulucinar/handle-slices
Handle JSON arrays in json.Canonicalize
2024-02-15 11:31:18 +03:00
Alper Rifat Ulucinar 7d81ac79f0
Handle JSON arrays in json.Canonicalize
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-15 01:30:56 +03:00
Alper Rifat Ulucinar 9cb2f6b487
Merge pull request #342 from ulucinar/single-nested
Add support for generating MR API for Terraform resource's nested single configuration blocks
2024-02-14 22:35:06 +03:00
Alper Rifat Ulucinar 780c97fac1
Merge pull request #344 from ulucinar/canonical-json
Add json.Canonicalize to compute a canonical form of serialized JSON objects
2024-02-14 20:39:02 +03:00
Alper Rifat Ulucinar 3372192374
Add unit tests for json.Canonicalize
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-14 19:33:55 +03:00
Alper Rifat Ulucinar 13e9fc05f7
Add json.Canonicalize to compute a canonical form of serialized JSON objects
- Add config.CanonicalizeJSONParameters that returns a config.ConfigurationInjector
  for converting a list of top-level Terraform arguments with JSON values to their
  canonical forms.
- Breaking change: config.ConfigurationInjector now returns an error.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-14 17:34:45 +03:00
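The canonicalization idea above can be illustrated with a plain Go round-trip through encoding/json, which sorts object keys and normalizes whitespace, so two semantically equal JSON documents serialize identically. A sketch of the concept under that assumption; upjet's json.Canonicalize may differ in detail.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// canonicalize round-trips a serialized JSON document through Go's
// encoding/json. json.Marshal emits map keys in sorted order and drops
// insignificant whitespace, yielding a stable comparison form for both
// objects and arrays of objects.
func canonicalize(in string) (string, error) {
	var v any
	if err := json.Unmarshal([]byte(in), &v); err != nil {
		return "", err
	}
	out, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	a, _ := canonicalize(`{"b": 1, "a": [ {"y": 2, "x": 1} ]}`)
	b, _ := canonicalize(`{"a":[{"x":1,"y":2}],"b":1}`)
	fmt.Println(a == b) // true: equal documents get equal canonical forms
}
```

Comparing canonical forms like this is what lets a diff computation ignore key ordering and formatting differences in JSON-valued Terraform arguments.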
Alper Rifat Ulucinar f6a1a0de74
Merge pull request #343 from mergenci/external-client-check-lateinitialize-management-policy
Check LateInitialize management policy in SDKv2 external client
2024-02-14 12:09:22 +03:00
Cem Mergenci 1ed1df0d27 Check LateInitialize management policy in SDKv2 external client.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-02-14 01:24:51 +03:00
Alper Rifat Ulucinar 47cb5c5b4b
Add config.NewExternalNameFrom that makes the existing ExternalName configurations
reusable via compositions.

- Format the doc comments for the config.TemplatedStringAsIdentifier function.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-13 22:35:12 +03:00
Alper Rifat Ulucinar a57975b67f
Add config.SchemaElementOption.EmbeddedObject to specify schema elements
which will be generated as embedded objects instead of singleton lists in CRDs.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-13 17:52:15 +03:00
Alper Rifat Ulucinar c589cdb5b1
Add support for generating MR API for Terraform resource's nested single configuration blocks
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-09 17:44:47 +03:00
Alper Rifat Ulucinar 66ff8b2211
Post release commit after v1.1.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 19:59:49 +03:00
Alper Rifat Ulucinar a8e5f39764
Merge pull request #339 from ulucinar/storage-version
Set CRD storage version to the latest generated version
2024-02-01 19:54:27 +03:00
Alper Rifat Ulucinar b5e5ca72f2
Set CRD storage version to the latest generated version
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 17:45:10 +03:00
Alper Rifat Ulucinar 602c7d473e
Merge pull request #338 from ulucinar/rename-to-pluginsdk
Replace NoFork Terminology with TerraformPluginSDK
2024-02-01 17:17:42 +03:00
Alper Rifat Ulucinar ca6672d406
Use controller.OperationTrackerFinalizer for both of the Terraform plugin SDK &
framework based managed resources.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 16:25:01 +03:00
Alper Rifat Ulucinar d717974060
Fix nofork tests: Prevent using a global state in the Terraform plugin SDK v2 & framework tests
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 14:07:44 +03:00
Alper Rifat Ulucinar 6568a885f2
Add a more explanatory error message when immutable fields of a managed resource have changed
- We currently do not generate CRD validation rules for immutable fields and thus, if they
  are updated, we block the update at the Terraform layer. We improve the error message for
  such cases as follows:
  "refuse to update the external resource because the following update requires replacing it"

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 13:12:02 +03:00
Alper Rifat Ulucinar 70058d80ed
Rename types with a "NoFork" in their names by replacing it with "TerraformPluginSDK"
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 13:03:37 +03:00
Alper Rifat Ulucinar 547a439da9
Rename nofork related files so that nofork is replaced with tfpluginsdk
- pkg/controller/nofork_store.go is common to both the TF plugin SDK & framework
  so its name is preserved.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-02-01 11:23:34 +03:00
Alper Rifat Ulucinar 3c2fda9d67
Merge pull request #337 from ulucinar/fix-settagdiff-panic
Set the ID string to empty string on Terraform state for calculating the diffs
2024-01-31 20:32:21 +03:00
Alper Rifat Ulucinar cfc0349f56
Set the ID string to empty string on Terraform state for calculating the diffs
when the resource does not yet exist.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-31 19:12:49 +03:00
Alper Rifat Ulucinar 0e52e5d2e6
Merge pull request #334 from erhancagirici/fix-diffstate-when-empty
fix diff state not being set to freshly observed state for non-existing resources
2024-01-31 17:20:05 +03:00
Alper Rifat Ulucinar af203bdb54
Merge pull request #329 from mergenci/no-fork-fw
terraform plugin framework external client & connectors
2024-01-31 16:06:11 +03:00
Erhan Cagirici 5dff491aea refactor unit tests
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-31 14:58:20 +03:00
Erhan Cagirici b7edc58a70 add unit tests for terraform plugin framework external client
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-31 14:41:40 +03:00
Erhan Cagirici bad163c4c4 terraform resource nil check for SDKv2
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-31 14:41:02 +03:00
Erhan Cagirici 67c38e9c47 go mod tidy
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:25:13 +03:00
Erhan Cagirici 2b2812fe61 add doc comments for internal funcs
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 340b66fb89 pre-allocate diag error accumulator slice
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici e7b633e14b refactor diag to string, terraform value type, error messages
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Cem Mergenci 9b9149c1a5 Add doc comments.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-01-30 21:22:59 +03:00
Cem Mergenci 7444fba2d7 Improve provider generation performance.
Speedup in upbound/provider-aws, obtained via non-rigorous measurement
techniques, is around 1.1x. Performance benefits would be higher if
there were more Terraform Plugin Framework resources configured.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-01-30 21:22:59 +03:00
Erhan Cagirici fe03f89abd refactor unnecessary iface pointers & add go doc comments
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 615d6e7817 fix TF list conversions
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 68bf3f177f fix unit tests
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 31e377d863 minor log message and naming changes
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 1cabce63d1 fix linter issues
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 0339b392a5 diff detection with plan and state comparison & refactor
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 50593c67ea bump terraform-plugin-framework to 1.4.1
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 48630aeddc refactor obsolete plugin framework provider variables
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici b9c6910fed dynamically configure provider of the framework server
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 9a712e85ff logging improvements & refactor
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici c45556a5df plugin framework async external client
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici fde84728bb refuse diffs with resource replacement & code cleanup refactor
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Erhan Cagirici 7f9ab46b86 initial full working setup
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-30 21:22:59 +03:00
Cem Mergenci 40adf4fd00 Configure provider server.
Configuring the provider server with a nil configuration request lets the
server retrieve the already-configured provider's configuration.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-01-30 21:22:59 +03:00
Cem Mergenci f8668e2d0f WIP: Break ground for Terraform Plugin Framework reconciliation.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2024-01-30 21:22:37 +03:00
Sergen Yalçın 62169a1988
Merge pull request #335 from sergenyalcin/add-override-fieldnames-api
Add configuration API parameter for overriding the field names
2024-01-30 16:45:54 +03:00
Sergen Yalçın 5b8a0c830d
Add unit test and use defer for using the override mechanism
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-01-30 16:24:38 +03:00
Sergen Yalçın 6a2b6de079
Add configuration API parameter for overriding the field names
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-01-30 10:41:31 +03:00
Alper Rifat Ulucinar 7ba180b509
Merge pull request #321 from ulucinar/multiversion-crds
Multiversion CRDs & Conversion Webhooks
2024-01-30 00:59:03 +03:00
Alper Rifat Ulucinar fa398ebc5b
Do not set the latest version as the storage version for smoother downgrades
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:46 +03:00
Alper Rifat Ulucinar a85dded459
Rename conversion.TerraformedConversion to conversion.ManagedConversion
- Add unit tests for conversion.RoundTrip

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:45 +03:00
Sergen Yalçın b9d2b305d7
Add customConverter struct for supporting the Custom Converters
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-01-30 00:48:45 +03:00
Alper Rifat Ulucinar 77fba47119
Add unit tests for the conversion package
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:45 +03:00
Alper Rifat Ulucinar 97ce3ac9a8
Only start the conversion webhooks if they are enabled via the controller.Options.StartWebhooks
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:45 +03:00
Alper Rifat Ulucinar a5a315e118
Differentiate the names of the generated conversion hub and spoke files
- Do not generate empty conversion hub or spoke files.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:45 +03:00
Alper Rifat Ulucinar 584a442e15
Fix linter issues
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:45 +03:00
Alper Rifat Ulucinar e58e4b81d0
Add webhook conversion registry
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:44 +03:00
Alper Rifat Ulucinar 3030fb0e3e
Add round-trip version converter
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:44 +03:00
Alper Rifat Ulucinar 97359b8718
Rename ConversionConvertable as ConversionSpoke
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:48:44 +03:00
Alper Rifat Ulucinar 41e83d85dd
Support for multiversion CRDs & conversion webhooks
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-30 00:47:42 +03:00
Alper Rifat Ulucinar f34335a8e5
Merge pull request #332 from haarchri/fix/environment-patches
fix(environment): add missing environment specific patches TO/FROM
2024-01-29 20:11:39 +03:00
Alper Rifat Ulucinar c8c49ae892
Merge pull request #331 from ulucinar/acyclic-resolvers
Add a transformer to remove API group imports for cross-resource references
2024-01-28 18:34:29 +03:00
Alper Rifat Ulucinar bbac0a6bdd
Add the --api-group-override command-line option to the resolver transformation
to be able to override the generated API group names.

- Add the --api-resolver-package command-line option to the resolver transformation
  to properly configure the package of the GetManagedResource function.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-26 19:41:14 +03:00
Alper Rifat Ulucinar 994c07aef2
Add failure tests for the resolver transformer
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-25 15:50:33 +03:00
Alper Rifat Ulucinar 14930173cb
Move main.TransformPackages to transformers.Resolver.TransformPackages
- Bump Go version to 1.21
- Bump golangci-lint to v1.55.2
- Add tests for the Resolver.TransformPackages

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-25 13:21:04 +03:00
Erhan Cagirici 510b6e0a15 fix diff state not being set to freshly observed state for non-existing resources
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2024-01-24 13:28:06 +03:00
Alper Rifat Ulucinar bc7c1793a7
Add new imports to the resolver files in stable order
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-24 04:15:13 +03:00
Alper Rifat Ulucinar 3674eb5c0b
Use a hard-coded AST while generating the reference source statements in function `getManagedResourceStatements`
- Use a hard-coded AST while generating the reference source variable declarations in function `addMRVariableDeclarations`

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-24 03:05:03 +03:00
Alper Rifat Ulucinar 2b1d5dfa79
Set the default value of the transformer command-line option --pattern to "./..."
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-24 01:40:59 +03:00
Alper Rifat Ulucinar eb38f3f7c8
Do not transform already transformed files
- Fix the floating comment issue for the very first function declaration

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-24 01:32:28 +03:00
Alper Rifat Ulucinar 08422adc0f
Add the transformer "resolver" to remove API group imports for cross-resource reference resolver modules
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-24 01:32:28 +03:00
Alper Rifat Ulucinar 87fae5ea61
Merge pull request #326 from ulucinar/isolate-terraformed-implementations
Generate a standalone "zz_generated.terraformed.go" file for each resource
2024-01-23 17:16:26 +03:00
Christopher Haar f22c0762f4 fix(environment): add missing environment specific patches TO/FROM
Signed-off-by: Christopher Haar <christopher.haar@upbound.io>
2024-01-23 11:44:04 +01:00
Jean du Plessis ed4e0dec7a
Merge pull request #330 from jake-ciolek/fix-docs-get-provider-type 2024-01-17 15:18:20 +02:00
Jakub Ciolek 3f04f0a7b4 Fix the return type so it uses Upjet
The return type of GetProvider() uses Terrajet's *tjconfig.Provider.
Update it to use the Upjet type - *ujconfig.Provider.

Signed-off-by: Jakub Ciolek <jakub@ciolek.dev>
2024-01-17 13:58:20 +01:00
Jean du Plessis 8fd8905224
Merge pull request #328 from mbbush/fix-doc-link 2024-01-13 16:00:31 +02:00
Matt Bush 50aeba53e6 fix broken link in docs
Signed-off-by: Matt Bush <matt@span.io>
2024-01-12 17:51:58 -08:00
Alper Rifat Ulucinar 10c16799c0
Generate a standalone "zz_generated.terraformed.go" file for each resource
- Each resource will now have its own resource.Terraformed interface
  implementation file.
- The per-group "zz_generated_terraformed.go" is now split into
  per-resource "zz_generated.terraformed.go" files.
- This helps while generating multiple versions for the CRDs.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2024-01-10 00:42:25 +03:00
Alper Rifat Ulucinar 1976411db8
Merge pull request #320 from ulucinar/bump-tf-sdk
Bump hashicorp/terraform-plugin-sdk to v2.30.0
2024-01-09 12:35:12 +03:00
Sergen Yalçın fb3a200ed6
Merge pull request #324 from sergenyalcin/fix-nil-instance-diff
Add a nil check for the calculated instanceDiff while Observe
2024-01-09 11:27:02 +03:00
Sergen Yalçın 58b94f00e8
Add a nil check for the calculated instanceDiff while Observe
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2024-01-09 05:18:31 +03:00
Jean du Plessis b20afd65e4
Merge pull request #322 from smcavallo/fix-doc-links 2024-01-05 21:49:09 +02:00
smcavallo 217f62e450 fix doc links
Signed-off-by: smcavallo <smcavallo@hotmail.com>
2024-01-05 12:29:44 -05:00
Alper Rifat Ulucinar f87063b2ec
Bump hashicorp/terraform-plugin-sdk to v2.30.0
- If schema.Resource.Schema is nil, try to populate the schema from the SchemaFunc.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-27 15:22:31 +03:00
Alper Rifat Ulucinar 4cb45f9104
Merge pull request #319 from ulucinar/diffstate-config
Set RawConfig on the diff state
2023-12-27 15:08:26 +03:00
Alper Rifat Ulucinar 5b1ae0d0d4
Set RawConfig on the diff state
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-26 17:58:53 +03:00
Sergen Yalçın fb8acdbaec
Merge pull request #315 from sergenyalcin/support-references-for-initprovider
Add reference fields to the InitProvider
2023-12-26 12:49:52 +03:00
Sergen Yalçın 6a56e732b1
Fix unintentional if case
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-12-26 11:38:03 +03:00
Alper Rifat Ulucinar 81e262067d
Merge pull request #318 from ulucinar/fix-diff-state
Set diff state's Attributes to nil if the resource does not exist
2023-12-25 11:49:55 +03:00
Alper Rifat Ulucinar ccb06111c8
Set diff state's Attributes to nil if the resource does not exist
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-25 10:38:15 +03:00
Alper Rifat Ulucinar 1d547af3dd
Merge pull request #317 from ulucinar/fix-customize-diff
Set the RawPlan for a new terraform.InstanceState returned from an observation
2023-12-22 18:54:41 +03:00
Alper Rifat Ulucinar f0de67a3ce
Set the RawPlan for a new terraform.InstanceState returned from an observation
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-22 16:04:42 +03:00
Alper Rifat Ulucinar 74159f85d1
Return the cty.Value of InstanceState from noForkExternal.fromInstanceStateToJSONMap
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-22 16:04:37 +03:00
Alper Rifat Ulucinar b15649bdb6
Alias diag import
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-22 15:49:01 +03:00
Sergen Yalçın 0bc8185b6f
Merge pull request #316 from sergenyalcin/pass-allstate-to-getexternalnamefn
Pass full state to the GetExternalNameFn function to access fields other than ID
2023-12-22 15:44:13 +03:00
Sergen Yalçın f548c79f1f
- Check if the id is empty
- Move the error check to the correct place

Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-12-22 14:14:20 +03:00
Sergen Yalçın ac7c6a1953
Pass full state to the GetExternalNameFn function to access fields other than ID
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-12-21 18:17:47 +03:00
Sergen Yalçın eb239f423e
Add reference fields to the InitProvider
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-12-21 15:47:32 +03:00
Alper Rifat Ulucinar 807fb9dd95
Merge pull request #308 from ulucinar/ssa-object-lists
Add config.Resource.ServerSideApplyMergeStrategies to configure the SSA merge strategies
2023-12-21 15:41:31 +03:00
Alper Rifat Ulucinar 9543be90ce
Merge pull request #313 from ulucinar/capture-id
Cache the Terraform instance state returned from schema.Resource.Apply even if the returned diagnostics contain errors
2023-12-18 12:31:47 +03:00
Alper Rifat Ulucinar b674f3dd0c
Cache the Terraform instance state returned from schema.Resource.Apply
in external-client's Create even if the returned diagnostics contain
errors.

- In most cases, the Terraform plugin SDK's create implementation
  for a resource comprises multiple steps, with the creation of
  the external resource being the very first one. If the creation
  succeeds but any of the subsequent steps fail, upjet's TF plugin
  SDK-based external client would not record this state, in some
  cases losing the only opportunity to associate the MR with the
  newly provisioned external resource. We now put this initial
  state into upjet's in-memory state cache so that it is available
  for the external client's next Observe call.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-14 15:21:43 +03:00
Sergen Yalçın cf1b3462e7
Merge pull request #312 from sergenyalcin/handle--notfound-error-type
Ignore a specific error returned by the expandWildcard function
2023-12-13 10:59:30 +03:00
Alper Rifat Ulucinar 0fc0a07447
Add config.Resource.ServerSideApplyMergeStrategies to be able to configure
the server-side apply merge strategies for object lists & maps.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-12 19:37:03 +03:00
Alper Rifat Ulucinar a46199ffcd
Merge pull request #223 from therealmitchconnors/main
Make main-tf contents exported
2023-12-12 18:59:42 +03:00
Sergen Yalçın 085ff0a262
Ignore a specific error returned by the expandWildcard function
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-12-12 14:29:14 +03:00
Alper Rifat Ulucinar 77613fda2c
Merge pull request #311 from ulucinar/enable-customizediff
Call the registered schema.CustomizeDiffFunc functions in the Terraform SDK-based external client
2023-12-12 14:12:42 +03:00
Alper Rifat Ulucinar 7431179b0b
Call the registered schema.CustomizeDiffFunc functions in the Terraform SDK-based external client
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-12 12:13:15 +03:00
Daniel Sinai 0353e98655
Changed 404 routes 2023-12-08 11:26:21 +02:00
Mitch Connors 25a4aa59de Add exported function description
Signed-off-by: Mitch Connors <mitchconnors@gmail.com>
2023-12-07 17:52:14 +00:00
Sergen Yalçın a9788207cd
Merge pull request #309 from sergenyalcin/set-timeouts-instancestate
Pass default and configured timeouts to InstanceState so they are used during Observe calls
2023-12-05 18:14:36 +03:00
Sergen Yalçın f99dee3115
Pass default and configured timeouts to InstanceState so they are used during Observe calls
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-12-05 17:35:53 +03:00
Alper Rifat Ulucinar ca2cfe666b
Merge pull request #301 from negz/scribble
Add server-side apply merge strategy markers
2023-12-05 10:48:20 +03:00
Alper Rifat Ulucinar 5150500b82
Fix kubebuilder topological markers: use "=" instead of ":" between tokens
- Do not add topological markers for sensitive fields

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-12-04 22:50:14 +03:00
Mitch Connors c40331d552 fix signature
Signed-off-by: Mitch Connors <mitchconnors@gmail.com>
2023-11-20 16:26:34 +00:00
Mitch Connors 0752be698f Make main-tf contents exported
Signed-off-by: Mitch Connors <mitchconnors@gmail.com>
2023-11-20 16:23:30 +00:00
Alper Rifat Ulucinar bed1fa2fdd
Post release commit after v1.0.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-16 15:23:51 +03:00
Alper Rifat Ulucinar da16191162
Merge pull request #294 from ulucinar/no-fork
Add NoFork External Connectors & Clients
2023-11-16 15:19:01 +03:00
Sergen Yalçın 911e290126
Add unit tests for errors package
- Add unit test for InitProvider

Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-11-13 18:47:41 +03:00
Sergen Yalçın 6d324c10d4
Add licence statements
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-11-13 18:47:36 +03:00
Sergen Yalçın 7294c9255e
Add unit tests for no-fork arch
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-11-13 18:47:31 +03:00
Sergen Yalçın 67ddc679d5
Fix delete condition
Signed-off-by: Sergen Yalçın <44261342+sergenyalcin@users.noreply.github.com>
2023-11-13 18:45:25 +03:00
Erhan Cagirici 8e74a8ec89
add support for hcl functions in string params (#9)
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-13 18:44:00 +03:00
Nic Cope 8ef1aa2a8c Add server-side apply merge strategy markers
This commit adds SSA merge strategy markers that allow the API server to
granularly merge lists, rather than atomically replacing them. Composition
functions use SSA, so this change means that a function can return only the list
elements it desires (e.g. tags) and those elements will be merged into any
existing elements, without replacing them.

For the moment I've only covered two cases:

* Lists that we know are sets of scalar values (generated from Terraform sets)
* Maps of scalar values (generated from Terraform maps)

I'm hopeful that in both of these cases it _should_ be possible to allow the map
or set to be granularly merged, not atomically replaced.

https://kubernetes.io/docs/reference/using-api/server-side-apply/#merge-strategy

Signed-off-by: Nic Cope <nicc@rk0n.org>
2023-11-11 23:08:51 -08:00
Erhan Cagirici d9420d3eee add nil & type assertion checks in param statefunc processor
Signed-off-by: Erhan Cagirici <erhan@upbound.io>
2023-11-10 10:48:46 +03:00
Alper Rifat Ulucinar 00618d78c1
Clear errors from async operations upon successful observation
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-08 16:38:36 +02:00
Alper Rifat Ulucinar 520fe966ae
Add new error types for no-fork async mode
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-07 13:51:23 +02:00
Sergen Yalçın 280b4b0fdd
Add nil check for instanceDiff
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-11-03 16:42:23 +03:00
Alper Rifat Ulucinar 146ba718ba
Suppress linter issues
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-03 16:30:55 +03:00
Cem Mergenci 5c527e47a8 Move `ignore_changes` helper functions to a separate file.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2023-11-03 16:26:25 +03:00
Cem Mergenci b4ecd22401 Handle top-level sets while ignoring initProvider diffs.
* Remove all length keys, which are suffixed with % or #, from
initProvider-exclusive diffs. Map and set diffs are successfully
applied without these length keys. List diffs require length keys. To
be addressed later.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
2023-11-03 16:26:25 +03:00
Alper Rifat Ulucinar 82a4ad9428
Add support for configuring Terraform resource timeouts both from the schema and config.Resource.OperationTimeouts
- The timeouts specified in the upjet resource configuration (config.Resource.OperationTimeouts)
  prevail over any defaults configured in the Terraform schema.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-03 13:37:24 +03:00
Erhan Cagirici 229c2734b4
change TF CustomDiff func signature to accept state and config
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-02 18:11:52 +03:00
Alper Rifat Ulucinar e4d7b4eb9b
Fix linter issues
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-02 01:22:41 +03:00
Alper Rifat Ulucinar b93f12782e
Replace custom gci section prefixes github.com/crossplane/crossplane-*
with github.com/crossplane/upjet

- Run `gci write` to organize imports with the above change.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-02 00:56:16 +03:00
Alper Rifat Ulucinar 4aeb57e0d7
Fix tests
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:46:34 +03:00
Alper Rifat Ulucinar 1bad32b35f
Fix REUSE issues
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:30 +03:00
Alper Rifat Ulucinar 3cfd053162
Reviews for the granular management policies second phase
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:30 +03:00
Cem Mergenci eb72995c5e
Improve management policies support.
Changes to initProvider parameters are ignored, with edge cases to be
handled.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:30 +03:00
Erhan Cagirici b6929906f5
nil check
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:30 +03:00
Erhan Cagirici 8016470229
apply state functions to MR spec parameters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Alper Rifat Ulucinar d72fc1e60c
Add support for hybrid CLI-based reconciling for configured resources
- Add config.Provider.WithNoForkIncludeList to explicitly specify
  the set of resources to be reconciled under the no-fork architecture.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Alper Rifat Ulucinar 6ca88de6dc
Switch from alpha to beta management policies in controller.go.tmpl
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Alper Rifat Ulucinar c89008b7ee
Fix tests
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Alper Rifat Ulucinar ff9c354005
Remove remaining upbound/upjet module references
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Alper Rifat Ulucinar e3a7c59a7c
Reviews for the management policies first phase
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Cem Mergenci 4835bbe3ff
Partially support management policies in no-fork external client.
Currently, updates to `initProvider` after resource creation take
effect, which is against the specification. The next step is to fix it.

Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:29 +03:00
Alper Rifat Ulucinar 5cc8250c4c
Add support for logically deleting MRs
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:28 +03:00
Alper Rifat Ulucinar 43676d9e1e
Requeue an immediate reconcile request right after a successful async create, update or delete callback
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:28 +03:00
Alper Rifat Ulucinar d37e3a265c
Implement the missing prevent_destroy lifecycle meta-argument functionality
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:28 +03:00
Alper Rifat Ulucinar f31d1f7166
Refactor no-fork external client's Create/Update/Delete methods
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:28 +03:00
Alper Rifat Ulucinar 496f1fe1a2
Unconditionally compute InstanceState.RawConfig in the no-fork external clients
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:28 +03:00
Alper Rifat Ulucinar 0707799502
Refactor commons between async & sync no-fork external clients
- Terraformed resources that are reconciled using the synchronous
  no-fork external client also use the in-memory Terraform state
  cache. This was needed because some resources put certain
  attributes into the TF state only during the Create call.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:28 +03:00
Erhan Cagirici f2f4180e7b
port instance diff fixes to async client
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:27 +03:00
Erhan Cagirici a78ccac118
minor logging changes
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:27 +03:00
Erhan Cagirici a3e1935a17
no-fork async connector & client
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:27 +03:00
Alper Rifat Ulucinar 716daa25b3
Add config.Resource.TerraformCustomDiff to customize Terraform InstanceDiffs
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:27 +03:00
Alper Rifat Ulucinar 5ac3a183ca
Set RawConfig of InstanceState and InstanceDiff
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:27 +03:00
Alper Rifat Ulucinar eecf3bea8d
Add resource.WithZeroValueJSONOmitEmptyFilter late-initialization option back
- Late-initialization of nil values from corresponding zero-values
  results in a late-initialization loop and prevents the resource
  from becoming ready.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:27 +03:00
Alper Rifat Ulucinar 1f01d7065c
Do not add the resource.WithZeroValueJSONOmitEmptyFilter late-initialization option by default
- May result in unnecessary drift if some fields are not late-initialized.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Alper Rifat Ulucinar c61363fd9e
Confine no-fork client's instance diff computation to Observe
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Alper Rifat Ulucinar c19b5d017e
Add config.Resource.TerraformConfigurationInjector to allow injecting Terraform configuration parameters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Alper Rifat Ulucinar 0709592c8e
Add config.Resource.SchemaElementOptions to configure options for resource schema elements
- Add support for explicitly configuring fields to be added to the Observation types.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Alper Rifat Ulucinar bdbd1d2bda
Add late-initialization logic
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Alper Rifat Ulucinar 4be6a72a2a
Compute Instance{State,Diff}.RawPlan to be able to handle Terraform AWS v5.x tags
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Alper Rifat Ulucinar b3f2bac5c4
Update crossplane-runtime to commit 4c4b0b47b6ed
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:26 +03:00
Cem Mergenci 3c664304f2
Add Terraform provider schema to config.Provider.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:25 +03:00
Erhan Cagirici f07a6e14f1
register eventHandler
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:25 +03:00
Erhan Cagirici d46bde43dc
publish connection details
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:25 +03:00
Alper Rifat Ulucinar 1cc1a824cd
Add upjet_resource_deletion_seconds & upjet_resource_reconcile_delay_seconds metrics
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:25 +03:00
Alper Rifat Ulucinar 59cc664e01
Add TTR metric for the forkless client
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:25 +03:00
Alper Rifat Ulucinar ca6d80fbd7
Add upjet_resource_ext_api_duration histogram metric
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:25 +03:00
Erhan Cagirici 7285322532
handle SetObservation errors & rename copy func
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:24 +03:00
Alper Rifat Ulucinar 3a1991d6ed
Add no-fork external client's log option to controller template
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:24 +03:00
Cem Mergenci 921bce71d6
Configure no-fork external client for resources using a bool flag.
Signed-off-by: Cem Mergenci <cmergenci@gmail.com>
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:24 +03:00
Erhan Cagirici f4ad2a52eb
tf instance state converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:24 +03:00
Alper Rifat Ulucinar 97849ad0d3
Fix updates
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:24 +03:00
Alper Rifat Ulucinar 54e5dd9be7
Disable updates
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:24 +03:00
Alper Rifat Ulucinar 4ca6a0b3d9
Add the NoForkConnector to reconcile MRs without process forks
- Switch to Resource.RefreshWithoutUpgrade & Resource.Apply

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-11-01 19:23:23 +03:00
Alper Rifat Ulucinar c4a76d2a75
Merge pull request #288 from turkenh/gmp-beta-new
Granular management policy - promote to BETA
2023-10-12 12:37:06 +03:00
Sergen Yalçın dded5ce3f3
Add doc for migration framework
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-10-11 13:36:14 +03:00
Hasan Turken cd1230a80e
Fix build and linter issues
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-10-11 13:24:15 +03:00
lsviben b3849f6e5f
fix wording on initProvider
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-10-11 12:48:15 +03:00
lsviben 1539782c3d
GMP to beta
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-10-11 12:48:09 +03:00
Hasan Turken 39e7b7e370
Bump to latest runtime
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-10-11 12:48:00 +03:00
Sergen Yalçın b54f63590a
Merge pull request #279 from crossplane/ownership_change
Updates source code for ownership change
2023-10-11 12:33:42 +03:00
Jean du Plessis ba139387b3 Updates source code for ownership change
Signed-off-by: Jean du Plessis <jean@upbound.io>
2023-10-10 20:55:33 +02:00
Jean du Plessis c36e606378
Merge pull request #285 from turkenf/bump-dependencies 2023-10-10 13:21:29 +02:00
Fatih Türken 8a4a08776a Bump Github workflow dependencies 2023-10-06 19:26:21 +03:00
Fatih Türken cc55f39524
Merge pull request #281 from turkenf/add-jitter
Add jitter to reconcile delay for managed.Reconciler
2023-09-27 21:59:52 +03:00
Alper Rifat Ulucinar e283b5ce39
Only configure poll jitter if the jitter is non-zero
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-09-27 21:47:09 +03:00
Fatih Türken d6ef7bda79 Add jitter to reconcile delay for managed.Reconciler 2023-09-25 18:33:20 +03:00
Hasan Turken a41bb83a48
Merge pull request #248 from lsviben/update-gmp-update-docs
updated the management policies upgrade doc
2023-09-14 11:52:09 +03:00
Lovro Sviben 0154ca2d80
Merge pull request #274 from lsviben/improve-cel-message
Improve CEL message
2023-09-11 12:55:44 +02:00
Erhan Cagirici f73a6622af
remove finalizers before deleting migrated old MRs (#272) 2023-09-08 14:27:34 +03:00
lsviben c59a8f8444
update tests
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-09-07 15:35:27 +02:00
lsviben 283de24307
improve CEL required message
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-09-07 15:27:30 +02:00
Sergen Yalçın ac6f4d34ea
Merge pull request #267 from sergenyalcin/resource-pre-processor
Adding ResourcePreProcessor interface
2023-09-05 14:26:36 +02:00
Sergen Yalçın df18f12ca3
Merge pull request #270 from sergenyalcin/fix-source-duplicate
Fix duplicate resource reading
2023-09-04 01:17:24 +02:00
Sergen Yalçın fb44bb7cd4
Fix duplicate resource reading
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-09-03 14:43:26 +02:00
Jean du Plessis 115548163e
Merge pull request #268 from marcoths/main 2023-09-01 06:02:17 -05:00
Marco Hernandez 37b4b79be1 don't scrape empty files 2023-09-01 11:07:02 +01:00
Sergen Yalçın 60c2651e15
Add ResourcePreProcessor
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-08-31 23:57:02 +02:00
Sergen Yalçın 14cd59c998
Merge pull request #265 from sergenyalcin/fs-step-execution
Adding a PlanGeneratorOption for excluding Kubernetes scenario based steps from Filesystem scenario
2023-08-31 22:26:21 +02:00
Sergen Yalçın c3c13b6a16
Add critical annotations to the ignore list of CopyInto function
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-08-29 00:55:12 +02:00
Sergen Yalçın 152a24033b
- Adding a PlanGeneratorOption for excluding Kubernetes scenario based steps from Filesystem scenario
- Remove manual execution fields from plan for Filesystem source

Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-08-29 00:33:38 +02:00
Alper Rifat Ulucinar 27f66b3b19
Post release commit after v0.10.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-08-23 11:50:36 +03:00
Alper Rifat Ulucinar a5cc50f09a
Merge pull request #262 from ulucinar/queue-after-ready
Queue a reconcile request after marking the resource as ready
2023-08-23 11:41:49 +03:00
Alper Rifat Ulucinar 8e859d6127
Queue a reconcile request after marking the resource as ready
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-08-23 11:15:37 +03:00
Alper Rifat Ulucinar ce56bba77b
Merge pull request #261 from toastwaffle/log-inuse
Add the `inUse` count to SharedProviderScheduler log statements
2023-08-22 20:18:54 +03:00
Samuel Littley faca7a35ed Add the `inUse` count to SharedProviderScheduler log statements 2023-08-21 12:52:48 +01:00
Alper Rifat Ulucinar e620c62289
Merge pull request #255 from dalton-hill-0/escape-cel-reserved-keywords
Escape CEL Reserved Keywords when Generating Validation Rules
2023-08-21 14:05:19 +03:00
Alper Rifat Ulucinar 1a41e99700
Merge pull request #260 from lsviben/improve-initProvider-forProvider-merge
Do not overwrite existing values when merging spec.initProvider onto spec.forProvider
2023-08-21 13:31:45 +03:00
Alper Rifat Ulucinar bd528e443b
Merge pull request #259 from ulucinar/fix-requeue
Fix the requeueing bug that requeues a reconcile request in the wrong workqueue
2023-08-18 14:52:49 +03:00
lsviben ca09925057
remove overwrite from initProvider-forProvider merge
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-08-18 11:33:24 +02:00
Alper Rifat Ulucinar 4cfa964bc8
Fix the requeueing bug that requeues a reconcile request in the wrong workqueue
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-08-18 11:33:21 +03:00
dalton hill 322c5ebd5e removes unnecessary calls to test.EquateErrors() in test comparison
Signed-off-by: dalton hill <dalton.hill.0@protonmail.com>
2023-08-17 11:55:16 -05:00
Alper Rifat Ulucinar 0325664d27
Merge pull request #257 from lsviben/fix-optional-on-required
revert leaving optional for required init fields
2023-08-17 18:01:55 +03:00
lsviben 41c1c366e0
revert adding optional to all required fields 2023-08-17 12:11:12 +02:00
Sergen Yalçın 6e0d116c36
Merge pull request #251 from sergenyalcin/common-converters
Add some common utility functions to migration framework
2023-08-15 11:26:29 +02:00
Sergen Yalçın 3549876692
Add some details to the code comments
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-08-15 10:58:41 +02:00
dalton hill 18259307e7 updated test case name to be more consistent with existing names
Signed-off-by: dalton hill <dalton.hill.0@protonmail.com>
2023-08-14 12:26:39 -05:00
dalton hill bd187bd9a1 escapes cel reserved keywords when generating validation rules
Signed-off-by: dalton hill <dalton.hill.0@protonmail.com>
2023-08-14 11:53:27 -05:00
Sergen Yalçın 3d06353ddb
Move the provider specific functions to extensions-migration
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-08-10 15:46:32 +02:00
Piotr Zaniewski ab2768412d
Merge pull request #252 from upbound/backstage-config-fix
Change spec.owner in catalog-info.yaml
2023-08-10 12:27:46 +02:00
Piotr Zaniewski 132c6aa209
Change spec.owner from team:extensions to team-extensions 2023-08-10 10:23:38 +02:00
Sergen Yalçın 2309a95927
Add some common utility functions to migration framework
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-08-09 17:35:39 +02:00
Alper Rifat Ulucinar c0e39e7cbe
Merge pull request #249 from ulucinar/set-default-eventhandler
Set a default EventHandler if the provider's main module does not supply one
2023-08-04 14:38:53 +03:00
lsviben 67164a3af4
updated the management policies upgrade doc
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-08-03 17:54:55 +02:00
Alper Rifat Ulucinar 7c2b48d811
Set a default EventHandler if the provider's main module does not supply one
- Setting a default EventHandler to keep backwards-compatibility
  for the generated providers

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-08-03 18:12:19 +03:00
Christopher Haar caa74d2aeb
Merge pull request #247 from haarchri/feature/doc-fix-count
feat(docs): fix doc generation
2023-08-02 14:18:49 +02:00
Alper Rifat Ulucinar 93d7b82b8b
Post release commit after v0.9.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-08-01 20:03:39 +03:00
Christopher Haar e9ef734a72 style(lint): add nolint:gocyclo
Signed-off-by: Christopher Haar <christopher.haar@upbound.io>
2023-08-01 19:03:07 +02:00
Christopher Haar af46bef5ea feat(docs): fix doc generation
Signed-off-by: Christopher Haar <christopher.haar@upbound.io>
2023-08-01 18:28:04 +02:00
Alper Rifat Ulucinar 06bdecc2fc
Merge pull request #241 from ulucinar/fix-192
Explicitly Queue a Reconcile Request if a Shared Provider has Expired
2023-08-01 17:23:01 +03:00
Alper Rifat Ulucinar 7a9116f274
Use an EventHandler with the controller.external to retry on scheduling errors
- The external client will requeue at most 20 times before reporting an error

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-08-01 17:12:11 +03:00
Alper Rifat Ulucinar 2ef67f5f5a
Merge pull request #244 from upbound/revert-238-fix/doc-generator
Revert "feat(doc-generator): fix generation examples for provider-vault"
2023-08-01 17:10:41 +03:00
Alper Rifat Ulucinar d6887fbe49
Revert "feat(doc-generator): fix generation examples for provider-vault" 2023-08-01 17:01:19 +03:00
Christopher Haar 39dcb37efd
Merge pull request #238 from haarchri/fix/doc-generator
feat(doc-generator): fix generation examples for provider-vault
2023-08-01 11:41:52 +02:00
Alper Rifat Ulucinar 7fb82a4771
Merge pull request #243 from ulucinar/fix-sync-controller
Fix the controller.go.tmpl for MRs reconciled in synchronous mode
2023-07-31 16:19:08 +03:00
Alper Rifat Ulucinar c1657602f2
Fix the controller.go.tmpl for MRs reconciled in synchronous mode
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-31 16:04:30 +03:00
Alper Rifat Ulucinar b5bb5131e6
Merge pull request #242 from ulucinar/fix-docs
Fix config.Provider.RootGroup field docs
2023-07-31 14:33:54 +03:00
Alper Rifat Ulucinar 100c7f1eb2
Fix config.Provider.RootGroup field docs
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-31 10:20:00 +03:00
Lovro Sviben 46549fea21
Merge pull request #237 from lsviben/initProvider
Support initProvider
2023-07-28 19:27:02 +02:00
lsviben 9e4b3d8b8c
initProvider support
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-07-28 16:55:36 +02:00
Fatih Türken 42f0f7193b
Merge pull request #221 from turkenf/issue-196
Remove the trailing document separator (---) from examples generated
2023-07-27 23:07:13 +03:00
Fatih Türken 36ae8faee7 Remove the trailing document separator 2023-07-27 23:03:18 +03:00
Alper Rifat Ulucinar e4ffeb8a75
Merge pull request #231 from ulucinar/fix-180
Requeue the reconcile request on async update error
2023-07-27 15:22:32 +03:00
Alper Rifat Ulucinar 07d58e0a3e
Fix tests
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-27 11:35:48 +03:00
Alper Rifat Ulucinar 30386dbb9a
Move controller.EventHandler to handler.EventHandler
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-27 03:21:08 +03:00
Alper Rifat Ulucinar b1cdd33a8d
Export controller.EventHandler so that it's accessible by all Upjet components
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-27 02:22:26 +03:00
Alper Rifat Ulucinar d635b3e108
Register a single event handler with the MR controllers
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-26 21:03:18 +03:00
Alper Rifat Ulucinar d85bcd3842
Refactor requeue logic into controller.callbackFn
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-26 18:47:11 +03:00
Alper Rifat Ulucinar 1c53001ddb
Requeue the reconcile request on async create error
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-26 18:12:02 +03:00
Alper Rifat Ulucinar d4c3e5bd9b
Requeue the reconcile request on async update error 2023-07-26 18:12:02 +03:00
Christopher Haar fb8ce3c613 feat(doc-generator): fix generation examples for provider-vault
Signed-off-by: Christopher Haar <christopher.haar@upbound.io>
2023-07-26 15:39:33 +02:00
Christopher Haar e7eb461228 feat(doc-generator): fix generation examples for provider-vault
Signed-off-by: Christopher Haar <christopher.haar@upbound.io>
2023-07-26 15:15:04 +02:00
Lovro Sviben cd958ec5d1
Merge pull request #224 from lsviben/granular_managment_policies
Support granular management policies
2023-07-24 08:37:09 +02:00
Alper Rifat Ulucinar accab08e93
Merge pull request #236 from ulucinar/bump-go1.20
Bump Go version to v1.20 in go.mod
2023-07-20 20:41:08 +03:00
Alper Rifat Ulucinar 11a1e20d1d
Bump Go version to v1.20 in go.mod
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-20 20:21:46 +03:00
lsviben 0767c52b32
support granular management policies
Signed-off-by: lsviben <sviben.lovro@gmail.com>
2023-07-20 13:26:19 +02:00
dverveiko b70acc1bcb
Merge pull request #227 from dverveiko/issue-727
Add design-doc-provider-identity-based-auth.md to docs
2023-07-14 13:08:22 +03:00
Verveiko Denys 0caa002288 Added image & Fix links 2023-07-14 11:20:01 +03:00
Piotr Zaniewski a57aed6bb3
Merge pull request #229 from upbound/convert-to-component
Fast Follow: Converting upjet to component to include CI visibility
2023-07-10 17:46:42 +03:00
Piotr Zaniewski 95002d631a
Converting upjet to component to include CI visibility 2023-07-10 15:18:15 +02:00
Piotr Zaniewski a02ad96862
Merge pull request #228 from upbound/backstage-catalog
Adding backstage catalog file
2023-07-10 15:58:54 +03:00
Piotr Zaniewski 801af6fd02
Adding backstage catalog file 2023-07-10 14:11:43 +02:00
Alper Rifat Ulucinar 90af5e5461
Merge pull request #217 from ulucinar/categorical-converters
Add support for categorical converters
2023-07-07 18:11:27 +03:00
Alper Rifat Ulucinar 5a09c262ad
Add support for executor diagnostics in step context
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-06 22:20:16 +03:00
Alper Rifat Ulucinar 424f2cbb70
Add migration plan executor callback
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-06 19:22:01 +03:00
Alper Rifat Ulucinar 2c01279835
Add migration.PlanExecutor
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-06 16:10:08 +03:00
Alper Rifat Ulucinar 4df4a033e2
Rename executor.go as fork_executor.go
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-05 19:32:07 +03:00
Alper Rifat Ulucinar 2725c224da
Add FileSystemTargetOption to specify the parent directory for output
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-05 19:22:35 +03:00
Sergen Yalçın 3da89b8ae3
Use substep for Delete step
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-07-05 16:19:08 +02:00
Sergen Yalçın 79df430e2b
Move adding operation of manual execution fields to common place
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-07-05 14:46:55 +02:00
Alper Rifat Ulucinar 04000a2ccc
Fix Provider.pkg conversion step
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-05 15:25:35 +03:00
Sergen Yalçın b939e761dd
Fix order of steps
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-07-05 13:09:54 +02:00
Alper Rifat Ulucinar 2303f9fb48
Fix conditional Managed Resource orphaning and de-orphaning steps
- Do not use dynamic Client for categorical retrieval with the Kubernetes source

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-05 13:05:37 +03:00
Verveiko Denys 77c7d6cd4c Add design-doc-provider-identity-based-auth.md to docs 2023-07-05 10:56:09 +03:00
Sergen Yalçın 95d7b77a3e
Fix ordering issue
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-07-04 22:22:58 +02:00
Sergen Yalçın c4632ba83a
Change type of edit-conf-meta step to Exec
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-07-04 21:43:23 +02:00
Alper Rifat Ulucinar 88dc9b6ff5
Integration fixes
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-07-04 17:15:22 +03:00
Sergen Yalçın 9415315d96
Add manualExecution fields to steps
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-06-23 12:09:35 +02:00
Sergen Yalçın 14dfa1ac10
Fix linter issues
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-06-22 10:53:39 +02:00
Sergen Yalçın 224d981dcc
Add exec steps
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-06-21 22:53:48 +02:00
Alper Rifat Ulucinar f3397b875e
Reset multi-source backend index on Source.Reset call
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-21 22:04:18 +03:00
Alper Rifat Ulucinar acabc4d818
Add support for PlanGenerators acting on multiple migration Sources
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-21 16:00:21 +03:00
Alper Rifat Ulucinar d269972289
Add migration plan steps related to categorical converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-20 21:54:42 +03:00
Alper Rifat Ulucinar 3d98a47ae5
Add support for categorical converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-14 17:39:05 +03:00
Sergen Yalçın 018e6404ba
Merge pull request #216 from sergenyalcin/remove-migrator-docs
Remove family provider migrator docs
2023-06-12 17:23:39 +03:00
Sergen Yalçın 93ae363d60
Remove family provider migrator docs because they moved to the new repo
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-06-12 14:48:49 +03:00
Alper Rifat Ulucinar aaafdd4057
Merge pull request #212 from ulucinar/lock-converters
Add support for category-based manifest pre-processors
2023-06-09 14:02:42 +03:00
Alper Rifat Ulucinar e69a782a5c
Do not panic if a Configuration.meta emitted by a migration.Source is not converted
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-09 02:05:07 +03:00
Alper Rifat Ulucinar 5484a02c26
Add support for fetching all resources of a specific category in migration.KubernetesSource
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-09 01:20:04 +03:00
Alper Rifat Ulucinar 57978a6822
Use meta.RESTMapper to convert from GVKs to GVRs in migration.KubernetesSource
- Use the same cache dir as kubectl for migration.KubernetesSource's
  disk cached discovery client.
- Remove all usages of meta.UnsafeGuessKindToResource.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-08 23:21:37 +03:00
Alper Rifat Ulucinar d1f5f70ffe
Add support for Lock.pkg converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-08 16:50:25 +03:00
Alper Rifat Ulucinar 52bb8deb93
Add support for category-based manifest pre-processors
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-06-08 16:50:25 +03:00
Sergen Yalçın 48477c8df0
Merge pull request #200 from sergenyalcin/migration-discovery
Switch to discovery client for managed resources
2023-06-08 16:39:30 +03:00
Sergen Yalçın 5f2dad2489
Merge pull request #209 from ulucinar/configuration-migration
Add support for converting Crossplane Configurations
2023-06-01 13:51:19 +03:00
Sergen Yalçın f79f70f490
Merge pull request #210 from ezgidemirel/sp-mig-plan
Add an example Smaller Provider migration plan
2023-05-29 15:42:01 +03:00
ezgidemirel 9e93ffdb66
introduce Exec step type and implement ForkExecutor
Signed-off-by: ezgidemirel <ezgidemirel91@gmail.com>
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-05-29 15:37:03 +03:00
Alper Rifat Ulucinar aaf1c161d1
Add support for Configuration.pkg converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-26 22:22:05 +03:00
Alper Rifat Ulucinar b8634e82c1
Add support for Provider.pkg converters
- Generate Configuration migration steps named:
  - new-ssop: Installs new SSOPs that will replace a monolithic provider package
  - edit-monolithic-provider: Sets spec.SkipDependencyResolution:true for a monolithic provider
  - delete-monolithic-provider: Deletes a monolithic provider package
  - activate-ssop: Activates the new SSOPs installed

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-26 17:26:06 +03:00
ezgidemirel 6633f45f9b
Add an example Smaller Provider migration plan
Signed-off-by: ezgidemirel <ezgidemirel91@gmail.com>
2023-05-26 13:25:37 +03:00
ezgidemirel 2920aa2040
Merge pull request #206 from ezgidemirel/merge-guides
Merge two SP migration docs into a single file
2023-05-25 15:44:26 +03:00
ezgidemirel c037acf2d3
Merge two SP migration docs into a single file
Signed-off-by: ezgidemirel <ezgidemirel91@gmail.com>
2023-05-25 10:22:30 +03:00
Alper Rifat Ulucinar 3b2e2cc0f1
Generate "edit-configurations" migration patch step
- Do not generate empty steps in the migration plan

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-25 02:32:24 +03:00
Alper Rifat Ulucinar e506364eed
Add support for converting Crossplane Configurations
- Add the migration.Executor interface

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-24 14:47:19 +03:00
Yury Tsarev 4bd272164e
Merge pull request #207 from ytsarev/generate-from-compositions
Extend SSOP migration script and process with the local source
2023-05-23 13:51:27 +01:00
Yury Tsarev d578b1f493
Add offline manifest generation clarification in script and the doc
Signed-off-by: Yury Tsarev <yury@upbound.io>
2023-05-23 14:39:16 +02:00
Fatih Türken 7eb5f5adb6
Merge pull request #208 from turkenf/update-guide
Remove the command related to the sample configuration directory
2023-05-23 13:38:24 +03:00
Fatih Türken e2da58cc28 Remove the command related to the sample configuration directory 2023-05-23 13:18:25 +03:00
Fatih Türken 71a052abaa
Merge pull request #193 from turkenf/improve-new-provider-generation-guide
Incorporate Feedback on the New Provider Generation Guide
2023-05-22 13:00:36 +03:00
Fatih Türken 90ce879a01 Incorporate Feedback on the New Provider Generation Guide 2023-05-22 12:51:48 +03:00
Yury Tsarev 3a0cb58323
Comply with shellcheck
* Optimize grepping
* Skip checks in places where shellcheck can't figure out dynamic `eval`
  and provides false negatives

Signed-off-by: Yury Tsarev <yury@upbound.io>
2023-05-18 12:59:25 +02:00
Yury Tsarev bff10d5d0c
Extend migration script and process with the local source
* Extend `generate-manifests.sh` to generate SSOP installation manifests
out of local Configuration files/Compositions
* Document it by extending existing migration step

Signed-off-by: Yury Tsarev <yury@upbound.io>
2023-05-18 12:50:05 +02:00
ezgidemirel 998ed606be
Merge pull request #205 from ezgidemirel/move-to-scripts
Move inline bash code to separate files to prevent copy/paste errors
2023-05-17 10:57:48 +03:00
ezgidemirel b265566cd5
Move inline bash code to separate files to prevent copy/paste errors
Signed-off-by: ezgidemirel <ezgidemirel91@gmail.com>
2023-05-17 10:45:16 +03:00
Hasan Turken 13368e29cf
Merge pull request #204 from turkenh/migration-improvements
Fixes and improvements in smaller provider migration docs
2023-05-17 10:28:51 +03:00
Hasan Turken 16053473d2
Fixes and improvements in smaller provider migration docs
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-05-17 10:01:05 +03:00
Jean du Plessis 9deb6c331c
Merge pull request #203 from ezgidemirel/smaller-provider-doc 2023-05-16 18:37:32 +02:00
ezgidemirel 4171c3870a
Add Smaller Provider Migration guides
Signed-off-by: ezgidemirel <ezgidemirel91@gmail.com>
2023-05-16 19:35:19 +03:00
Sergen Yalçın 27f7e4b392
Switch to discovery client for managed resources
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-05-16 16:48:23 +03:00
Alper Rifat Ulucinar abb95b1093
Merge pull request #173 from ulucinar/fix-rawext
Add migration.PlanGeneratorOption to configure a PlanGenerator
2023-05-16 13:14:25 +03:00
Alper Rifat Ulucinar eac99ca0f8
Add support for configuring the migration.PlanGenerator to skip processing of specific GVKs
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-16 13:06:37 +03:00
Alper Rifat Ulucinar 972fa10ce6
migration.PlanGenerator does not error by default in case of patch schema checking errors
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-16 13:06:05 +03:00
Alper Rifat Ulucinar f5920873e0
Merge pull request #153 from ulucinar/fix-e87
Add Patch migration step type
2023-05-16 13:01:15 +03:00
Alper Rifat Ulucinar 645d7260d8
Merge pull request #195 from ulucinar/controller-map
Deprecate config.BasePackages.Controller in favor of ControllerMap
2023-05-02 18:47:51 +03:00
Alper Rifat Ulucinar 56c3caac14
Deprecate config.BasePackages.Controller in favor of ControllerMap
- ControllerMap allows the subpackage name to be specified for the
  controller.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-05-02 18:42:37 +03:00
Alper Rifat Ulucinar 1d0b7b0c08
Merge pull request #194 from ulucinar/families
Generate per-service providers
2023-04-27 16:18:41 +03:00
Alper Rifat Ulucinar 37babb8f78
Rename pipeline.SetupGenerator as ProviderGenerator
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-27 16:15:47 +03:00
Alper Rifat Ulucinar 66268861d0
Rename base package as config
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-26 00:30:18 +03:00
Alper Rifat Ulucinar d71b091db1
Generate provider subpackages
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-25 17:22:38 +03:00
Alper Rifat Ulucinar 549776a017
Merge pull request #188 from ezgidemirel/ess-plugin
Consume ESS plugin changes
2023-04-24 18:39:21 +03:00
ezgidemirel a8e4272e48
Update pkg/pipeline/templates/controller.go.tmpl
Co-authored-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-18 14:57:00 +03:00
Alper Rifat Ulucinar 1268a48eba
Merge pull request #190 from ulucinar/template-funcs
Add ToLower & ToUpper template functions for config.TemplatedStringAsIdentifier
2023-04-18 13:25:26 +03:00
Alper Rifat Ulucinar 6e2b097cae
Add ToLower & ToUpper template functions for config.TemplatedStringAsIdentifier
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-18 11:17:55 +03:00
Alper Rifat Ulucinar 92d9d638a6
Fix imports
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-17 17:53:50 +03:00
ezgidemirel 45c4477449
Consume ESS plugin changes
Signed-off-by: ezgidemirel <ezgidemirel91@gmail.com>
2023-04-17 14:16:31 +03:00
Jean du Plessis 0ccac1f1db
Merge pull request #191 from turkenf/improve-sizing-guide 2023-04-14 19:10:08 +02:00
Fatih Türken 4e2ebc81d1 Add an example ControllerConfig to sizing guide 2023-04-14 19:03:06 +03:00
Hasan Turken 1c48474d6f
Merge pull request #187 from turkenh/doc-add-support-for-oo
Guide for adding Observe Only Support
2023-04-13 17:43:57 +03:00
Hasan Turken dc3d1d2d4d
Resolve comments in Observe Only support guide
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-13 17:14:45 +03:00
Hasan Turken 01e5c1fafb
Merge pull request #186 from turkenh/no-tags-in-spec
Do not add tags to spec when ObserveOnly
2023-04-13 15:45:12 +03:00
Hasan Turken fed43b6fa8
Add docs for adding OO support
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-13 14:14:45 +03:00
Hasan Turken b32610ffd1
Do not add tags to spec when ObserveOnly
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-13 13:18:59 +03:00
Hasan Turken 4a46ee68a2
Merge pull request #168 from turkenh/observe-with-import
Observe with Terraform Import when Observe Only
2023-04-13 11:14:33 +03:00
Hasan Turken 56bd8e0b42
Merge pull request #166 from turkenh/generate-cel-rules
Generate CEL validation rules not to enforce required fields when ObserveOnly
2023-04-11 22:45:57 +03:00
Hasan Turken 7605bf6258
Resolve comments - add todo for nested identifiers
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-11 22:41:46 +03:00
Hasan Turken d57e855c15
Mark param as identifier in ParameterAsIdentifier
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-11 22:41:46 +03:00
Hasan Turken a93d30046f
Generate CEL rules for ObserveOnly
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-11 22:41:44 +03:00
Hasan Turken 0132b237af
Merge pull request #164 from turkenh/o_o-at-provider
Include full state under status.atProvider
2023-04-11 22:37:36 +03:00
Hasan Turken 462dd33a1c
Add provider configuration for features package for backward compatibility
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-11 17:00:12 +03:00
Hasan Turken 4aef47a24c
Refactor observation with import and ignore not found
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-11 16:18:45 +03:00
Hasan Turken 83266ece2a
Generate controller with feature flag
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-10 14:28:36 +03:00
Hasan Turken bf400acec4
Observe with terraform import when Observe Only
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-10 14:17:43 +03:00
Hasan Turken c2aa5f89ed
Bump crossplane runtime to latest with Observe Only support
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-10 14:15:07 +03:00
Hasan Turken e376ed83dd
Add comment for previously added observation types
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-10 14:11:37 +03:00
Hasan Turken a0306ce5fd
Do not add fields with no tag to status
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-10 14:11:37 +03:00
Hasan Turken ce46d387cb
Include full state under status.atProvider
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2023-04-10 14:11:37 +03:00
Alper Rifat Ulucinar b89baca4ae
Merge pull request #184 from ulucinar/fix-aws-646
Add config.Resource.Path to support configuring plural names of generated resources
2023-04-03 22:37:42 +03:00
Alper Rifat Ulucinar 337b34bc36
Add config.Resource.Path to support configuring plural names of generated resources
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-04-03 18:05:04 +03:00
Jean du Plessis a91d97030c
Merge pull request #182 from sergenyalcin/sizing-guide 2023-03-31 19:13:51 +02:00
Jean du Plessis 2937bee203
Update docs/sizing-guide.md 2023-03-31 19:13:37 +02:00
Sergen Yalçın 78a4573c9a
Add a sizing guide for official providers
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-03-31 19:53:39 +03:00
Jean du Plessis 692c92d3b9
Update feature_request.md 2023-03-31 18:40:23 +02:00
Jean du Plessis 95faebb34b
Update bug_report.md 2023-03-31 18:40:08 +02:00
Alper Rifat Ulucinar 009d7e2038
Merge pull request #181 from ulucinar/monitor-md
Add an introductory Upjet-based provider monitoring guide
2023-03-31 19:01:20 +03:00
Alper Rifat Ulucinar d017bbe074
Add an introductory Upjet-based provider monitoring guide
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-31 18:08:07 +03:00
Alper Rifat Ulucinar 05c3d628e7
Merge pull request #178 from ulucinar/scheduler
Add terraform.ProviderScheduler
2023-03-27 18:12:45 +03:00
Alper Rifat Ulucinar 76d344cacb
Rename ttlBudget as ttlMargin
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-27 18:10:56 +03:00
Alper Rifat Ulucinar 616ea71453
Call InUse.Increment from reconciliation goroutine
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-27 18:10:43 +03:00
Alper Rifat Ulucinar 967a3a5a8f
Move scheduler from WorkspaceStore to ExternalClient
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-27 18:09:56 +03:00
Alper Rifat Ulucinar 68c9112dce
Add terraform.ProviderScheduler that manages the lifecycles of ProviderRunners
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-22 15:19:46 +03:00
Alper Rifat Ulucinar 33f3576c3a
Add Registry.RegisterConversionFunctions back for backward-compatibility
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-12 01:44:12 +03:00
Alper Rifat Ulucinar ca7be7d0d5
Decompose migration.Converter into migration.ResourceConverter & migration.ComposedTemplateConverter
for decoupling and modularizing converters.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-12 00:50:26 +03:00
Alper Rifat Ulucinar 27663cec05
ComposedTemplate converters no longer receive PatchSets as
PatchSet conversion is handled by PatchSet converters with global context

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-11 23:44:45 +03:00
Alper Rifat Ulucinar 4620fb03f8
Add Composition-wide migration PatchSet converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-10 18:49:54 +03:00
Alper Rifat Ulucinar 2e6b4739ee
Add Patch migration step type
- Convert migration plan steps
  pause-managed, pause-composites, edit-composites, edit-claims,
  deletion-policy-orphan, start-managed, start-composites
  to this Patch type

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-10 18:49:54 +03:00
Alper Rifat Ulucinar dffb7c0c0f
Include references to patch sets in the set of patches for migration targets if they conform to the target's schema
- Remove patches from targets if the patch target/source field does not exist or is of a different type in the target
- Include patches in targets if the patch target/source field exists and is of the same type in the target

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-10 18:49:54 +03:00
Alper Rifat Ulucinar acc1fd691d
Merge pull request #172 from ulucinar/name-from-path
[scraper]: Add support for deducing resource name from file path
2023-03-09 23:06:54 +03:00
Alper Rifat Ulucinar dc98682e7e
[scraper]: Add support for deducing resource name from file path
if scraper cannot extract it from the markdown file.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-09 21:38:26 +03:00
Alper Rifat Ulucinar 5377e5db79
Merge pull request #170 from ulucinar/fix-167
Add upjet runtime Prometheus metrics
2023-03-08 13:32:51 +03:00
Alper Rifat Ulucinar 7a3f20a51d
Add upjet runtime Prometheus metrics:
- upjet_terraform_cli_duration: Reports statistics, in seconds, on how long it takes a Terraform CLI invocation to complete
- upjet_terraform_active_cli_invocations: The number of active (running) Terraform CLI invocations
- upjet_terraform_running_processes: The number of running Terraform CLI and Terraform provider processes
- upjet_resource_ttr: Measures, in seconds, the time-to-readiness for managed resources
- terraform.Operation.MarkStart now atomically checks for any previous ongoing operation
  before starting a new one
- terraform.Operation.{Start,End}Time no longer return pointers that could potentially be used
  to modify the shared state outside of critical sections.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-03-07 06:10:11 +03:00
Jean du Plessis 4a422c693c
Fix CODEOWNERS format 2023-03-03 16:29:12 +02:00
Fatih Türken 03a61f17c0
Merge pull request #169 from turkenf/add-owners
Add `OWNERS.md` and `CODEOWNERS`
2023-03-03 10:52:34 +03:00
Jean du Plessis cce0edbf6c
Merge pull request #165 from Brocaneli/update-doc-2317573 2023-03-03 08:53:43 +02:00
Marcus Brocaneli 550d6d9ed7 docs: emphasize func usage image on new resource doc
Signed-off-by: Marcus Brocaneli <marcusbrocaneli@gmail.com>
2023-03-02 17:54:20 -03:00
Fatih Türken f9ac7e3e63 Add OWNERS.md and CODEOWNERS 2023-03-02 19:48:44 +03:00
Alper Rifat Ulucinar 615535f510
Merge pull request #157 from ulucinar/fix-e1
Bump Github workflow dependencies
2023-02-08 13:33:48 +03:00
Alper Rifat Ulucinar c9d94eebbd
Bump Github workflow dependencies
- Bump fkirc/skip-duplicate-actions Github action to v5.3.0
- Remove the deprecated set-output command usage
- Bump actions/cache Github action to v3

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-08 10:58:00 +03:00
Alper Rifat Ulucinar b1ed9245d0
Merge pull request #156 from ulucinar/fix-154
Filter failed "terraform init" output for sensitive information
2023-02-07 13:53:59 +03:00
Alper Rifat Ulucinar 4f63299704
Filter failed "terraform init" output for sensitive information
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-07 13:50:02 +03:00
Alper Rifat Ulucinar aeb5f15054
Merge pull request #148 from ulucinar/migrate-patchsets
Rename migration.Converter.ComposedTemplates as Composition
2023-02-02 15:14:07 +03:00
Alper Rifat Ulucinar 78a6825a29
Include references to patch sets in the set of patches for migration targets if they conform to the target's schema
- Remove patches from targets if the patch target/source field does not exist or is of a different type in the target
- Include patches in targets if the patch target/source field exists and is of the same type in the target

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:57:32 +03:00
Alper Rifat Ulucinar 5f49ddeb28
Remove nil values from unstructured migration source objects
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:23:18 +03:00
Alper Rifat Ulucinar b92bfe1731
Delete all patches on migration target composed templates if source and target kinds do not match
- Generate unique names for target composed templates

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:23:18 +03:00
Alper Rifat Ulucinar 2613003083
Assert non-empty name/generateName on converted resources by resource migration converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:23:18 +03:00
Alper Rifat Ulucinar 89f9fa643b
Assert non-empty GVK on converted resources by migration converters
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:23:17 +03:00
Alper Rifat Ulucinar 7b6f6dbc15
Rename migration.Converter.Resources to Resource
- This method is called on a single migration source MR
  to have it converted/upgraded. So the semantics is
  to convert a resource.

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:23:17 +03:00
Alper Rifat Ulucinar c6d89e097f
Rename migration.Converter.ComposedTemplates as Composition
- And allow conversions on a Composite's spec.patchSets

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-02-02 14:23:17 +03:00
Alper Rifat Ulucinar 531a9950f9
Merge pull request #149 from ulucinar/no-ext-name-in-template
Add a test-case for config.GetExternalNameFromTemplated where the external-name template variable is not available
2023-01-23 11:26:09 +03:00
Sergen Yalçın 5d0efccaa2
Merge pull request #150 from sergenyalcin/provider-upgrade
Add check for native provider upgrade
2023-01-22 21:49:08 +03:00
Sergen Yalçın 1dc7e14aae
Use struct for mainTF serialization
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-01-20 14:07:13 +03:00
Sergen Yalçın d797d51411
Add native provider upgrade support
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2023-01-19 16:36:30 +03:00
Alper Rifat Ulucinar 1e90abeb1f
Add a test-case for config.GetExternalNameFromTemplated where the external-name template variable is not available
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-01-19 16:24:25 +03:00
Alper Rifat Ulucinar 84e87589eb
Post release commit after v0.8.0
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2023-01-13 14:34:37 +03:00
Alper Rifat Ulucinar d381e878de
Merge pull request #146 from ulucinar/local-scheme
Use a local runtime.Scheme with the migration.Registry
2023-01-03 12:46:07 +03:00
Sergen Yalçın 5ee2f20cb3
Fix go.mod
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2022-12-27 13:59:11 +03:00
Alper Rifat Ulucinar c3814cc93d
Do not use runtime.DefaultUnstructuredConverter.FromUnstructured on v1.Compositions
- FromUnstructured may fail on v1.Compositions due to:
  https://github.com/kubernetes-sigs/structured-merge-diff/issues/230

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-23 18:51:39 +03:00
Alper Rifat Ulucinar 2dc9637a15
Use meta.UnsafeGuessKindToResource instead of a similar custom implementation in Kubernetes migration source
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-15 15:19:01 +03:00
Alper Rifat Ulucinar 6c4945f485
Kubernetes migration source now sets resource category metadata
- Plan generator now versions the plans it generates

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-15 15:04:16 +03:00
Alper Rifat Ulucinar 592684a474
Use a local runtime.Scheme with the migration.Registry
- Mkdir all folders before putting a file with the FileSystem target

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-15 02:03:20 +03:00
Sergen Yalçın 15411ad488
Merge pull request #124 from sergenyalcin/source-target
Migration source and target implementations
2022-12-14 15:37:39 +03:00
Sergen Yalçın 42eac94d3a
Remove Patch function
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2022-12-14 15:16:16 +03:00
Sergen Yalçın 6c39f7947b
Address review comments
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2022-12-14 13:24:05 +03:00
Sergen Yalçın 1e4d385285
Migration source and target implementations
Signed-off-by: Sergen Yalçın <yalcinsergen97@gmail.com>
2022-12-14 13:23:40 +03:00
Jean du Plessis 52337e628d
Merge pull request #119 from ulucinar/wearemoving
Fixes https://github.com/upbound/team-extensions/issues/29
2022-12-14 09:12:29 +02:00
Alper Rifat Ulucinar 81c9dbeb3b
Add migration tests framework and migration package tests
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-14 01:32:03 +03:00
Alper Rifat Ulucinar 4b0a6748ba
Add plan generator
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-12 19:20:52 +03:00
Alper Rifat Ulucinar ac0125a643
Add migration.{Converter,Source,Target} interfaces
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-12 19:20:22 +03:00
Alper Rifat Ulucinar 02307da6ff
Merge pull request #145 from ulucinar/fix-e-64
Add a "--extensions" command-line option to the metadata scraper
2022-12-08 13:21:28 +03:00
Jean du Plessis 59bb614909
Merge pull request #141 from upbound/jeanduplessis-patch-1 2022-12-08 11:00:39 +02:00
Jean du Plessis 959883aa19
Merge pull request #144 from tomkingchen/fix-dockerfile-link 2022-12-08 06:57:09 +02:00
Alper Rifat Ulucinar 515bd323cf
Add a "--extensions" command-line option to the metadata scraper
- Improve the error given when no markdown nodes match the prelude conditions
- Make page_title in preludes optional

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-12-07 23:50:38 +03:00
Tom Chen 821ac490d4
Update dockerfile link with the correct code section 2022-12-07 10:23:57 +11:00
Tom Chen 2ba611d001
Fix broken link for Dockerfile code example 2022-12-07 10:05:40 +11:00
Jean du Plessis 2bd418a552
Merge pull request #133 from ulucinar/fix-832 2022-11-25 18:31:25 +02:00
Alper Rifat Ulucinar e36e124fa8
Mention bootstrapping a new provider from scratch in
the Terrajet-to-Upjet migration guide

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-11-25 18:19:16 +03:00
Alper Rifat Ulucinar 42be93d2fc
Add migration steps to enable new Upjet features
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-11-25 15:58:44 +03:00
Alper Rifat Ulucinar 6f60a4e662
Add terrajet-to-upjet migration guide
- Discuss the Go module migration from github.com/crossplane/terrajet
  to github.com/upbound/upjet

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-11-25 15:58:43 +03:00
Jean du Plessis 3d10ff1fa7
Delete OWNERS.md
This file is not relevant with this repo being under the `/upbound/` org.
2022-11-21 14:26:22 +02:00
Hasan Turken 606a1db65f
Merge pull request #138 from turkenh/secretmap
Handle sensitive input maps via SecretReference
2022-11-15 10:54:53 +03:00
Hasan Turken 06e26cb020
Do not fail if referenced secret is missing
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2022-11-12 00:13:26 +03:00
Hasan Turken 01e3a3fb60
Get sensitive parameters from secretReference
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2022-11-11 23:55:30 +03:00
Hasan Turken be0307ffe9
Use secretReference for sensitive input maps
Signed-off-by: Hasan Turken <turkenh@gmail.com>
2022-11-11 19:14:32 +03:00
Jean du Plessis 6de200a3a8
Merge pull request #137 from ulucinar/fix-867 2022-11-11 14:31:28 +02:00
Alper Rifat Ulucinar df1b435adc
Consolidate condition checks in the Terraform provider schema loop
Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-11-11 12:20:00 +03:00
Alper Rifat Ulucinar 17aa3b94b8
Add config.Provider.GetSkippedResourceNames that returns a list of Terraform resource names
that are available in the Terraform schema but not generated

Signed-off-by: Alper Rifat Ulucinar <ulucinar@users.noreply.github.com>
2022-11-10 17:48:31 +03:00
muvaffak d9f4624a66
Merge pull request #123 from sboschman/patch-1
docs: add preferred provider repository naming
2022-11-07 21:50:40 +03:00
Sverre Boschman 9bb4dba1f1
docs: add preferred provider repository naming 2022-10-24 15:34:50 +02:00
Sergen Yalçın c82119f5ef
Merge pull request #117 from turkenf/fix-new-resources-doc
Fixes in add-new-resource-short.md doc
2022-10-24 14:17:21 +03:00
Fatih Türken 33c8a8ab55 Fix the issue link 2022-10-24 12:20:52 +03:00
Fatih Türken e4bc412e44 Update config method names in add-new-resource-short.md 2022-10-23 17:02:09 +03:00
Fatih Türken eecaa2b832 Fix broken link in add-new-resource-short.md 2022-10-23 16:13:02 +03:00
muvaffak f2c9c80527
Merge pull request #115 from muvaf/upd-new-provider
Update new provider documentation
2022-10-22 13:36:18 +03:00
Muvaffak Onus fd6485cf39 github: remove the no-op e2e-tests job
Signed-off-by: Muvaffak Onus <me@muvaf.com>
2022-10-22 13:30:53 +03:00
Muvaffak Onus 045dccab87 docs: small fixes
Signed-off-by: Muvaffak Onus <me@muvaf.com>
2022-10-22 13:27:14 +03:00
Muvaffak Onus 2a20f45320 config.provider: use upbound.io postfix as default for root group
Signed-off-by: Muvaffak Onus <me@muvaf.com>
2022-10-20 14:11:18 +03:00
Muvaffak Onus 785f496462 docs.new provider: add native provider path instructions
Signed-off-by: Muvaffak Onus <me@muvaf.com>
2022-10-20 13:01:58 +03:00
Muvaffak Onus d7d94285e7 Update new provider documentation
Signed-off-by: Muvaffak Onus <me@muvaf.com>
2022-10-20 12:51:34 +03:00
muvaffak 7e84c638a8
Merge pull request #112 from fhopfensperger/poll_interval
Add PollInterval to controller setup functions
2022-10-17 20:54:05 +03:00
Florian Hopfensperger 53c12a4b83 Add PollInterval to controller setup functions
Signed-off-by: Florian Hopfensperger <florian.hopfensperger@allianz.de>
2022-10-17 14:58:33 +02:00
Muvaffak Onus aca6159f02 readme: update contact info
Signed-off-by: Muvaffak Onus <me@muvaf.com>
2022-10-17 11:32:57 +03:00
219 changed files with 24808 additions and 4655 deletions

.editorconfig Normal file

@@ -0,0 +1,11 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
[*]
charset = utf-8
insert_final_newline = true
end_of_line = lf
indent_style = space
indent_size = 2
max_line_length = 80

@ -17,7 +17,6 @@ Please let us know what behaviour you expected and how Upjet diverged from
that behaviour.
-->
### How can we reproduce it?
<!--
Help us to reproduce your bug as succinctly and precisely as possible. Artifacts

@ -0,0 +1,3 @@
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC0-1.0

@ -0,0 +1,3 @@
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC0-1.0

@ -1,27 +1,28 @@
<!--
Thank you for helping to improve Upjet!
Please read through https://git.io/fj2m9 if this is your first time opening a
Upjet pull request. Find us in https://slack.crossplane.io/messages/dev if
you need any help contributing.
Please read through Upjet's contribution process if this is your first time
opening an Upjet pull request. Find us in
https://crossplane.slack.com/archives/C05T19TB729 if you need any help
contributing.
-->
### Description of your changes
<!--
Briefly describe what this pull request does. Be sure to direct your reviewers'
attention to anything that needs special consideration.
Briefly describe what this pull request does. Be sure to direct your
reviewers' attention to anything that needs special consideration.
We love pull requests that resolve an open Upjet issue. If yours does, you
can uncomment the below line to indicate which issue your PR fixes, for example
"Fixes #500":
-->
Fixes #
I have:
- [ ] Read and followed Crossplane's [contribution process].
- [ ] Read and followed Upjet's [contribution process].
- [ ] Run `make reviewable` to ensure this PR is ready for review.
- [ ] Added `backport release-x.y` labels to auto-backport this PR if necessary.
@ -33,4 +34,4 @@ needs to tested and shown to be correct. Briefly describe the testing that has
already been done or which is planned for this change.
-->
[contribution process]: https://git.io/fj2m9
[contribution process]: https://github.com/crossplane/upjet/blob/main/CONTRIBUTING.md

@ -0,0 +1,3 @@
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC0-1.0

4
.github/stale.yml vendored
@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale

@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
name: Backport
on:
@ -18,16 +22,16 @@ jobs:
# The main gotchas with this action are that it _only_ supports merge commits,
# and that PRs _must_ be labelled before they're merged to trigger a backport.
open-pr:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
if: github.event.pull_request.merged
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@v0.0.4
uses: zeebe-io/backport-action@be567af183754f6a5d831ae90f648954763f17f5 # v3.1.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
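The action references in the workflow diffs above are pinned to full commit SHAs instead of mutable tags, with a trailing comment recording the tag the SHA was resolved from. The resulting step shape looks like this (taken from the checkout step in the diff):

```yaml
steps:
  - name: Checkout
    # Pinning to a commit SHA prevents a moved or retagged release from
    # silently changing the action's code; the comment documents the tag.
    uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
    with:
      fetch-depth: 0
```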

@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
name: CI
on:
@ -10,9 +14,9 @@ on:
env:
# Common versions
GO_VERSION: '1.19'
GOLANGCI_VERSION: 'v1.50.0'
DOCKER_BUILDX_VERSION: 'v0.8.2'
GO_VERSION: "1.23"
GOLANGCI_VERSION: "v1.64.4"
DOCKER_BUILDX_VERSION: "v0.18.0"
# Common users. We can't run a step 'if secrets.AWS_USR != ""' but we can run
# a step 'if env.AWS_USR' != ""', so we copy these to succinctly test whether
@ -22,53 +26,57 @@ env:
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v2.1.0
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.png", "**.jpg"]'
do_not_skip: '["workflow_dispatch", "schedule", "push"]'
concurrent_skipping: false
lint:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Cleanup Disk
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
large-packages: false
swap-storage: false
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version: ${{ env.GO_VERSION }}
- name: Find the Go Build Cache
id: go
run: echo "::set-output name=cache::$(go env GOCACHE)"
run: echo "cache=$(go env GOCACHE)" >> $GITHUB_OUTPUT
- name: Find the Go Build Cache
id: gomod
run: echo "::set-output name=cache::$(go env GOCACHE)"
run: echo "cache=$(go env GOCACHE)" >> $GITHUB_OUTPUT
- name: Cache the Go Build Cache
uses: actions/cache@v2
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go.outputs.cache }}
key: ${{ runner.os }}-build-lint-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-lint-
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: .work/pkg
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@ -81,42 +89,46 @@ jobs:
# this action because it leaves 'annotations' (i.e. it comments on PRs to
# point out linter violations).
- name: Lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@639cd343e1d3b897ff35927a75193d57cfcba299 # v3
with:
version: ${{ env.GOLANGCI_VERSION }}
skip-go-installation: true
check-diff:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Cleanup Disk
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
large-packages: false
swap-storage: false
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version: ${{ env.GO_VERSION }}
- name: Find the Go Build Cache
id: go-cache-paths
run: |
echo "::set-output name=go-build::$(make go.cachedir)"
echo "::set-output name=go-mod::$(make go.mod.cachedir)"
echo "go-build=$(make go.cachedir)" >> $GITHUB_OUTPUT
echo "go-mod=$(make go.mod.cachedir)" >> $GITHUB_OUTPUT
- name: Cache the Go Build Cache
uses: actions/cache@v2
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-build }}
key: ${{ runner.os }}-build-check-diff-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-check-diff-
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-mod }}
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@ -129,13 +141,18 @@ jobs:
run: make check-diff
unit-tests:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Cleanup Disk
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be # v1.3.1
with:
large-packages: false
swap-storage: false
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
@ -143,25 +160,25 @@ jobs:
run: git fetch --prune --unshallow
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@41dfa10bad2bb2ae585af6ee5bb4d7d973ad74ed # v5.1.0
with:
go-version: ${{ env.GO_VERSION }}
- name: Find the Go Build Cache
id: go-cache-paths
run: |
echo "::set-output name=go-build::$(make go.cachedir)"
echo "::set-output name=go-mod::$(make go.mod.cachedir)"
echo "go-build=$(make go.cachedir)" >> $GITHUB_OUTPUT
echo "go-mod=$(make go.mod.cachedir)" >> $GITHUB_OUTPUT
- name: Cache the Go Build Cache
uses: actions/cache@v2
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-build }}
key: ${{ runner.os }}-build-unit-tests-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-unit-tests-
- name: Cache Go Dependencies
uses: actions/cache@v2
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ steps.go-cache-paths.outputs.go-mod }}
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
@ -174,53 +191,7 @@ jobs:
run: make -j2 test
- name: Publish Unit Test Coverage
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d # v3
with:
flags: unittests
file: _output/tests/linux_amd64/coverage.txt
e2e-tests:
runs-on: ubuntu-20.04
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@v2
with:
submodules: true
- name: Fetch History
run: git fetch --prune --unshallow
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: ${{ env.GO_VERSION }}
- name: Find the Go Build Cache
id: go-cache-paths
run: |
echo "::set-output name=go-build::$(make go.cachedir)"
echo "::set-output name=go-mod::$(make go.mod.cachedir)"
- name: Cache the Go Build Cache
uses: actions/cache@v2
with:
path: ${{ steps.go-cache-paths.outputs.go-build }}
key: ${{ runner.os }}-build-e2e-tests-${{ hashFiles('**/go.sum') }}
restore-keys: ${{ runner.os }}-build-e2e-tests-
- name: Cache Go Dependencies
uses: actions/cache@v2
with:
path: ${{ steps.go-cache-paths.outputs.go-mod }}
key: ${{ runner.os }}-pkg-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-pkg-
- name: Vendor Dependencies
run: make vendor vendor.check
- name: Run E2E Tests
run: make e2e
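The CI workflow diff above also migrates the deprecated `::set-output` workflow command to writes against the `$GITHUB_OUTPUT` file. A minimal sketch of the new mechanism, using a temp file in place of the runner-provided path (the cache path fallback is illustrative):

```shell
# On a real runner GitHub exports GITHUB_OUTPUT; stand in a temp file here.
GITHUB_OUTPUT="$(mktemp)"

# New style: append key=value lines to the file instead of echoing
# "::set-output name=cache::..." to stdout.
echo "cache=$(go env GOCACHE 2>/dev/null || echo /tmp/go-build)" >> "$GITHUB_OUTPUT"

# Subsequent steps read the value back via steps.<id>.outputs.cache.
cat "$GITHUB_OUTPUT"
```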

@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
name: CodeQL
on:
@ -9,13 +13,13 @@ on:
jobs:
detect-noop:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
outputs:
noop: ${{ steps.noop.outputs.should_skip }}
steps:
- name: Detect No-op Changes
id: noop
uses: fkirc/skip-duplicate-actions@v2.1.0
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
paths_ignore: '["**.md", "**.png", "**.jpg"]'
@ -23,20 +27,20 @@ jobs:
concurrent_skipping: false
analyze:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
needs: detect-noop
if: needs.detect-noop.outputs.noop != 'true'
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
submodules: true
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
uses: github/codeql-action/init@396bb3e45325a47dd9ef434068033c6d5bb0d11a # v3.27.3
with:
languages: go
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
uses: github/codeql-action/analyze@396bb3e45325a47dd9ef434068033c6d5bb0d11a # v3.27.3

@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
name: Comment Commands
on: issue_comment
@ -10,7 +14,7 @@ jobs:
steps:
- name: Extract Command
id: command
uses: xt0rted/slash-command-action@v1
uses: xt0rted/slash-command-action@bf51f8f5f4ea3d58abc7eca58f77104182b23e88 # v2.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
command: points
@ -19,7 +23,7 @@ jobs:
allow-edits: "false"
permission-level: write
- name: Handle Command
uses: actions/github-script@v4
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
env:
POINTS: ${{ steps.command.outputs.command-arguments }}
with:
@ -65,12 +69,12 @@ jobs:
# NOTE(negz): See also backport.yml, which is the variant that triggers on PR
# merge rather than on comment.
backport:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
if: github.event.issue.pull_request && startsWith(github.event.comment.body, '/backport')
steps:
- name: Extract Command
id: command
uses: xt0rted/slash-command-action@v1
uses: xt0rted/slash-command-action@bf51f8f5f4ea3d58abc7eca58f77104182b23e88 # v2.0.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
command: backport
@ -80,13 +84,12 @@ jobs:
permission-level: write
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Open Backport PR
uses: zeebe-io/backport-action@v0.0.4
uses: zeebe-io/backport-action@be567af183754f6a5d831ae90f648954763f17f5 # v3.1.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
version: v0.0.4

@ -0,0 +1,19 @@
# SPDX-FileCopyrightText: 2022 Free Software Foundation Europe e.V. <https://fsfe.org>
#
# SPDX-License-Identifier: CC0-1.0
name: REUSE Compliance Check
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: REUSE Compliance Check
uses: fsfe/reuse-action@3ae3c6bdf1257ab19397fab11fd3312144692083 # v4.0.0
- name: REUSE SPDX SBOM
uses: fsfe/reuse-action@3ae3c6bdf1257ab19397fab11fd3312144692083 # v4.0.0
with:
args: spdx

@ -1,25 +1,29 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
name: Tag
on:
workflow_dispatch:
inputs:
version:
description: 'Release version (e.g. v0.1.0)'
description: "Release version (e.g. v0.1.0)"
required: true
message:
description: 'Tag message'
description: "Tag message"
required: true
jobs:
create-tag:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Create Tag
uses: negz/create-tag@v1
uses: negz/create-tag@39bae1e0932567a58c20dea5a1a0d18358503320 # v1
with:
version: ${{ github.event.inputs.version }}
message: ${{ github.event.inputs.message }}

4
.gitignore vendored
@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
/.cache
/.work
/_output

8
.gitmodules vendored
@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
[submodule "build"]
path = build
url = https://github.com/upbound/build
path = build
url = https://github.com/crossplane/build

@ -1,12 +1,16 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
run:
timeout: 10m
skip-files:
- "zz_generated\\..+\\.go$"
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
formats:
- format: colored-line-number
print-linter-name: true
show-stats: true
linters-settings:
errcheck:
@ -18,14 +22,10 @@ linters-settings:
# default is false: such cases aren't reported by default.
check-blank: false
# [deprecated] comma-separated list of pairs of the form pkg:regex
# the regex is used to ignore names within pkg. (default "fmt:.*").
# see https://github.com/kisielk/errcheck#the-deprecated-method for details
ignore: fmt:.*,io/ioutil:^Read.*
govet:
# report about shadowed variables
check-shadowing: false
exclude-functions:
- io/ioutil.ReadFile
- io/ioutil.ReadDir
- io/ioutil.ReadAll
revive:
# confidence for issues, default is 0.8
@ -35,19 +35,19 @@ linters-settings:
# simplify code: gofmt with `-s` option, true by default
simplify: true
goimports:
# put imports beginning with prefix after 3rd-party packages;
# it's a comma-separated list of prefixes
local-prefixes: github.com/upbound/upjet
gci:
custom-order: true
sections:
- standard
- default
- prefix(github.com/crossplane/upjet)
- blank
- dot
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
@ -62,13 +62,6 @@ linters-settings:
# tab width in spaces. Default to 1.
tab-width: 1
unused:
# treat code as a program (not a library) and report unused exported identifiers; default is false.
# XXX: if you enable this setting, unused will report a lot of false-positives in text editors:
# if it's called for subdir of a project it can't find funcs usages. All text editor integrations
# with golangci-lint call it on a directory with the changed file.
check-exported: false
unparam:
# Inspect exported functions, default is false. Set to true if no external program/library imports your code.
# XXX: if you enable this setting, unparam will report a lot of false-positives in text editors:
@ -102,28 +95,33 @@ linters-settings:
rangeValCopy:
sizeThreshold: 32
nolintlint:
require-explanation: false
require-specific: true
linters:
enable:
- megacheck
- govet
- gocyclo
- gocritic
- interfacer
- goconst
- goimports
- gci
- gofmt # We enable this as well as goimports for its simplify mode.
- gosimple
- prealloc
- revive
- staticcheck
- unconvert
- unused
- misspell
- nakedret
- nolintlint
presets:
- bugs
- unused
fast: false
issues:
# Excluding configuration per-path and per-linter
exclude-rules:
@ -174,12 +172,20 @@ issues:
- gosec
- gas
# The Azure AddToUserAgent method appends to the existing user agent string.
# It returns an error if you pass it an empty string letting you know the
# user agent did not change, making it more of a warning.
- text: \.AddToUserAgent
# Some k8s dependencies do not have JSON tags on all fields in structs.
- path: k8s.io/
linters:
- errcheck
- musttag
# This exclusion is necessary because this package relies on deprecated
# functions and fields to maintain compatibility with older versions. This
# package acts as a migration framework, supporting users coming from
# previous versions. In the future, we may extract this package into a
# separate module and decouple its dependencies from upjet.
- path: pkg/migration/
linters:
- staticcheck
text: "SA1019:"
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
@ -196,7 +202,7 @@ issues:
new: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0

25
CODEOWNERS Normal file
@ -0,0 +1,25 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: CC0-1.0
# This file controls automatic PR reviewer assignment. See the following docs:
#
# * https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
# * https://docs.github.com/en/organizations/organizing-members-into-teams/managing-code-review-settings-for-your-team
#
# The goal of this file is for most PRs to automatically and fairly have one
# maintainer set as PR reviewers. All maintainers have permission to approve
# and merge PRs. All PRs must be approved by at least one maintainer before being merged.
#
# Where possible, prefer explicitly specifying a maintainer who is a subject
# matter expert for a particular part of the codebase rather than using fallback
# owners. Fallback owners are listed at the bottom of this file.
#
# See also OWNERS.md for governance details
# Subject matter experts
pkg/migrations/* @sergenyalcin
# Fallback owners
* @ulucinar @sergenyalcin @erhancagirici @mergenci

@ -1,3 +1,9 @@
## Code of Conduct
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
Upjet is under [the Apache 2.0 license](LICENSE) with [notice](NOTICE).
SPDX-License-Identifier: CC0-1.0
-->
# Community Code of Conduct
This project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).

@ -1,3 +1,9 @@
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Contributing to Upjet
Welcome, and thank you for considering contributing to Upjet. We encourage
@ -85,7 +91,7 @@ change in Upjet, the best way to test it is to use a `replace` statement in the
`go.mod` file of the provider to use your local version as shown below.
```
replace github.com/upbound/upjet => ../upjet
replace github.com/crossplane/upjet => ../upjet
```
Once you complete your change, make sure to run `make reviewable` before opening
@ -98,13 +104,13 @@ in your provider to point to a certain commit in your branch of the provider tha
you opened a PR for.
```
replace github.com/upbound/upjet => github.com/<your user name>/upjet <hash of the last commit from your branch>
replace github.com/crossplane/upjet => github.com/<your user name>/upjet <hash of the last commit from your branch>
```
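The diff above only updates the module path in the suggested `replace` directives. For context, a provider's `go.mod` using a local upjet checkout would contain something like the following sketch (module name and version are hypothetical; the relative path assumes the provider and upjet repos are checked out side by side):

```
module github.com/example/provider-foo

go 1.23

require github.com/crossplane/upjet v1.0.0

replace github.com/crossplane/upjet => ../upjet
```

The same directive can also be added with `go mod edit -replace github.com/crossplane/upjet=../upjet`.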
[Slack]: https://crossplane.slack.com/archives/C01TRKD4623
[code of conduct]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
[code of conduct]: https://github.com/cncf/foundation/blob/main/code-of-conduct.md
[good git commit hygiene]: https://www.futurelearn.com/info/blog/telling-stories-with-your-git-history
[Developer Certificate of Origin]: https://github.com/apps/dco
[test review comments]: https://github.com/golang/go/wiki/TestComments
[docs]: docs/
[Coding Style]: https://github.com/crossplane/crossplane/blob/master/CONTRIBUTING.md#coding-style
[Coding Style]: https://github.com/crossplane/crossplane/blob/main/CONTRIBUTING.md#coding-style

210
LICENSE
@ -1,201 +1,73 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Apache License
Version 2.0, January 2004
<http://www.apache.org/licenses/>
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [YEAR] Upbound Inc. All rights reserved.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

LICENSES/Apache-2.0.txt Normal file
@@ -0,0 +1,73 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

LICENSES/CC-BY-4.0.txt Normal file
@@ -0,0 +1,156 @@
Creative Commons Attribution 4.0 International
Creative Commons Corporation (“Creative Commons”) is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an “as-is” basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors.
Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason (for example, because of any applicable exception or limitation to copyright), then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public.
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
Section 1 – Definitions.
a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
Section 2 – Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
A. reproduce and Share the Licensed Material, in whole or in part; and
B. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
3. Term. The term of this Public License is specified in Section 6(a).
4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
5. Downstream recipients.
A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
B. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this Public License.
3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.
Section 3 – License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified form), You must:
A. retain the following if it is supplied by the Licensor with the Licensed Material:
i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of warranties;
v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License.
Section 4 – Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;
b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
Section 5 Disclaimer of Warranties and Limitation of Liability.
a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.
b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.
c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
Section 6 Term and Termination.
a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
2. upon express reinstatement by the Licensor.
c. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
d. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
e. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
Section 7 Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
Section 8 Interpretation.
a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
Creative Commons may be contacted at creativecommons.org.

LICENSES/CC0-1.0.txt Normal file

@ -0,0 +1,121 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.


@ -1,11 +1,19 @@
# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
#
# SPDX-License-Identifier: Apache-2.0
# ====================================================================================
# Setup Project
PROJECT_NAME := upjet
PROJECT_REPO := github.com/upbound/$(PROJECT_NAME)
PROJECT_REPO := github.com/crossplane/$(PROJECT_NAME)
GO_PROJECT := github.com/crossplane/$(PROJECT_NAME)/v2
GOLANGCILINT_VERSION ?= 1.50.0
GO_REQUIRED_VERSION ?= 1.19
# GOLANGCILINT_VERSION is inherited from build submodule by default.
# Uncomment below if you need to override the version.
GOLANGCILINT_VERSION ?= 1.64.4
# GO_REQUIRED_VERSION ?= 1.22
PLATFORMS ?= linux_amd64 linux_arm64
# -include will silently skip missing files, which allows us
@ -14,14 +22,6 @@ PLATFORMS ?= linux_amd64 linux_arm64
# to run a target until the include commands succeeded.
-include build/makelib/common.mk
# ====================================================================================
# Setup Images
# even though this repo doesn't build images (note the no-op img.build target below),
# some of the init is needed for the cross build container, e.g. setting BUILD_REGISTRY
-include build/makelib/image.mk
img.build:
# ====================================================================================
# Setup Go
@ -53,12 +53,6 @@ fallthrough: submodules
@echo Initial setup complete. Running make again . . .
@make
# Generate a coverage report for cobertura applying exclusions on
# - generated file
cobertura:
@cat $(GO_TEST_OUTPUT)/coverage.txt | \
$(GOCOVER_COBERTURA) > $(GO_TEST_OUTPUT)/cobertura-coverage.xml
# Update the submodules, such as the common build scripts.
submodules:
@git submodule sync
@ -76,4 +70,4 @@ go.cachedir:
go.mod.cachedir:
@go env GOMODCACHE
.PHONY: cobertura reviewable submodules fallthrough go.mod.cachedir go.cachedir
.PHONY: reviewable submodules fallthrough go.mod.cachedir go.cachedir

NOTICE

@ -1,3 +1,9 @@
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC0-1.0
-->
This project is a larger work that combines with software written by third
parties, licensed under their own terms.
@ -5,32 +11,17 @@ Notably, this larger work combines with the following Terraform components,
which are licensed under the Mozilla Public License 2.0 (see
<https://www.mozilla.org/en-US/MPL/2.0/> or the individual projects listed
below).
<https://github.com/hashicorp/terraform>
<https://github.com/hashicorp/hcl>
<https://github.com/hashicorp/terraform-json>
<https://github.com/hashicorp/terraform-plugin-framework>
<https://github.com/hashicorp/terraform-plugin-go>
<https://github.com/hashicorp/terraform-plugin-sdk>
<https://github.com/hashicorp/go-getter>
<https://github.com/hashicorp/vault>
<https://github.com/hashicorp/errwrap>
<https://github.com/hashicorp/go-cleanhttp>
<https://github.com/hashicorp/go-cty>
<https://github.com/hashicorp/go-hclog>
<https://github.com/hashicorp/go-multierror>
<https://github.com/hashicorp/go-safetemp>
<https://github.com/hashicorp/go-plugin>
<https://github.com/hashicorp/go-uuid>
<https://github.com/hashicorp/go-version>
<https://github.com/hashicorp/hcl>
<https://github.com/hashicorp/logutils>
<https://github.com/hashicorp/terraform-plugin-go>
<https://github.com/hashicorp/terraform-plugin-log>
<https://github.com/hashicorp/terraform-registry-address>
<https://github.com/hashicorp/terraform-svchost>
<https://github.com/hashicorp/go-hclog>
<https://github.com/hashicorp/go-immutable-radix>
<https://github.com/hashicorp/go-plugin>
<https://github.com/hashicorp/go-retryablehttp>
<https://github.com/hashicorp/go-rootcerts>
<https://github.com/hashicorp/go-secure-stdlib>
<https://github.com/hashicorp/go-sockaddr>
<https://github.com/hashicorp/golang-lru>
<https://github.com/hashicorp/terraform-plugin-go>
<https://github.com/hashicorp/terraform-plugin-log>
<https://github.com/hashicorp/yamux>


@ -1,11 +1,21 @@
# Upjet Maintainers
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
Please see [GOVERNANCE.md](https://github.com/crossplane/crossplane/blob/master/GOVERNANCE.md) for governance guidelines and responsibilities for the
steering committee and maintainers of repositories under Crossplane organization.
SPDX-License-Identifier: CC0-1.0
-->
In alphabetical order:
# OWNERS
* Alper Rifat Uluçınar <alper@upbound.io> ([ulucinar](https://github.com/ulucinar))
* Hasan Türken <hasan@upbound.io> ([turkenh](https://github.com/turkenh))
* Muvaffak Onuş <monus@upbound.io> ([muvaf](https://github.com/muvaf))
* Sergen Yalçın <sergen@upbound.io> ([sergenyalcin](https://github.com/sergenyalcin))
This page lists all maintainers for **this** repository. Each repository in the
[Crossplane organization](https://github.com/crossplane/) will list their
repository maintainers in their own `OWNERS.md` file.
## Maintainers
* Alper Ulucinar <alper@upbound.com> ([ulucinar](https://github.com/ulucinar))
* Sergen Yalcin <sergen@upbound.com> ([sergenyalcin](https://github.com/sergenyalcin))
* Jean du Plessis <jean@upbound.com> ([jeanduplessis](https://github.com/jeanduplessis))
* Erhan Cagirici <erhan@upbound.com> ([erhancagirici](https://github.com/erhancagirici))
* Cem Mergenci <cem@upbound.com> ([mergenci](https://github.com/mergenci))
See [CODEOWNERS](./CODEOWNERS) for automatic PR assignment.


@ -1,40 +1,49 @@
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Upjet - Generate Crossplane Providers from any Terraform Provider
<div align="center">
![CI](https://github.com/upbound/upjet/workflows/CI/badge.svg) [![GitHub release](https://img.shields.io/github/release/upbound/upjet/all.svg?style=flat-square)](https://github.com/upbound/upjet/releases) [![Go Report Card](https://goreportcard.com/badge/github.com/upbound/upjet)](https://goreportcard.com/report/github.com/upbound/upjet) [![Slack](https://slack.crossplane.io/badge.svg)](https://crossplane.slack.com/archives/C01TRKD4623) [![Twitter Follow](https://img.shields.io/twitter/follow/upbound_io.svg?style=social&label=Follow)](https://twitter.com/intent/follow?screen_name=upbound_io&user_id=788180534543339520)
![CI](https://github.com/crossplane/upjet/workflows/CI/badge.svg)
[![GitHub release](https://img.shields.io/github/release/crossplane/upjet/all.svg)](https://github.com/crossplane/upjet/releases)
[![Go Report Card](https://goreportcard.com/badge/github.com/crossplane/upjet)](https://goreportcard.com/report/github.com/crossplane/upjet)
[![Contributors](https://img.shields.io/github/contributors/crossplane/upjet)](https://github.com/crossplane/upjet/graphs/contributors)
[![Slack](https://img.shields.io/badge/Slack-4A154B?logo=slack)](https://crossplane.slack.com/archives/C05T19TB729)
[![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/crossplane_io)](https://twitter.com/crossplane_io)
</div>
Upjet is a code generator framework that allows developers to build code
generation pipelines that can generate Crossplane controllers. Developers can
start building their code generation pipeline targeting specific Terraform Providers
by importing Upjet and wiring all generators together, customizing the whole
pipeline in the process.
start building their code generation pipeline targeting specific Terraform
Providers by importing Upjet and wiring all generators together, customizing the
whole pipeline in the process.
Here is some Crossplane providers built using Upjet:
Here are some Crossplane providers built using Upjet:
* [Provider AWS](https://github.com/upbound/provider-aws)
* [Provider Azure](https://github.com/upbound/provider-azure)
* [Provider GCP](https://github.com/upbound/provider-gcp)
- [upbound/provider-aws](https://github.com/upbound/provider-aws)
- [upbound/provider-azure](https://github.com/upbound/provider-azure)
- [upbound/provider-gcp](https://github.com/upbound/provider-gcp)
- [aviatrix/crossplane-provider-aviatrix](https://github.com/Aviatrix/crossplane-provider-aviatrix)
## Getting Started
You can get started by following the guides in [docs](docs/README.md) directory!
You can get started by following the guides in the [docs](docs/README.md)
directory.
## Report a Bug
For filing bugs, suggesting improvements, or requesting new features, please
open an [issue](https://github.com/upbound/upjet/issues).
open an [issue](https://github.com/crossplane/upjet/issues).
## Contact
Please use the following to reach members of the community:
* Slack: Join our [slack channel](https://slack.crossplane.io)
* Forums:
[crossplane-dev](https://groups.google.com/forum/#!forum/crossplane-dev)
* Twitter: [@crossplane_io](https://twitter.com/crossplane_io)
* Email: [info@crossplane.io](mailto:info@crossplane.io)
[#upjet](https://crossplane.slack.com/archives/C05T19TB729) channel in
[Crossplane Slack](https://slack.crossplane.io)
## Prior Art
@ -43,7 +52,7 @@ Upjet originates from the [Terrajet][terrajet] project. See the original
## Licensing
Provider AWS is under [the Apache 2.0 license](LICENSE) with [notice](NOTICE).
Upjet is under [the Apache 2.0 license](LICENSE) with [notice](NOTICE).
[terrajet-design-doc]: https://github.com/crossplane/crossplane/blob/master/design/design-doc-terrajet.md
[terrajet-design-doc]: https://github.com/crossplane/crossplane/blob/main/design/design-doc-terrajet.md
[terrajet]: https://github.com/crossplane/terrajet

build

@ -1 +1 @@
Subproject commit 8c8269dfb278dff101fae2dc6f397bd89b5f4f65
Subproject commit cc14f9cdac034e0eaaeb43479f57ee85d5490473

cmd/resolver/main.go Normal file

@ -0,0 +1,33 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"os"
"path/filepath"
"github.com/alecthomas/kingpin/v2"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/spf13/afero"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"github.com/crossplane/upjet/v2/pkg/transformers"
)
func main() {
var (
app = kingpin.New(filepath.Base(os.Args[0]), "Transformer for the generated resolvers by the crossplane-tools so that cross API-group imports are removed.").DefaultEnvars()
apiGroupSuffix = app.Flag("apiGroupSuffix", "Resource API group suffix, such as aws.upbound.io. The resource API group names are suffixed with this to get the canonical API group name.").Short('g').Required().String()
apiGroupOverride = app.Flag("apiGroupOverride", "API group overrides").Short('o').StringMap()
apiResolverPackage = app.Flag("apiResolverPackage", "The package that contains the implementation for the GetManagedResource function, such as github.com/upbound/provider-aws/internal/apis.").Short('a').Required().String()
pattern = app.Flag("pattern", "List patterns for the packages to process, such as ./...").Short('p').Default("./...").Strings()
resolverFilePattern = app.Flag("resolver", "Name of the generated resolver files to process.").Short('r').Default("zz_generated.resolvers.go").String()
ignorePackageLoadErrors = app.Flag("ignoreLoadErrors", "Ignore errors encountered while loading the packages.").Short('s').Bool()
)
kingpin.MustParse(app.Parse(os.Args[1:]))
logger := logging.NewLogrLogger(zap.New().WithName("transformer-resolver"))
r := transformers.NewResolver(afero.NewOsFs(), *apiGroupSuffix, *apiResolverPackage, *ignorePackageLoadErrors, logger, transformers.WithAPIGroupOverrides(*apiGroupOverride))
kingpin.FatalIfError(r.TransformPackages(*resolverFilePattern, *pattern...), "Failed to transform the resolver files in the specified packages.")
}


@ -1,6 +1,6 @@
/*
Copyright 2022 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package main
@ -8,9 +8,9 @@ import (
"os"
"path/filepath"
"gopkg.in/alecthomas/kingpin.v2"
"github.com/alecthomas/kingpin/v2"
"github.com/upbound/upjet/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/registry"
)
func main() {
@ -18,12 +18,14 @@ func main() {
app = kingpin.New(filepath.Base(os.Args[0]), "Terraform Registry provider metadata scraper.").DefaultEnvars()
outFile = app.Flag("out", "Provider metadata output file path").Short('o').Default("provider-metadata.yaml").OpenFile(os.O_CREATE, 0644)
providerName = app.Flag("name", "Provider name").Short('n').Required().String()
resourcePrefix = app.Flag("resource-prefix", `Terraform resource name prefix for the Terraform provider. For example, this is "google" for the google Terraform provider.`).String()
codeXPath = app.Flag("code-xpath", "Code XPath expression").Default(`//code[@class="language-terraform" or @class="language-hcl"]/text()`).String()
preludeXPath = app.Flag("prelude-xpath", "Prelude XPath expression").Default(`//text()[contains(., "description") and contains(., "subcategory")]`).String()
preludeXPath = app.Flag("prelude-xpath", "Prelude XPath expression").Default(`//text()[contains(., "description") and contains(., "page_title")]`).String()
fieldXPath = app.Flag("field-xpath", "Field documentation XPath expression").Default(`//ul/li//code[1]/text()`).String()
importXPath = app.Flag("import-xpath", "Import statements XPath expression").Default(`//code[@class="language-shell"]/text()`).String()
repoPath = app.Flag("repo", "Terraform provider repo path").Short('r').Required().ExistingDir()
debug = app.Flag("debug", "Output debug messages").Short('d').Default("false").Bool()
fileExtensions = app.Flag("extensions", "Extensions of the files to be scraped").Short('e').Default(".md", ".markdown").Strings()
)
kingpin.MustParse(app.Parse(os.Args[1:]))
@ -35,6 +37,8 @@ func main() {
PreludeXPath: *preludeXPath,
FieldDocXPath: *fieldXPath,
ImportXPath: *importXPath,
FileExtensions: *fileExtensions,
ResourcePrefix: *resourcePrefix,
}), "Failed to scrape Terraform provider metadata")
kingpin.FatalIfError(pm.Store((*outFile).Name()), "Failed to store Terraform provider metadata to file: %s", (*outFile).Name())
}


@ -1,35 +1,45 @@
# Using Upjet
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
Upjet consists of three main pieces:
* Framework to build a code generator pipeline.
* Generic reconciler implementation used by all generated `CustomResourceDefinition`s.
* A scraper to extract documentation for all generated `CustomResourceDefinition`s.
SPDX-License-Identifier: CC-BY-4.0
-->
The usual flow of development of a new provider is as following:
1. Create a provider by following the guide [here][generate-a-provider].
2. Follow the guide [here][new-v1beta1] to add a `CustomResourceDefinition` for
every resource in the given Terraform provider.
# What is Upjet?
In most cases, the two guides above would be enough for you to get up and running
with a provider.
Upjet consists of four main components:
The guides below are longer forms for when you get stuck and want a deeper
understanding:
* Description of all configuration knobs can be found [here][full-guide].
* Detailed explanation of how to use Uptest to test your resources can be found
[here][uptest-guide].
* You can find a troubleshooting guide [here][testing-instructions] that can
be useful to debug a failed test.
* References are inferred from the generated examples with a best effort manner.
Details about the process can be found [here][reference-generation].
![Upjet components](images/upjet-components.png)
Feel free to ask your questions by opening an issue, starting a discussion or
shooting a message on [Slack]!
1. Framework to build a code generator pipeline for Crossplane providers.
1. Generic reconciler implementation (also known as the Upjet runtime) used by
all generated `CustomResourceDefinitions`.
1. A scraper to extract documentation for all generated
`CustomResourceDefinitions`.
1. Migration framework to support migrating from community providers to Official
Providers.
[generate-a-provider]: generating-a-provider.md
[new-v1beta1]: add-new-resource-short.md
[full-guide]: add-new-resource-long.md
[uptest-guide]: testing-resources-by-using-uptest.md
[testing-instructions]: testing-instructions.md
[reference-generation]: reference-generation.md
[Slack]: https://crossplane.slack.com/archives/C01TRKD4623
## Generating a Crossplane provider using Upjet
Follow the guide to start [generating a Crossplane
provider](generating-a-provider.md).
Further information on developing a provider:
- Guide for how to [configure a resource](configuring-a-resource.md) in your
provider.
- Guide on how to use Uptest to [test your resources](testing-with-uptest.md)
end to end.
- Guide on how to add support for
[management policies](adding-support-for-management-policies.md) to an existing
provider.
- Guide on how to [add a new resource](adding-new-resource.md) to an existing provider.
## Additional documentation
- [Provider identity based authentication](design-doc-provider-identity-based-auth.md)
- [Monitoring](monitoring.md) the Upjet runtime using Prometheus.
- [Migration Framework](migration-framework.md)
Feel free to ask your questions by opening an issue or starting a discussion in
the [#upjet](https://crossplane.slack.com/archives/C05T19TB729) channel in
[Crossplane Slack](https://slack.crossplane.io).


@ -1,410 +0,0 @@
## Adding a New Resource
There are long and detailed guides showing [how to bootstrap a
provider][provider-guide] and [how to configure resources][config-guide]. Here
we will go over the steps that take a resource to `v1beta1` quality, without
going into much detail, so that the process can be repeated quickly.
The steps are generally identical, so we'll just take a resource issue from AWS,
[#258][issue-258]; you can generalize the steps to pretty much all other
resources in all official providers. It has several resources from different API
groups, such as `glue`, `grafana`, `guardduty` and `iam`.
1. Assign issue to yourself.
1. Start from the top and click the link for the first resource,
[`aws_glue_workflow`] in this case.
1. Here we'll look for clues about how the Terraform ID is shaped so that we can
   infer the external name configuration. In this case, there is a `name`
   argument under the `Argument Reference` section, and the `Import` section
   shows that it is what's used to import, i.e. the Terraform ID is the same
   as the `name` argument. This means that we can use the
   `config.NameAsIdentifier` configuration from Upjet as our external name
   config. See the section [External Name Cases](#external-name-cases) for how
   to infer the configuration for many different shapes of Terraform IDs.
1. First of all, please see the [Moving Untested Resources to v1beta1]
documentation.
Go to `config/external_name.go` and add the following line to
`ExternalNameConfigs` table:
```golang
// glue
//
// Imported using "name".
"aws_glue_workflow": config.NameAsIdentifier,
```
1. Run `make reviewable`.
1. Go through the "Warning" boxes (if any) in the Terraform Registry page to see
whether any of the fields are represented as separate resources as well. It
usually goes like
```
Routes can be defined either directly on the azurerm_iothub
resource, or using the azurerm_iothub_route resource - but the two cannot be
used together.
```
In such cases, the field should be moved to status since we prefer to
represent it only as a separate CRD. Go ahead and add a configuration block
for that resource similar to the following:
```golang
p.AddResourceConfigurator("azurerm_iothub", func(r *config.Resource) {
// Mutually exclusive with azurerm_iothub_route
config.MoveToStatus(r.TerraformResource, "route")
})
```
1. Go to the end of the TF registry page to see the timeouts. If they are longer
than 10 minutes, then we need to set the `UseAsync` property of the resource
to `true`. Go ahead and add a configuration block for that resource similar to
the following if it doesn't exist already:
```golang
p.AddResourceConfigurator("azurerm_iothub", func(r *config.Resource) {
r.UseAsync = true
})
```
Note that some providers have certain defaults; Azure, for example, has this on
by default. In such cases you need to set this parameter to `false` if the
timeouts are less than 10 minutes.
1. Resource configuration is largely done, so we need to prepare the example
YAML for testing. Copy `examples-generated/glue/workflow.yaml` into
`examples/glue/workflow.yaml` and then remove the `spec.forProvider.name` field.
If there is nothing left under `spec.forProvider`, then give it an empty struct,
e.g. `forProvider: {}`.
1. Repeat the same process for other resources under `glue`.
1. Once `glue` is completed, the following would be the additions we made to the
external name table, and we'd have new examples under the `examples/glue` folder.
```golang
// glue
//
// Imported using "name".
"aws_glue_workflow": config.NameAsIdentifier,
// Imported using arn: arn:aws:glue:us-west-2:123456789012:schema/example/example
"aws_glue_schema": config.IdentifierFromProvider,
// Imported using "name".
"aws_glue_trigger": config.NameAsIdentifier,
// Imported using the catalog_id:database_name:function_name
// 123456789012:my_database:my_func
"aws_glue_user_defined_function": config.TemplatedStringAsIdentifier("name", "{{ .parameters.catalog_id }}:{{ .parameters.database_name }}:{{ .externalName }}"),
"aws_glue_security_configuration": config.NameAsIdentifier,
// Imported using the account ID: 12356789012
"aws_glue_resource_policy": config.IdentifierFromProvider,
```
1. Create a commit to cover all manual changes so that it's easier for the
reviewer, with a message like the following: `aws: add glue group`.
1. Run `make reviewable` so that new resources are generated.
1. Create another commit with a message like `aws: regenerate for glue group`.
That's pretty much all we need to do in the codebase. With these two commits, we
can open a new PR.
## Testing
Our first option is to use the automated testing tool we have. In order to
trigger it, you can drop a comment on the PR containing the following:
```console
# Wildcards like provider-aws/examples/glue/*.yaml also work.
/test-examples="provider-aws/examples/glue/catalogdatabase.yaml,provider-aws/examples/glue/catalogtable.yaml"
```
Once the automated tests pass, we're good to go. However, in some cases there is
a bug you can fix right away, and in others the resource is just not suitable for
automated testing, e.g. resources that require you to take a special action
that a Crossplane provider cannot, such as uploading a file.
Our goal is to make it work with automated testing as much as possible. So, the
next step is to test the resources manually on your local machine and try to
spot the problems that prevent them from working with the automated testing. The
steps for manual testing are roughly as follows (no Crossplane is needed):
* `kubectl apply -f package/crds` to install all CRDs into cluster.
* `make run` to start the controllers.
* You need to create a `ProviderConfig` named `default` with correct
  credentials.
* Now, you can create the examples you've got generated and check events/logs to
spot problems and fix them.
There are cases where the resource requires the user to take an action that is
not possible with a Crossplane provider or the automated testing tool. In such
cases, we should record the actions to be taken as an annotation on the resource
like the following:
```yaml
apiVersion: apigatewayv2.aws.upbound.io/v1beta1
kind: VPCLink
metadata:
  name: example
  annotations:
    upjet.upbound.io/manual-intervention: "User needs to upload an authorization script and give its path in spec.forProvider.filePath"
```
If, for some reason, we cannot successfully test a managed resource even manually,
then we do not ship it with the `v1beta1` version and thus the external-name
configuration should be commented out with an appropriate code comment
explaining the situation.
An issue in the official-providers repo explaining the situation
[should be opened](https://github.com/upbound/official-providers/issues/new/choose)
preferably with the example manifests (and any resource configuration) already tried.
As explained above, if the resource can successfully be tested manually but
not as part of the automated tests, the successfully validated example manifest
should still be included under the examples directory, but with the proper
`upjet.upbound.io/manual-intervention` annotation.
Successful manual testing still meets the `v1beta1` criteria.
## External Name Cases
### Case 1: `name` As Identifier
There is a `name` argument under the `Argument Reference` section, and the
`Import` section suggests using `name` to import the resource.
Use `config.NameAsIdentifier`.
An example would be
[`aws_eks_cluster`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster)
and
[here](https://github.com/upbound/official-providers/blob/1f88c68/provider-aws/config/external_name.go#L139)
is its configuration.
### Case 2: Parameter As Identifier
There is an argument under the `Argument Reference` section that is used like a
name, e.g. `cluster_name` or `group_name`, and the `Import` section suggests
using the value of that argument to import the resource.
Use `config.ParameterAsIdentifier(<name of the argument parameter>)`.
An example would be
[`aws_elasticache_cluster`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster)
and
[here](https://github.com/upbound/official-providers/blob/1f88c68/provider-aws/config/external_name.go#L154)
is its configuration.
### Case 3: Random Identifier From Provider
The ID used in the `Import` section is completely random and assigned by the
provider, like a UUID, where you have no influence over it.
Use `config.IdentifierFromProvider`.
An example would be
[`aws_vpc`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc)
and
[here](https://github.com/upbound/official-providers/blob/1f88c68/provider-aws/config/external_name.go#L74)
is its configuration.
### Case 4: Random Identifier Substring From Provider
The ID used in the `Import` section is partially random and assigned by the
provider. For example, a node in a cluster could have a random ID like `13213`,
but the Terraform identifier could include the name of the cluster that's
represented as an argument field under `Argument Reference`, e.g.
`cluster-name:23123`. In that case, we'll use only the randomly assigned part as
the external name, and we need to tell Upjet how to convert between the full ID
and the external name:
```golang
func resourceName() config.ExternalName {
	e := config.IdentifierFromProvider
	e.GetIDFn = func(_ context.Context, externalName string, parameters map[string]interface{}, _ map[string]interface{}) (string, error) {
		cl, ok := parameters["cluster_name"]
		if !ok {
			return "", errors.New("cluster_name cannot be empty")
		}
		return fmt.Sprintf("%s:%s", cl.(string), externalName), nil
	}
	e.GetExternalNameFn = func(tfstate map[string]interface{}) (string, error) {
		id, ok := tfstate["id"]
		if !ok {
			return "", errors.New("id in tfstate cannot be empty")
		}
		w := strings.Split(id.(string), ":")
		return w[len(w)-1], nil
	}
	return e
}
```
### Case 5: Non-random Substrings as Identifier
There are multiple arguments under the `Argument Reference` section that are
concatenated to make up the whole identifier, e.g. `<region>/<cluster
name>/<node name>`. We need to tell Upjet to use `<node name>` as the external
name and take the rest from the parameters.
Use `config.TemplatedStringAsIdentifier("<name argument>", "<go template>")` in
such cases. The following is the list of available parameters for you to use in
your go template:
```
parameters:              A tree of parameters that you'd normally see in a
                         Terraform HCL file. You can use the TF registry
                         documentation of the given resource to see what's
                         available.
terraformProviderConfig: The Terraform configuration object of the provider.
                         You can take a look at the TF registry provider
                         configuration object to see what's available. Not to
                         be confused with the ProviderConfig custom resource
                         of the Crossplane provider.
externalName:            The value of the external name annotation of the
                         custom resource. It is required to use this as part
                         of the template.
```
You can see example usages in the big three providers below.
#### AWS
For `aws_glue_user_defined_function`, we see that the `name` argument is used to
name the resource and the import instructions read as follows:
```
Glue User Defined Functions can be imported using the
`catalog_id:database_name:function_name`. If you have not set a Catalog ID
specify the AWS Account ID that the database is in, e.g.,
$ terraform import aws_glue_user_defined_function.func 123456789012:my_database:my_func
```
Our configuration would look like the following:
```golang
"aws_glue_user_defined_function": config.TemplatedStringAsIdentifier("name", "{{ .parameters.catalog_id }}:{{ .parameters.database_name }}:{{ .externalName }}")
```
Another prevalent case in AWS is the usage of Amazon Resource Name (ARN) to
identify a resource. We can use `config.TemplatedStringAsIdentifier` in many of
those cases like the following:
```golang
"aws_glue_registry": config.TemplatedStringAsIdentifier("registry_name", "arn:aws:glue:{{ .parameters.region }}:{{ .setup.client_metadata.account_id }}:registry/{{ .external_name }}"),
```
However, there are cases where the ARN includes random substring and that would
fall under Case 4. The following is such an example:
```golang
// arn:aws:acm-pca:eu-central-1:609897127049:certificate-authority/ba0c7989-9641-4f36-a033-dee60121d595
"aws_acmpca_certificate_authority_certificate": config.IdentifierFromProvider,
```
#### Azure
Most Azure resources fall under this case since they use fully qualified
identifier as Terraform ID.
For `azurerm_mariadb_firewall_rule`, we see that the `name` argument is used to
name the resource and the import instructions read as follows:
```
MariaDB Firewall rules can be imported using the resource id, e.g.
terraform import azurerm_mariadb_firewall_rule.rule1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.DBforMariaDB/servers/server1/firewallRules/rule1
```
Our configuration would look like the following:
```golang
"azurerm_mariadb_firewall_rule": config.TemplatedStringAsIdentifier("name", "/subscriptions/{{ .terraformProviderConfig.subscription_id }}/resourceGroups/{{ .parameters.resource_group_name }}/providers/Microsoft.DBforMariaDB/servers/{{ .parameters.server_name }}/firewallRules/{{ .externalName }}")
```
In some resources, an argument requires an ID, like `azurerm_cosmosdb_sql_function`,
which has `container_id` and `name` but no separate `resource_group_name`
that would be required to build the full ID. Our configuration would look like
the following in this case:
```golang
config.TemplatedStringAsIdentifier("name", "{{ .parameters.container_id }}/userDefinedFunctions/{{ .externalName }}")
```
#### GCP
Most GCP resources fall under this case since they use fully qualified
identifier as Terraform ID.
For `google_container_cluster`, we see that the `name` argument is used to name
the resource and the import instructions read as follows:
```console
GKE clusters can be imported using the project , location, and name.
If the project is omitted, the default provider value will be used.
Examples:
$ terraform import google_container_cluster.mycluster projects/my-gcp-project/locations/us-east1-a/clusters/my-cluster
$ terraform import google_container_cluster.mycluster my-gcp-project/us-east1-a/my-cluster
$ terraform import google_container_cluster.mycluster us-east1-a/my-cluster
```
In cases where there are multiple ways to construct the ID, we should take the
one with the fewest parameters, so that we rely only on required fields, because
optional fields may have defaults that are assigned after creation, which can
make them tricky to work with. In this case, the following would be our
configuration:
```golang
"google_container_cluster": config.TemplatedStringAsIdentifier("name", "{{ .parameters.location }}/{{ .externalName }}")
```
There are cases where one of the example import commands uses just `name`, like
`google_compute_instance`:
```console
terraform import google_compute_instance.default {{name}}
```
In such cases, we should use `config.NameAsIdentifier`, since we'd like to keep
our configuration as simple as possible.
### Case 6: No Import Statement
There are no instructions under the `Import` section of the resource page in the
Terraform Registry, e.g. `aws_acm_certificate_validation` from AWS.
Use the following in such cases with comment indicating the case:
```golang
// No import documented.
"aws_acm_certificate_validation": config.IdentifierFromProvider,
```
### Case 7: Using Identifier of Another Resource
There are auxiliary resources that don't have an ID of their own; since they map
one-to-one to another resource, they simply use the identifier of that other
resource. In many cases, that identifier is also a valid argument, maybe even
the only argument, to configure the resource.
An example would be
[`aws_ecrpublic_repository_policy`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecrpublic_repository_policy)
from AWS where the identifier is `repository_name`.
Use `config.IdentifierFromProvider` because, in these cases, `repository_name`
is more meaningful to users as an argument than as the name of the policy,
hence we assume the ID comes from the provider.
### Case 8: Using Identifiers of Other Resources
There are resources that mostly represent a relation between two resources
without any particular name that identifies the relation. An example would be
[`azurerm_subnet_nat_gateway_association`](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet_nat_gateway_association)
where the ID is made up of two arguments `nat_gateway_id` and `subnet_id`
without any particular field used to give a name to the resource.
Use `config.IdentifierFromProvider` because in these cases, there is no name
argument to be used as external name and both creation and import scenarios
would work the same way even if you configured the resources with conversion
functions between arguments and ID.
## No Matching Case
If it doesn't match any of the cases above, then we'll need to implement the
external name configuration from the ground up. Though in most cases, it's just
a little bit different, and we only need to override a few things on top of the
common functions.
One example is [`aws_route`] resource where the ID could use a different
argument depending on which one is given. You can take a look at the
implementation [here][route-impl]. [This section][external-name-in-guide] in the
detailed guide could also help you.
[provider-guide]:
https://github.com/upbound/upjet/blob/main/docs/generating-a-provider.md
[config-guide]:
https://github.com/upbound/upjet/blob/main/docs/configuring-a-resource.md
[issue-258]: https://github.com/upbound/official-providers/issues/258
[`aws_glue_workflow`]:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/glue_workflow
[`aws_ecrpublic_repository_policy`]:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecrpublic_repository_policy#import
[`aws_route`]:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route
[route-impl]:
https://github.com/upbound/official-providers/blob/74a254b/provider-aws/config/external_name.go#L342
[external-name-in-guide]:
https://github.com/upbound/upjet/blob/main/docs/configuring-a-resource.md#external-name
[Moving Untested Resources to v1beta1]: https://github.com/upbound/official-providers/blob/main/docs/moving-resources-to-v1beta1.md

<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Adding a New Resource

There are long and detailed guides showing [how to bootstrap a
provider][provider-guide] and [how to configure resources][config-guide]. Here
we will go over the steps that will take us to `v1beta1` quality.

### Prerequisites

To follow this guide, you will need:

1. A Kubernetes cluster: for manual/local work, a Kind cluster is generally
sufficient. For detailed information about Kind, see [this repo]. An
alternative way to obtain a cluster is [k3d].
2. [Go] installed and configured. Check the provider repo you will be working
with and install the version in the `go.mod` file.
3. [Terraform v1.5.5] installed locally. This is the last version we used
before the license change.
4. [goimports] installed.
1. Fork the provider repo to which you will add resources and create a feature
branch.
2. Go to the Terraform Registry page of the resource you will add. We will add
the resource [`aws_redshift_endpoint_access`] as an example in this guide.
We will use this page in the following steps, especially in determining the
external name configuration, determining conflicting fields, etc.
3. Determine the resource's external name configuration:
Our external name configuration relies on the Terraform ID format of the
resource which we find in the import section on the Terraform Registry page.
Here we'll look for clues about how the Terraform ID is shaped so that we can
infer the external name configuration. In this case, there is an `endpoint_name`
argument seen under the `Argument Reference` section and when we look at
[Import] section, we see that this is what's used to import, i.e. Terraform ID
is the same as the `endpoint_name` argument. This means that we can use
`config.ParameterAsIdentifier("endpoint_name")` configuration from Upjet as our
external name config. See section [External Name Cases] to see how you can infer
in many different cases of Terraform ID.
4. Check whether the resource is a Terraform Plugin SDK resource or a Terraform
Plugin Framework resource from the [source code].
- For SDK resources, you will see a comment line like `// @SDKResource` in the
source code.
The `aws_redshift_endpoint_access` resource is an SDK resource, so go to
`config/externalname.go` and add the following line to the
`TerraformPluginSDKExternalNameConfigs` table:
- Check for a `redshift` group; if the group exists, add the external-name config below it:
```golang
// redshift
...
// Redshift endpoint access can be imported using the endpoint_name
"aws_redshift_endpoint_access": config.ParameterAsIdentifier("endpoint_name"),
```
- If there is no group, continue by adding the group name as a comment line.
- For Framework resources, you will see a comment line like
`// @FrameworkResource` in the source code. If the resource is a Framework
resource, add the external-name config to the
`TerraformPluginFrameworkExternalNameConfigs` table.
*Note: Look at the `config/externalnamenottested.go` file and check if there is
a configuration for the resource and remove it from there.*
5. Run `make submodules` to initialize the build submodule and run
`make generate`. When the command process is completed, you will see that the
controller, CRD, generated example, and other necessary files for the resource
have been created and modified.
```bash
> git status
On branch add-redshift-endpoint-access
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   apis/redshift/v1beta1/zz_generated.conversion_hubs.go
	modified:   apis/redshift/v1beta1/zz_generated.deepcopy.go
	modified:   apis/redshift/v1beta1/zz_generated.managed.go
	modified:   apis/redshift/v1beta1/zz_generated.managedlist.go
	modified:   apis/redshift/v1beta1/zz_generated.resolvers.go
	modified:   config/externalname.go
	modified:   config/externalnamenottested.go
	modified:   config/generated.lst
	modified:   internal/controller/zz_monolith_setup.go
	modified:   internal/controller/zz_redshift_setup.go

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	apis/redshift/v1beta1/zz_endpointaccess_terraformed.go
	apis/redshift/v1beta1/zz_endpointaccess_types.go
	examples-generated/redshift/v1beta1/endpointaccess.yaml
	internal/controller/redshift/endpointaccess/
	package/crds/redshift.aws.upbound.io_endpointaccesses.yaml
```
6. Go through the "Warning" boxes (if any) in the Terraform Registry page to
see whether any of the fields are represented as separate resources as well.
It usually goes like this:
> Routes can be defined either directly on the azurerm_iothub
> resource, or using the azurerm_iothub_route resource - but the two cannot be
> used together.
In such cases, the field should be moved to status since we prefer to
represent it only as a separate CRD. Go ahead and add a configuration block
for that resource similar to the following:
```golang
p.AddResourceConfigurator("azurerm_iothub", func(r *config.Resource) {
// Mutually exclusive with azurerm_iothub_route
config.MoveToStatus(r.TerraformResource, "route")
})
```
7. Resource configuration is largely done, so we need to prepare the example
YAML for testing. Copy `examples-generated/redshift/v1beta1/endpointaccess.yaml`
into `examples/redshift/v1beta1/endpointaccess.yaml` and check whether the
dependent resources are present; if not, please add them to the YAML file.
```
NOTE: The resources that are tried to be created may have dependencies. For
example, you might actually need resources Y and Z while trying to test resource
X. Many of the generated examples include these dependencies. However, in some
cases, there may be missing dependencies. In these cases, please add the
relevant dependencies to your example manifest. This is important both for you
to pass the tests and to provide the correct manifests.
```
- In our case, the generated example has required fields
`spec.forProvider.clusterIdentifierSelector` and
`spec.forProvider.subnetGroupNameSelector`. We need to check its argument list
in Terraform documentation and figure out which field needs a reference to
which resource. Let's check the [cluster_identifier] field, we see that the
field requires a reference to the `Cluster.redshift` resource identifier.
For the [subnet_group_name] field, we see that the field requires a reference
to the `SubnetGroup.redshift` resource ID.
Then add the `Cluster.redshift` and `SubnetGroup.redshift` resource examples
to our YAML file and edit the annotations and labels.
```yaml
apiVersion: redshift.aws.upbound.io/v1beta1
kind: EndpointAccess
metadata:
  annotations:
    meta.upbound.io/example-id: redshift/v1beta1/endpointaccess
  labels:
    testing.upbound.io/example-name: example
  name: example-endpointaccess
spec:
  forProvider:
    clusterIdentifierSelector:
      matchLabels:
        testing.upbound.io/example-name: example-endpointaccess
    region: us-west-1
    subnetGroupNameSelector:
      matchLabels:
        testing.upbound.io/example-name: example-endpointaccess
---
apiVersion: redshift.aws.upbound.io/v1beta1
kind: Cluster
metadata:
  annotations:
    meta.upbound.io/example-id: redshift/v1beta1/endpointaccess
  labels:
    testing.upbound.io/example-name: example-endpointaccess
  name: example-endpointaccess-c
spec:
  forProvider:
    clusterType: single-node
    databaseName: mydb
    masterPasswordSecretRef:
      key: example-key
      name: cluster-secret
      namespace: upbound-system
    masterUsername: exampleuser
    nodeType: ra3.xlplus
    region: us-west-1
    skipFinalSnapshot: true
---
apiVersion: redshift.aws.upbound.io/v1beta1
kind: SubnetGroup
metadata:
  annotations:
    meta.upbound.io/example-id: redshift/v1beta1/endpointaccess
  labels:
    testing.upbound.io/example-name: example-endpointaccess
  name: example-endpointaccess-sg
spec:
  forProvider:
    region: us-west-1
    subnetIdRefs:
      - name: foo
      - name: bar
    tags:
      environment: Production
```
Here the references for `clusterIdentifier` and `subnetGroupName` are
[automatically] defined.
If it is not defined automatically or if you want to define a reference for
another field, please see [Cross Resource Referencing].
8. Create a commit to cover all changes so that it's easier for the reviewer
with a message like the following:
`Configure EndpointAccess.redshift resource and add example`
9. Run `make reviewable` to ensure this PR is ready for review.
10. That's pretty much all we need to do in the codebase; now we can open a
new PR: `git push --set-upstream origin add-redshift-endpoint-access`
# Testing Instructions
While configuring resources, the testing effort is the longest part, because the
characteristics of cloud providers and services vary. Tests can be executed in
two main ways: the first is testing the resources manually, and the second is
using [Uptest], an automated test tool for Official Providers. `Uptest` provides
a framework to test resources in an end-to-end pipeline during the resource
configuration process. Together with the example manifest generation tool, it
allows us to avoid manual intervention and shortens the testing process.
## Automated Tests - Uptest
After providing all the required fields of the resource and adding the dependent
resources, if any, we can start with automated testing. To trigger automated
tests, you must have one approved PR and be a contributor in the relevant repo.
Otherwise, maintainers will trigger the automated tests when your PR is ready.
To trigger them, you can drop [a comment] on the PR containing the following:
```
/test-examples="examples/redshift/v1beta1/endpointaccess.yaml"
```
Once the automated tests pass, we're good to go. All you have to do is put
the link to the successful uptest run in the `How has this code been tested`
section in the PR description.
If the automatic test fails, click on the uptest run details, then click
`e2e/uptest` -> `Run uptest` and try to debug from the logs.
In adding the `EndpointAccess.redshift` resource case, we see the following
error from uptest run logs:
```
logger.go:42: 14:32:49 | case/0-apply | - lastTransitionTime: "2024-05-20T14:25:08Z"
logger.go:42: 14:32:49 | case/0-apply | message: 'cannot resolve references: mg.Spec.ForProvider.SubnetGroupName: no
logger.go:42: 14:32:49 | case/0-apply | resources matched selector'
logger.go:42: 14:32:49 | case/0-apply | reason: ReconcileError
logger.go:42: 14:32:49 | case/0-apply | status: "False"
logger.go:42: 14:32:49 | case/0-apply | type: Synced
```
Make the fixes, create a [new commit], and trigger the automated test again.
**Ignoring Some Resources in Automated Tests**
Some resources require manual intervention such as providing valid public keys
or using on-the-fly values. These cases can be handled in manual tests, but in
cases where we cannot provide generic values for automated tests, we can skip
some resources in the tests of the relevant group via an annotation:
```yaml
upjet.upbound.io/manual-intervention: "The Certificate needs to be provisioned successfully which requires a real domain."
```
The annotation key is what triggers the skip: we check for the
`upjet.upbound.io/manual-intervention` annotation key and, if it is present, we
skip the related resource. The value is also important, as it explains why we
skip this resource.
```
NOTE: For resources that are ignored during automated tests, manual testing is a
must, because we need to make sure that all resources published in the `v1beta1`
version are working.
```
### Running Uptest locally
For a faster feedback loop, you might want to run `uptest` locally in your
development setup. For this, you can use the e2e make target available in
the provider repositories. This target requires the following environment
variables to be set:
- `UPTEST_CLOUD_CREDENTIALS`: cloud credentials for the provider being tested.
- `UPTEST_EXAMPLE_LIST`: a comma-separated list of examples to test.
- `UPTEST_DATASOURCE_PATH`: (optional), see [Injecting Dynamic Values (and Datasource)]
You can check the e2e target in the Makefile for each provider. Let's check the [target]
in provider-upjet-aws and run a test for the resource `examples/ec2/v1beta1/vpc.yaml`.
- You can either save your credentials in a file as stated in the target's [comments],
or you can do it by adding your credentials to the command below.
```console
export UPTEST_CLOUD_CREDENTIALS="DEFAULT='[default]
aws_access_key_id = <YOUR-ACCESS_KEY_ID>
aws_secret_access_key = <YOUR-SECRET_ACCESS_KEY>'"
```
```console
export UPTEST_EXAMPLE_LIST="examples/ec2/v1beta1/vpc.yaml"
```
After setting the above environment variables, run `make e2e`. If the test is
successful, you will see a log like the one below, kindly add to the PR
description this log:
```console
--- PASS: kuttl (37.41s)
--- PASS: kuttl/harness (0.00s)
--- PASS: kuttl/harness/case (36.62s)
PASS
14:02:30 [ OK ] running automated tests
```
## Manual Test
Configured resources can also be tested manually. This generally involves
preparing the environment and creating the example manifests in the Kubernetes
cluster. The following steps can be followed to prepare the environment:
1. Registering the CRDs (Custom Resource Definitions) to Cluster: We need to
apply the CRD manifests to the cluster. The relevant manifests are located in
the `package/crds` folder of provider subdirectories such as:
`provider-aws/package/crds`. For registering them please run the following
command: `kubectl apply -f package/crds`
2. Create ProviderConfig: ProviderConfig Custom Resource contains some
configurations and credentials for the provider. For example, to connect to the
cloud provider, we use the credentials field of ProviderConfig. For creating the
ProviderConfig with correct credentials, please see:
- [Create a Kubernetes secret with the AWS credentials]
- [Create a Kubernetes secret with the Azure credentials]
- [Create a Kubernetes secret with the GCP credentials]
3. Start the provider: for every Custom Resource there is a controller, and
these controllers are part of the provider. So, to start the reconciliation of
Custom Resources, we need to run the provider (a collection of controllers)
with `make run`.
4. Now, you can create the examples you've generated and check events/logs to
spot problems and fix them.
- Start Testing: After completing the steps above, your environment is ready for
testing. There are 3 steps we need to verify in manual tests: `Apply`, `Import`,
`Delete`.
### Apply
We need to apply the example manifest to the cluster.
```bash
kubectl apply -f examples/redshift/v1beta1/endpointaccess.yaml
```
Successfully applying the example manifests to the cluster is only the first
step. After the Managed Resources are created, we need to check whether their
statuses are ready, i.e. we expect a `True` value for the `Synced` and `Ready`
conditions. To quickly check the statuses of all created example manifests, you
can run the `kubectl get managed` command and wait for all values in this list
to be `True`:
```bash
NAME                                                            SYNCED   READY   EXTERNAL-NAME              AGE
subnet.ec2.aws.upbound.io/bar                                   True     True    subnet-0149bf6c20720d596   26m
subnet.ec2.aws.upbound.io/foo                                   True     True    subnet-02971ebb943f5bb6e   26m

NAME                                                            SYNCED   READY   EXTERNAL-NAME              AGE
vpc.ec2.aws.upbound.io/foo                                      True     True    vpc-0ee6157df1f5a116a      26m

NAME                                                            SYNCED   READY   EXTERNAL-NAME              AGE
cluster.redshift.aws.upbound.io/example-endpointaccess-c        True     True    example-endpointaccess-c   26m

NAME                                                            SYNCED   READY   EXTERNAL-NAME              AGE
endpointaccess.redshift.aws.upbound.io/example-endpointaccess   True     True    example-endpointaccess     26m

NAME                                                            SYNCED   READY   EXTERNAL-NAME              AGE
subnetgroup.redshift.aws.upbound.io/example-endpointaccess-sg   True     True    example-endpointaccess-sg  26m
```
As a second step, we need to check the `UpToDate` status condition. This
condition is only visible after you set the annotation
`upjet.upbound.io/test=true`; its purpose is to make sure that the resource does
not remain in an update loop. To check the `UpToDate` condition for all MRs in
the cluster, run:
```bash
kubectl annotate managed --all upjet.upbound.io/test=true --overwrite
# check the conditions
kubectl get endpointaccess.redshift.aws.upbound.io/example-endpointaccess -o yaml
```
You should see the output below:
```yaml
conditions:
- lastTransitionTime: "2024-05-20T17:37:20Z"
reason: Available
status: "True"
type: Ready
- lastTransitionTime: "2024-05-20T17:37:11Z"
reason: ReconcileSuccess
status: "True"
type: Synced
- lastTransitionTime: "2024-05-20T17:37:15Z"
reason: Success
status: "True"
type: LastAsyncOperation
- lastTransitionTime: "2024-05-20T17:37:48Z"
reason: UpToDate
status: "True"
type: Test
```
When all of these conditions are `True`, the `Apply` test has been completed
successfully!
### Import
There are a few steps to perform the import test: we will stop the provider,
delete the status conditions, and check the conditions again when we re-run the
provider.
- Stop `make run`
- Delete the status conditions with the following command:
```bash
kubectl --subresource=status patch endpointaccess.redshift.aws.upbound.io/example-endpointaccess --type=merge -p '{"status":{"conditions":[]}}'
```
- Store the `status.atProvider.id` field for comparison
- Run `make run`
- Make sure that the `Ready`, `Synced`, and `UpToDate` conditions are `True`
- Compare the new `status.atProvider.id` with the one you stored and make sure
they are the same
The import test is successful when all of the above conditions are met.
### Delete
Make sure the resource has been successfully deleted by running the following
command:
```bash
kubectl delete endpointaccess.redshift.aws.upbound.io/example-endpointaccess
```
When the resource is successfully deleted, the manual testing steps are completed.

> [!IMPORTANT]
> The `make generate` and `kubectl apply -f package/crds` commands must be run
> after any change that affects the schema or controller of the
> configured/tested resource.
> In addition, the provider needs to be restarted after changes in the
> controllers, because a controller change corresponds to a change in the
> running code.
You can look at the [PR] we created for the `EndpointAccess.redshift` resource
we added in this guide.
## External Name Cases
### Case 1: `name` As Identifier
There is a `name` argument under the `Argument Reference` section, and the
`Import` section suggests using `name` to import the resource.
Use `config.NameAsIdentifier`.
An example would be [`aws_eks_cluster`] and [here][eks-config] is its
configuration.
### Case 2: Parameter As Identifier
There is an argument under the `Argument Reference` section that is used like
a name, e.g. `cluster_name` or `group_name`, and the `Import` section suggests
using the value of that argument to import the resource.
Use `config.ParameterAsIdentifier(<name of the argument parameter>)`.
An example would be [`aws_elasticache_cluster`] and [here][cache-config] is its
configuration.
### Case 3: Random Identifier From Provider
The ID used in the `Import` section is completely random and assigned by the
provider, like a UUID, where you don't have any means of impact on it.
Use `config.IdentifierFromProvider`.
An example would be [`aws_vpc`] and [here][vpc-config] is its configuration.
### Case 4: Random Identifier Substring From Provider
The ID used in the `Import` section is partially random and assigned by the
provider. For example, a node in a cluster could have a random ID like `13213`
but the Terraform Identifier could include the name of the cluster that's
represented as an argument field under `Argument Reference`, i.e.
`cluster-name:23123`. In that case, we'll use only the randomly assigned part
as external name and we need to tell Upjet how to construct the full ID back
and forth.
```golang
func resourceName() config.ExternalName {
	e := config.IdentifierFromProvider
	e.GetIDFn = func(_ context.Context, externalName string, parameters map[string]interface{}, _ map[string]interface{}) (string, error) {
		cl, ok := parameters["cluster_name"]
		if !ok {
			return "", errors.New("cluster_name cannot be empty")
		}
		return fmt.Sprintf("%s:%s", cl.(string), externalName), nil
	}
	e.GetExternalNameFn = func(tfstate map[string]interface{}) (string, error) {
		id, ok := tfstate["id"]
		if !ok {
			return "", errors.New("id in tfstate cannot be empty")
		}
		w := strings.Split(id.(string), ":")
		return w[len(w)-1], nil
	}
	return e
}
```
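Stripped of the upjet-specific types, the back-and-forth in Case 4 is plain
string manipulation. The following is a runnable, self-contained sketch of the
same logic (the cluster and node values are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// getID mirrors GetIDFn above: it prefixes the external name with the
// cluster_name parameter to rebuild the full Terraform ID.
func getID(parameters map[string]interface{}, externalName string) (string, error) {
	cl, ok := parameters["cluster_name"]
	if !ok {
		return "", errors.New("cluster_name cannot be empty")
	}
	return fmt.Sprintf("%s:%s", cl.(string), externalName), nil
}

// getExternalName mirrors GetExternalNameFn above: it takes the last
// colon-separated segment of the Terraform ID as the external name.
func getExternalName(tfstate map[string]interface{}) (string, error) {
	id, ok := tfstate["id"]
	if !ok {
		return "", errors.New("id in tfstate cannot be empty")
	}
	w := strings.Split(id.(string), ":")
	return w[len(w)-1], nil
}

func main() {
	id, _ := getID(map[string]interface{}{"cluster_name": "cluster-name"}, "23123")
	fmt.Println(id) // cluster-name:23123

	name, _ := getExternalName(map[string]interface{}{"id": id})
	fmt.Println(name) // 23123
}
```

Note how the two functions invert each other: importing parses the external
name out of the Terraform ID, while creating rebuilds the ID from the
parameters and the external name.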
### Case 5: Non-random Substrings as Identifier
More than a single argument under `Argument Reference` is concatenated to make
up the whole identifier, e.g. `<region>/<cluster name>/<node name>`. We need to
tell Upjet to use `<node name>` as the external name and take the rest from the
parameters.
Use `config.TemplatedStringAsIdentifier("<name argument>", "<go template>")` in
such cases. The following is the list of available parameters for you to use in
your go template:
```
parameters: A tree of parameters that you'd normally see in a Terraform HCL
file. You can use TF registry documentation of given resource to
see what's available.
terraformProviderConfig: The Terraform configuration object of the provider. You can
take a look at the TF registry provider configuration object
to see what's available. Not to be confused with ProviderConfig
custom resource of the Crossplane provider.
externalName: The value of external name annotation of the custom resource.
It is required to use this as part of the template.
```
You can see example usages in the big three providers below.
#### AWS
For `aws_glue_user_defined_function`, we see that the `name` argument is used
to name the resource and the import instructions read as the following:
> Glue User Defined Functions can be imported using the
> `catalog_id:database_name:function_name`. If you have not set a Catalog ID
> specify the AWS Account ID that the database is in, e.g.,
>
> `$ terraform import aws_glue_user_defined_function.func 123456789012:my_database:my_func`
Our configuration would look like the following:
```golang
"aws_glue_user_defined_function": config.TemplatedStringAsIdentifier("name", "{{ .parameters.catalog_id }}:{{ .parameters.database_name }}:{{ .externalName }}")
```
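Under the hood, `config.TemplatedStringAsIdentifier` renders a Go template
against the resource's parameters and external name. A minimal, self-contained
sketch of that rendering with the standard `text/template` package (upjet's
real implementation has more machinery around it, and the values below are
hypothetical):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderID renders an external-name template against the data a resource
// would supply: its Terraform parameters and its external-name annotation.
func renderID(tmpl string, parameters map[string]interface{}, externalName string) (string, error) {
	t, err := template.New("id").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	err = t.Execute(&b, map[string]interface{}{
		"parameters":   parameters,
		"externalName": externalName,
	})
	return b.String(), err
}

func main() {
	// The template from the aws_glue_user_defined_function example above.
	id, err := renderID(
		"{{ .parameters.catalog_id }}:{{ .parameters.database_name }}:{{ .externalName }}",
		map[string]interface{}{
			"catalog_id":    "123456789012",
			"database_name": "my_database",
		},
		"my_func",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // 123456789012:my_database:my_func
}
```

The rendered string matches the ID format the Terraform import documentation
expects for this resource.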
Another prevalent case in AWS is the usage of Amazon Resource Name (ARN) to
identify a resource. We can use `config.TemplatedStringAsIdentifier` in many of
those cases like the following:
```
"aws_glue_registry": config.TemplatedStringAsIdentifier("registry_name", "arn:aws:glue:{{ .parameters.region }}:{{ .setup.client_metadata.account_id }}:registry/{{ .external_name }}"),
```
However, there are cases where the ARN includes a random substring; those fall
under Case 4. The following is such an example:
```
// arn:aws:acm-pca:eu-central-1:609897127049:certificate-authority/ba0c7989-9641-4f36-a033-dee60121d595
"aws_acmpca_certificate_authority_certificate": config.IdentifierFromProvider,
```
#### Azure
Most Azure resources fall under this case since they use a fully qualified
identifier as the Terraform ID.
For `azurerm_mariadb_firewall_rule`, we see that the `name` argument is used to
name the resource and the import instructions read as the following:
> MariaDB Firewall rules can be imported using the resource ID, e.g.
>
> `terraform import azurerm_mariadb_firewall_rule.rule1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.DBforMariaDB/servers/server1/firewallRules/rule1`
Our configuration would look like the following:
```golang
"azurerm_mariadb_firewall_rule": config.TemplatedStringAsIdentifier("name", "/subscriptions/{{ .terraformProviderConfig.subscription_id }}/resourceGroups/{{ .parameters.resource_group_name }}/providers/Microsoft.DBforMariaDB/servers/{{ .parameters.server_name }}/firewallRules/{{ .externalName }}")
```
In some resources, an argument expects the ID of another resource, like
`azurerm_cosmosdb_sql_function`, which has `container_id` and `name` but no
separate `resource_group_name` that would be required to build the full ID. Our
configuration would look like the following in this case:
```golang
config.TemplatedStringAsIdentifier("name", "{{ .parameters.container_id }}/userDefinedFunctions/{{ .externalName }}")
```
#### GCP
Most GCP resources fall under this case since they use a fully qualified
identifier as the Terraform ID.
For `google_container_cluster`, we see that the `name` argument is used to name
the resource and the import instructions read as the following:
> GKE clusters can be imported using the project, location, and name.
> If the project is omitted, the default provider value will be used.
> Examples:
>
> ```console
> $ terraform import google_container_cluster.mycluster projects/my-gcp-project/locations/us-east1-a/clusters/my-cluster
> $ terraform import google_container_cluster.mycluster my-gcp-project/us-east1-a/my-cluster
> $ terraform import google_container_cluster.mycluster us-east1-a/my-cluster
> ```
In cases where there are multiple ways to construct the ID, we should take the
one with the fewest parameters, so that we rely only on required fields;
optional fields may have defaults that are assigned after creation, which can
make them tricky to work with. In this case, the following would be our
configuration:
```golang
"google_container_cluster": config.TemplatedStringAsIdentifier("name", "{{ .parameters.location }}/{{ .externalName }}")
```
There are cases where one of the example import commands uses just `name`, like
`google_compute_instance`:
```console
terraform import google_compute_instance.default {{name}}
```
In such cases, we should use `config.NameAsIdentifier`, since we'd like to keep
our configuration as simple as possible.
### Case 6: No Import Statement
There are no instructions under the `Import` section of the resource page in
Terraform Registry, like `aws_acm_certificate_validation` from AWS.
Use the following in such cases with a comment indicating the case:
```golang
// No import documented.
"aws_acm_certificate_validation": config.IdentifierFromProvider,
```
### Case 7: Using Identifier of Another Resource
There are auxiliary resources that don't have an ID and since they map
one-to-one to another resource, they just opt to use the identifier of that
other resource. In many cases, the identifier is also a valid argument, maybe
even the only argument, to configure this resource.
An example would be
[`aws_ecrpublic_repository_policy`] from AWS where the identifier is
`repository_name`.
Use `config.IdentifierFromProvider`, because in these cases `repository_name`
is more meaningful to users as an argument than as the name of the policy;
hence we assume the ID is coming from the provider.
### Case 8: Using Identifiers of Other Resources
There are resources that mostly represent a relation between two resources
without any particular name that identifies the relation. An example would be
[`azurerm_subnet_nat_gateway_association`] where the ID is made up of two
arguments `nat_gateway_id` and `subnet_id` without any particular field used
to give a name to the resource.
Use `config.IdentifierFromProvider` because in these cases, there is no name
argument to be used as external name and both creation and import scenarios
would work the same way even if you configured the resources with conversion
functions between arguments and ID.
## No Matching Case
If it doesn't match any of the cases above, then we'll need to implement the
external name configuration from the ground up. Though in most cases, it's just
a little bit different that we only need to override a few things on top of
common functions.
One example is [`aws_route`] resource where the ID could use a different
argument depending on which one is given. You can take a look at the
implementation [here][route-impl]. [This section] in the
detailed guide could also help you.
[comment]: <> (References)
[this repo]: https://github.com/kubernetes-sigs/kind
[k3d]: https://k3d.io/
[Go]: https://go.dev/doc/install
[Terraform v1.5.5]: https://developer.hashicorp.com/terraform/install
[goimports]: https://pkg.go.dev/golang.org/x/tools/cmd/goimports
[provider-guide]: https://github.com/upbound/upjet/blob/main/docs/generating-a-provider.md
[config-guide]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md
[`aws_redshift_endpoint_access`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshift_endpoint_access
[Import]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/redshift_endpoint_access#import
[External Name Cases]: #external-name-cases
[source code]: https://github.com/hashicorp/terraform-provider-aws/blob/f222bd785228729dc1f5aad7d85c4d04a6109075/internal/service/redshift/endpoint_access.go#L24
[cluster_identifier]: https://registry.terraform.io/providers/hashicorp/aws/5.35.0/docs/resources/redshift_endpoint_access#cluster_identifier
[subnet_group_name]: https://registry.terraform.io/providers/hashicorp/aws/5.35.0/docs/resources/redshift_endpoint_access#subnet_group_name
[automatically]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md#auto-cross-resource-reference-generation
[Cross Resource Referencing]: https://github.com/crossplane/upjet/blob/main/docs/configuring-a-resource.md#cross-resource-referencing
[a comment]: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1314#issuecomment-2120539099
[new commit]: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1314/commits/b76e566eea5bd53450f2175e7e5a6e274934255b
[Create a Kubernetes secret with the AWS credentials]: https://docs.crossplane.io/latest/getting-started/provider-aws/#create-a-kubernetes-secret-with-the-aws-credentials
[Create a Kubernetes secret with the Azure credentials]: https://docs.crossplane.io/latest/getting-started/provider-azure/#create-a-kubernetes-secret-with-the-azure-credentials
[Create a Kubernetes secret with the GCP credentials]: https://docs.crossplane.io/latest/getting-started/provider-gcp/#create-a-kubernetes-secret-with-the-gcp-credentials
[PR]: https://github.com/crossplane-contrib/provider-upjet-aws/pull/1314
[`aws_eks_cluster`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster
[eks-config]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L284
[`aws_elasticache_cluster`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_cluster
[cache-config]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L299
[`aws_vpc`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc
[vpc-config]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L155
[`aws_ecrpublic_repository_policy`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecrpublic_repository_policy
[`azurerm_subnet_nat_gateway_association`]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet_nat_gateway_association
[`aws_route`]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route
[route-impl]: https://github.com/upbound/provider-aws/blob/8b3887c91c4b44dc14e1123b3a5ae1a70e0e45ed/config/externalname.go#L172
[This section]: #external-name-cases
[Injecting Dynamic Values (and Datasource)]: https://github.com/crossplane/uptest?tab=readme-ov-file#injecting-dynamic-values-and-datasource
[target]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/e4b8f222a4baf0ea37caf1d348fe109bf8235dc2/Makefile#L257
[comments]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/e4b8f222a4baf0ea37caf1d348fe109bf8235dc2/Makefile#L259
[Uptest]: https://github.com/crossplane/uptest

<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Adding Support for Management Policies and initProvider
## Regenerating a provider with Management Policies
Check out the provider repo, e.g., upbound/provider-aws, and go to the project
directory on your local machine.
1. Generate with management policy and update crossplane-runtime dependency:
```bash
# Consume the latest crossplane-tools:
go get github.com/crossplane/crossplane-tools@master
go mod tidy
# Generate getters/setters for management policies
make generate
# Consume the latest crossplane-runtime:
go get github.com/crossplane/crossplane-runtime@main
go mod tidy
```
1. Introduce a feature flag for `Management Policies`.
Add the feature flag definition into the `internal/features/features.go`
file.
```diff
diff --git a/internal/features/features.go b/internal/features/features.go
index 9c6b1fc8..de261ca4 100644
--- a/internal/features/features.go
+++ b/internal/features/features.go
@@ -12,4 +12,9 @@ const (
// External Secret Stores. See the below design for more details.
// https://github.com/crossplane/crossplane/blob/390ddd/design/design-doc-external-secret-stores.md
EnableAlphaExternalSecretStores feature.Flag = "EnableAlphaExternalSecretStores"
+
+ // EnableAlphaManagementPolicies enables alpha support for
+ // Management Policies. See the below design for more details.
+ // https://github.com/crossplane/crossplane/pull/3531
+ EnableAlphaManagementPolicies feature.Flag = "EnableAlphaManagementPolicies"
)
```
Add the actual flag in `cmd/provider/main.go` file and pass the flag to the
workspace store:
```diff
diff --git a/cmd/provider/main.go b/cmd/provider/main.go
index 669b01f9..a60df983 100644
--- a/cmd/provider/main.go
+++ b/cmd/provider/main.go
@@ -48,6 +48,7 @@ func main() {
namespace = app.Flag("namespace", "Namespace used to set as default scope in default secret store config.").Default("crossplane-system").Envar("POD_NAMESPACE").String()
enableExternalSecretStores = app.Flag("enable-external-secret-stores", "Enable support for ExternalSecretStores.").Default("false").Envar("ENABLE_EXTERNAL_SECRET_STORES").Bool()
+ enableManagementPolicies = app.Flag("enable-management-policies", "Enable support for Management Policies.").Default("false").Envar("ENABLE_MANAGEMENT_POLICIES").Bool()
)
kingpin.MustParse(app.Parse(os.Args[1:]))
@@ -122,6 +123,11 @@ func main() {
})), "cannot create default store config")
}
terraform.WithSharedProviderOptions(terraform.WithNativeProviderPath(*setupConfig.NativeProviderPath), terraform.WithNativeProviderName("registry.terraform.io/"+*setupConfig.NativeProviderSource)))
}
+ featureFlags := &feature.Flags{}
o := tjcontroller.Options{
Options: xpcontroller.Options{
Logger: log,
GlobalRateLimiter: ratelimiter.NewGlobal(*maxReconcileRate),
PollInterval: *pollInterval,
MaxConcurrentReconciles: *maxReconcileRate,
- Features: &feature.Flags{},
+ Features: featureFlags,
},
Provider: config.GetProvider(),
- WorkspaceStore: terraform.NewWorkspaceStore(log, terraform.WithDisableInit(len(*setupConfig.NativeProviderPath) != 0), terraform.WithProcessReportInterval(*pollInterval)),
+ WorkspaceStore: terraform.NewWorkspaceStore(log, terraform.WithDisableInit(len(*setupConfig.NativeProviderPath) != 0), terraform.WithProcessReportInterval(*pollInterval), terraform.WithFeatures(featureFlags)),
SetupFn: clients.SelectTerraformSetup(log, setupConfig),
EventHandler: eventHandler,
}
+ if *enableManagementPolicies {
+ o.Features.Enable(features.EnableAlphaManagementPolicies)
+ log.Info("Alpha feature enabled", "flag", features.EnableAlphaManagementPolicies)
+ }
+
kingpin.FatalIfError(controller.Setup(mgr, o), "Cannot setup AWS controllers")
kingpin.FatalIfError(mgr.Start(ctrl.SetupSignalHandler()), "Cannot start controller manager")
}
```
> [!NOTE]
> If the provider was already updated to support observe-only resources, just
> add the feature flag to the `workspaceStore`.
1. Generate with the latest upjet and management policies:
```bash
# Bump to the latest upjet
go get github.com/crossplane/upjet@main
go mod tidy
```
Enable management policies in the generator by adding
`config.WithFeaturesPackage` option:
```diff
diff --git a/config/provider.go b/config/provider.go
index 964883670..1c06a53e2 100644
--- a/config/provider.go
+++ b/config/provider.go
@@ -141,6 +141,7 @@ func GetProvider() *config.Provider {
config.WithReferenceInjectors([]config.ReferenceInjector{reference.NewInjector(modulePath)}),
config.WithSkipList(skipList),
config.WithBasePackages(BasePackages),
+ config.WithFeaturesPackage("internal/features"),
config.WithDefaultResourceOptions(
GroupKindOverrides(),
KindOverrides(),
```
Generate:
```bash
make generate
```
## Testing: Locally Running the Provider with Management Policies Enabled
1. Create a fresh Kubernetes cluster.
1. Apply all of the provider's CRDs with `kubectl apply -f package/crds`.
1. Run the provider with `--enable-management-policies`.
You can update the `run` target in the Makefile as below:
```diff
diff --git a/Makefile b/Makefile
index d529a0d6..84411669 100644
--- a/Makefile
+++ b/Makefile
@@ -111,7 +111,7 @@ submodules:
run: go.build
@$(INFO) Running Crossplane locally out-of-cluster . . .
@# To see other arguments that can be provided, run the command with --help instead
- UPBOUND_CONTEXT="local" $(GO_OUT_DIR)/provider --debug
+ UPBOUND_CONTEXT="local" $(GO_OUT_DIR)/provider --debug --enable-management-policies
```
and run with:
```shell
make run
```
1. Create some resources in the provider's management console and try observing
them by creating a managed resource with `managementPolicies: ["Observe"]`.
For example:
```yaml
apiVersion: rds.aws.upbound.io/v1beta1
kind: Instance
metadata:
name: an-existing-dbinstance
spec:
managementPolicies: ["Observe"]
forProvider:
region: us-west-1
```
You should see the managed resource is ready & synced:
```bash
NAME READY SYNCED EXTERNAL-NAME AGE
an-existing-dbinstance True True an-existing-dbinstance 3m
```
and the `status.atProvider` is updated with the actual state of the resource:
```bash
kubectl get instance.rds.aws.upbound.io an-existing-dbinstance -o yaml
```
> [!NOTE]
> You need the `terraform` executable installed on your local machine.
1. Create a managed resource without `LateInitialize`, e.g.
`managementPolicies: ["Observe", "Create", "Update", "Delete"]`, with
`spec.initProvider` fields, to see the provider create the resource by
combining the `spec.initProvider` and `spec.forProvider` fields.
For example:
```yaml
apiVersion: dynamodb.aws.upbound.io/v1beta1
kind: Table
metadata:
name: example
annotations:
meta.upbound.io/example-id: dynamodb/v1beta1/table
spec:
managementPolicies: ["Observe", "Create", "Update", "Delete"]
initProvider:
writeCapacity: 20
readCapacity: 19
forProvider:
region: us-west-1
attribute:
- name: UserId
type: S
- name: GameTitle
type: S
- name: TopScore
type: "N"
billingMode: PROVISIONED
globalSecondaryIndex:
- hashKey: GameTitle
name: GameTitleIndex
nonKeyAttributes:
- UserId
projectionType: INCLUDE
rangeKey: TopScore
readCapacity: 10
writeCapacity: 10
hashKey: UserId
rangeKey: GameTitle
```
You should see the managed resource is ready & synced:
```bash
NAME READY SYNCED EXTERNAL-NAME AGE
example True True example 3m
```
and the `status.atProvider` is updated with the actual state of the resource,
including the `initProvider` fields:
```bash
kubectl get tables.dynamodb.aws.upbound.io example -o yaml
```
As the late initialization is skipped, the `spec.forProvider` should be the
same as when we created the resource.
In the provider console, you should see that the resource was created with
the values in the `initProvider` field.

<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Configuring a resource
[Upjet] generates as much as it could using the available information in the
Terraform resource schema. This includes an XRM-conformant schema of the
resource, controller logic, late initialization, sensitive data handling, etc.
However, there is still some information that requires input configuration,
which can be found by checking the Terraform documentation of the resource:
- [External name]
- [Cross Resource Referencing]
- [Overriding Terraform Resource Schema]
- [Initializers]
## External Name
Crossplane uses the `crossplane.io/external-name` annotation in a managed
resource CR to identify the external resource which is managed by Crossplane,
and it always shows the final value of the external resource name.
See [the external name documentation] and
[Naming Conventions - One Pager Managed Resource API Design] for more details.
The format and source of the external name depend on the cloud provider;
sometimes it could simply be the name of the resource (e.g. S3 Bucket), and
sometimes it is an id auto-generated by the cloud API (e.g. VPC id). That is
something specific to each resource, and we need some input configuration for
`upjet` to appropriately generate a resource.
Since Terraform already needs [a similar identifier] to import a resource, the
most helpful part of the resource documentation is the [import section].
Upjet performs some back and forth conversions between the Crossplane resource
model and the Terraform configuration. We need a custom, per-resource
configuration to adapt the Crossplane `external name` from the Terraform `id`.
Here are [the types for the External Name configuration]:
Comments explain the purpose of each field, but let's clarify further with some
example cases.
### Case 1: Name as External Name and Terraform ID
This is the simplest and most straightforward case, with the following
conditions:
```go
import (
	"github.com/crossplane/upjet/v2/pkg/config"
	...
)
```
### Case 2: Identifier from Provider
In this case, the (cloud) provider generates an identifier for the resource
independent of what we provided as arguments.
Checking the [import section of aws_vpc], we see that this resource is imported
with the `vpc id`. When we check the [arguments list] and the provided [example
usages], it is clear that this **id** is **not** something the user provides;
rather, it is generated by the AWS API.
Here, we can just use the [IdentifierFromProvider] configuration:
```go
import (
	"github.com/crossplane/upjet/v2/pkg/config"
	...
)
```
### Case 3: Terraform ID as a Formatted String
For some resources, Terraform uses a formatted string as the `id`, which
includes the resource identifier that Crossplane uses as the external name but
may also contain some other parameters.
Most `azurerm` resources fall into this category. Checking the [import section
of azurerm_sql_server], we see that it can be imported with an `id` in the
following format:
```text
/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.Sql/servers/myserver
```
To handle such cases, we configure how to extract the external name from this
id (`GetExternalNameFn`) and how to construct this id back (`GetIDFn`):
```go
import (
	"github.com/crossplane/upjet/v2/pkg/config"
	...
)
```
With this, we have covered the most common scenarios for configuring external
names. You can always check the resource configurations of existing jet
providers as further examples under `config/<group>/config.go` in their
repositories.
_Please see [this figure] to understand why we really need 3 different functions
to configure external names; it also visualizes which one is used how:_

![External name functions](../docs/images/upjet-externalname.png)

_Note that, initially, GetIDFn will use the external-name annotation to set the
terraform.tfstate id and, after that, it uses the terraform.tfstate id to
update the external-name annotation. For cases where both values are different,
both GetIDFn and GetExternalNameFn must be set in order to have the correct
configuration._
### Cross Resource Referencing
If you have, for example, an IAM User defined as a managed resource, and you
want to create an Access Key for that user, you would need to refer to the User
CR from the Access Key resource. This is handled by cross resource referencing.
See how the [user] is referenced at the `forProvider.userRef.name` field of the
Access Key in the following example:
```yaml
apiVersion: iam.aws.tf.crossplane.io/v1alpha1
...
```
In Upjet, we have a [configuration] to provide this information for a field:
```go
// Reference represents the Crossplane options used to generate
// reference resolvers for fields
type Reference struct {
	// Type is the Go type name of the CRD if it is in the same package or
	// <package-path>.<type-name> if it is in a different package.
	Type string
	// TerraformName is the name of the Terraform resource
	// which will be referenced. The supplied resource name is
	// converted to a type name of the corresponding CRD using
	// the configured TerraformTypeMapper.
	TerraformName string
	// Extractor is the function to be used to extract value from the
	// referenced type. Defaults to getting external name.
	// Optional
	Extractor string
	// RefFieldName is the field name for the Ref field. Defaults to
	// <field-name>Ref or <field-name>Refs.
	// Optional
	RefFieldName string
	// SelectorFieldName is the Go field name for the Selector field. Defaults to
	// <field-name>Selector.
	// Optional
	SelectorFieldName string
}
```
> [!Warning]
> Please note the `Reference.Type` field has been deprecated; use
> `Reference.TerraformName` instead. `TerraformName` is a more stable and
> less error-prone API compared to `Type` because it automatically accounts
> for configuration changes affecting the cross-resource reference target's
> kind name, group, or version.

For a resource that we want to generate, we need to check its argument list in
the Terraform documentation and figure out which field needs a reference to
which resource. For the Access Key above, the `user` argument references an IAM
User, so we add the following referencing configuration:
```go
func Configure(p *config.Provider) {
	p.AddResourceConfigurator("aws_iam_access_key", func(r *config.Resource) {
		r.References["user"] = config.Reference{
			TerraformName: "aws_iam_user",
		}
	})
}
```
`TerraformName` is the name of the Terraform resource, such as
`aws_iam_user`. Because `TerraformName` uniquely identifies a Terraform
resource, it stays valid even when the referenced and referencing resources
live in different API groups. A good example is referencing a [kms key] from
the `aws_ebs_volume` resource:
```go
func Configure(p *config.Provider) {
p.AddResourceConfigurator("aws_ebs_volume", func(r *config.Resource) {
r.References["kms_key_id"] = config.Reference{
TerraformName: "aws_kms_key",
}
})
}
```
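The conversion from `TerraformName` to a CRD type name is configured per provider. As a rough, purely illustrative sketch of a common convention (strip the provider prefix, CamelCase the remainder) — not upjet's actual implementation, and note that real providers override group and kind names (for example, the AWS provider configures the kind for `aws_iam_user` as `User`, not `IamUser`):

```go
package main

import (
	"fmt"
	"strings"
)

// kindFromTerraformName is a hypothetical sketch: it drops the provider
// prefix (e.g. "aws") from a Terraform resource name and CamelCases the
// remaining words to produce a kind-like name. Real providers configure
// this mapping explicitly and often override group and kind names.
func kindFromTerraformName(name string) string {
	words := strings.Split(name, "_")
	if len(words) > 1 {
		words = words[1:] // drop the provider prefix
	}
	var b strings.Builder
	for _, w := range words {
		if w == "" {
			continue
		}
		b.WriteString(strings.ToUpper(w[:1]) + w[1:])
	}
	return b.String()
}

func main() {
	fmt.Println(kindFromTerraformName("aws_iam_user"))
	fmt.Println(kindFromTerraformName("aws_kms_key"))
}
```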
### Auto Cross Resource Reference Generation
Cross-resource referencing is one of the key concepts of resource
configuration. Very commonly, cloud services depend on other cloud services.
For example, an AWS Subnet needs an AWS VPC, so to create a Subnet
successfully you first have to create a VPC resource. Please see the
[Managed Resources] documentation for more details. That documentation
focuses on the general concepts and manual configuration of cross-resource
references; the topic of this section, however, is automatic example and
reference generation.
Upjet has a scraper tool for scraping provider metadata from the Terraform
Registry. The scraped metadata are:

- Resource descriptions
- Resource examples (in HCL format)
- Field documentation
- Import statements

This information is critical for our automation processes, and we use the
scraped metadata in many contexts. For example, the field documentation and
resource descriptions are used as Go comments on schema fields and CRDs.
The resource examples are another important piece of scraped information. As
part of our testing efforts, finding a correct combination of field values is
not easy for every scenario, so having a known-working example is very
valuable. The example, which is in HCL format, is converted to a managed
resource manifest that we can use in our tests. Here is an example from the
Terraform Registry for the AWS EBS Volume resource:
```hcl
resource "aws_ebs_volume" "example" {
availability_zone = "us-west-2a"
size = 40
tags = {
Name = "HelloWorld"
}
}
resource "aws_ebs_snapshot" "example_snapshot" {
volume_id = aws_ebs_volume.example.id
tags = {
Name = "HelloWorld_snap"
}
}
```
The generated example:
```yaml
apiVersion: ec2.aws.upbound.io/v1beta1
kind: EBSSnapshot
metadata:
annotations:
meta.upbound.io/example-id: ec2/v1beta1/ebssnapshot
labels:
testing.upbound.io/example-name: example_snapshot
name: example-snapshot
spec:
forProvider:
region: us-west-1
tags:
Name: HelloWorld_snap
volumeIdSelector:
matchLabels:
testing.upbound.io/example-name: example
---
apiVersion: ec2.aws.upbound.io/v1beta1
kind: EBSVolume
metadata:
annotations:
meta.upbound.io/example-id: ec2/v1beta1/ebssnapshot
labels:
testing.upbound.io/example-name: example
name: example
spec:
forProvider:
availabilityZone: us-west-2a
region: us-west-1
size: 40
tags:
Name: HelloWorld
```
There are three important ways the scraper makes our lives easier here:

- We do not have to find correct value combinations for fields ourselves; we
  can use the generated example manifest directly in our tests.
- The HCL example was scraped from the registry documentation of the
  `aws_ebs_snapshot` resource. The example also contains the `aws_ebs_volume`
  manifest because creating an EBS Snapshot requires an EBS Volume. Since the
  registry examples usually include the dependencies of the target resource,
  we can scrape those dependencies as well.
- The last point is the actual subject of this section: to use cross-resource
  references, you need to add reference configuration to the resource, as
  described above. But in many cases, if the dependencies are already
  described in the scraped example, you do not have to write explicit
  reference configuration; the cross-resource reference generator generates
  the references for you.
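The core of that generation step can be pictured as scanning attribute values in the scraped HCL. A self-contained sketch (the regular expression and function are illustrative, not upjet's actual code): a value such as `aws_ebs_volume.example.id` names the referenced Terraform resource type in its first dotted segment.

```go
package main

import (
	"fmt"
	"regexp"
)

// hclRef matches HCL attribute values of the form
// <resource_type>.<resource_name>.<attribute>, e.g. "aws_ebs_volume.example.id".
var hclRef = regexp.MustCompile(`^([a-z0-9_]+)\.([A-Za-z0-9_-]+)\.([a-z0-9_]+)$`)

// refTarget returns the referenced Terraform resource type, if the value
// looks like a cross-resource reference.
func refTarget(value string) (string, bool) {
	m := hclRef.FindStringSubmatch(value)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	fmt.Println(refTarget("aws_ebs_volume.example.id"))
	fmt.Println(refTarget("HelloWorld"))
}
```

A detected target like `aws_ebs_volume` is then what a `TerraformName`-style reference would be generated for.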
### Validating the Cross Resource References
As mentioned, many references are generated from the scraped metadata by the
auto reference generator. However, there are two situations where we may miss
generating references.

The first is bugs or gaps in the generator itself. The generator handles and
correctly generates most of the references found in the scraped examples, but
the success ratio is not 100%: in some cases it cannot generate a reference
even though it appears in the scraped example manifest.
The second is related to the scraped example itself. As noted above, the
generator's source is the scraped example manifest: it scans the manifest and
generates the cross-resource references it finds there. In some cases,
however, a referenceable field does not appear in the example manifest at all
and is only mentioned in the schema or field documentation. For these
situations, you must configure the cross-resource reference explicitly.
### Removing Auto-Generated Cross Resource References In Some Corner Cases

In some cases, a generated reference can narrow the pool of resources the
field may refer to. For example, suppose resource X has field A, and both
resource Y and resource Z can be referenced via that field. If the example
manifest only mentions a reference to Y, the generated reference field will
be defined over Y alone. Since this narrows the field's reference pool, it is
more appropriate to delete such a reference. For example:
```hcl
resource "aws_route53_record" "www" {
zone_id = aws_route53_zone.primary.zone_id
name = "example.com"
type = "A"
alias {
name = aws_elb.main.dns_name
zone_id = aws_elb.main.zone_id
evaluate_target_health = true
}
}
```
The Route53 Record resource's `alias.name` field has a generated reference.
In the example, this reference is expressed via the `aws_elb` resource.
However, the field documentation shows that this field can also reference
other resources:
```text
Alias
Alias records support the following:
name - (Required) DNS domain name for a CloudFront distribution, S3 bucket, ELB,
or another resource record set in this hosted zone.
```
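In such a case, the generated reference can be dropped in the provider's resource configuration. A sketch, assuming upjet's `config` package (the exact field-path key is illustrative):

```go
p.AddResourceConfigurator("aws_route53_record", func(r *config.Resource) {
	// Drop the auto-generated reference so the field can target any of the
	// documented targets (CloudFront, S3, ELB, ...), not only aws_elb.
	delete(r.References, "alias.name")
})
```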
### Conclusion

In summary, the scraper and the example and reference generators are very
useful for easing testing efforts, but when using them we must be careful to
avoid the undesired states described above.
[Managed Resources]: https://docs.crossplane.io/latest/concepts/managed-resources/#referencing-other-resources
## Additional Sensitive Fields and Custom Connection Details
Crossplane stores sensitive information of a managed resource in a Kubernetes
secret, together with some additional fields that would help consumption of the
resource, a.k.a. [connection details].
In Upjet, we already handle fields that are marked as sensitive in the
Terraform schema, and no further action is required for them. Upjet properly
hides these fields from the CRD spec and status by converting them to a
secret reference or by storing them in the connection details secret,
respectively. However, we also have a custom configuration API that allows
including additional fields in the connection details secret, whether or not
they are sensitive.
As an example, let's use `aws_iam_access_key`. Currently, Upjet stores all
sensitive fields in the Terraform schema with the `attribute.` prefix.
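The hook for adding custom fields is `Sensitive.AdditionalConnectionDetailsFn` on the resource configuration. A hedged sketch (the attribute and connection-detail key names are illustrative):

```go
p.AddResourceConfigurator("aws_iam_access_key", func(r *config.Resource) {
	r.Sensitive.AdditionalConnectionDetailsFn = func(attr map[string]any) (map[string][]byte, error) {
		conn := map[string][]byte{}
		// "user" is an illustrative attribute name from the Terraform state.
		if u, ok := attr["user"].(string); ok {
			conn["iamUser"] = []byte(u)
		}
		return conn, nil
	}
})
```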
### Late Initialization Configuration
Late initialization configuration is only required if there are conflicting
arguments in the Terraform resource configuration. Unfortunately, there is
_no easy way_ to figure that out without testing the resource, _so feel free
to skip this configuration_ in the first place and revisit it _only if_ you
see errors like the one below while testing the resource:
```text
observe failed: cannot run refresh: refresh failed: Invalid combination of arguments:
"address_prefix": only one of `address_prefix,address_prefixes` can be specified, but `address_prefix,address_prefixes` were specified.: File name: main.tf.json
```
If you would like to have the late-initialization library _not_ to process the
[`address_prefix`] parameter field, then the following configuration where we
specify the parameter field path is sufficient:
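A sketch of that configuration, assuming upjet's `config` package (the resource name `azurerm_subnet` is inferred from the linked `address_prefix` documentation):

```go
func Configure(p *config.Provider) {
	p.AddResourceConfigurator("azurerm_subnet", func(r *config.Resource) {
		r.LateInitializer = config.LateInitializer{
			IgnoredFields: []string{"address_prefix"},
		}
	})
}
```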
_Please consider configuring late-initialization behaviour whenever you get
some unexpected error starting with `observe failed:`, once you are sure that
you provided all necessary parameters to your resource._
### Further details on Late Initialization
The Upjet runtime automatically performs late initialization during an
[`external.Observe`] call by means of runtime reflection. The state of the
world observed by the Terraform CLI is used to initialize any `nil`-valued
pointer parameters in the managed resource's `spec`. In most cases, no custom
configuration should be necessary for late initialization to work. However,
in certain cases you will want or need to customize late-initialization
behaviour, so Upjet provides an extensible
[late-initialization customization API] that controls late-initialization
behaviour.
The associated resource struct is defined
[here](https://github.com/crossplane/upjet/blob/c9e21387298d8ed59fcd71c7f753ec401a3383a5/pkg/config/resource.go#L91)
as follows:
```go
// LateInitializer represents configurations that control
// the late-initialization behaviour.
type LateInitializer struct {
	// IgnoredFields are the canonical field paths to be skipped during
	// late-initialization.
	IgnoredFields []string
}
```
Currently, it only involves a configuration option to specify certain `spec`
parameters to be ignored during late-initialization. Each element of the
`LateInitializer.IgnoredFields` slice represents the canonical path relative to
the parameters struct for the managed resource's `Spec` using `Go` type names as
path elements. As an example, with the following type definitions:
```go
type Subnet struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   SubnetSpec   `json:"spec"`
	Status SubnetStatus `json:"status,omitempty"`
}

type SubnetSpec struct {
	ForProvider SubnetParameters `json:"forProvider"`
}

type SubnetParameters struct {
	// +kubebuilder:validation:Optional
	AddressPrefix *string `json:"addressPrefix,omitempty" tf:"address_prefix,omitempty"`

	// +kubebuilder:validation:Optional
	AddressPrefixes []*string `json:"addressPrefixes,omitempty" tf:"address_prefixes,omitempty"`
}
```
In most cases, custom late-initialization configuration will not be necessary.
However, after generating a new managed resource and observing its behaviour (at
runtime), it may turn out that late-initialization behaviour needs
customization. For certain resources like the `provider-tf-azure`'s
`PostgresqlServer` resource, we have observed that Terraform state contains
values for mutually exclusive parameters, e.g., for `PostgresqlServer`, both
message in its `status.conditions`, we do the `LateInitializer.IgnoredFields`
custom configuration detailed above to skip one of the mutually exclusive fields
during late-initialization.
## Overriding Terraform Resource Schema
Upjet generates Crossplane resource schemas (CR spec/status) using the
[Terraform schema of the resource]. As of today, Upjet leverages the following
attributes in the schema:
- [Type] and [Elem] to identify the type of the field.
- [Sensitive] to see if we need to keep it in a Secret instead of CR.
- [Description] to add as a description to the field in CRD.
- [Optional] and [Computed] to identify whether the fields go under spec or
status:
- Not Optional & Not Computed => Spec (required)
- Optional & Not Computed => Spec (optional)
- Optional & Computed => Spec (optional, to be late-initialized)
  - Not Optional & Computed => Status

Most of the time, the generated resource schema just works as is. However, there could be some rare edge cases
like:
- Field contains sensitive information but not marked as `Sensitive` or vice
versa.
- An attribute does not make sense to have in CRD schema, like [tags_all for
provider-upjet-aws resources].
- Moving parameters from Terraform provider config to resources schema to fit
Crossplane model, e.g. [AWS region] parameter is part of provider config in
Terraform but Crossplane expects it in CR spec.
The schema of a resource can be overridden as follows:
```go
p.AddResourceConfigurator("aws_autoscaling_group", func(r *config.Resource) {
	// ...
})
```
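For instance, marking a field as sensitive or dropping an attribute can be sketched as follows (the field names are illustrative; `r.TerraformResource` is the resource's underlying Terraform schema):

```go
p.AddResourceConfigurator("aws_autoscaling_group", func(r *config.Resource) {
	// Treat a field as sensitive even though the Terraform schema does not
	// mark it as such (field name is illustrative).
	if s, ok := r.TerraformResource.Schema["service_linked_role_arn"]; ok {
		s.Sensitive = true
	}
	// Drop an attribute that does not make sense in the CRD schema.
	delete(r.TerraformResource.Schema, "tags_all")
})
```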
## Initializers
Initializers involve operations that run before the beginning of
reconciliation. This configuration option lets you set initializers per
resource.
Many resources in AWS have a `tags` field in their schema, and Crossplane has
a [tagging convention]. This initializer configuration support was added to
implement the tagging convention for the jet-aws provider.
There is a common struct (`Tagger`) in Upjet to implement the tagging
convention:
```go
// Tagger sets the external-tags field of the resource. The struct layout is
// abbreviated here: it carries a Kubernetes client and the target field name.
type Tagger struct {
	kube      client.Client
	fieldName string
}

func (t *Tagger) Initialize(ctx context.Context, mg xpresource.Managed) error {
	// ... (implementation elided)
}
```
As seen above, the `Tagger` struct accepts a `fieldName`, which specifies the
field in the resource's spec to set with the external tags. You can use the
common `Initializer` by specifying the field name that points to the external
tags in the configured resource.
There is also a default initializer for the tagging convention,
`TagInitializer`. It sets the value of `fieldName` to `tags` by default:
```go
// TagInitializer returns a tagger to use default tag initializer.
var TagInitializer NewInitializerFn = func(client client.Client) managed.Initializer {
	return NewTagger(client, "tags")
}
```
In the jet-aws provider, by default, if a resource has a `tags` field in its
schema, then the default initializer (`TagInitializer`) is added to the
resource's initializer list:
```go
// AddExternalTagsField adds ExternalTagsFieldName configuration for resources that have tags field.
func AddExternalTagsField() tjconfig.ResourceOption {
	return func(r *tjconfig.Resource) {
		// ... (adds TagInitializer when the resource schema has a "tags" field)
	}
}
```
However, if the field name used for the external label is different from
`tags`, you can call the `NewTagger` function and pass the specific
`fieldName` to it:
```go
r.InitializerFns = append(r.InitializerFns, func(client client.Client) managed.Initializer {
	return NewTagger(client, "customTagsFieldName") // illustrative field name
})
```
If the above tagging-convention logic does not work for you, and you want to
use this configuration option for something other than the tagging convention
(another custom initializer operation), you need to write your own struct in
the provider and have it implement the `Initializer` interface with your
custom logic.
This configuration option is set by using the [InitializerFns] field that is a
list of [NewInitializerFn]:
```go
// NewInitializerFn returns the Initializer with a client.
type NewInitializerFn func(client client.Client) managed.Initializer

// Initializer performs the initialization operation for a resource before
// reconciliation.
type Initializer interface {
	Initialize(ctx context.Context, mg resource.Managed) error
}
```
So, an interface must be passed to the related configuration field for adding
initializers for a resource.
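A sketch of a custom initializer, with hypothetical names (`accountLabeler`), assuming the crossplane-runtime `managed`/`xpresource` packages and upjet's `r.InitializerFns`:

```go
// accountLabeler is a hypothetical custom initializer that labels every
// managed resource with an account identifier before reconciliation.
type accountLabeler struct {
	kube client.Client
}

func (a *accountLabeler) Initialize(ctx context.Context, mg xpresource.Managed) error {
	labels := mg.GetLabels()
	if labels == nil {
		labels = map[string]string{}
	}
	labels["example.org/account"] = "my-account" // hypothetical label
	mg.SetLabels(labels)
	return a.kube.Update(ctx, mg)
}

// Register it for a resource:
r.InitializerFns = append(r.InitializerFns, func(c client.Client) managed.Initializer {
	return &accountLabeler{kube: c}
})
```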
[comment]: <> (References)
[Upjet]: https://github.com/crossplane/upjet
[External name]: #external-name
[Cross Resource Referencing]: #cross-resource-referencing
[Additional Sensitive Fields and Custom Connection Details]: #additional-sensitive-fields-and-custom-connection-details
[Late Initialization Behavior]: #late-initialization-configuration
[Overriding Terraform Resource Schema]: #overriding-terraform-resource-schema
[concept to identify a resource]: https://www.terraform.io/docs/glossary#id
[the external name documentation]: https://docs.crossplane.io/master/concepts/managed-resources/#naming-external-resources
[import section]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#import
[the types for the External Name configuration]: https://github.com/crossplane/upjet/blob/main/pkg/config/resource.go#L68
[aws_iam_user]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_user
[NameAsIdentifier]: https://github.com/crossplane/upjet/blob/main/pkg/config/externalname.go#L28
[aws_s3_bucket]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket
[import section of s3 bucket]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#import
[bucket]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#bucket
[cluster_identifier]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#cluster_identifier
[aws_vpc]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc
[aws_rds_cluster]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster
[import section of aws_vpc]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc#import
[arguments list]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc#argument-reference
[example usages]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc#example-usage
[IdentifierFromProvider]: https://github.com/crossplane/upjet/blob/main/pkg/config/externalname.go#L42
[a similar identifier]: https://www.terraform.io/docs/glossary#id
[import section of azurerm_sql_server]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/sql_server#import
[handle dependencies]: https://docs.crossplane.io/master/concepts/managed-resources/#referencing-other-resources
[user]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#user
[generate reference resolution methods]: https://github.com/crossplane/crossplane-tools/pull/35
[configuration]: https://github.com/crossplane/upjet/blob/942508c5370a697b1cb81d074933ba75d8f1fb4f/pkg/config/resource.go#L172
[iam_access_key]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#argument-reference
[kms key]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ebs_volume#kms_key_id
[connection details]: https://docs.crossplane.io/master/concepts/managed-resources/#writeconnectionsecrettoref
[id]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#id
[secret]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_access_key#secret
[`external.Observe`]: https://github.com/crossplane/upjet/blob/main/pkg/controller/external.go#L175
[late-initialization customization API]: https://github.com/crossplane/upjet/blob/main/pkg/resource/lateinit.go#L45
[`address_prefix`]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet#address_prefix
[Terraform schema of the resource]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L34
[Type]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L52
[Description]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L120
[Optional]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L80
[Computed]: https://github.com/hashicorp/terraform-plugin-sdk/blob/e3325b095ef501cf551f7935254ce942c44c1af0/helper/schema/schema.go#L139
[boot_disk.initialize_params.labels]: https://github.com/upbound/provider-gcp/blob/main/config/compute/config.go#L121
[tags_all for provider-upjet-aws resources]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/199dbf93b8c67632db50b4f9c0adbd79021146a3/config/overrides.go#L72
[AWS region]: https://github.com/crossplane-contrib/provider-upjet-aws/blob/199dbf93b8c67632db50b4f9c0adbd79021146a3/config/overrides.go#L42
[this figure]: ../docs/images/upjet-externalname.png
[Initializers]: #initializers
[InitializerFns]: https://github.com/crossplane/upjet/blob/92d1af84d24241bef08e6b4a2cfe1ab66a93308a/pkg/config/resource.go#L427
[NewInitializerFn]: https://github.com/crossplane/upjet/blob/92d1af84d24241bef08e6b4a2cfe1ab66a93308a/pkg/config/resource.go#L265
[crossplane-runtime]: https://github.com/crossplane/crossplane-runtime/blob/428b7c3903756bb0dcf5330f40298e1fa0c34301/pkg/reconciler/managed/reconciler.go#L138
[some external labels]: https://github.com/crossplane/crossplane-runtime/blob/428b7c3903756bb0dcf5330f40298e1fa0c34301/pkg/resource/resource.go#L397
[tagging convention]: https://github.com/crossplane/crossplane/blob/60c7df9/design/one-pager-managed-resource-api-design.md#external-resource-labeling
[Naming Conventions - One Pager Managed Resource API Design]: https://github.com/crossplane/crossplane/blob/main/design/one-pager-managed-resource-api-design.md#naming-conventions


<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Identity Based Authentication for Crossplane Providers
- Owner: Alper Rifat Uluçınar (@ulucinar)
- Reviewers: Crossplane Maintainers
- Status: Draft
## Background
Crossplane providers need to authenticate themselves to their respective Cloud
providers. This establishes an identity for the Crossplane provider that's later
used by the Cloud provider to authorize the requests made by the Crossplane
provider and for various other purposes such as audit logging, etc. Each
Crossplane provider supports a subset of the underlying Cloud provider's
authentication mechanisms and this subset is currently implemented in-tree,
i.e., in the Crossplane provider's repo, there exists a CRD that's
conventionally named `ProviderConfig`, and each managed resource of the
provider has a
[v1.Reference](https://docs.crossplane.io/v1.12/concepts/managed-resources/#providerconfigref)
to a `ProviderConfig` CR. This `ProviderConfig` holds the authentication
configuration (chosen authentication method, any required credentials for that
method, etc.) together with any other provider specific configuration. Different
authentication methods and/or different sets of credentials can be configured
using separate cluster-scoped `ProviderConfig` CRs and by having different
managed resources refer to these `ProviderConfig` instances.
The Crossplane provider establishes an identity for the requests it will issue
to the Cloud provider in the
[managed.ExternalConnecter](https://pkg.go.dev/github.com/crossplane/crossplane-runtime@v0.19.2/pkg/reconciler/managed#ExternalConnecter)'s
`Connect` implementation. This involves calling the associated authentication
functions from the Cloud SDK libraries (such as the [AWS SDK for Go][aws-sdk] or
the [Azure SDK for Go][azure-sdk]) with the supplied configuration and
credentials from the referenced `ProviderConfig` instance.
Managed resources and `ProviderConfig`s are cluster-scoped, i.e., they do not
exist within a Kubernetes namespace but rather exist at the global (cluster)
scope. This does not fit well into a namespace-based multi-tenancy model, where
each tenant is confined to its own namespace. The cluster scope is shared
between all namespaces. In the namespace-based multi-tenancy model, the common
approach is to have Role-Based Access Control ([RBAC]) rules that disallow a
tenant from accessing API resources that do not reside in its namespace. Another
dimension to consider here is that all namespaced tenants are serviced by a
shared Crossplane provider deployment typically running in the
`crossplane-system` namespace. This shared provider instance (or more precisely,
the [Kubernetes ServiceAccount][k8s-sa] that the provider's pod uses) is
allowed, via RBAC, to `get` the (cluster-scoped) `ProviderConfig` resources. If
tenant `subjects` (groups, users, ServiceAccounts) are allowed to directly
`create` managed resources, then we cannot constrain them from referring to any
`ProviderConfig` (thus to any Cloud provider credential set) in the cluster
solely using RBAC. This is because:
1. RBAC rules allow designated verbs (`get`, `list`, `create`, `update`, etc.)
on the specified API resources for the specified subjects. If a subject,
e.g., a `ServiceAccount`, is allowed to `create` a managed resource, RBAC
alone cannot be used to constrain the set of `ProviderConfig`s that can be
referenced by the `create`d managed resource.
1. The tenant subject itself does not directly access the `ProviderConfig` and
in turn the Cloud provider credential set referred by the `ProviderConfig`.
It's the Crossplane provider's `ServiceAccount` that accesses these
resources, and as mentioned above, this ServiceAccount currently serves all
tenants. This implies that we cannot isolate Cloud provider credentials among
namespaced tenants by only using RBAC rules if we allow tenant subjects to
have `edit` access (`create`, `update`, `patch`) to managed resources.
Although it's possible to prevent them from reading Cloud provider
credentials of other tenants in the cluster via RBAC rules, it's not possible
to prevent them from _using_ those credentials solely with RBAC.
As discussed in detail in the
[Crossplane Multi-tenancy Guide](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/),
Crossplane is opinionated about the different personas in an organization
adopting Crossplane. We make a distinction between the _infrastructure
operators_ (or _platform builders_) who are expected to manage cluster-scoped
resources (like `ProviderConfig`s, XRDs and `Composition`s) and _application
operators_, who are expected to consume the infrastructure for their
applications. And tenant subjects are classified as _application operators_,
i.e., it's the infrastructure operator's responsibility to manage the
infrastructure _across_ the tenants via cluster-scoped Crossplane resources, and
it's possible, and desirable from an isolation perspective, to prevent
application operators, who are tenant subjects, from directly accessing these
shared cluster-scoped resources. This distinction is currently possible with Crossplane
because:
1. Crossplane `Claim` types are defined via cluster-scoped XRDs by
infrastructure operators and _namespaced_ `Claim` instances are used by the
tenant subjects. This allows infrastructure operators to define RBAC rules
that allow tenant subjects to only access resources in their respective
namespaces, e.g., `Claim`s.
1. However, item 1 alone is not sufficient, as the scheme is still prone to
   privilege escalation attacks if the API exposed by the XR is not well
designed. The (shared) provider `ServiceAccount` has access to all Cloud
provider credentials in the cluster and if the exposed XR API allows a
`Claim` to reference cross-tenant `ProviderConfig`s, then a misbehaving
tenant subject can `create` a `Claim` which references some other tenant's
credential set. Thus in our multi-tenancy
[guide](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/), we
propose a security scheme where:
1. The infrastructure operator follows a specific naming convention for the
`ProviderConfig`s she provisions: The `ProviderConfig`s for different
tenants are named after those tenants' namespaces.
2. The infrastructure operator carefully designs `Composition`s that patch
`spec.providerConfigRef` of composed resources using the `Claim`'s
namespace.
3. Tenant subjects are **not** allowed to provision managed resources
directly (and also XRDs or `Composition`s) but only `Claim`s in their
namespaces. And any `Composition` they can select with their `Claim`s will
compose resources that refer to a `ProviderConfig` provisioned for their
tenant (the `ProviderConfig` with the same name as the tenant's
namespace).
4. We also suggest that the naming conventions imposed by this scheme on
`ProviderConfig`s can be relaxed to some degree by using `Composition`'s
[patching capabilities](https://docs.crossplane.io/v1.12/concepts/composition/#compositions).
For instance, a string [transform][patch-transform] of type `Format` can
be used to combine the `Claim`'s namespace with an XR field's value to
allow multiple `ProviderConfig`s per tenant and to allow selection of the
`ProviderConfig` with the `Claim`.
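The naming convention in steps 1–4 can be illustrated with a small helper. This is a hypothetical sketch (the `providerConfigName` function and its suffix parameter are illustrative, not part of Crossplane); it mirrors what a `Composition` patch with a string transform of type `Format` would compute from the `Claim`'s namespace:

```go
package main

import "fmt"

// providerConfigName derives the ProviderConfig name a composed resource
// should reference under the naming convention described above: the tenant's
// namespace, optionally combined with a suffix taken from an XR field to
// allow multiple ProviderConfigs per tenant.
func providerConfigName(claimNamespace, xrSuffix string) string {
	if xrSuffix == "" {
		// Base convention: one ProviderConfig named after the namespace.
		return claimNamespace
	}
	// Relaxed convention: "<namespace>-<suffix>" still keeps the tenant
	// prefix, so a Claim can only select ProviderConfigs of its own tenant.
	return fmt.Sprintf("%s-%s", claimNamespace, xrSuffix)
}

func main() {
	fmt.Println(providerConfigName("tenant1", ""))     // tenant1
	fmt.Println(providerConfigName("tenant1", "prod")) // tenant1-prod
}
```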
As explained above, RBAC rules can only impose restrictions on the actions
(`get`, `update`, etc.) performed on the API resource endpoints but they cannot
impose constraints on the API resources themselves (objects) available at these
endpoints. Thus, we also discuss using one of the available policy engines that
can run integrated with the Kubernetes API server to further impose restrictions
on the resources. For example, the following [kyverno] [policy][kyverno-policy]
prevents a tenant subject (`tenant1/user1`) from specifying in `Claim`s any
`ProviderConfig` names without the prefix `tenant1` (_please do not use this
example policy in production environments as it has a security vulnerability as
we will discuss shortly_):
```yaml
# XRD
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: compositeresourcegroups.example.org
spec:
  group: example.org
  names:
    kind: CompositeResourceGroup
    plural: compositeresourcegroups
  claimNames:
    kind: ClaimResourceGroup
    plural: claimresourcegroups
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
                providerConfigName:
                  type: string
              required:
                - name
---
# kyverno ClusterPolicy
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant1
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: check-for-providerconfig-ref
      match:
        any:
          - resources:
              kinds:
                # G/V/K for the Claim type
                - example.org/v1alpha1/ClaimResourceGroup
            subjects:
              - kind: User
                name: tenant1/user1
      validate:
        message:
          "Only ProviderConfig names that have the prefix tenant1 are allowed
          for users under tenant1"
        pattern:
          spec:
            providerConfigName: tenant1*
---
# related patch in a Composition
---
patches:
  - fromFieldPath: spec.providerConfigName
    toFieldPath: spec.providerConfigRef.name
```
### Limitations of Naming Convention-based or Admission Controller-based Approaches
The naming convention-based or admission controller-based approaches described
above are not straightforward to configure, especially if you also consider that
in addition to the RBAC configurations needed to isolate the tenants
(restricting access to the cluster-wide resources), resource quotas and network
policies are also needed to properly isolate and fairly distribute the worker
node resources and the network resources, respectively. Also, due to the
associated complexity, it's easy to misconfigure the cluster and difficult to
verify that a given security configuration guarantees proper isolation between
the tenants.
As an example, consider the Kyverno `ClusterPolicy` given above: While the
intent is to restrict the users under `tenant1` to using only the
`ProviderConfig`s installed for them (e.g., those with names `tenant1*`), the
scheme is broken if there exists a tenant in the system with `tenant1` as a
prefix to its name, such as `tenant10`.
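A minimal sketch of this pitfall, with hypothetical helper names (`allowedByPrefix`, `allowedByDelimitedPrefix` are illustrative, not part of any policy engine); the second check shows one way to harden the pattern by requiring a delimiter after the tenant name:

```go
package main

import (
	"fmt"
	"strings"
)

// allowedByPrefix mimics the kyverno pattern "tenant1*": any ProviderConfig
// name starting with the tenant name passes. This is the vulnerable check.
func allowedByPrefix(tenant, providerConfigName string) bool {
	return strings.HasPrefix(providerConfigName, tenant)
}

// allowedByDelimitedPrefix requires the tenant name to be followed by a
// delimiter (or to match exactly), so "tenant10-default" no longer matches
// tenant "tenant1". A hypothetical hardening, not the policy shown above.
func allowedByDelimitedPrefix(tenant, providerConfigName string) bool {
	return providerConfigName == tenant ||
		strings.HasPrefix(providerConfigName, tenant+"-")
}

func main() {
	// The naive prefix check lets tenant1 use tenant10's ProviderConfig.
	fmt.Println(allowedByPrefix("tenant1", "tenant10-default"))          // true
	fmt.Println(allowedByDelimitedPrefix("tenant1", "tenant10-default")) // false
	fmt.Println(allowedByDelimitedPrefix("tenant1", "tenant1-default"))  // true
}
```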
Organizations, especially those with hard multi-tenancy requirements (i.e.,
with tenants assumed to be untrustworthy or actively malicious), may avoid or
strictly forbid such approaches. The architectural problem here, from a security
perspective, is that the Crossplane provider (and also the core Crossplane
components) is a shared resource itself and it requires cross-tenant privileges
such as accessing cluster-wide resources and accessing each tenant's namespaced
resources (especially tenant Cloud credentials). This increases the attack
surface in the dimensions of:
- Logical vulnerabilities (see the above example for a misconfiguration)
- Isolation vulnerabilities: For instance, controller *workqueue*s become shared
resources between the tenants. How can we ensure, for instance, that the
workqueue capacity is fairly shared between the tenants?
- Code vulnerabilities: As an example, consider a hypothetical Crossplane
provider bug in which the provider fetches another `ProviderConfig` than the
one declared in the managed resource, or other credentials than the ones
declared in the referred `ProviderConfig`. Although the logical barriers
enforced by the `Composition`s or the admission controllers as described above
are not broken, the over-privileged provider itself breaks the cross-tenant
barrier.
In the current Crossplane provider deployment model, when a Crossplane provider
package is installed, there can be a single _active_ `ProviderRevision`
associated with it, which owns (via an owner reference) the Kubernetes
deployment for running the provider. This single deployment, in turn, specifies
a single Kubernetes service account under which the provider runs.
Apart from a vulnerability perspective, there are also some other limitations to
this architecture, which are related to identity-based authentication.
> [!NOTE]
> The [multi-tenancy guide](https://docs.crossplane.io/knowledge-base/guides/multi-tenant/)
> also mentions multi-cluster multi-tenancy, where tenants are run on their
> respective Kubernetes clusters. This form of multi-tenancy is out of scope in
> this document.
### Identity-based Authentication Schemes
Various Cloud providers, such as AWS, Azure and GCP, have some means of
identity-based authentication. With identity-based authentication an entity,
such as a Cloud service (a database server, a Kubernetes cluster, etc.) or a
workload (an executable running in a VM, a pod running in a Kubernetes cluster)
is assigned a Cloud identity and further authorization checks are performed
against this identity. The advantage of identity-based authentication is that
no manually provisioned credentials are required.
The traditional way of authenticating a Crossplane provider to the Cloud
provider is to first provision a Cloud identity, such as an AWS IAM user, a GCP
service account, or an Azure AD service principal, together with a set of
credentials associated with that identity (such as an AWS access key, a GCP
service account key, or an Azure client ID & secret), and then to provision a
Kubernetes secret containing these credentials. A `ProviderConfig` then refers
to this Kubernetes secret. This flow has some undesirable consequences:
- The associated Cloud credentials are generally long-term credentials and
require manual rotation.
- For fine-grained access control, you need multiple identities with such
credentials to be manually managed & rotated.
- These generally result in reusing such credentials, which in turn prevents
fine-grained access control and promotes aggregation of privileges.
Different Cloud providers have different identity-based authentication
implementations:
**AWS**: [EKS node IAM roles][aws-eks-node-iam], or IAM roles for service
accounts ([IRSA]) both allow for identity-based authentication. IRSA has
eliminated the need for some third-party solutions such as [kiam] or [kube2iam]
and associates an IAM role with a Kubernetes service account. Using IRSA for
authenticating `provider-aws` is [possible][provider-aws-irsa]. IRSA leverages
the [service account token volume projection][k8s-sa-projection] support
introduced with Kubernetes 1.12. When enabled, `kubelet`
[projects][k8s-volume-projection] a signed OIDC JWT for a pod's service account
at the requested volume mount path in a container and periodically rotates the
token. An AWS client can then exchange this token (issued by the API server)
with _temporary_ credentials for an IAM role via the AWS Security Token Service
([STS]) [AssumeRoleWithWebIdentity] API operation. The IAM role to be associated
with the Kubernetes service account can be specified via an annotation on the
service account (`eks.amazonaws.com/role-arn`). As we will discuss later, this
can also be used in conjunction with IAM role chaining to implement fine-grained
access control.
As of this writing, `provider-aws` [supports][provider-aws-auth] `IRSA`, role
chaining (via the [STS] [AssumeRole] API operation), and the [STS
AssumeRoleWithWebIdentity][AssumeRoleWithWebIdentity] API operation. This allows
us to authenticate `provider-aws` using the projected service account token by
exchanging it with a set of temporary credentials associated with an IAM role.
This set of temporary credentials consists of an access key ID, a secret access
key and a security token. Also the target IAM role ARN (Amazon Resource Name) is
configurable via the `provider-aws`'s `ProviderConfig` API. This allows
Crossplane users to implement a fine-grained access policy for different tenants
possibly using different AWS accounts:
- The initial IAM role, which is the target IAM role for the `IRSA`
authentication (via the `AssumeRoleWithWebIdentity` STS API operation) does
not need privileges on the managed external resources when role chaining is
used.
- `provider-aws` then assumes another IAM role by exchanging the initial set of
temporary credentials via STS role chaining. However, currently the
`ProviderConfig` API does not allow chains of length greater than one, i.e.,
`provider-aws` can only call the STS `AssumeRole` API once in a given chain.
This is currently an artificial limitation in `provider-aws` imposed by the
`ProviderConfig` API.
- The target role ARN for the initial IRSA `AssumeRoleWithWebIdentity` operation
is configurable via the `ProviderConfig` API. Thus, if a proper cross-AWS
account trust policy exists between the EKS cluster's OIDC provider and a
target IAM role in a different account (than the account owning the EKS
cluster and the OIDC provider), then it's possible to switch to an IAM role in
that target AWS account.
- Privileges on the managed external resources need to be defined on the target
IAM roles of the STS `Assume*` operations. And as mentioned, fine-grained
access policies can be defined on these target roles which are configurable
with the `ProviderConfig` API.
- When combined with the already available single-cluster multi-tenancy
techniques discussed above, this allows `provider-aws` users to isolate their
tenant identities and the privileges required for those identities.
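The chaining constraints above can be sketched as a small validation. The types are hypothetical (`roleChain` and its fields are illustrative; `provider-aws` expresses this configuration through its `ProviderConfig` API), encoding the current limitation that only one `AssumeRole` step may follow the initial web identity role:

```go
package main

import (
	"fmt"
	"strings"
)

// roleChain models the authentication flow described above: an initial IAM
// role assumed via AssumeRoleWithWebIdentity, optionally followed by STS
// AssumeRole calls.
type roleChain struct {
	WebIdentityRoleARN string   // initial role; needs no resource privileges
	AssumeRoleARNs     []string // subsequent STS AssumeRole targets
}

// validate enforces the current provider-aws limitation that only a single
// AssumeRole step may follow the initial web identity role.
func (c roleChain) validate() error {
	if !strings.HasPrefix(c.WebIdentityRoleARN, "arn:aws:iam::") {
		return fmt.Errorf("invalid initial role ARN %q", c.WebIdentityRoleARN)
	}
	if len(c.AssumeRoleARNs) > 1 {
		return fmt.Errorf("role chains longer than 1 are not supported")
	}
	return nil
}

func main() {
	ok := roleChain{
		WebIdentityRoleARN: "arn:aws:iam::111111111111:role/initial",
		AssumeRoleARNs:     []string{"arn:aws:iam::222222222222:role/tenant-a"},
	}
	fmt.Println(ok.validate() == nil) // true: a chain of length 1 is accepted
}
```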
From the relevant discussions on `provider-aws` surveyed for this writing, this
level of tenant isolation has mostly been sufficient for `provider-aws` users.
But as discussed above, a deeper isolation is still possible. Especially in the
currently feasible `provider-aws` authentication scheme, the initial
`AssumeRoleWithWebIdentity` target IAM role is still shared by the tenants
although it does not require privileges on the managed external resources. But
due to vulnerabilities discussed in the [Limitations of Naming Convention-based
or Admission Controller-based Approaches] section above, it could still be
possible for a tenant to assume an IAM role with more privileges than it needs,
starting with the shared `AssumeRoleWithWebIdentity` target IAM role. A deeper
isolation between tenants would be possible if it were possible to have a
Kubernetes service account and an associated (initial) non-shared IAM role
assigned to each tenant.
As of this writing, `provider-jet-aws` supports IRSA authentication with support
for role chaining via the STS `AssumeRole` API operation. Similar to
`provider-aws`, only chains of length `1` are allowed. Also, `provider-jet-aws`
does not currently support specifying the target `AssumeRoleWithWebIdentity` IAM
role via the `ProviderConfig` API. And unlike `provider-aws`, `provider-jet-aws`
does not support specifying external IDs, session tags or transitive tag keys
for the `AssumeRole` operation, or specifying session names for the
`AssumeRoleWithWebIdentity` operation.
**Azure**: Azure has the notion of system-assigned or user-assigned [managed
identities][azure-msi], which allow authentication to any resource that supports
Azure AD authentication. Some Azure services, such as AKS, allow a managed
identity to be enabled directly on a service's instance (system-assigned). Or a
user-assigned managed identity can be provisioned and assigned to the service
instance. Similar to AWS IRSA, Azure has also introduced [Azure AD workload
identities][azure-wi], which work in a similar way to IRSA:
| |
| :--------------------------------------------------: |
| <img src="images/azure-wi.png" alt="drawing" /> |
| Azure AD Workload Identities (reproduced from [[1]]) |
In Azure AD workload identities, similar to IRSA, a Kubernetes service account
is associated with an Azure AD application client ID via the
`azure.workload.identity/client-id` annotation on the service account object.
As of this writing, neither `provider-azure` nor `provider-jet-azure` supports
Azure workload identities. The native Terraform `azurerm` provider itself
currently does _not_ support workload identities, so there are technical
challenges in introducing workload identity support in `provider-jet-azure`.
However, using lower-level APIs (e.g., the [Azure Identity SDK for
Go][azidentity]), it should be possible to [implement][azure-329] workload
identities for `provider-azure`.
Both `provider-azure` and `provider-jet-azure` support system-assigned and
user-assigned managed identities as an alternate form of identity-based
authentication (with `provider-azure` support being introduced by this
[PR][azure-330]).
Using system-assigned managed identities, it's _not_ possible to implement an
isolation between tenants (see the discussion above for `provider-aws`) by using
separate Azure AD (AAD) applications (service principals) for them, because the
system-assigned managed identity is shared between those tenants and currently
it's not possible to switch identities within the Crossplane Azure providers\*.
However, using user-assigned managed identities and per-tenant `ProviderConfig`s
as discussed above in the context of single-cluster multi-tenancy, it's possible
to implement fine-grained access control for tenants again with the same
limitations mentioned there.
\*: Whether there exists an Azure service (similar to the [STS] of AWS) that
allows us to exchange credentials of an AAD application with (temporary)
credentials of another AAD application needs further investigation.
**GCP**: GCP also [recommends][gcp-wi] workload identities for assigning
identities to workloads running in GKE clusters. With GKE workload identities, a
Kubernetes service account is associated with a GCP IAM service account. And
similar to AWS and Azure, GCP also uses an annotation
(`iam.gke.io/gcp-service-account`) on the Kubernetes service account object
which specifies the GCP service account to be impersonated.
As of this writing, both `provider-gcp` and `provider-jet-gcp` support workload
identities, which are based on Kubernetes service accounts similar to AWS IRSA
and Azure AD workload identities. Thus, current implementations share the same
limitations detailed in [Limitations of Naming Convention-based or Admission
Controller-based Approaches].
**Summary for the existing Crossplane AWS, Azure & GCP providers**:
In all the three Kubernetes workload identity schemes introduced above, a
Kubernetes service account is mapped to a Cloud provider identity (IAM
role/service account, AD application, etc.). And as explained in depth above, the
current Crossplane provider deployment model allows the provider to be run under
a single Kubernetes service account.
Users of `provider-aws` have so far combined [IRSA] with AWS STS role chaining
(`AssumeRoleWithWebIdentity` and `AssumeRole` STS API operations) to meet their
organizational requirements around least-privilege and fine-grained access
control, and they have isolated their tenants sharing the same Crossplane
control-plane using the single-cluster multi-tenancy techniques described above.
However, currently lacking similar semantics for "role chaining", to the best of
our knowledge, users of AKS and GKE workload identities cannot implement similar
fine-grained access control scenarios because the Crossplane provider is running
as a single Kubernetes deployment, which in turn is associated with a single
Kubernetes service account. And for `provider-aws` users who would like to have
more strict tenant isolation, we need more flexibility in the Crossplane
deployment model.
## Decoupling Crossplane Provider Deployment
Flexibility in Crossplane provider deployment has been discussed especially in
[[2]] and [[3]]. [[2]] proposes a provider partitioning scheme on
`ProviderConfig`s and [[3]] calls for a _Provider Runtime Interface_ for
decoupling the runtime aspects of a provider (where & how a provider is deployed
& run) from the core Crossplane package manager. We can combine these two
approaches to have an extensible, flexible and future-proof deployment model for
Crossplane providers that would also better meet the requirements around tenant
isolation. Instead of partitioning based on `ProviderConfig`s, as an
alternative, we could have an explicit partitioning API based on provider
runtime configurations specified in `Provider.pkg`s:
```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: crossplane-provider-azure
spec:
  package: crossplane/provider-azure:v0.19.0
  ...
  runtimeConfigs:
    - name: deploy-1
      runtime:
        apiVersion: runtime.crossplane.io/v1alpha1
        kind: KubernetesDeployment
        spec:
          # ControllerConfig reference that defines the corresponding Kubernetes deployment
          controllerConfigRef:
            name: cc-1
    - name: deploy-2
      runtime:
        apiVersion: runtime.crossplane.io/v1alpha1
        kind: KubernetesDeployment
        spec:
          # ControllerConfig reference that defines the corresponding Kubernetes deployment
          controllerConfigRef:
            name: cc-2
    - name: container-1
      runtime:
        apiVersion: runtime.crossplane.io/v1alpha1
        kind: DockerContainer
        spec:
          # some Docker client options
          host: /var/run/docker.sock
          config: ...
          # some docker run options
          runOptions:
            user: ...
            network: ...
    - ...
```
In the proposed scheme, the `PackageRevision` controller would no longer
directly manage a Kubernetes deployment for the active revision. Instead it
would provision, for the active revision, a number of Kubernetes resources
corresponding to each runtime configuration specified in the `runtimeConfigs`
array. For the above example, the `PackageRevision` controller would provision
two `KubernetesDeployment` and one `DockerContainer` _runtime configuration_
resources for the active revision. An example `KubernetesDeployment` object
provisioned by the `PackageRevision` controller could look like the following:
```yaml
apiVersion: runtime.crossplane.io/v1alpha1
kind: KubernetesDeployment
metadata:
  name: deploy-1
  ownerReferences:
    - apiVersion: pkg.crossplane.io/v1
      controller: true
      kind: ProviderRevision
      name: crossplane-provider-azure-91818efefdbe
      uid: 3a58c719-019f-43eb-b338-d6116e299974
spec:
  crossplaneProvider: crossplane/provider-azure-controller:v0.19.0
  # ControllerConfig reference that defines the corresponding Kubernetes deployment
  controllerConfigRef:
    name: cc-1
```
As an alternative, in order to deprecate the `ControllerConfig` API, the
`KubernetesDeployment` could also be defined as follows:
```yaml
---
runtimeConfigs:
  - name: deploy-1
    runtime:
      apiVersion: runtime.crossplane.io/v1alpha1
      kind: KubernetesDeployment
      spec:
        template:
          # metadata that defines the corresponding Kubernetes deployment's metadata
          metadata: ...
          # spec that defines the corresponding Kubernetes deployment's spec
          spec: ...
```
This scheme makes the runtime implementation pluggable, i.e., in different
environments we can have different _provider runtime configuration_ controllers
running (as Kubernetes controllers) with different capabilities. For instance,
the existing deployment implementation embedded into the `PackageRevision`
controller can still be shipped with the core Crossplane with a corresponding
runtime configuration object. But another runtime configuration controller,
which is also based on Kubernetes deployments, can implement advanced isolation
semantics.
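A rough sketch of what such a pluggable runtime contract might look like in Go. The `ProviderRuntime` interface and the stand-in `kubernetesDeployment` implementation below are hypothetical illustrations of the _Provider Runtime Interface_ idea from [[3]], not an actual Crossplane API:

```go
package main

import "fmt"

// ProviderRuntime is a hypothetical interface sketching the pluggable runtime
// idea: the PackageRevision controller hands the active revision's runtime
// configuration to a controller implementing this interface instead of
// managing a Kubernetes Deployment itself.
type ProviderRuntime interface {
	// Name identifies the runtime kind, e.g. KubernetesDeployment.
	Name() string
	// Run starts the provider image for the given revision and returns an
	// identifier for the running workload.
	Run(revision, image string) (workloadID string, err error)
}

// kubernetesDeployment is a stand-in implementation; a real one would create
// a Deployment owned by the ProviderRevision.
type kubernetesDeployment struct{ controllerConfig string }

func (k kubernetesDeployment) Name() string { return "KubernetesDeployment" }

func (k kubernetesDeployment) Run(revision, image string) (string, error) {
	// Derive a deterministic workload identifier from the revision and the
	// referenced ControllerConfig.
	return fmt.Sprintf("deploy/%s-%s", revision, k.controllerConfig), nil
}

func main() {
	var rt ProviderRuntime = kubernetesDeployment{controllerConfig: "cc-1"}
	id, _ := rt.Run("crossplane-provider-azure-91818efefdbe", "crossplane/provider-azure-controller:v0.19.0")
	fmt.Println(rt.Name(), id)
}
```

With such a contract, the existing Deployment-based implementation and an advanced-isolation implementation could coexist as alternative controllers reconciling the same runtime configuration objects.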
[1]: https://azure.github.io/azure-workload-identity/docs/introduction.html
[2]: https://github.com/crossplane/crossplane/issues/2411
[3]: https://github.com/crossplane/crossplane/issues/2671
[aws-sdk]: https://github.com/aws/aws-sdk-go-v2
[azure-sdk]: https://github.com/Azure/azure-sdk-for-go
[RBAC]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[k8s-sa]:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
[patch-transform]:
https://github.com/crossplane/crossplane/blob/6c1b06507db47801c7a1c7d91704783e8d13856f/apis/apiextensions/v1/composition_transforms.go#L64
[kyverno]: https://kyverno.io/
[kyverno-policy]: https://kyverno.io/docs/kyverno-policies/
[aws-eks-node-iam]:
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
[IRSA]:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
[kiam]: https://github.com/uswitch/kiam
[kube2iam]: https://github.com/jtblin/kube2iam
[provider-aws-auth]:
https://github.com/crossplane/provider-aws/blob/36299026cd9435c260ad13b32223d2e5fef3c443/AUTHENTICATION.md
[provider-aws-irsa]:
https://github.com/crossplane/provider-aws/blob/36299026cd9435c260ad13b32223d2e5fef3c443/AUTHENTICATION.md#using-iam-roles-for-serviceaccounts
[k8s-sa-projection]:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection
[azure-msi]:
https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
[azure-wi]:
https://azure.github.io/azure-workload-identity/docs/introduction.html
[k8s-volume-projection]:
https://kubernetes.io/docs/concepts/storage/projected-volumes/
[STS]: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
[AssumeRoleWithWebIdentity]:
https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html
[AssumeRole]:
https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
[gcp-wi]:
https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
[azidentity]: https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/azidentity
[azure-329]: https://github.com/crossplane/provider-azure/issues/329
[azure-330]: https://github.com/crossplane/provider-azure/pull/330
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>

SPDX-License-Identifier: CC-BY-4.0
-->

# Generating a Crossplane provider

This guide shows you how to generate a Crossplane provider based on an existing
Terraform provider using Upjet. The guide uses the [Terraform GitHub provider]
as the example, but the process is similar for any other Terraform provider.
## Prepare your new provider repository

1. Create a new GitHub repository for the Crossplane provider by clicking the
   "**Use this template**" button in the [upjet-provider-template] repository.
   The expected repository name is in the format `provider-<name>`. For
   example, `provider-github`. The script in step 3 expects this format and
   fails if you follow a different naming convention.
1. Clone the repository to your local environment and `cd` into the repository
   directory.
1. Fetch the [upbound/build] submodule by running the following command:

   ```bash
   make submodules
   ```

1. To set up your provider name and group, run the `./hack/prepare.sh` script
   from the repository root to prepare the code:

   ```bash
   ./hack/prepare.sh
   ```
1. Ensure your organization name is correct in the `Makefile` for the
   `PROJECT_REPO` variable.
1. To configure which Terraform provider to generate from, update the following
   variables in the `Makefile`:

   | Variable | Description |
   | -------- | ----------- |
   | `TERRAFORM_PROVIDER_SOURCE` | Find this variable on the Terraform registry for the provider. You can see the source value when clicking on the "`USE PROVIDER`" dropdown button in the navigation. |
   | `TERRAFORM_PROVIDER_REPO` | The URL to the repository that hosts the provider's code. |
   | `TERRAFORM_PROVIDER_VERSION` | Find this variable on the Terraform registry for the provider. You can see the version value when clicking on the "`USE PROVIDER`" dropdown button in the navigation. |
   | `TERRAFORM_PROVIDER_DOWNLOAD_NAME` | The name of the provider in the [Terraform registry](https://releases.hashicorp.com/). |
   | `TERRAFORM_NATIVE_PROVIDER_BINARY` | The name of the binary in the Terraform provider. This follows the pattern `terraform-provider-{provider name}_v{provider version}`. |
   | `TERRAFORM_DOCS_PATH` | The relative path, from the root of the repository, where the provider resource documentation exists. |

   For example, for the [Terraform GitHub provider], the variables are:

   ```makefile
   export TERRAFORM_PROVIDER_SOURCE := integrations/github
   export TERRAFORM_PROVIDER_REPO := https://github.com/integrations/terraform-provider-github
   export TERRAFORM_PROVIDER_VERSION := 5.32.0
   export TERRAFORM_PROVIDER_DOWNLOAD_NAME := terraform-provider-github
   export TERRAFORM_NATIVE_PROVIDER_BINARY := terraform-provider-github_v5.32.0
   export TERRAFORM_DOCS_PATH := website/docs/r
   ```

   Refer to [the Dockerfile](https://github.com/upbound/upjet-provider-template/blob/main/cluster/images/upjet-provider-template/Dockerfile) to see the variables called when building the provider.
## Configure the provider resources

1. First you need to add the `ProviderConfig` logic.
   - In `upjet-provider-template`, there is already boilerplate code in the
     file `internal/clients/github.go` which takes care of fetching secret
     data referenced from the `ProviderConfig` resource.
   - Reference the [Terraform GitHub provider] documentation for information
     on authentication and provide the necessary keys:

   ```go
   const (
   	...
   	keyBaseURL = "base_url"
   	keyOwner   = "owner"
   	keyToken   = "token"
   )
   ```

   ```go
   func TerraformSetupBuilder(version, providerSource, providerVersion string) terraform.SetupFn {
   	...
   	// set provider configuration
   	ps.Configuration = map[string]any{}
   	if v, ok := creds[keyBaseURL]; ok {
   		ps.Configuration[keyBaseURL] = v
   	}
   	if v, ok := creds[keyOwner]; ok {
   		ps.Configuration[keyOwner] = v
   	}
   	if v, ok := creds[keyToken]; ok {
   		ps.Configuration[keyToken] = v
   	}
   	return ps, nil
   }
   ```
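The boilerplate's credential handling can be sketched as follows. `extractCreds` is a hypothetical, simplified stand-in for what the template code does with the secret payload referenced by the `ProviderConfig`: the raw bytes hold a JSON object whose keys (`base_url`, `owner`, `token`) feed the Terraform provider configuration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractCreds unmarshals the secret payload into a string map and copies the
// known keys into a Terraform provider configuration map. Key names follow
// the guide above; error handling is simplified for illustration.
func extractCreds(data []byte) (map[string]any, error) {
	creds := map[string]string{}
	if err := json.Unmarshal(data, &creds); err != nil {
		return nil, fmt.Errorf("cannot unmarshal credentials: %w", err)
	}
	cfg := map[string]any{}
	for _, key := range []string{"base_url", "owner", "token"} {
		if v, ok := creds[key]; ok {
			cfg[key] = v
		}
	}
	return cfg, nil
}

func main() {
	cfg, _ := extractCreds([]byte(`{"token":"ghp_example","owner":"myorg"}`))
	fmt.Println(cfg["owner"], cfg["token"])
}
```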
1. Next add external name configurations for the [github_repository] and
   [github_branch] Terraform resources.

   > [!NOTE]
   > Only generate resources with an external name configuration defined.

   - Add external name configurations for these two resources in
     `config/external_name.go` as an entry to the map called
     `ExternalNameConfigs`

   ```go
   // ExternalNameConfigs contains all external name configurations for this
   // provider.
   var ExternalNameConfigs = map[string]config.ExternalName{
   	...
   	// Name is a parameter and it is also used to import the resource.
   	"github_repository": config.NameAsIdentifier,
   	// The import ID consists of several parameters. We'll use branch name as
   	// the external name.
   	"github_branch": config.TemplatedStringAsIdentifier("branch", "{{ .parameters.repository }}:{{ .external_name }}:{{ .parameters.source_branch }}"),
   }
   ```

   - Take a look at the documentation for configuring a resource for more
     information about [external name configuration](configuring-a-resource.md#external-name).
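To see how such a templated external name could expand into a Terraform import ID, here is a hypothetical sketch using Go's `text/template` (the same template syntax as the string above; the data layout and `expandExternalName` helper are illustrative assumptions, not upjet's actual implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// expandExternalName parses and executes an external-name template against a
// simplified data map, producing the import ID string.
func expandExternalName(tmpl string, data map[string]any) (string, error) {
	t, err := template.New("external-name").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	id, _ := expandExternalName(
		"{{ .parameters.repository }}:{{ .external_name }}:{{ .parameters.source_branch }}",
		map[string]any{
			"parameters":    map[string]any{"repository": "my-repo", "source_branch": "main"},
			"external_name": "feature-x",
		},
	)
	fmt.Println(id) // my-repo:feature-x:main
}
```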
1. Next add custom configurations for these two resources as follows:
- Create custom configuration directories for the repository and branch groups:
```bash
# Create custom configuration directory for whole repository group
mkdir config/repository
# Create custom configuration directory for whole branch group
mkdir config/branch
```
- Create the repository group configuration file:
```bash
cat <<EOF > config/repository/config.go
package repository

import "github.com/crossplane/upjet/v2/pkg/config"

// Configure configures individual resources by adding custom
// ResourceConfigurators.
func Configure(p *config.Provider) {
    p.AddResourceConfigurator("github_repository", func(r *config.Resource) {
        // We need to override the default group that upjet generated for
        // this resource, which would be "github".
        r.ShortGroup = "repository"
    })
}
EOF
```
- Create the branch group configuration file:
> [!NOTE]
> Change `myorg/provider-github` below to your own organization/repository.
```bash
cat <<EOF > config/branch/config.go
package branch

import "github.com/crossplane/upjet/v2/pkg/config"

func Configure(p *config.Provider) {
    p.AddResourceConfigurator("github_branch", func(r *config.Resource) {
        // We need to override the default group that upjet generated for
        // this resource, which would be "github".
        r.ShortGroup = "branch"
        // The identifier for this resource is assigned by the provider. In
        // other words, it is not simply the name of the resource.
        r.ExternalName = config.IdentifierFromProvider
        // This resource needs the repository in which the branch will be
        // created as an input. By defining it as a reference to a Repository
        // object, we can build cross-resource referencing. See repositoryRef
        // in the example in the Testing section below.
        r.References["repository"] = config.Reference{
            Type: "github.com/myorg/provider-github/apis/repository/v1alpha1.Repository",
        }
    })
}
EOF
```
- Register the configuration functions in `config/provider.go`:
```go
import (
    ...
    ujconfig "github.com/crossplane/upjet/v2/pkg/config"

    "github.com/myorg/provider-github/config/branch"
    "github.com/myorg/provider-github/config/repository"
)

func GetProvider() *ujconfig.Provider {
    ...
    for _, configure := range []func(provider *ujconfig.Provider){
        // add custom config functions
        repository.Configure,
        branch.Configure,
    } {
    ...
}
```
_To learn more about custom resource configurations (in step 7), please
see the [Configuring a Resource](configuring-a-resource.md) document._
8. Now we can generate our Upjet provider. Before running `make generate`,
make sure `goimports` is installed:
```bash
go install golang.org/x/tools/cmd/goimports@latest
```
```bash
make generate
```
### Adding New Resources
To add more resources, please **follow steps 6-8 for each resource**.
Alternatively, you can drop the `ujconfig.WithIncludeList` option in the
provider configuration, which generates all resources, and then add resource
configurations as a next step.
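The include-list entries are matched as regular expressions against Terraform resource names, which is why the examples above end with `$`. A small self-contained sketch — plain `regexp`, not upjet code — of why the anchor matters:

```go
package main

import (
	"fmt"
	"regexp"
)

// matches reports whether a Terraform resource name matches an include-list
// pattern, mirroring regex-style matching of include lists.
func matches(pattern, resource string) bool {
	return regexp.MustCompile(pattern).MatchString(resource)
}

func main() {
	// Anchored: only the exact resource is selected.
	fmt.Println(matches("github_repository$", "github_repository")) // true
	// Without the anchor, the pattern would also select
	// github_repository_file, github_repository_webhook, etc.
	fmt.Println(matches("github_repository$", "github_repository_file")) // false
	fmt.Println(matches("github_repository", "github_repository_file"))  // true
}
```

Anchoring keeps `github_repository` from accidentally pulling in every resource whose name merely starts with that string.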
## Testing the generated resources
Now let's test our generated resources.
1. Create example directories for the resource groups and remove the
template's sample examples:
```bash
mkdir examples/repository examples/branch
# remove the sample directory which was an example in the template
rm -rf examples/null
```
Create a provider secret template:
Create an example for the `repository` resource, which will use the
`upjet-provider-template` repo as the template for the repository to be
created:
```bash
cat <<EOF > examples/repository/repository.yaml
apiVersion: repository.github.upbound.io/v1alpha1
kind: Repository
metadata:
  name: hello-crossplane
spec:
  forProvider:
    description: "Managed with Crossplane Github Provider (generated with Upjet)"
    visibility: public
    template:
      owner: upbound
      repository: upjet-provider-template
  providerConfigRef:
    name: default
EOF
```
Create a `branch` resource which refers to the above repository managed
resource:
```bash
cat <<EOF > examples/branch/branch.yaml
apiVersion: branch.github.upbound.io/v1alpha1
kind: Branch
metadata:
  name: hello-upjet
spec:
  forProvider:
    branch: hello-upjet
    repositoryRef:
      name: hello-crossplane
  providerConfigRef:
    name: default
EOF
```
To change the `apiVersion`, you can use the `WithRootGroup` and
`WithShortName` options in `config/provider.go` as arguments to
`ujconfig.NewProvider`.
2. Generate a [Personal Access Token](https://github.com/settings/tokens) for
your Github account with `repo/public_repo` and `delete_repo` scopes.
5. Run the provider:
Please make sure Terraform is installed before running `make run`; see
[this guide](https://developer.hashicorp.com/terraform/downloads).
```bash
make run
```
6. Apply the ProviderConfig and example manifests (_in another terminal,
since the previous command is blocking_):
```bash
# Create "crossplane-system" namespace if not exists
```
7. Verify that the repo `hello-crossplane` and the branch `hello-upjet` were
created under your GitHub account.
8. You can check the errors and events by calling `kubectl describe` for either
of the resources.
9. Cleanup
Verify that the repo is deleted once deletion completes on the control
plane.
## Next steps
Now that you've seen the basics of generating `CustomResourceDefinitions` for
your provider, you can learn more about
[configuring resources](configuring-a-resource.md) or
[testing your resources](testing-with-uptest.md) with Uptest.
[Terraform GitHub provider]: https://registry.terraform.io/providers/integrations/github/latest/docs
[upjet-provider-template]: https://github.com/upbound/upjet-provider-template
[upbound/build]: https://github.com/upbound/build
[Terraform documentation for provider configuration]: https://registry.terraform.io/providers/integrations/github/latest/docs#argument-reference
[github_repository]: https://registry.terraform.io/providers/integrations/github/latest/docs/resources/repository
[github_branch]: https://registry.terraform.io/providers/integrations/github/latest/docs/resources/branch
[this line in controller Dockerfile]: https://github.com/upbound/upjet-provider-template/blob/main/cluster/images/official-provider-template-controller/Dockerfile#L18-L26
[terraform-plugin-sdk]: https://github.com/hashicorp/terraform-plugin-sdk
## Manual Migration Guide to Official Providers
This document describes the steps required to migrate manually from
community providers to official providers. We plan to implement a
client-based tool to automate this process.
For the sake of simplicity, this guide focuses only on migrating managed
resources and compositions. These scenarios can be extended with other tools
like ArgoCD, Flux, Helm, Kustomize, etc.
### Migrating Managed Resources
Migrating existing managed resources to official providers is essentially an
import scenario: we rewrite the manifests from the community provider's
schema to the official provider's schema and apply them to import the
existing cloud resources.
To prevent two provider controllers from reconciling the same external
resource, we scale down the old provider. Alternatively, this can be avoided
with the new [pause annotation feature].
1) Backup managed resource manifests
```bash
kubectl get managed -o yaml > backup-mrs.yaml
```
2) Update deletion policy to `Orphan` with the command below:
```bash
kubectl patch $(kubectl get managed -o name) -p '{"spec": {"deletionPolicy":"Orphan"}}' --type=merge
```
3) Install the official provider
4) Install provider config
5) Update the managed resource manifests to the new `upbound.io` API version,
the new external-name annotations, and the new field names/types. You can use
the [Upbound Marketplace] to compare CRD schema changes. Extending the
current documentation with external-name syntax is planned in this [issue].
```bash
cp backup-mrs.yaml op-mrs.yaml
vi op-mrs.yaml
```
6) Scale down Crossplane deployment
```bash
kubectl scale deploy crossplane --replicas=0
```
7) Scale down native provider deployment
```bash
kubectl scale deploy ${deployment_name} --replicas=0
```
8) Apply updated managed resources and wait until they become ready
```bash
kubectl apply -f op-mrs.yaml
```
9) Delete old MRs
```bash
kubectl delete -f backup-mrs.yaml
kubectl patch -f backup-mrs.yaml -p '{"metadata":{"finalizers":[]}}' --type=merge
```
10) Delete old provider config
```bash
kubectl delete providerconfigs ${provider_config_name}
```
11) Delete old provider
```bash
kubectl delete providers ${provider_name}
```
12) Scale up Crossplane deployment
```bash
kubectl scale deploy crossplane --replicas=1
```
#### Migrating VPC Managed Resource
Below, we show the changes required to migrate a native provider-aws VPC
resource to an official provider-aws VPC. As you can see, the API version and
some field names/types in the spec and status subresources have changed. To
find out which fields to update, compare the CRDs of the current provider
version and the target official provider version.
```diff
- apiVersion: ec2.aws.crossplane.io/v1beta1
+ apiVersion: ec2.aws.upbound.io/v1beta1
kind: VPC
metadata:
annotations:
crossplane.io/external-create-pending: "2022-09-23T12:20:31Z"
crossplane.io/external-create-succeeded: "2022-09-23T12:20:33Z"
crossplane.io/external-name: vpc-008f150c8f525bf24
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"ec2.aws.crossplane.io/v1beta1","kind":"VPC","metadata":{"annotations":{},"name":"ezgi-vpc"},"spec":{"deletionPolicy":"Delete","forProvider":{"cidrBlock":"192.168.0.0/16","enableDnsHostNames":true,"enableDnsSupport":true,"instanceTenancy":"default","region":"us-west-2","tags":[{"key":"Name","value":"platformref-vpc"},{"key":"Owner","value":"Platform Team"},{"key":"crossplane-kind","value":"vpc.ec2.aws.crossplane.io"},{"key":"crossplane-name","value":"ezgi-plat-ref-aws-tcg6t-n6zph"},{"key":"crossplane-providerconfig","value":"default"}]},"providerConfigRef":{"name":"default"}}}
creationTimestamp: "2022-09-23T12:18:21Z"
finalizers:
- finalizer.managedresource.crossplane.io
generation: 2
name: ezgi-vpc
resourceVersion: "22685"
uid: 81211d98-57f2-4f2e-a6db-04bb75cc60ff
spec:
deletionPolicy: Delete
forProvider:
cidrBlock: 192.168.0.0/16
- enableDnsHostNames: true
+ enableDnsHostnames: true
enableDnsSupport: true
instanceTenancy: default
region: us-west-2
tags:
- - key: Name
- value: platformref-vpc
- - key: Owner
- value: Platform Team
- - key: crossplane-kind
- value: vpc.ec2.aws.crossplane.io
- - key: crossplane-name
- value: ezgi-vpc
- - key: crossplane-providerconfig
- value: default
+ Name: platformref-vpc
+ Owner: Platform Team
+ crossplane-kind: vpc.ec2.aws.upbound.io
+ crossplane-name: ezgi-vpc
+ crossplane-providerconfig: default
providerConfigRef:
name: default
```
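The tag change in the diff above — a list of key/value objects becoming a plain map — is a mechanical transformation. A standalone sketch of that conversion, using toy types rather than actual provider code:

```go
package main

import "fmt"

// tag mirrors the classic provider's list-style tag schema.
type tag struct {
	Key   string
	Value string
}

// tagsToMap converts classic-style tags ([]{key, value}) into the map form
// that upjet-based providers expect.
func tagsToMap(in []tag) map[string]string {
	out := make(map[string]string, len(in))
	for _, t := range in {
		out[t.Key] = t.Value
	}
	return out
}

func main() {
	classic := []tag{
		{Key: "Name", Value: "platformref-vpc"},
		{Key: "Owner", Value: "Platform Team"},
	}
	fmt.Println(tagsToMap(classic))
}
```

When editing manifests by hand, the same reshaping applies: each `- key:`/`value:` pair in the old YAML becomes one `key: value` entry in the new map.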
### Migrating Crossplane Configurations
Configuration migration can be more challenging because, in addition to
migrating managed resources, we need to update our composition and claim
files to match the new CRDs. Just like managed resource migration, we first
import our existing resources into the official provider and then update our
configuration package version to point to the official provider.
1) Backup managed resource manifests
```bash
kubectl get managed -o yaml > backup-mrs.yaml
```
2) Scale down Crossplane deployment
```bash
kubectl scale deploy crossplane --replicas=0
```
3) Update deletion policy to `Orphan` with the command below:
```bash
kubectl patch $(kubectl get managed -o name) -p '{"spec": {"deletionPolicy":"Orphan"}}' --type=merge
```
4) Update the composition files to the new `upbound.io` API version, the new
external-name annotations, and the new field names/types. You can use the
[Upbound Marketplace] to compare CRD schema changes. Extending the current
documentation with external-name syntax is planned in this [issue].
5) Update `crossplane.yaml` file with official provider dependency.
6) Build and push the new configuration version
7) Install Official Provider
8) Install provider config
9) Update managed resource manifests with the same changes done on composition files
```bash
cp backup-mrs.yaml op-mrs.yaml
vi op-mrs.yaml
```
10) Scale down native provider deployment
```bash
kubectl scale deploy ${deployment_name} --replicas=0
```
11) Apply updated managed resources and wait until they become ready
```bash
kubectl apply -f op-mrs.yaml
```
12) Delete old MRs
```bash
kubectl delete -f backup-mrs.yaml
kubectl patch -f backup-mrs.yaml -p '{"metadata":{"finalizers":[]}}' --type=merge
```
13) Update the configuration to the new version
```bash
cat <<EOF | kubectl apply -f -
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
name: ${configuration_name}
spec:
package: ${configuration_registry}/${configuration_repository}:${new_version}
EOF
```
14) Scale up Crossplane deployment
```bash
kubectl scale deploy crossplane --replicas=1
```
15) Delete old provider config
```bash
kubectl delete providerconfigs ${provider_config_name}
```
16) Delete old provider
```bash
kubectl delete providers ${provider_name}
```
#### Migrating VPC in a Composition
Below is a small snippet from platform-ref-aws showing how to update the VPC
resource.
```diff
resources:
- base:
- apiVersion: ec2.aws.crossplane.io/v1beta1
+ apiVersion: ec2.aws.upbound.io/v1beta1
kind: VPC
spec:
forProvider:
region: us-west-2
cidrBlock: 192.168.0.0/16
enableDnsSupport: true
- enableDnsHostNames: true
+ enableDnsHostnames: true
- tags:
- - key: Owner
- value: Platform Team
- - key: Name
- value: platformref-vpc
+ tags:
+ Owner: Platform Team
+ Name: platformref-vpc
name: platformref-vcp
```
PRs which fully update existing platform-refs can be found below:
- platform-ref-aws: https://github.com/upbound/platform-ref-aws/pull/69
- platform-ref-azure: https://github.com/upbound/platform-ref-azure/pull/10
- platform-ref-gcp: https://github.com/upbound/platform-ref-gcp/pull/22
[pause annotation feature]: https://github.com/upbound/product/issues/227
[Upbound Marketplace]: https://marketplace.upbound.io/
[issue]: https://github.com/upbound/official-providers/issues/792
<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
## Migration Framework
The [migration package](https://github.com/crossplane/upjet/tree/main/pkg/migration)
in the [upjet](https://github.com/crossplane/upjet) repository contains a
framework that allows users to write converters and migration tools suitable
for their systems and use cases. This document focuses on the technical
details of the Migration Framework and how to use it.
In this document, migration means, in its most basic form, converting a
Crossplane resource from its current state in the source API to its new state
in the target API.
Let's explain this with an example. Suppose a user has a classic
provider-based VPC managed resource in her cluster and wants to migrate her
system to upjet-based providers. She needs to perform various conversions,
because the API of the VPC MR differs between the classic provider (the
source API) and the upjet-based provider (the target API). For example, the
group of the VPC MR is `ec2.aws.crossplane.io` in the source API but
`ec2.aws.upbound.io` in upjet-based providers, and other API fields may have
changed as well. Since values such as group and kind cannot be changed on an
existing resource, the migration procedure must recreate the resource without
affecting the user's workloads.
So, let's see how the framework can help us migrate such systems.
### Concepts
The Migration Framework has several concepts that users need in order to
perform an end-to-end migration. These play an essential role in
understanding the framework's structure and how to use it.
#### What Is Migration?
There are two main kinds of migration. The first is API migration, which
covers the example above: in its most general form, migration from the
API/provider the user currently uses (e.g. a community classic provider) to a
target API/provider (e.g. an upjet-based provider). The resources to be
migrated here are MRs and Compositions, because these carry the API changes.
The other kind is the migration of Crossplane Configuration packages. After
the release of family providers, migrating users from monolithic providers
and configuration packages to family providers came onto the agenda. The
migration framework was extended for this purpose: monolithic package
references in Crossplane resources such as Provider, Lock, and Configuration
are replaced with family ones. There is no API migration here, because the
source and target resources have the same API; the purpose is only to provide
a smooth transition from monolithic packages to family packages.
#### Converters
Converters convert related resource kind from the source API or structure to the
target API or structure. There are many types of converters supported in the
migration framework. However, in this document, we will talk about converters
and examples for API migration.
1. **Resource Converter** converts a managed resource from the migration source
provider's schema to the migration target provider's schema. The function of
the interface is [Resource](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L32).
`Resource` takes a managed resource and returns zero or more managed resources to
be created.
```go
Resource(mg resource.Managed) ([]resource.Managed, error)
```
[Here](https://github.com/upbound/extensions-migration/blob/main/converters/provider-aws/kafka/cluster.go)
is an example.
As can be seen in the example, the [CopyInto](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/kafka/cluster.go#L29)
function is called before the resource is changed. This function copies all
fields that can be copied from the source API to the target API. If it
encounters an error while copying some fields, pass those fields in the
function's `skipFieldPaths` argument so that it does not try to copy them. In
the Kafka Cluster resource in the example, there are various changes in the
Spec fields as well as in the Group and Kind. Converters should be written to
handle these changes. The main points to address are listed below.
- Changes in Group and Kind names.
- Changes in the Spec fields.
- Changes in a [field's](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/ec2/vpc.go#L38)
name, e.g. due to differences in casing. What matters here is not the field's
Go name but its JSON path name, so make the changes according to the JSON
name.
```go
target.Spec.ForProvider.EnableDNSSupport = source.Spec.ForProvider.EnableDNSSupport
target.Spec.ForProvider.EnableDNSHostnames = source.Spec.ForProvider.EnableDNSHostNames
```
- Fields with [completely changed](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/ec2/vpc.go#L39)
names. You may need to review the API documentation to understand them.
```go
target.Spec.ForProvider.AssignGeneratedIPv6CidrBlock = source.Spec.ForProvider.AmazonProvidedIpv6CIDRBlock
```
- Changes in a [field's type](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/ec2/vpc.go#L31),
such as a value that was previously an integer changing to a
[float64](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/kafka/cluster.go#L38).
```go
target.Spec.ForProvider.Tags = make(map[string]*string, len(source.Spec.ForProvider.Tags))
for _, t := range source.Spec.ForProvider.Tags {
v := t.Value
target.Spec.ForProvider.Tags[t.Key] = &v
}
```
```go
target.Spec.ForProvider.ConfigurationInfo[0].Revision = common.PtrFloat64FromInt64(source.Spec.ForProvider.CustomConfigurationInfo.Revision)
```
- In upjet-based providers, all structs in the API are defined as slices,
which is not the case in classic providers. Take this into account when
making the relevant [struct transformations](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/kafka/cluster.go#L40).
```go
if source.Spec.ForProvider.EncryptionInfo != nil {
target.Spec.ForProvider.EncryptionInfo = make([]targetv1beta1.EncryptionInfoParameters, 1)
target.Spec.ForProvider.EncryptionInfo[0].EncryptionAtRestKMSKeyArn = source.Spec.ForProvider.EncryptionInfo.EncryptionAtRest.DataVolumeKMSKeyID
if source.Spec.ForProvider.EncryptionInfo.EncryptionInTransit != nil {
target.Spec.ForProvider.EncryptionInfo[0].EncryptionInTransit = make([]targetv1beta1.EncryptionInTransitParameters, 1)
target.Spec.ForProvider.EncryptionInfo[0].EncryptionInTransit[0].InCluster = source.Spec.ForProvider.EncryptionInfo.EncryptionInTransit.InCluster
target.Spec.ForProvider.EncryptionInfo[0].EncryptionInTransit[0].ClientBroker = source.Spec.ForProvider.EncryptionInfo.EncryptionInTransit.ClientBroker
}
}
```
- External name conventions may differ between upjet-based and classic
providers, so the external name conversions of the related resources should
also be done in the converter functions.
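The integer-to-float64 conversion above relies on a small pointer helper. The following is a plausible hand-written equivalent of a helper like `common.PtrFloat64FromInt64` — the real one lives in the extensions-migration repo, so treat this version as an assumption, not a copy:

```go
package main

import "fmt"

// ptrFloat64FromInt64 converts an *int64 from a source schema into the
// *float64 a target schema expects, preserving nil. Illustrative guess at
// what common.PtrFloat64FromInt64 does; not copied from the real repo.
func ptrFloat64FromInt64(i *int64) *float64 {
	if i == nil {
		return nil
	}
	f := float64(*i)
	return &f
}

func main() {
	revision := int64(3)
	fmt.Println(*ptrFloat64FromInt64(&revision)) // 3
}
```

Preserving nil matters because optional fields are pointers in both schemas; converting a nil source field must not fabricate a zero value in the target.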
Another important case is when an MR in the source API corresponds to more than
one MR in the target API. Since the [Resource](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L32)
function returns a list of MRs, this infrastructure is ready. There is also an
example converter [here](https://github.com/upbound/extensions-migration/blob/main/converters/provider-aws/ec2/routetable.go).
2. **Composed Template Converter** converts a Composition's `ComposedTemplate`
from the migration source provider's schema to the migration target
provider's schema. Conversion of the `Base` must be handled by a
ResourceConverter. This interface has the function [ComposedTemplate](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L49).
`ComposedTemplate` receives a migration source `v1.ComposedTemplate` together
with the `v1.ComposedTemplate`s, already converted into their new shapes by a
resource converter, in the `convertedTemplates` argument. Since conversion of
`v1.ComposedTemplate.Bases` is handled via `ResourceConverter.Resource`,
`ComposedTemplate` must only convert the other fields (`Patches`,
`ConnectionDetails`, `PatchSet`s, etc.). It returns any errors encountered.
```go
ComposedTemplate(sourceTemplate xpv1.ComposedTemplate, convertedTemplates ...*xpv1.ComposedTemplate) error
```
There is a generic composed template implementation, [DefaultCompositionConverter](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/registry.go#L481).
`DefaultCompositionConverter` takes a `conversionMap`, a fieldpath map for the
conversion: each key points to a source field and its value points to the
target field, for example
`"spec.forProvider.assumeRolePolicyDocument": "spec.forProvider.assumeRolePolicy"`.
The `fns` are functions that manipulate the patch sets.
3. **PatchSetConverter** converts patch sets of Compositions. Any registered
PatchSetConverters will be called before any resource or ComposedTemplate
conversion is done. The rationale is to convert the Composition-wide patch sets
before any resource-specific conversions so that migration targets can
automatically inherit converted patch sets if their schemas match them.
Registered PatchSetConverters will be called in the order they are registered.
This interface has the function [PatchSets](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L69).
`PatchSets` converts the `spec.patchSets` of a Composition from the migration
source provider's schema to the migration target provider's schema.
```go
PatchSets(psMap map[string]*xpv1.PatchSet) error
```
There is a [common PatchSets implementation](https://github.com/upbound/extensions-migration/blob/3c1d4cd0717fa915d7f23455c6622b9190e5bd6d/converters/provider-aws/common/common.go#L14)
for provider-aws resources.
```
NOTE: Unlike MR converters, Composition and PatchSets converters can contain
very specific cases depending on the user's scenario. It is therefore not
possible to write a universal migrator in this context: all compositions are
inherently different, although some helper functions can be shared.
```
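To make the `conversionMap` idea concrete, here is a toy, standalone illustration of the renaming such a map expresses. The real `DefaultCompositionConverter` operates on Composition patch field paths via crossplane-runtime's fieldpath machinery; this sketch only handles dot-separated keys in nested maps:

```go
package main

import (
	"fmt"
	"strings"
)

// get walks a nested map along a dot-separated path.
func get(obj map[string]any, path string) any {
	cur := any(obj)
	for _, p := range strings.Split(path, ".") {
		m, ok := cur.(map[string]any)
		if !ok {
			return nil
		}
		cur = m[p]
	}
	return cur
}

// set writes a value at a dot-separated path, creating maps as needed.
func set(obj map[string]any, path string, v any) {
	parts := strings.Split(path, ".")
	m := obj
	for _, p := range parts[:len(parts)-1] {
		next, ok := m[p].(map[string]any)
		if !ok {
			next = map[string]any{}
			m[p] = next
		}
		m = next
	}
	m[parts[len(parts)-1]] = v
}

// applyConversionMap copies each source field to its target field, mimicking
// the key-to-value renaming a conversionMap expresses.
func applyConversionMap(obj map[string]any, conversionMap map[string]string) {
	for src, dst := range conversionMap {
		if v := get(obj, src); v != nil {
			set(obj, dst, v)
		}
	}
}

func main() {
	mr := map[string]any{
		"spec": map[string]any{"forProvider": map[string]any{
			"assumeRolePolicyDocument": "policy-json",
		}},
	}
	applyConversionMap(mr, map[string]string{
		"spec.forProvider.assumeRolePolicyDocument": "spec.forProvider.assumeRolePolicy",
	})
	fmt.Println(get(mr, "spec.forProvider.assumeRolePolicy")) // policy-json
}
```

The sketch shows only the rename semantics; the real converter additionally rewrites patch `fromFieldPath`/`toFieldPath` references inside ComposedTemplates.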
#### Registry
The Registry is a collection of converters. Every converter is keyed by the
associated `schema.GroupVersionKind`s and has an associated `runtime.Scheme`
with which the corresponding types are registered. All converters intended to
be used during migration must be registered in the Registry; for kinds that
are not registered, no conversion is performed even if the resource is
included and read by the Source.
Before registering converters, the source and target API schemes need to be
added to the registry so that the respective kinds are recognized.
```go
sourceF := sourceapis.AddToScheme
targetF := targetapis.AddToScheme
if err := common.AddToScheme(registry, sourceF, targetF); err != nil {
panic(err)
}
```
In addition, the Composition type and, if applicable, the Composite and Claim
types must also be defined in the registry before the converters are registered.
```go
if err := registry.AddCompositionTypes(); err != nil {
panic(err)
}
registry.AddClaimType(...)
registry.AddCompositeType(...)
```
`RegisterAPIConversionFunctions` registers the API conversion functions:
resource converters, composition converters, and PatchSet converters.
```go
registry.RegisterAPIConversionFunctions(ec2v1beta1.VPCGroupVersionKind, ec2.VPCResource,
migration.DefaultCompositionConverter(nil, common.ConvertComposedTemplateTags),
common.DefaultPatchSetsConverter)
```
#### Source
[Source](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L114)
is an interface for reading the resource manifests subject to migration.
There are currently two implementations of the Source interface.
[File System Source](https://github.com/crossplane/upjet/blob/main/pkg/migration/filesystem.go)
is a source implementation that reads resources from the file system, for
example resources in the user's local system or in a GitHub repository.
[Kubernetes Source](https://github.com/crossplane/upjet/blob/main/pkg/migration/kubernetes.go)
is a source implementation that reads resources from a Kubernetes cluster,
i.e. the resources the user wants to migrate in their cluster. Note that the
Kubernetes Source does not read all resources in the cluster; it selects
resources in two ways. The first is the categories the user specifies when
initializing the migration plan: the Kubernetes Source reads all resources
belonging to the specified categories (for example `managed` or `claim`).
The second is the kinds of the converters registered in the Registry: since
every converter is registered for a specific kind, the Kubernetes Source
reads the resources of those kinds. For example, if a converter for the VPC
kind is registered in the registry, the Kubernetes Source reads the VPC
resources in the cluster.
```
NOTE: Multiple sources are allowed. When creating the Plan Generator object,
you can register both sources by using the following option:
migration.WithMultipleSources(sources...)
```
#### Target
[Target](https://github.com/crossplane/upjet/blob/cc55f3952474e51ee31cd645c4a9578248de7f3a/pkg/migration/interfaces.go#L132)
is a target where resource manifests can be manipulated
(e.g., added, deleted, patched). It is the interface for storing the
manifests produced by the migration steps. Currently, only the File System
Target is supported; in other words, the converted manifests that the user
sees as output of the migration process are stored in the file system.
#### Migration Plan
A Plan represents a migration plan for migrating managed resources, and the
associated composites and claims, from a migration source provider to a
migration target provider.
The PlanGenerator generates a migration Plan by reading the manifests available
from the `source`, converting managed resources and compositions using the
`Converter`s registered in the `registry`, and writing the output manifests to
the specified `target`.
Here is an example plan:
```yaml
spec:
  steps:
  - patch:
      type: merge
      files:
      - pause-managed/sample-vpc.vpcs.fakesourceapi.yaml
    name: pause-managed
    manualExecution:
    - "kubectl patch --type='merge' -f pause-managed/sample-vpc.vpcs.fakesourceapi.yaml --patch-file pause-managed/sample-vpc.vpcs.fakesourceapi.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml
    name: pause-composites
    manualExecution:
    - "kubectl patch --type='merge' -f pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml --patch-file pause-composites/my-resource-dwjgh.xmyresources.test.com.yaml"
    type: Patch
  - apply:
      files:
      - create-new-managed/sample-vpc.vpcs.faketargetapi.yaml
    name: create-new-managed
    manualExecution:
    - "kubectl apply -f create-new-managed/sample-vpc.vpcs.faketargetapi.yaml"
    type: Apply
  - apply:
      files:
      - new-compositions/example-migrated.compositions.apiextensions.crossplane.io.yaml
    name: new-compositions
    manualExecution:
    - "kubectl apply -f new-compositions/example-migrated.compositions.apiextensions.crossplane.io.yaml"
    type: Apply
  - patch:
      type: merge
      files:
      - edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml
    name: edit-composites
    manualExecution:
    - "kubectl patch --type='merge' -f edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml --patch-file edit-composites/my-resource-dwjgh.xmyresources.test.com.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - edit-claims/my-resource.myresources.test.com.yaml
    name: edit-claims
    manualExecution:
    - "kubectl patch --type='merge' -f edit-claims/my-resource.myresources.test.com.yaml --patch-file edit-claims/my-resource.myresources.test.com.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml
    name: deletion-policy-orphan
    manualExecution:
    - "kubectl patch --type='merge' -f deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml --patch-file deletion-policy-orphan/sample-vpc.vpcs.fakesourceapi.yaml"
    type: Patch
  - delete:
      options:
        finalizerPolicy: Remove
      resources:
      - group: fakesourceapi
        kind: VPC
        name: sample-vpc
        version: v1alpha1
    name: delete-old-managed
    manualExecution:
    - "kubectl delete VPC.fakesourceapi sample-vpc"
    type: Delete
  - patch:
      type: merge
      files:
      - start-managed/sample-vpc.vpcs.faketargetapi.yaml
    name: start-managed
    manualExecution:
    - "kubectl patch --type='merge' -f start-managed/sample-vpc.vpcs.faketargetapi.yaml --patch-file start-managed/sample-vpc.vpcs.faketargetapi.yaml"
    type: Patch
  - patch:
      type: merge
      files:
      - start-composites/my-resource-dwjgh.xmyresources.test.com.yaml
    name: start-composites
    manualExecution:
    - "kubectl patch --type='merge' -f start-composites/my-resource-dwjgh.xmyresources.test.com.yaml --patch-file start-composites/my-resource-dwjgh.xmyresources.test.com.yaml"
    type: Patch
version: 0.1.0
```
As can be seen here, the Plan includes steps describing how to migrate the
user's resources. The output manifests referenced in those steps are produced
by the converters registered by the user. It should therefore be underlined
once again that the converters are what perform the actual conversion during
the migration.
When creating a Plan Generator object, the user may set options via option
functions. The two most important option functions are:
- `WithErrorOnInvalidPatchSchema` configures whether the PlanGenerator should
error and stop the migration plan generation when an error is encountered
while checking a patch statement's conformance to the migration source or
target.
- `WithSkipGVKs` configures the set of GVKs to skip during conversion in a
migration.
### Example Usage - Template Migrator
In the [upbound/extensions-migration](https://github.com/upbound/extensions-migration)
repository, there are two important components of the Migration Framework. The
first is the [common converters](https://github.com/upbound/extensions-migration/tree/main/converters/provider-aws).
These include previously written API converters, collected there for reuse in
different migrators. Reviewing these converters, in addition to this document,
can help you better understand how to write a converter.
The second is the [template migrator](https://github.com/upbound/extensions-migration/blob/main/converters/template/cmd/main.go),
which makes it possible to generate a Plan by using the capabilities of the
migration framework described above. It is worth remembering that, since each
source has its own characteristics, the user has to make various changes to
the template.

<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Monitoring the Upjet runtime
The [Kubernetes controller-runtime] library provides a Prometheus metrics
endpoint by default. The Upjet-based providers, including
[upbound/provider-aws], [upbound/provider-azure], [upbound/provider-azuread]
and [upbound/provider-gcp], expose
[various metrics](https://book.kubebuilder.io/reference/metrics-reference.html)
from the controller-runtime to help monitor the health of the various runtime
components, such as the [`controller-runtime` client], the [leader election
client], the [controller workqueues], etc. In addition to these metrics, each
controller also
[exposes](https://github.com/kubernetes-sigs/controller-runtime/blob/60af59f5b22335516850ca11c974c8f614d5d073/pkg/internal/controller/metrics/metrics.go#L25)
various metrics related to the reconciliation of the custom resources and the
active reconciliation worker goroutines.
In addition to the metrics exposed by the `controller-runtime`, the
Upjet-based providers also expose metrics specific to the Upjet runtime. The
Upjet runtime registers some custom metrics using the
[available extension mechanism](https://book.kubebuilder.io/reference/metrics.html#publishing-additional-metrics),
and they are available from the default `/metrics` endpoint of the provider
pod. These are the custom metrics exposed by the Upjet runtime:
- `upjet_terraform_cli_duration`: This is a histogram metric and reports
statistics, in seconds, on how long it takes a Terraform CLI invocation to
complete.
- `upjet_terraform_active_cli_invocations`: This is a gauge metric and it's the
number of active (running) Terraform CLI invocations.
- `upjet_terraform_running_processes`: This is a gauge metric and it's the
number of running Terraform CLI and Terraform provider processes.
- `upjet_resource_ttr`: This is a histogram metric and it measures, in seconds,
the time-to-readiness for managed resources.
Prometheus metrics can have [labels] associated with them to differentiate the
characteristics of the measurements being made, such as differentiating between
the CLI processes and the Terraform provider processes when counting the number
of active Terraform processes running. Here is a list of labels associated with
each of the above custom Upjet metrics:
- Labels associated with the `upjet_terraform_cli_duration` metric:
  - `subcommand`: The `terraform` subcommand that's run, e.g., `init`,
    `apply`, `plan`, `destroy`, etc.
  - `mode`: The execution mode of the Terraform CLI, one of `sync` (the CLI
    was invoked synchronously as part of a reconcile loop) or `async` (the CLI
    was invoked asynchronously, and the reconciler goroutine will poll and
    collect the results later).
- Labels associated with the `upjet_terraform_active_cli_invocations` metric:
  - `subcommand`: The `terraform` subcommand that's run, e.g., `init`,
    `apply`, `plan`, `destroy`, etc.
  - `mode`: The execution mode of the Terraform CLI, one of `sync` (the CLI
    was invoked synchronously as part of a reconcile loop) or `async` (the CLI
    was invoked asynchronously, and the reconciler goroutine will poll and
    collect the results later).
- Labels associated with the `upjet_terraform_running_processes` metric:
  - `type`: Either `cli` for Terraform CLI (the `terraform` binary) processes
    or `provider` for Terraform provider processes. Please note that this is a
    best-effort metric that may not precisely catch and report all relevant
    processes. We may improve this in the future if needed, for example by
    watching `fork` system calls, but it can currently prove useful for
    spotting rogue Terraform provider processes.
- Labels associated with the `upjet_resource_ttr` metric:
  - The `group`, `version`, and `kind` labels record the
    [API group, version and kind](https://kubernetes.io/docs/reference/using-api/api-concepts/)
    of the managed resource whose
    [time-to-readiness](https://github.com/crossplane/terrajet/issues/55#issuecomment-929494212)
    measurement is captured.
## Examples
You can [export](https://book.kubebuilder.io/reference/metrics.html) all these
custom metrics and the `controller-runtime` metrics from the provider pod for
Prometheus. Here are some examples showing the custom metrics in action from the
Prometheus console:
- `upjet_terraform_active_cli_invocations` gauge metric showing the sync & async
`terraform init/apply/plan/destroy` invocations: <img width="3000" alt="image"
src="https://user-images.githubusercontent.com/9376684/223296539-94e7d634-58b0-4d3f-942e-8b857eb92ef7.png">
- `upjet_terraform_running_processes` gauge metric showing both `cli` and
`provider` labels: <img width="2999" alt="image"
src="https://user-images.githubusercontent.com/9376684/223297575-18c2232e-b5af-4cc1-916a-d61fe5dfb527.png">
- `upjet_terraform_cli_duration` histogram metric, showing average Terraform CLI
running times for the last 5m: <img width="2993" alt="image"
src="https://user-images.githubusercontent.com/9376684/223299401-8f128b74-8d9c-4c82-86c5-26870385bee7.png">
- The medians (0.5-quantiles) for these observations aggregated by the mode and
Terraform subcommand being invoked: <img width="2999" alt="image"
src="https://user-images.githubusercontent.com/9376684/223300766-c1adebb9-bd19-4a38-9941-116185d8d39f.png">
- `upjet_resource_ttr` histogram metric, showing average resource TTR for the
last 10m: <img width="2999" alt="image"
src="https://user-images.githubusercontent.com/9376684/223309711-edef690e-2a59-419b-bb93-8f837496bec8.png">
- The median (0.5-quantile) for these TTR observations: <img width="3002"
alt="image"
src="https://user-images.githubusercontent.com/9376684/223309727-d1a0f4e2-1ed2-414b-be67-478a0575ee49.png">
These samples have been collected by provisioning 10 [upbound/provider-aws]
`cognitoidp.UserPool` resources by running the provider with a poll interval of
1m. In these examples, one can observe that the resources were polled
(reconciled) twice after they acquired the `Ready=True` condition and after
that, they were destroyed.
## Reference
You can find a full reference of the exposed metrics from the Upjet-based
providers [here](provider_metrics_help.txt).
[Kubernetes controller-runtime]: https://github.com/kubernetes-sigs/controller-runtime
[upbound/provider-aws]: https://github.com/upbound/provider-aws
[upbound/provider-azure]: https://github.com/upbound/provider-azure
[upbound/provider-azuread]: https://github.com/upbound/provider-azuread
[upbound/provider-gcp]: https://github.com/upbound/provider-gcp
[`controller-runtime` client]: https://github.com/kubernetes-sigs/controller-runtime/blob/60af59f5b22335516850ca11c974c8f614d5d073/pkg/metrics/client_go_adapter.go#L40
[leader election client]: https://github.com/kubernetes-sigs/controller-runtime/blob/60af59f5b22335516850ca11c974c8f614d5d073/pkg/metrics/leaderelection.go#L12
[controller workqueues]: https://github.com/kubernetes-sigs/controller-runtime/blob/60af59f5b22335516850ca11c974c8f614d5d073/pkg/metrics/workqueue.go#L40
[labels]: https://prometheus.io/docs/practices/naming/#labels

## Moving Untested Resources to v1beta1
To form a baseline for resource-testing efforts by outside contributors, we
created a map: `ExternalNameNotTestedConfigs`. This map contains the
external-name configurations of resources that have not been tested. The
schemas and controllers of these resources are not generated when running the
`make generate`/`make reviewable` commands.
To generate a resource's schema and controller, we need to add it to the
`ExternalNameConfigs` map. After this addition, the resource's schema and
controller will start to be generated. By default, every resource added to
this map is generated in the `v1beta1` version.
There are two important points here. To start testing efforts, you need a
generated CRD and controller, and for this generation you need to move your
resource to the `ExternalNameConfigs` map. Then you can start testing; if the
test effort succeeds, the new entry can remain in the main map. However, if
there are problems in the tests and you cannot validate the resource, please
move the entry back to `ExternalNameNotTestedConfigs`.

# SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
# SPDX-License-Identifier: CC-BY-4.0
# HELP upjet_terraform_cli_duration Measures in seconds how long it takes a Terraform CLI invocation to complete
# TYPE upjet_terraform_cli_duration histogram
# HELP upjet_terraform_running_processes The number of running Terraform CLI and Terraform provider processes
# TYPE upjet_terraform_running_processes gauge
# HELP upjet_resource_ttr Measures in seconds the time-to-readiness (TTR) for managed resources
# TYPE upjet_resource_ttr histogram
# HELP upjet_terraform_active_cli_invocations The number of active (running) Terraform CLI invocations
# TYPE upjet_terraform_active_cli_invocations gauge
# HELP certwatcher_read_certificate_errors_total Total number of certificate read errors
# TYPE certwatcher_read_certificate_errors_total counter
# HELP certwatcher_read_certificate_total Total number of certificate reads
# TYPE certwatcher_read_certificate_total counter
# HELP controller_runtime_active_workers Number of currently used workers per controller
# TYPE controller_runtime_active_workers gauge
# HELP controller_runtime_max_concurrent_reconciles Maximum number of concurrent reconciles per controller
# TYPE controller_runtime_max_concurrent_reconciles gauge
# HELP controller_runtime_reconcile_errors_total Total number of reconciliation errors per controller
# TYPE controller_runtime_reconcile_errors_total counter
# HELP controller_runtime_reconcile_time_seconds Length of time per reconciliation per controller
# TYPE controller_runtime_reconcile_time_seconds histogram
# HELP controller_runtime_reconcile_total Total number of reconciliations per controller
# TYPE controller_runtime_reconcile_total counter
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
# HELP rest_client_request_duration_seconds Request latency in seconds. Broken down by verb, and host.
# TYPE rest_client_request_duration_seconds histogram
# HELP rest_client_request_size_bytes Request size in bytes. Broken down by verb and host.
# TYPE rest_client_request_size_bytes histogram
# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
# TYPE rest_client_requests_total counter
# HELP rest_client_response_size_bytes Response size in bytes. Broken down by verb and host.
# TYPE rest_client_response_size_bytes histogram
# HELP workqueue_adds_total Total number of adds handled by workqueue
# TYPE workqueue_adds_total counter
# HELP workqueue_depth Current depth of workqueue
# TYPE workqueue_depth gauge
# HELP workqueue_longest_running_processor_seconds How many seconds has the longest running processor for workqueue been running.
# TYPE workqueue_longest_running_processor_seconds gauge
# HELP workqueue_queue_duration_seconds How long in seconds an item stays in workqueue before being requested
# TYPE workqueue_queue_duration_seconds histogram
# HELP workqueue_retries_total Total number of retries handled by workqueue
# TYPE workqueue_retries_total counter
# HELP workqueue_unfinished_work_seconds How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.
# TYPE workqueue_unfinished_work_seconds gauge
# HELP workqueue_work_duration_seconds How long in seconds processing an item from workqueue takes.
# TYPE workqueue_work_duration_seconds histogram

## Auto Cross Resource Reference Generation
Cross Resource Referencing is one of the key concepts of resource
configuration. As a very common case, cloud services depend on other cloud
services. For example, an AWS Subnet resource needs an AWS VPC for creation,
so to create a Subnet successfully you first have to create a VPC resource.
Please see the [Dependencies] documentation for more details, and for the
resource-configuration-related parts of cross-resource referencing, please see
[this part] of the [Configuring a Resource] documentation.
Those documents focus on the general concepts and manual configuration of
Cross Resource References. The main topic of this document, however, is
automatic example and reference generation.
Upjet has a scraper tool for scraping provider metadata from the Terraform
Registry. The scraped metadata are:
- Resource Descriptions
- Examples of Resources (in HCL format)
- Field Documentations
- Import Statements
This is critical information for our automation processes, and we use the
scraped metadata in many contexts. For example, the field documentation and
descriptions of resources are used as Go comments for schema fields and CRDs.
Another important piece of scraped information is the resource examples. As
part of our testing efforts, finding the correct combination of field values
is not easy for every scenario, so having a working example (combination) is
very important for easy testing.
The example, which is in HCL format, is converted to a managed resource
manifest that we can then use in our test efforts.
This is an example from the Terraform Registry for the AWS EBS Volume
resource:
```hcl
resource "aws_ebs_volume" "example" {
  availability_zone = "us-west-2a"
  size              = 40

  tags = {
    Name = "HelloWorld"
  }
}

resource "aws_ebs_snapshot" "example_snapshot" {
  volume_id = aws_ebs_volume.example.id

  tags = {
    Name = "HelloWorld_snap"
  }
}
```
The generated example:
```yaml
apiVersion: ec2.aws.upbound.io/v1beta1
kind: EBSSnapshot
metadata:
  annotations:
    meta.upbound.io/example-id: ec2/v1beta1/ebssnapshot
  labels:
    testing.upbound.io/example-name: example_snapshot
  name: example-snapshot
spec:
  forProvider:
    region: us-west-1
    tags:
      Name: HelloWorld_snap
    volumeIdSelector:
      matchLabels:
        testing.upbound.io/example-name: example
---
apiVersion: ec2.aws.upbound.io/v1beta1
kind: EBSVolume
metadata:
  annotations:
    meta.upbound.io/example-id: ec2/v1beta1/ebssnapshot
  labels:
    testing.upbound.io/example-name: example
  name: example
spec:
  forProvider:
    availabilityZone: us-west-2a
    region: us-west-1
    size: 40
    tags:
      Name: HelloWorld
Here are three important ways the scraper makes our lives easier:
- We do not have to find the correct value combinations for fields, so we can
easily use the generated example manifest in our tests.
- The HCL example was scraped from the registry documentation of the
`aws_ebs_snapshot` resource. In the example, you also see the `aws_ebs_volume`
resource manifest, because creating an EBS Snapshot requires an EBS Volume
resource. Since the registry in many cases documents the dependent resources
of a target resource, we can also scrape a target resource's dependencies.
- The last item is what this document is really about. To use Cross Resource
References, as mentioned above, you need to add some references to the
resource configuration. But in many cases, if the dependencies are already
described in the scraped example, you do not have to write explicit references
in the resource configuration: the Cross Resource Reference generator
generates them for you.
### Validating the Cross Resource References
As mentioned, many references are generated from scraped metadata by the auto
reference generator. However, there are two cases where we miss generating
references.
The first is related to bugs or improvement points in the generator. The
generator can handle many references in the scraped examples and generate them
correctly, but we cannot say the ratio is 100%: in some cases, the generator
cannot generate references even though they are in the scraped example
manifests.
The second is related to the scraped example itself. As mentioned above, the
source of the generator is the scraped example manifest: it checks the
manifest and tries to generate the cross-resource references it finds there.
In some cases, reference fields exist but do not appear in the example
manifest; they may only be mentioned in the schema/field documentation. For
these situations, you must configure cross-resource references explicitly.
### Removing Auto-Generated Cross Resource References In Some Corner Cases
In some cases, a generated reference can narrow the reference pool covered by
a field. For example, suppose resource X has field A, and both resources Y and
Z can be referenced via this field. If only the reference to Y is mentioned in
the example manifest, the generated reference field will be defined only over
Y. Since this narrows the reference pool of the field, it is more appropriate
to delete the generated reference. For example:
```hcl
resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.primary.zone_id
  name    = "example.com"
  type    = "A"

  alias {
    name                   = aws_elb.main.dns_name
    zone_id                = aws_elb.main.zone_id
    evaluate_target_health = true
  }
}
```
The Route53 Record resource's `alias.name` field has a reference. In the
example, this reference is shown using the `aws_elb` resource. However, when
we check the field documentation, we see that this field can also reference
other resources:
```
Alias
Alias records support the following:
name - (Required) DNS domain name for a CloudFront distribution, S3 bucket, ELB,
or another resource record set in this hosted zone.
```
### Conclusion
As a result, the scraper and the example and reference generators are very
useful for easing testing efforts. But when using them, we must be careful to
avoid undesired states.
[Dependencies]: https://crossplane.io/docs/v1.7/concepts/managed-resources.html#dependencies
[this part]: https://github.com/upbound/upjet/blob/main/docs/configuring-a-resource.md#cross-resource-referencing
[Configuring a Resource]: https://github.com/upbound/upjet/blob/main/docs/configuring-a-resource.md

<!--
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC-BY-4.0
-->
# Testing resources by using Uptest
`Uptest` provides a framework to test resources in an end-to-end pipeline during
the resource configuration process. Together with the example manifest
generation tool, it allows us to avoid manual interventions and shortens testing
processes.
These integration tests are costly as they create real resources in cloud
providers. So they are not executed by default. Instead, a comment should be
posted to the PR for triggering tests.
Tests can be run by adding expressions like the following anywhere in a
comment:
- `/test-examples="provider-azure/examples/kubernetes/cluster.yaml"`
- `/test-examples="provider-aws/examples/s3/bucket.yaml, provider-aws/examples/eks/cluster.yaml"`
You can trigger a test job for only one provider at a time. The provider the
tests will run against is determined by the first element of the
comma-separated list. If the comment contains resources from different
providers, those other resources will be skipped. So, if you want to run tests
for more than one provider, you must post a separate comment for each
provider.
## Debugging a Failed Test
After a test fails, it is important to understand what went wrong. To help
debug the tests, we push collected logs to GitHub Actions artifacts.
These artifacts contain the following data:
- Dump of Kind Cluster
- Kuttl input files (Applied manifests, assertion files)
- Managed resource yaml outputs
To download the artifacts, first go to the `Summary` page of the relevant job:
![images/summary.png](images/summary.png)
Then click the `1` under the `Artifacts` button in the upper right. If the
automated tests ran for more than one provider, this number will be higher.
When you click it, you will see the job's `Artifacts` list. You can download
the artifact you are interested in by clicking it.
![images/artifacts.png](images/artifacts.png)
When a test fails, the first place to look is the provider container's logs.
In the test environment, we run the provider with the `-d` flag to enable
debug logs. In the provider logs you can see all errors, whether caused by the
content of the resource manifest, caused by the configuration, or returned by
the cloud provider.
The YAML output of the managed resources (located in `managed.yaml` at the
artifact archive's root) is also very useful for catching errors.
If you have any doubts about the generated kuttl files, please check the
`kuttl-inputs.yaml` file in the archive's root.
## Running Uptest locally
For a faster feedback loop, you might want to run `uptest` locally in your
development setup.
To do so, run the special `uptest-local` target that accepts `PROVIDER_NAME` and
`EXAMPLE_LIST` arguments, as in the example below.
```bash
make uptest-local PROVIDER_NAME=provider-azure EXAMPLE_LIST="provider-azure/examples/resource/resourcegroup.yaml"
```
You can also provide all the files in a folder, as below:
```bash
make uptest-local PROVIDER_NAME=provider-aws EXAMPLE_LIST=$(find provider-aws/examples/secretsmanager/*.yaml | tr '\n' ',')
```
The local invocation is intentionally lightweight: it skips the local cluster,
credentials, and ProviderConfig setup, assuming you already have them
configured in your environment.
For a more heavyweight setup, see the `run_automated_tests` target, which is
used in the centralized GitHub Actions invocation.
## Testing Instructions and Known Error Cases
While configuring resources, testing is the longest part of the effort,
because the characteristics of cloud providers and services can change. This
test effort can
@ -9,9 +98,9 @@ an end-to-end pipeline during the resource configuration process. Together with
the example manifest generation tool, it allows us to avoid manual interventions
and shortens testing processes.
## Testing Methods
### Testing Methods
### Manual Test
#### Manual Test
Configured resources can be tested using the manual method. This method
generally involves preparing the environment and creating the example manifest
in the
@ -67,9 +156,10 @@ them in our automated test effort.
In some cases, these manifests need manual intervention, so to apply them to a
cluster successfully (passing Kubernetes schema validation) you may need to do
some work. Possible problems you might face:
- The generated manifest cannot provide at least one required field. So
- The generated manifest cannot provide at least one required field. So
before creating the resource you must set the required field in the manifest.
- In some fields of generated manifest the types of values cannot be matched.
- In some fields of generated manifest the types of values cannot be matched.
For example, X field expects a string but the manifest provides an integer.
In these cases you need to provide the correct type in your example YAML
manifest.
@ -88,32 +178,30 @@ successfully completed! However, if there are some resource values that are
`False`, you need to debug the situation. The main debugging approaches are
covered in the next sections.
```
NOTE: For following the test processes in a more accurate way, we have `UpToDate`
status condition. This status condition will be visible when you set the
annotation: `upjet.upbound.io/test=true`. Without adding this annotation you
cannot see the mentioned condition. Uptest adds this annotation to the tested
resources, but if you want to see the value of conditions in your tests in your
local environment (during manual tests) you need to add this condition manually.
For the goal and details of this status condition please see this PR:
https://github.com/upbound/upjet/pull/23
```
> [!NOTE]
> To follow the test process more accurately, we have the `UpToDate` status
condition. This condition is visible only when you set the annotation
`upjet.upbound.io/test=true`. Uptest adds this annotation to the tested
resources, but if you want to see the condition's value during manual tests in
your local environment, you need to add the annotation yourself. For the goal
and details of this status condition, please see this PR:
https://github.com/crossplane/upjet/pull/23
```
NOTE: The resources that are tried to be created may have dependencies. For
example, you might actually need resources Y and Z while trying to test resource
X. Many of the generated examples include these dependencies. However, in some
cases, there may be missing dependencies. In these cases, please add the
relevant dependencies to your example manifest. This is important both for you
to pass the tests and to provide the correct manifests.
```
> [!NOTE]
> The resources being created may have dependencies. For example, you might
actually need resources Y and Z while trying to test resource X. Many of the
generated examples include these dependencies, but in some cases a dependency
may be missing. In such cases, please add the relevant dependencies to your
example manifest. This is important both for passing the tests and for
providing correct manifests.
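As a concrete illustration of the `upjet.upbound.io/test` annotation described above, a manually tested managed resource manifest might look like the following sketch (the kind, group, and field values are illustrative only, not taken from a specific provider):

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket
  annotations:
    # Enables the UpToDate status condition during manual testing.
    # Uptest adds this annotation automatically for tested resources.
    upjet.upbound.io/test: "true"
spec:
  forProvider:
    region: us-west-1
```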
### Automated Tests - Uptest
#### Automated Tests - Uptest
Configured resources can also be tested using `Uptest`. This part can be
divided into two main methods:
#### Using Uptest in GitHub Actions
##### Using Uptest in GitHub Actions
We have a GitHub workflow `Automated Tests`. This is an integration test for
Official Providers. This workflow prepares the environment (provisioning Kind
@ -150,17 +238,16 @@ The key is important for skipping, we are checking this `upjet.upbound.io/manual
annotation key, and if it is there, we skip the related resource. The value is
also important for understanding why we skip this resource.
```
NOTE: For resources that are ignored during Automated Tests, manual testing is a
must. Because we need to make sure that all resources published in the `v1beta1`
version are working.
```
> [!NOTE]
> For resources that are ignored during Automated Tests, manual testing is a
must, because we need to make sure that all resources published in the
`v1beta1` version are working.
At the end of the tests, Uptest will provide a report. For all GitHub Actions
runs, we will also have an artifact that contains logs for debugging. For
details please see [here].
#### Using Uptest in Local Dev Environment
##### Using Uptest in Local Dev Environment
The main difference between running `Uptest` from your local environment and
running GitHub Actions is that the environment is also prepared during GitHub
@ -173,12 +260,14 @@ After preparing the testing environment, you should run the following command to
trigger tests locally by using `Uptest`:
Example for single file test:
```
```bash
make uptest-local PROVIDER_NAME=provider-aws EXAMPLE_LIST=provider-aws/examples/secretsmanager/secret.yaml
```
Example of whole API Group test:
```
```bash
make uptest-local PROVIDER_NAME=provider-aws EXAMPLE_LIST=$(find provider-aws/examples/secretsmanager/*.yaml | tr '\n' ',')
```
@ -216,11 +305,11 @@ Encountering this situation (i.e. Terraform trying to delete and recreate the
resource) is not normal and may indicate a specific error. Some possible
problems could be:
- As a result of overriding the constructed ID after Terraform calls, Terraform
- As a result of overriding the constructed ID after Terraform calls, Terraform
could not match the IDs and tries to recreate the resource. Please see
[this issue] for details. In this type of cases, you need to review your
external name configuration.
- Crossplane's concept of [Late Initialization] may cause some side effects.
- Crossplane's concept of [Late Initialization] may cause some side effects.
One of them is that, during late initialization, filling a field that was not
initially set in the manifest may cause the resource to be destroyed and
recreated. In such a case, the field whose setting will
@ -229,7 +318,7 @@ problems could be:
solve the problem is put into the ignore list using the
[late initialization configuration] and the test is repeated from the
beginning.
- Some resources fall into `tainted` state as a result of certain steps in the
- Some resources fall into `tainted` state as a result of certain steps in the
creation process failing. Please see the [tainted issue] for details.
2. External Name Configuration Related Errors: The most common known issue is
@ -261,23 +350,20 @@ the allowed values in some fields of the resource, or that you need to enable
the relevant service, etc. In such cases, please review your example manifest
and try to find the appropriate example.
```
IMPORTANT NOTE: `make reviewable` and `kubectl apply -f package/crds` commands
must be run after any change that will affect the schema or controller of the
configured/tested resource.
In addition, the provider needs to be restarted after the changes in the
controllers, because the controller change actually corresponds to the changes
made in the running code.
```
> [!IMPORTANT]
> `make reviewable` and `kubectl apply -f package/crds` must be run after any
change that affects the schema or controller of the configured/tested
resource. In addition, the provider needs to be restarted after controller
changes, because a controller change corresponds to a change in the running
code.
[this repo]: https://github.com/kubernetes-sigs/kind
[the documentation]: https://crossplane.io/docs/v1.9/getting-started/install-configure.html#install-configuration-package
[here]: https://github.com/upbound/official-providers/blob/main/docs/testing-resources-by-using-uptest.md#debugging-failed-test
[these steps]: https://github.com/upbound/upjet/blob/main/docs/configuring-a-resource.md#late-initialization-configuration
[late initialization configuration]: https://github.com/upbound/upjet/blob/main/docs/configuring-a-resource.md#late-initialization-configuration
[these steps]: https://github.com/upbound/crossplane/blob/main/docs/configuring-a-resource.md#late-initialization-configuration
[late initialization configuration]: https://github.com/upbound/crossplane/blob/main/docs/configuring-a-resource.md#late-initialization-configuration
[Terraform Resource Lifecycle]: https://learn.hashicorp.com/tutorials/terraform/resource-lifecycle
[this issue]: https://github.com/upbound/upjet/issues/32
[this issue]: https://github.com/upbound/crossplane/issues/32
[Late Initialization]: https://crossplane.io/docs/v1.9/concepts/managed-resources.html#late-initialization
[tainted issue]: https://github.com/upbound/upjet/issues/80
[tainted issue]: https://github.com/upbound/crossplane/issues/80
[k3d]: https://k3d.io/

go.mod

@ -1,75 +1,90 @@
module github.com/upbound/upjet
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: CC0-1.0
go 1.19
module github.com/crossplane/upjet/v2
go 1.24.0
toolchain go1.24.5
require (
dario.cat/mergo v1.0.2
github.com/alecthomas/kingpin/v2 v2.4.0
github.com/antchfx/htmlquery v1.2.4
github.com/crossplane/crossplane-runtime v0.19.0-rc.0.0.20221012013934-bce61005a175
github.com/crossplane/crossplane-runtime/v2 v2.0.0-20250730220209-c306b1c8b181
github.com/fatih/camelcase v1.0.0
github.com/golang/mock v1.6.0
github.com/google/go-cmp v0.5.9
github.com/hashicorp/hcl/v2 v2.14.1
github.com/hashicorp/terraform-json v0.14.0
github.com/hashicorp/terraform-plugin-sdk/v2 v2.24.0
github.com/google/go-cmp v0.7.0
github.com/hashicorp/go-cty v1.5.0
github.com/hashicorp/hcl/v2 v2.23.0
github.com/hashicorp/terraform-json v0.25.0
github.com/hashicorp/terraform-plugin-framework v1.15.0
github.com/hashicorp/terraform-plugin-go v0.28.0
github.com/hashicorp/terraform-plugin-sdk/v2 v2.37.0
github.com/iancoleman/strcase v0.2.0
github.com/json-iterator/go v1.1.12
github.com/mitchellh/go-ps v1.0.0
github.com/muvaf/typewriter v0.0.0-20210910160850-80e49fe1eb32
github.com/pkg/errors v0.9.1
github.com/spf13/afero v1.8.0
github.com/prometheus/client_golang v1.22.0
github.com/spf13/afero v1.12.0
github.com/tmccombs/hcl2json v0.3.3
github.com/yuin/goldmark v1.4.13
github.com/zclconf/go-cty v1.11.0
golang.org/x/net v0.0.0-20220722155237-a158d28d115b
gopkg.in/alecthomas/kingpin.v2 v2.2.6
github.com/zclconf/go-cty v1.16.2
github.com/zclconf/go-cty-yaml v1.0.3
golang.org/x/net v0.39.0
golang.org/x/tools v0.32.0
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.25.0
k8s.io/apimachinery v0.25.0
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
sigs.k8s.io/controller-runtime v0.12.1
sigs.k8s.io/yaml v1.3.0
k8s.io/api v0.33.0
k8s.io/apimachinery v0.33.0
k8s.io/client-go v0.33.0
k8s.io/utils v0.0.0-20250321185631-1f6e0b77f77e
sigs.k8s.io/controller-runtime v0.19.0
sigs.k8s.io/yaml v1.4.0
)
require (
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/agext/levenshtein v1.2.3 // indirect
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 // indirect
github.com/alecthomas/units v0.0.0-20210927113745-59d0afb8317a // indirect
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b // indirect
github.com/antchfx/xpath v1.2.0 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/fsnotify/fsnotify v1.5.4 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.1 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gobuffalo/flect v1.0.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 // indirect
github.com/hashicorp/go-hclog v1.2.1 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-plugin v1.6.3 // indirect
github.com/hashicorp/go-uuid v1.0.3 // indirect
github.com/hashicorp/go-version v1.6.0 // indirect
github.com/hashicorp/go-version v1.7.0 // indirect
github.com/hashicorp/logutils v1.0.0 // indirect
github.com/hashicorp/terraform-plugin-go v0.14.0 // indirect
github.com/hashicorp/terraform-plugin-log v0.7.0 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/hashicorp/terraform-plugin-log v0.9.0 // indirect
github.com/hashicorp/terraform-registry-address v0.2.5 // indirect
github.com/hashicorp/terraform-svchost v0.1.1 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-testing-interface v1.14.1 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
@ -78,29 +93,44 @@ require (
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/prometheus/client_golang v1.12.2 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/oklog/run v1.0.0 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/spf13/cobra v1.9.1 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/vmihailenco/msgpack v4.0.4+incompatible // indirect
github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect
github.com/vmihailenco/tagparser v0.1.1 // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220609170525-579cf78fd858 // indirect
golang.org/x/tools v0.1.12 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xhit/go-str2duration/v2 v2.1.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/exp v0.0.0-20240808152545-0cdaa3abc0fa // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/oauth2 v0.29.0 // indirect
golang.org/x/sync v0.14.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/term v0.31.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/time v0.11.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a // indirect
google.golang.org/grpc v1.72.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/client-go v0.25.0 // indirect
k8s.io/component-base v0.25.0 // indirect
k8s.io/klog/v2 v2.70.1 // indirect
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
k8s.io/apiextensions-apiserver v0.33.0 // indirect
k8s.io/code-generator v0.33.0 // indirect
k8s.io/component-base v0.33.0 // indirect
k8s.io/gengo/v2 v2.0.0-20250207200755-1244d31929d7 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
sigs.k8s.io/controller-tools v0.18.0 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
)

go.sum

File diff suppressed because it is too large

go.sum.license

@ -0,0 +1,4 @@
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
SPDX-License-Identifier: CC0-1.0


@ -1,13 +1,3 @@
Copyright 2021 Upbound Inc.
SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
SPDX-License-Identifier: Apache-2.0


@ -1,3 +1,7 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package pkg
import "strings"
@ -18,7 +22,7 @@ func FilterDescription(description, keyword string) string {
}
}
if len(result) == 0 {
return strings.ReplaceAll(strings.ToLower(description), keyword, "Upbound official provider")
return strings.ReplaceAll(strings.ToLower(description), keyword, "provider")
}
return strings.Join(result, descriptionSeparator)
}

pkg/config/canonical.go

@ -0,0 +1,44 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"github.com/pkg/errors"
"github.com/crossplane/upjet/v2/pkg/resource/json"
)
const (
errFmtNotJSONString = "parameter at path %q with value %v is not a (JSON) string"
errFmtCanonicalize = "failed to canonicalize the parameter at path %q"
)
// CanonicalizeJSONParameters returns a ConfigurationInjector that computes
// and stores the canonical forms of the JSON documents for the specified list
// of top-level Terraform configuration arguments. Please note that currently
// only top-level configuration arguments are supported by this function.
func CanonicalizeJSONParameters(tfPath ...string) ConfigurationInjector {
return func(jsonMap map[string]any, tfMap map[string]any) error {
for _, param := range tfPath {
p, ok := tfMap[param]
if !ok {
continue
}
s, ok := p.(string)
if !ok {
return errors.Errorf(errFmtNotJSONString, param, p)
}
if s == "" {
continue
}
cJSON, err := json.Canonicalize(s)
if err != nil {
return errors.Wrapf(err, errFmtCanonicalize, param)
}
tfMap[param] = cJSON
}
return nil
}
}
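The canonicalization step above makes semantically equal JSON documents (differing only in whitespace or key order) compare equal. A minimal standalone sketch of the idea using only the standard library (upjet's `json.Canonicalize` may differ in details; this is not its actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// canonicalize parses a JSON document and re-serializes it.
// encoding/json sorts object keys when marshaling maps, so two
// semantically equal documents yield byte-identical output.
func canonicalize(s string) (string, error) {
	var v any
	if err := json.Unmarshal([]byte(s), &v); err != nil {
		return "", err
	}
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	a, _ := canonicalize(`{"Statement": [], "Version": "2012-10-17"}`)
	b, _ := canonicalize(`{
	  "Version": "2012-10-17",
	  "Statement": []
	}`)
	fmt.Println(a == b) // prints: true
}
```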


@ -1,16 +1,31 @@
/*
Copyright 2021 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"strings"
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/upbound/upjet/pkg/registry"
tjname "github.com/upbound/upjet/pkg/types/name"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/registry"
tjname "github.com/crossplane/upjet/v2/pkg/types/name"
)
const (
// PackageNameConfig is the name of the provider subpackage that contains
// the base resources (e.g., ProviderConfig, ProviderConfigUsage,
// StoreConfig, etc.).
// TODO: we should be careful that there may also exist short groups with
// these names. We can consider making these configurable by the provider
// maintainer.
PackageNameConfig = "config"
// PackageNameMonolith is the name of the backwards-compatible
// provider subpackage that contains all the resources.
PackageNameMonolith = "monolith"
)
// Commonly used resource configurations.
@ -18,12 +33,17 @@ var (
DefaultBasePackages = BasePackages{
APIVersion: []string{
// Default package for ProviderConfig APIs
"apis/v1alpha1",
"apis/v1beta1",
"v1alpha1",
"v1beta1",
},
Controller: []string{
// Default package for ProviderConfig controllers
"internal/controller/providerconfig",
"providerconfig",
},
ControllerMap: map[string]string{
// Default package for ProviderConfig controllers
"providerconfig": PackageNameConfig,
},
}
@ -38,7 +58,7 @@ type ResourceOption func(*Resource)
// DefaultResource keeps an initial default configuration for all resources of a
// provider.
func DefaultResource(name string, terraformSchema *schema.Resource, terraformRegistry *registry.Resource, opts ...ResourceOption) *Resource {
func DefaultResource(name string, terraformSchema *schema.Resource, terraformPluginFrameworkResource fwresource.Resource, terraformRegistry *registry.Resource, opts ...ResourceOption) *Resource {
words := strings.Split(name, "_")
// As group name we default to the second element if resource name
// has at least 3 elements, otherwise, we took the first element as
@ -61,14 +81,20 @@ func DefaultResource(name string, terraformSchema *schema.Resource, terraformReg
r := &Resource{
Name: name,
TerraformResource: terraformSchema,
TerraformPluginFrameworkResource: terraformPluginFrameworkResource,
MetaResource: terraformRegistry,
ShortGroup: group,
Kind: kind,
Version: "v1alpha1",
ExternalName: NameAsIdentifier,
References: map[string]Reference{},
References: make(References),
Sensitive: NopSensitive,
UseAsync: true,
SchemaElementOptions: make(SchemaElementOptions),
ServerSideApplyMergeStrategies: make(ServerSideApplyMergeStrategies),
Conversions: []conversion.Conversion{conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil)},
OverrideFieldNames: map[string]string{},
listConversionPaths: make(map[string]string),
}
for _, f := range opts {
f(r)
@ -102,13 +128,21 @@ func MoveToStatus(sch *schema.Resource, fieldpaths ...string) {
}
}
// MarkAsRequired marks the given fieldpaths as required without manipulating
// the native field schema.
func (r *Resource) MarkAsRequired(fieldpaths ...string) {
r.requiredFields = append(r.requiredFields, fieldpaths...)
}
// MarkAsRequired marks the schema of the given fieldpath as required. It's most
// useful in cases where the external name contains an optional parameter that
// is defaulted by the provider but we need it to exist, or to fix plainly
// buggy schemas.
// Deprecated: Use Resource.MarkAsRequired instead.
// This function will be removed in future versions.
func MarkAsRequired(sch *schema.Resource, fieldpaths ...string) {
for _, fieldpath := range fieldpaths {
if s := GetSchema(sch, fieldpath); s != nil {
for _, fp := range fieldpaths {
if s := GetSchema(sch, fp); s != nil {
s.Computed = false
s.Optional = false
}


@ -1,23 +1,29 @@
/*
Copyright 2022 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"reflect"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/upbound/upjet/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/registry"
)
func TestDefaultResource(t *testing.T) {
identityConversion := conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil)
type args struct {
name string
sch *schema.Resource
frameworkResource fwresource.Resource
reg *registry.Resource
opts []ResourceOption
}
@ -41,6 +47,10 @@ func TestDefaultResource(t *testing.T) {
References: map[string]Reference{},
Sensitive: NopSensitive,
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"TwoSectionsName": {
@ -57,6 +67,10 @@ func TestDefaultResource(t *testing.T) {
References: map[string]Reference{},
Sensitive: NopSensitive,
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"NameWithPrefixAcronym": {
@ -73,6 +87,10 @@ func TestDefaultResource(t *testing.T) {
References: map[string]Reference{},
Sensitive: NopSensitive,
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"NameWithSuffixAcronym": {
@ -89,6 +107,10 @@ func TestDefaultResource(t *testing.T) {
References: map[string]Reference{},
Sensitive: NopSensitive,
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
"NameWithMultipleAcronyms": {
@ -105,6 +127,10 @@ func TestDefaultResource(t *testing.T) {
References: map[string]Reference{},
Sensitive: NopSensitive,
UseAsync: true,
SchemaElementOptions: SchemaElementOptions{},
ServerSideApplyMergeStrategies: ServerSideApplyMergeStrategies{},
Conversions: []conversion.Conversion{identityConversion},
OverrideFieldNames: map[string]string{},
},
},
}
@ -112,13 +138,15 @@ func TestDefaultResource(t *testing.T) {
// TODO(muvaf): Find a way to compare function pointers.
ignoreUnexported := []cmp.Option{
cmpopts.IgnoreFields(Sensitive{}, "fieldPaths", "AdditionalConnectionDetailsFn"),
cmpopts.IgnoreFields(LateInitializer{}, "ignoredCanonicalFieldPaths"),
cmpopts.IgnoreFields(LateInitializer{}, "ignoredCanonicalFieldPaths", "conditionalIgnoredCanonicalFieldPaths"),
cmpopts.IgnoreFields(ExternalName{}, "SetIdentifierArgumentFn", "GetExternalNameFn", "GetIDFn"),
cmpopts.IgnoreUnexported(Resource{}),
cmpopts.IgnoreUnexported(reflect.ValueOf(identityConversion).Elem().Interface()),
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
r := DefaultResource(tc.args.name, tc.args.sch, tc.args.reg, tc.args.opts...)
r := DefaultResource(tc.args.name, tc.args.sch, tc.args.frameworkResource, tc.args.reg, tc.args.opts...)
if diff := cmp.Diff(tc.want, r, ignoreUnexported...); diff != "" {
t.Errorf("\n%s\nDefaultResource(...): -want, +got:\n%s", tc.reason, diff)
}


@ -0,0 +1,335 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"fmt"
"slices"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
)
const (
// AllVersions denotes that a Conversion is applicable for all versions
// of an API with which the Conversion is registered. It can be used for
// both the conversion source and target API versions.
AllVersions = "*"
)
const (
pathForProvider = "spec.forProvider"
pathInitProvider = "spec.initProvider"
pathAtProvider = "status.atProvider"
)
var (
_ PrioritizedManagedConversion = &identityConversion{}
_ PavedConversion = &fieldCopy{}
_ PavedConversion = &singletonListConverter{}
)
// Conversion is the interface for the CRD API version converters.
// Conversion implementations registered for a source, target
// pair are called in a chain, so Conversion implementations can be modular;
// e.g., a Conversion implementation registered for specific source and
// target versions does not have to contain all the needed API conversions
// between these two versions. All PavedConversions are run in their registration
// order before the ManagedConversions. Conversions are run in three stages:
// 1. PrioritizedManagedConversions are run.
// 2. The source and destination objects are paved and the PavedConversions are
// run in chain without unpaving the unstructured representation between
// conversions.
// 3. The destination paved object is converted back to a managed resource and
// ManagedConversions are run in the order they are registered.
type Conversion interface {
// Applicable should return true if this Conversion is applicable while
// converting the API of the `src` object to the API of the `dst` object.
Applicable(src, dst runtime.Object) bool
}
// PavedConversion is an optimized Conversion between two fieldpath.Paved
// objects. PavedConversion implementations for a specific source and target
// version pair are chained together and the source and the destination objects
// are paved once at the beginning of the chained PavedConversion.ConvertPaved
// calls. The target fieldpath.Paved object is then converted into the original
// resource.Terraformed object at the end of the chained calls. This prevents
// the intermediate conversions between fieldpath.Paved and
// the resource.Terraformed representations of the same object, and the
// fieldpath.Paved representation is convenient for writing generic
// Conversion implementations not bound to a specific type.
type PavedConversion interface {
Conversion
// ConvertPaved converts from the `src` paved object to the `target`
// paved object and returns `true` if the conversion has been done,
// `false` otherwise, together with any errors encountered.
ConvertPaved(src, target *fieldpath.Paved) (bool, error)
}
// ManagedConversion defines a Conversion from a specific source
// resource.Managed type to a target one. Generic Conversion
// implementations may prefer to implement the PavedConversion interface.
// Implementations of ManagedConversion can do type assertions to
// specific source and target types, and so, they are expected to be
// strongly typed.
type ManagedConversion interface {
Conversion
// ConvertManaged converts from the `src` managed resource to the `target`
// managed resource and returns `true` if the conversion has been done,
// `false` otherwise, together with any errors encountered.
ConvertManaged(src, target resource.Managed) (bool, error)
}
// PrioritizedManagedConversion is a ManagedConversion that takes precedence
// over all the other converters. PrioritizedManagedConversions are run,
// in their registration order, before the PavedConversions.
type PrioritizedManagedConversion interface {
ManagedConversion
Prioritized()
}
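The three-stage ordering described in the Conversion interface's documentation can be sketched with a minimal, stdlib-only runner. The `step` type, `runPipeline`, and the conversion names are illustrative stand-ins, not upjet's actual machinery:

```go
package main

import "fmt"

// step is a stand-in for a registered conversion; it records its run order.
type step func(log *[]string)

// runPipeline sketches the documented stage ordering: prioritized managed
// conversions first, then paved conversions chained on the paved form, then
// the remaining managed conversions, each group in registration order.
func runPipeline(prioritized, paved, managed []step) []string {
	var log []string
	// 1. PrioritizedManagedConversions run first.
	for _, s := range prioritized {
		s(&log)
	}
	// (the source and destination objects would be paved once here)
	// 2. PavedConversions run in a chain on the paved representation.
	for _, s := range paved {
		s(&log)
	}
	// (the paved destination would be converted back to a managed resource)
	// 3. ManagedConversions run last, in registration order.
	for _, s := range managed {
		s(&log)
	}
	return log
}

func main() {
	mark := func(name string) step {
		return func(log *[]string) { *log = append(*log, name) }
	}
	fmt.Println(runPipeline(
		[]step{mark("identity")},
		[]step{mark("fieldRename"), mark("singletonList")},
		[]step{mark("custom")},
	))
}
```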
type baseConversion struct {
sourceVersion string
targetVersion string
}
func (c *baseConversion) String() string {
return fmt.Sprintf("source API version %q, target API version %q", c.sourceVersion, c.targetVersion)
}
func newBaseConversion(sourceVersion, targetVersion string) baseConversion {
return baseConversion{
sourceVersion: sourceVersion,
targetVersion: targetVersion,
}
}
func (c *baseConversion) Applicable(src, dst runtime.Object) bool {
return (c.sourceVersion == AllVersions || c.sourceVersion == src.GetObjectKind().GroupVersionKind().Version) &&
(c.targetVersion == AllVersions || c.targetVersion == dst.GetObjectKind().GroupVersionKind().Version)
}
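The version-matching rule in `baseConversion.Applicable` reduces to two string comparisons with a wildcard. A standalone sketch (the `applicable` helper below is illustrative, operating on plain version strings rather than `runtime.Object`s):

```go
package main

import "fmt"

// allVersions mirrors the AllVersions wildcard ("*") from the package above.
const allVersions = "*"

// applicable sketches baseConversion.Applicable: a conversion runs only when
// both its source and target versions match the objects' versions, with "*"
// matching any version.
func applicable(convSrc, convDst, objSrc, objDst string) bool {
	return (convSrc == allVersions || convSrc == objSrc) &&
		(convDst == allVersions || convDst == objDst)
}

func main() {
	fmt.Println(applicable("*", "*", "v1beta1", "v1beta2"))             // true: wildcard on both sides
	fmt.Println(applicable("v1beta1", "v1beta2", "v1beta1", "v1beta2")) // true: exact match
	fmt.Println(applicable("v1beta1", "v1beta2", "v1beta2", "v1beta1")) // false: source version mismatch
}
```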
type fieldCopy struct {
baseConversion
sourceField string
targetField string
}
func (f *fieldCopy) ConvertPaved(src, target *fieldpath.Paved) (bool, error) {
if !f.Applicable(&unstructured.Unstructured{Object: src.UnstructuredContent()},
&unstructured.Unstructured{Object: target.UnstructuredContent()}) {
return false, nil
}
v, err := src.GetValue(f.sourceField)
// TODO: the field might actually exist in the schema but
// be missing in the object, or it may not exist in the schema at all.
// For a field that does not exist in the schema, we should return an error.
if fieldpath.IsNotFound(err) {
return false, nil
}
if err != nil {
return false, errors.Wrapf(err, "failed to get the field %q from the conversion source object", f.sourceField)
}
return true, errors.Wrapf(target.SetValue(f.targetField, v), "failed to set the field %q of the conversion target object", f.targetField)
}
// NewFieldRenameConversion returns a new Conversion that implements a
// field renaming conversion from the specified `sourceVersion` to the specified
// `targetVersion` of an API. The field's name in the `sourceVersion` is given
// with the `sourceField` parameter and its name in the `targetVersion` is
// given with the `targetField` parameter.
func NewFieldRenameConversion(sourceVersion, sourceField, targetVersion, targetField string) Conversion {
return &fieldCopy{
baseConversion: newBaseConversion(sourceVersion, targetVersion),
sourceField: sourceField,
targetField: targetField,
}
}
type customConverter func(src, target resource.Managed) error
type customConversion struct {
baseConversion
customConverter customConverter
}
func (cc *customConversion) ConvertManaged(src, target resource.Managed) (bool, error) {
if !cc.Applicable(src, target) || cc.customConverter == nil {
return false, nil
}
return true, errors.Wrap(cc.customConverter(src, target), "failed to apply the converter function")
}
// NewCustomConverter returns a new Conversion from the specified
// `sourceVersion` of an API to the specified `targetVersion` and invokes
// the specified converter function to perform the conversion on the
// managed resources.
func NewCustomConverter(sourceVersion, targetVersion string, converter func(src, target resource.Managed) error) Conversion {
return &customConversion{
baseConversion: newBaseConversion(sourceVersion, targetVersion),
customConverter: converter,
}
}
type singletonListConverter struct {
baseConversion
pathPrefixes []string
crdPaths []string
mode ListConversionMode
convertOptions *ConvertOptions
}
// SingletonListConversionOption configures a singleton list conversion.
type SingletonListConversionOption func(*singletonListConverter)
// WithConvertOptions sets the ConvertOptions for the singleton list conversion.
func WithConvertOptions(opts *ConvertOptions) SingletonListConversionOption {
return func(s *singletonListConverter) {
s.convertOptions = opts
}
}
// NewSingletonListConversion returns a new Conversion from the specified
// sourceVersion of an API to the specified targetVersion and uses the
// CRD field paths given in crdPaths to convert between the singleton
// lists and embedded objects in the given conversion mode.
func NewSingletonListConversion(sourceVersion, targetVersion string, pathPrefixes []string, crdPaths []string, mode ListConversionMode, opts ...SingletonListConversionOption) Conversion {
s := &singletonListConverter{
baseConversion: newBaseConversion(sourceVersion, targetVersion),
pathPrefixes: pathPrefixes,
crdPaths: crdPaths,
mode: mode,
}
for _, o := range opts {
o(s)
}
return s
}
func (s *singletonListConverter) ConvertPaved(src, target *fieldpath.Paved) (bool, error) {
if !s.Applicable(&unstructured.Unstructured{Object: src.UnstructuredContent()},
&unstructured.Unstructured{Object: target.UnstructuredContent()}) {
return false, nil
}
if len(s.crdPaths) == 0 {
return false, nil
}
for _, p := range s.pathPrefixes {
v, err := src.GetValue(p)
if err != nil {
return true, errors.Wrapf(err, "failed to read the %s value for conversion in mode %q", p, s.mode)
}
m, ok := v.(map[string]any)
if !ok {
return true, errors.Errorf("value at path %s is not a map[string]any", p)
}
if _, err := Convert(m, s.crdPaths, s.mode, s.convertOptions); err != nil {
return true, errors.Wrapf(err, "failed to convert the source map in mode %q with %s", s.mode, s.baseConversion.String())
}
if err := target.SetValue(p, m); err != nil {
return true, errors.Wrapf(err, "failed to set the %s value for conversion in mode %q", p, s.mode)
}
}
return true, nil
}
type identityConversion struct {
baseConversion
excludePaths []string
}
func (i *identityConversion) ConvertManaged(src, target resource.Managed) (bool, error) {
if !i.Applicable(src, target) {
return false, nil
}
srcCopy := src.DeepCopyObject()
srcRaw, err := runtime.DefaultUnstructuredConverter.ToUnstructured(srcCopy)
if err != nil {
return false, errors.Wrap(err, "cannot convert the source managed resource into an unstructured representation")
}
// remove excluded fields
if len(i.excludePaths) > 0 {
pv := fieldpath.Pave(srcRaw)
for _, ex := range i.excludePaths {
exPaths, err := pv.ExpandWildcards(ex)
if err != nil {
return false, errors.Wrapf(err, "cannot expand wildcards in the fieldpath expression %s", ex)
}
for _, p := range exPaths {
if err := pv.DeleteField(p); err != nil {
return false, errors.Wrapf(err, "cannot delete a field in the conversion source object")
}
}
}
}
// copy the remaining fields
gvk := target.GetObjectKind().GroupVersionKind()
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(srcRaw, target); err != nil {
return true, errors.Wrap(err, "cannot convert the map[string]any representation of the source object to the conversion target object")
}
// restore the original GVK for the conversion destination
target.GetObjectKind().SetGroupVersionKind(gvk)
return true, nil
}
func (i *identityConversion) Prioritized() {}
// newIdentityConversion returns a new Conversion from the specified
// sourceVersion of an API to the specified targetVersion, which copies the
// identical paths from the source to the target. excludePaths can be used
// to ignore certain field paths while copying.
func newIdentityConversion(sourceVersion, targetVersion string, excludePaths ...string) Conversion {
return &identityConversion{
baseConversion: newBaseConversion(sourceVersion, targetVersion),
excludePaths: excludePaths,
}
}
// NewIdentityConversionExpandPaths returns a new Conversion from the specified
// sourceVersion of an API to the specified targetVersion, which copies the
// identical paths from the source to the target. excludePaths can be used
// to ignore certain field paths while copying. Exclude paths must be specified
// in standard crossplane-runtime fieldpath library syntax, i.e., with proper
// indices for traversing map and slice types (e.g., a.b[*].c).
// The field paths in excludePaths are sorted in lexical order and are prefixed
// with each of the path prefixes specified with pathPrefixes. So if an
// exclude path "x" is specified with the prefix slice ["a", "b"], then
// paths a.x and b.x will both be skipped while copying fields from a source to
// a target.
func NewIdentityConversionExpandPaths(sourceVersion, targetVersion string, pathPrefixes []string, excludePaths ...string) Conversion {
return newIdentityConversion(sourceVersion, targetVersion, ExpandParameters(pathPrefixes, excludePaths...)...)
}
// ExpandParameters sorts and expands the given list of field path suffixes
// with the given prefixes.
func ExpandParameters(prefixes []string, excludePaths ...string) []string {
slices.Sort(excludePaths)
if len(prefixes) == 0 {
return excludePaths
}
r := make([]string, 0, len(prefixes)*len(excludePaths))
for _, p := range prefixes {
for _, ex := range excludePaths {
r = append(r, fmt.Sprintf("%s.%s", p, ex))
}
}
return r
}
// DefaultPathPrefixes returns the list of the default path prefixes for
// excluding paths in the identity conversion. The returned value is
// ["spec.forProvider", "spec.initProvider", "status.atProvider"].
func DefaultPathPrefixes() []string {
return []string{pathForProvider, pathInitProvider, pathAtProvider}
}

View File

@ -0,0 +1,580 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"fmt"
"slices"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
jsoniter "github.com/json-iterator/go"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/utils/ptr"
)
const (
sourceVersion = "v1beta1"
sourceField = "testSourceField"
targetVersion = "v1beta2"
targetField = "testTargetField"
)
func TestConvertPaved(t *testing.T) {
type args struct {
sourceVersion string
sourceField string
targetVersion string
targetField string
sourceObj *fieldpath.Paved
targetObj *fieldpath.Paved
}
type want struct {
converted bool
err error
targetObj *fieldpath.Paved
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulConversion": {
reason: "Source field in source version is successfully converted to the target field in target version.",
args: args{
sourceVersion: sourceVersion,
sourceField: sourceField,
targetVersion: targetVersion,
targetField: targetField,
sourceObj: getPaved(sourceVersion, sourceField, ptr.To("testValue")),
targetObj: getPaved(targetVersion, targetField, nil),
},
want: want{
converted: true,
targetObj: getPaved(targetVersion, targetField, ptr.To("testValue")),
},
},
"SuccessfulConversionAllVersions": {
reason: "Source field in source version is successfully converted to the target field in target version when the conversion specifies wildcard version for both of the source and the target.",
args: args{
sourceVersion: AllVersions,
sourceField: sourceField,
targetVersion: AllVersions,
targetField: targetField,
sourceObj: getPaved(sourceVersion, sourceField, ptr.To("testValue")),
targetObj: getPaved(targetVersion, targetField, nil),
},
want: want{
converted: true,
targetObj: getPaved(targetVersion, targetField, ptr.To("testValue")),
},
},
"SourceVersionMismatch": {
reason: "Conversion is not done if the source version of the object does not match the conversion's source version.",
args: args{
sourceVersion: "mismatch",
sourceField: sourceField,
targetVersion: AllVersions,
targetField: targetField,
sourceObj: getPaved(sourceVersion, sourceField, ptr.To("testValue")),
targetObj: getPaved(targetVersion, targetField, nil),
},
want: want{
converted: false,
targetObj: getPaved(targetVersion, targetField, nil),
},
},
"TargetVersionMismatch": {
reason: "Conversion is not done if the target version of the object does not match the conversion's target version.",
args: args{
sourceVersion: AllVersions,
sourceField: sourceField,
targetVersion: "mismatch",
targetField: targetField,
sourceObj: getPaved(sourceVersion, sourceField, ptr.To("testValue")),
targetObj: getPaved(targetVersion, targetField, nil),
},
want: want{
converted: false,
targetObj: getPaved(targetVersion, targetField, nil),
},
},
"SourceFieldNotFound": {
reason: "Conversion is not done if the source field is not found in the source object.",
args: args{
sourceVersion: sourceVersion,
sourceField: sourceField,
targetVersion: targetVersion,
targetField: targetField,
sourceObj: getPaved(sourceVersion, sourceField, nil),
targetObj: getPaved(targetVersion, targetField, ptr.To("test")),
},
want: want{
converted: false,
targetObj: getPaved(targetVersion, targetField, ptr.To("test")),
},
},
}
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
c := NewFieldRenameConversion(tc.args.sourceVersion, tc.args.sourceField, tc.args.targetVersion, tc.args.targetField)
converted, err := c.(*fieldCopy).ConvertPaved(tc.args.sourceObj, tc.args.targetObj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nConvertPaved(sourceObj, targetObj): -wantErr, +gotErr:\n%s", tc.reason, diff)
}
if tc.want.err != nil {
return
}
if diff := cmp.Diff(tc.want.converted, converted); diff != "" {
t.Errorf("\n%s\nConvertPaved(sourceObj, targetObj): -wantConverted, +gotConverted:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.targetObj.UnstructuredContent(), tc.args.targetObj.UnstructuredContent()); diff != "" {
t.Errorf("\n%s\nConvertPaved(sourceObj, targetObj): -wantTargetObj, +gotTargetObj:\n%s", tc.reason, diff)
}
})
}
}
func TestIdentityConversion(t *testing.T) {
type args struct {
sourceVersion string
source resource.Managed
targetVersion string
target *mockManaged
pathPrefixes []string
excludePaths []string
}
type want struct {
converted bool
err error
target *mockManaged
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulConversionNoExclusions": {
reason: "Successfully copy identical fields from the source to the target with no exclusions.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
},
},
"SuccessfulConversionExclusionsWithNoPrefixes": {
reason: "Successfully copy identical fields from the source to the target with exclusions without prefixes.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2", "k3"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
}),
},
},
"SuccessfulConversionNestedExclusionsWithNoPrefixes": {
reason: "Successfully copy identical fields from the source to the target with nested exclusions without prefixes.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": "v2",
"k3": map[string]any{
"nk1": "nv1",
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2", "k3.nk1"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
// key k3 is copied without its nested element (as an empty map)
"k3": map[string]any{},
}),
},
},
"SuccessfulConversionWithListExclusion": {
reason: "Successfully copy identical fields from the source to the target with an exclusion for a root-level list.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": []map[string]any{
{
"nk3": "nv3",
},
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
}),
},
},
"SuccessfulConversionWithNestedListExclusion": {
reason: "Successfully copy identical fields from the source to the target with an exclusion for a nested list.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"k1": "v1",
"k2": []map[string]any{
{
"nk3": []map[string]any{
{
"nk4": "nv4",
},
},
},
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2[*].nk3"},
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"k1": "v1",
"k2": []any{map[string]any{}},
}),
},
},
"SuccessfulConversionWithDefaultExclusionPrefixes": {
reason: "Successfully copy identical fields from the source to the target with an exclusion expanded under the default path prefixes.",
args: args{
sourceVersion: AllVersions,
source: newMockManaged(map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"k1": "v1",
"k2": "v2",
},
"forProvider": map[string]any{
"k1": "v1",
"k2": "v2",
},
},
"status": map[string]any{
"atProvider": map[string]any{
"k1": "v1",
"k2": "v2",
},
},
}),
targetVersion: AllVersions,
target: newMockManaged(nil),
excludePaths: []string{"k2"},
pathPrefixes: DefaultPathPrefixes(),
},
want: want{
converted: true,
target: newMockManaged(map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"k1": "v1",
},
"forProvider": map[string]any{
"k1": "v1",
},
},
"status": map[string]any{
"atProvider": map[string]any{
"k1": "v1",
},
},
}),
},
},
}
for n, tc := range tests {
t.Run(n, func(t *testing.T) {
c := NewIdentityConversionExpandPaths(tc.args.sourceVersion, tc.args.targetVersion, tc.args.pathPrefixes, tc.args.excludePaths...)
converted, err := c.(*identityConversion).ConvertManaged(tc.args.source, tc.args.target)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\nConvertManaged(source, target): -wantErr, +gotErr:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.converted, converted); diff != "" {
t.Errorf("\n%s\nConvertManaged(source, target): -wantConverted, +gotConverted:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.target.UnstructuredContent(), tc.args.target.UnstructuredContent()); diff != "" {
t.Errorf("\n%s\nConvertManaged(source, target): -wantTarget, +gotTarget:\n%s", tc.reason, diff)
}
})
}
}
func TestDefaultPathPrefixes(t *testing.T) {
// no need for a table-driven test here as all the parameter roots
// in the MR schema are asserted.
want := []string{"spec.forProvider", "spec.initProvider", "status.atProvider"}
slices.Sort(want)
got := DefaultPathPrefixes()
slices.Sort(got)
if diff := cmp.Diff(want, got); diff != "" {
t.Fatalf("DefaultPathPrefixes(): -want, +got:\n%s", diff)
}
}
func TestSingletonListConversion(t *testing.T) {
type args struct {
sourceVersion string
sourceMap map[string]any
targetVersion string
targetMap map[string]any
crdPaths []string
mode ListConversionMode
opts []SingletonListConversionOption
}
type want struct {
converted bool
err error
targetMap map[string]any
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulToEmbeddedObjectConversion": {
reason: "Successful conversion from a singleton list to an embedded object.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
crdPaths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
converted: true,
targetMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
},
},
"SuccessfulToSingletonListConversion": {
reason: "Successful conversion from an embedded object to a singleton list.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": map[string]any{
"k": "v",
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
crdPaths: []string{"o"},
mode: ToSingletonList,
},
want: want{
converted: true,
targetMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": []map[string]any{
{
"k": "v",
},
},
},
},
},
},
},
"NoCRDPath": {
reason: "No conversion when the specified CRD paths is empty.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": map[string]any{
"k": "v",
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
mode: ToSingletonList,
},
want: want{
converted: false,
targetMap: map[string]any{},
},
},
"SuccessfulToSingletonListConversionWithInjectedKey": {
reason: "Successful conversion from an embedded object to a singleton list with a key injected into the list item.",
args: args{
sourceVersion: AllVersions,
sourceMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": map[string]any{
"k": "v",
},
},
},
},
targetVersion: AllVersions,
targetMap: map[string]any{},
crdPaths: []string{"o"},
mode: ToSingletonList,
opts: []SingletonListConversionOption{
WithConvertOptions(&ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"o": {
Key: "index",
Value: "0",
},
},
}),
},
},
want: want{
converted: true,
targetMap: map[string]any{
"spec": map[string]any{
"initProvider": map[string]any{
"o": []map[string]any{
{
"k": "v",
"index": "0",
},
},
},
},
},
},
},
}
for n, tc := range tests {
t.Run(n, func(t *testing.T) {
c := NewSingletonListConversion(tc.args.sourceVersion, tc.args.targetVersion, []string{pathInitProvider}, tc.args.crdPaths, tc.args.mode, tc.args.opts...)
sourceMap, err := roundTrip(tc.args.sourceMap)
if err != nil {
t.Fatalf("Failed to preprocess tc.args.sourceMap: %v", err)
}
targetMap, err := roundTrip(tc.args.targetMap)
if err != nil {
t.Fatalf("Failed to preprocess tc.args.targetMap: %v", err)
}
converted, err := c.(*singletonListConverter).ConvertPaved(fieldpath.Pave(sourceMap), fieldpath.Pave(targetMap))
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\nConvertPaved(source, target): -wantErr, +gotErr:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.converted, converted); diff != "" {
t.Errorf("\n%s\nConvertPaved(source, target): -wantConverted, +gotConverted:\n%s", tc.reason, diff)
}
m, err := roundTrip(tc.want.targetMap)
if err != nil {
t.Fatalf("Failed to preprocess tc.want.targetMap: %v", err)
}
if diff := cmp.Diff(m, targetMap); diff != "" {
t.Errorf("\n%s\nConvertPaved(source, target): -wantTarget, +gotTarget:\n%s", tc.reason, diff)
}
})
}
}
func getPaved(version, field string, value *string) *fieldpath.Paved {
m := map[string]any{
"apiVersion": fmt.Sprintf("mockgroup/%s", version),
"kind": "mockkind",
}
if value != nil {
m[field] = *value
}
return fieldpath.Pave(m)
}
type mockManaged struct {
*fake.Managed
*fieldpath.Paved
}
func (m *mockManaged) DeepCopyObject() runtime.Object {
buff, err := jsoniter.ConfigCompatibleWithStandardLibrary.Marshal(m.Paved.UnstructuredContent())
if err != nil {
panic(err)
}
var u map[string]any
if err := jsoniter.Unmarshal(buff, &u); err != nil {
panic(err)
}
return &mockManaged{
Managed: m.Managed.DeepCopyObject().(*fake.Managed),
Paved: fieldpath.Pave(u),
}
}
func newMockManaged(m map[string]any) *mockManaged {
return &mockManaged{
Managed: &fake.Managed{},
Paved: fieldpath.Pave(m),
}
}

View File

@ -0,0 +1,159 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"reflect"
"slices"
"sort"
"strings"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/pkg/errors"
)
// ListConversionMode denotes the mode of the list-object API conversion, e.g.,
// conversion of embedded objects into singleton lists.
type ListConversionMode int
const (
// ToEmbeddedObject represents a runtime conversion from a singleton list
// to an embedded object, i.e., the runtime conversions needed while
// reading from the Terraform state and updating the CRD
// (for status, late-initialization, etc.)
ToEmbeddedObject ListConversionMode = iota
// ToSingletonList represents a runtime conversion from an embedded object
// to a singleton list, i.e., the runtime conversions needed while passing
// the configuration data to the underlying Terraform layer.
ToSingletonList
)
const (
errFmtMultiItemList = "singleton list, at the field path %s, must have a length of at most 1 but it has a length of %d"
errFmtNonSlice = "value at the field path %s must be []any, not %q"
)
// String returns a string representation of the conversion mode.
func (m ListConversionMode) String() string {
switch m {
case ToSingletonList:
return "toSingletonList"
case ToEmbeddedObject:
return "toEmbeddedObject"
default:
return "unknown"
}
}
// setValue sets the value, in pv, to v at the specified path fp.
// It's implemented on top of the fieldpath library by accessing
// the parent map in fp and directly setting v as a value in the
// parent map. We don't use fieldpath.Paved.SetValue because the
// JSON value validation performed by it potentially changes types.
func setValue(pv *fieldpath.Paved, v any, fp string) error {
segments := strings.Split(fp, ".")
p := fp
var pm any = pv.UnstructuredContent()
var err error
if len(segments) > 1 {
p = strings.Join(segments[:len(segments)-1], ".")
pm, err = pv.GetValue(p)
if err != nil {
return errors.Wrapf(err, "cannot get the parent value at field path %s", p)
}
}
parent, ok := pm.(map[string]any)
if !ok {
return errors.Errorf("parent at field path %s must be a map[string]any", p)
}
parent[segments[len(segments)-1]] = v
return nil
}
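The parent-map trick used by `setValue` above can be sketched without the fieldpath library: walk to the parent map of the dotted path and assign the value directly, so the value's Go type is preserved rather than re-validated as JSON. This simplified version (an assumption-laden sketch, not the real helper) handles only plain dotted map keys, not wildcard or slice-index segments.

```go
package main

import (
	"fmt"
	"strings"
)

// setValue assigns v at the dotted field path fp in obj by locating the
// parent map and setting the final key directly, leaving v's type untouched.
func setValue(obj map[string]any, fp string, v any) error {
	segments := strings.Split(fp, ".")
	parent := obj
	for _, s := range segments[:len(segments)-1] {
		child, ok := parent[s].(map[string]any)
		if !ok {
			return fmt.Errorf("parent at segment %q is not a map[string]any", s)
		}
		parent = child
	}
	parent[segments[len(segments)-1]] = v
	return nil
}

func main() {
	obj := map[string]any{"spec": map[string]any{"initProvider": map[string]any{}}}
	// Set a singleton list value; its []any type survives the assignment.
	if err := setValue(obj, "spec.initProvider.l", []any{map[string]any{"k": "v"}}); err != nil {
		panic(err)
	}
	fmt.Println(obj)
}
```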
// SingletonListInjectKey defines a key and a default value to be injected
// into the single item of a converted singleton list.
type SingletonListInjectKey struct {
Key string
Value string
}
// ConvertOptions configures the conversions between singleton lists and
// embedded objects.
type ConvertOptions struct {
// ListInjectKeys is used to inject a key with a default value into the
// singleton list for a given path.
ListInjectKeys map[string]SingletonListInjectKey
}
// Convert performs conversion between singleton lists and embedded objects
// while passing the CRD parameters to the Terraform layer and while reading
// state from the Terraform layer at runtime. The paths where the conversion
// will be performed are specified using paths and the conversion mode (whether
// an embedded object will be converted into a singleton list or a singleton
// list will be converted into an embedded object) is determined by the mode
// parameter.
func Convert(params map[string]any, paths []string, mode ListConversionMode, opts *ConvertOptions) (map[string]any, error) { //nolint:gocyclo // easier to follow as a unit
switch mode {
case ToSingletonList:
slices.Sort(paths)
case ToEmbeddedObject:
sort.Slice(paths, func(i, j int) bool {
return paths[i] > paths[j]
})
}
pv := fieldpath.Pave(params)
for _, fp := range paths {
exp, err := pv.ExpandWildcards(fp)
if err != nil && !fieldpath.IsNotFound(err) {
return nil, errors.Wrapf(err, "cannot expand wildcards for the field path expression %s", fp)
}
for _, e := range exp {
v, err := pv.GetValue(e)
if err != nil {
return nil, errors.Wrapf(err, "cannot get the value at the field path %s with the conversion mode set to %q", e, mode)
}
switch mode {
case ToSingletonList:
if opts != nil {
// We replace 0th index with "*" to be able to stay consistent
// with the paths parameter in the keys of opts.ListInjectKeys.
if inj, ok := opts.ListInjectKeys[strings.ReplaceAll(e, "0", "*")]; ok && inj.Key != "" && inj.Value != "" {
if m, ok := v.(map[string]any); ok {
m[inj.Key] = inj.Value
}
}
}
if err := setValue(pv, []any{v}, e); err != nil {
return nil, errors.Wrapf(err, "cannot set the singleton list's value at the field path %s", e)
}
case ToEmbeddedObject:
var newVal any = nil
if v != nil {
newVal = map[string]any{}
s, ok := v.([]any)
if !ok {
// then it's not a slice
return nil, errors.Errorf(errFmtNonSlice, e, reflect.TypeOf(v))
}
if len(s) > 1 {
return nil, errors.Errorf(errFmtMultiItemList, e, len(s))
}
if len(s) > 0 {
newVal = s[0]
}
}
if opts != nil {
// We replace 0th index with "*" to be able to stay consistent
// with the paths parameter in the keys of opts.ListInjectKeys.
if inj, ok := opts.ListInjectKeys[strings.ReplaceAll(e, "0", "*")]; ok && inj.Key != "" && inj.Value != "" {
delete(newVal.(map[string]any), inj.Key)
}
}
if err := setValue(pv, newVal, e); err != nil {
return nil, errors.Wrapf(err, "cannot set the embedded object's value at the field path %s", e)
}
}
}
}
return params, nil
}
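The two modes of `Convert` can be reduced, for a single wildcard-free path, to a pair of small helpers: an embedded object is wrapped into a one-element list, and a list of length at most one collapses back to its sole element (or an empty map). The helpers below are a stdlib-only sketch of that round trip, without the path expansion, ordering, or key-injection handling of the real function.

```go
package main

import "fmt"

// toSingletonList wraps the value at key in a one-element list
// (the ToSingletonList direction of Convert).
func toSingletonList(m map[string]any, key string) {
	if v, ok := m[key]; ok {
		m[key] = []any{v}
	}
}

// toEmbeddedObject collapses a singleton list at key back into its sole
// element (the ToEmbeddedObject direction), erroring on non-slices and on
// lists longer than one, and using an empty map for an empty list.
func toEmbeddedObject(m map[string]any, key string) error {
	v, ok := m[key]
	if !ok || v == nil {
		return nil
	}
	s, ok := v.([]any)
	if !ok {
		return fmt.Errorf("value at %q is not []any", key)
	}
	if len(s) > 1 {
		return fmt.Errorf("singleton list at %q has length %d", key, len(s))
	}
	var newVal any = map[string]any{}
	if len(s) == 1 {
		newVal = s[0]
	}
	m[key] = newVal
	return nil
}

func main() {
	params := map[string]any{"l": map[string]any{"k": "v"}}
	toSingletonList(params, "l")
	fmt.Println(params) // the embedded object is now a one-element list
	if err := toEmbeddedObject(params, "l"); err != nil {
		panic(err)
	}
	fmt.Println(params) // the round trip restores the embedded object
}
```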

View File

@ -0,0 +1,474 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"reflect"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
jsoniter "github.com/json-iterator/go"
"github.com/pkg/errors"
)
func TestConvert(t *testing.T) {
type args struct {
params map[string]any
paths []string
mode ListConversionMode
opts *ConvertOptions
}
type want struct {
err error
params map[string]any
}
tests := map[string]struct {
reason string
args args
want want
}{
"NilParamsAndPaths": {
reason: "Conversion on a nil map should not fail.",
args: args{},
},
"EmptyPaths": {
reason: "Empty conversion on a map should be an identity function.",
args: args{
params: map[string]any{"a": "b"},
},
want: want{
params: map[string]any{"a": "b"},
},
},
"SingletonListToEmbeddedObject": {
reason: "Should successfully convert a singleton list at the root level to an embedded object.",
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
"NestedSingletonListsToEmbeddedObjectsPathsInLexicalOrder": {
reason: "Should successfully convert the parent & nested singleton lists to embedded objects. Paths specified in lexical order.",
args: args{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
},
},
"NestedSingletonListsToEmbeddedObjectsPathsInReverseLexicalOrder": {
reason: "Should successfully convert the parent & nested singleton lists to embedded objects. Paths specified in reverse-lexical order.",
args: args{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
paths: []string{"parent[*].child", "parent"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
},
},
"EmbeddedObjectToSingletonList": {
reason: "Should successfully convert an embedded object at the root level to a singleton list.",
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"l"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
},
},
"NestedEmbeddedObjectsToSingletonListInLexicalOrder": {
reason: "Should successfully convert the parent & nested embedded objects to singleton lists. Paths are specified in lexical order.",
args: args{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
},
},
"NestedEmbeddedObjectsToSingletonListInReverseLexicalOrder": {
reason: "Should successfully convert the parent & nested embedded objects to singleton lists. Paths are specified in reverse-lexical order.",
args: args{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
paths: []string{"parent[*].child", "parent"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"parent": []map[string]any{
{
"child": []map[string]any{
{
"k": "v",
},
},
},
},
},
},
},
"FailConversionOfAMultiItemList": {
reason: `Conversion of a multi-item list in mode "ToEmbeddedObject" should fail.`,
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k1": "v1",
},
{
"k2": "v2",
},
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
err: errors.Errorf(errFmtMultiItemList, "l", 2),
},
},
"FailConversionOfNonSlice": {
reason: `Conversion of a non-slice value in mode "ToEmbeddedObject" should fail.`,
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
},
want: want{
err: errors.Errorf(errFmtNonSlice, "l", reflect.TypeOf(map[string]any{})),
},
},
"ToSingletonListWithNonExistentPath": {
reason: `"ToSingletonList" mode conversions specifying only non-existent paths should be identity functions.`,
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"nonexistent"},
mode: ToSingletonList,
},
want: want{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
"ToEmbeddedObjectWithNonExistentPath": {
reason: `"ToEmbeddedObject" mode conversions specifying only non-existent paths should be identity functions.`,
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
paths: []string{"nonexistent"},
mode: ToEmbeddedObject,
},
want: want{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
},
},
},
},
},
"WithInjectedKeySingletonListToEmbeddedObject": {
reason: "Should successfully convert a singleton list at the root level to an embedded object, dropping the injected key.",
args: args{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
"index": "0",
},
},
},
paths: []string{"l"},
mode: ToEmbeddedObject,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"l": {
Key: "index",
Value: "0",
},
},
}},
want: want{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
},
},
"WithInjectedKeyEmbeddedObjectToSingletonList": {
reason: "Should successfully convert an embedded object at the root level to a singleton list, injecting the configured key.",
args: args{
params: map[string]any{
"l": map[string]any{
"k": "v",
},
},
paths: []string{"l"},
mode: ToSingletonList,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"l": {
Key: "index",
Value: "0",
},
},
},
},
want: want{
params: map[string]any{
"l": []map[string]any{
{
"k": "v",
"index": "0",
},
},
},
},
},
"WithInjectedKeyNestedEmbeddedObjectsToSingletonListInLexicalOrder": {
reason: "Should successfully convert the parent & nested embedded objects to singleton lists, injecting the configured keys. Paths are specified in lexical order.",
args: args{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToSingletonList,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"parent": {
Key: "index",
Value: "0",
},
"parent[*].child": {
Key: "another",
Value: "0",
},
},
},
},
want: want{
params: map[string]any{
"parent": []map[string]any{
{
"index": "0",
"child": []map[string]any{
{
"k": "v",
"another": "0",
},
},
},
},
},
},
},
"WithInjectedKeyNestedSingletonListsToEmbeddedObjectsPathsInLexicalOrder": {
reason: "Should successfully convert the parent & nested singleton lists to embedded objects, dropping the injected keys. Paths specified in lexical order.",
args: args{
params: map[string]any{
"parent": []map[string]any{
{
"index": "0",
"child": []map[string]any{
{
"k": "v",
"another": "0",
},
},
},
},
},
paths: []string{"parent", "parent[*].child"},
mode: ToEmbeddedObject,
opts: &ConvertOptions{
ListInjectKeys: map[string]SingletonListInjectKey{
"parent": {
Key: "index",
Value: "0",
},
"parent[*].child": {
Key: "another",
Value: "0",
},
},
},
},
want: want{
params: map[string]any{
"parent": map[string]any{
"child": map[string]any{
"k": "v",
},
},
},
},
},
}
for n, tt := range tests {
t.Run(n, func(t *testing.T) {
params, err := roundTrip(tt.args.params)
if err != nil {
t.Fatalf("Failed to preprocess tt.args.params: %v", err)
}
wantParams, err := roundTrip(tt.want.params)
if err != nil {
t.Fatalf("Failed to preprocess tt.want.params: %v", err)
}
got, err := Convert(params, tt.args.paths, tt.args.mode, tt.args.opts)
if diff := cmp.Diff(tt.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\nConvert(tt.args.params, tt.args.paths): -wantErr, +gotErr:\n%s", tt.reason, diff)
}
if diff := cmp.Diff(wantParams, got); diff != "" {
t.Errorf("\n%s\nConvert(tt.args.params, tt.args.paths): -wantConverted, +gotConverted:\n%s", tt.reason, diff)
}
})
}
}
func TestModeString(t *testing.T) {
tests := map[string]struct {
m ListConversionMode
want string
}{
"ToSingletonList": {
m: ToSingletonList,
want: "toSingletonList",
},
"ToEmbeddedObject": {
m: ToEmbeddedObject,
want: "toEmbeddedObject",
},
"Unknown": {
m: ToSingletonList + 1,
want: "unknown",
},
}
for n, tt := range tests {
t.Run(n, func(t *testing.T) {
if diff := cmp.Diff(tt.want, tt.m.String()); diff != "" {
t.Errorf("String(): -want, +got:\n%s", diff)
}
})
}
}
func roundTrip(m map[string]any) (map[string]any, error) {
if len(m) == 0 {
return m, nil
}
buff, err := jsoniter.ConfigCompatibleWithStandardLibrary.Marshal(m)
if err != nil {
return nil, err
}
var r map[string]any
return r, jsoniter.ConfigCompatibleWithStandardLibrary.Unmarshal(buff, &r)
}

View File

@ -1,6 +1,6 @@
/*
Copyright 2021 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
@ -10,9 +10,10 @@ import (
"regexp"
"strings"
"text/template"
"text/template/parse"
"github.com/crossplane/crossplane-runtime/pkg/errors"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
)
const (
@ -47,6 +48,8 @@ var (
GetIDFn: ExternalNameAsID,
DisableNameInitializer: true,
}
parameterPattern = regexp.MustCompile(`{{\s*\.parameters\.([^\s}]+)\s*}}`)
)
// ParameterAsIdentifier uses the given field name in the arguments as the
@ -60,36 +63,66 @@ func ParameterAsIdentifier(param string) ExternalName {
param,
param + "_prefix",
}
e.IdentifierFields = []string{param}
return e
}
// TemplatedStringAsIdentifier accepts a template as the shape of the Terraform
// ID and lets you provide a field path for the argument you're using as external
// name. The available variables you can use in the template are as follows:
// parameters: A tree of parameters that you'd normally see in a Terraform HCL
//
// parameters: A tree of parameters that you'd normally see in a Terraform HCL
// file. You can use TF registry documentation of given resource to
// see what's available.
//
// terraformProviderConfig: The Terraform configuration object of the provider. You can
//
// setup.configuration: The Terraform configuration object of the provider. You can
// take a look at the TF registry provider configuration object
// to see what's available. Not to be confused with ProviderConfig
// custom resource of the Crossplane provider.
//
// external_name: The value of external name annotation of the custom resource.
// setup.client_metadata: The Terraform client metadata available for the provider,
// such as the AWS account ID for the AWS provider.
//
// external_name: The value of external name annotation of the custom resource.
// It is required to use this as part of the template.
//
// The following template functions are available:
//
// ToLower: Converts the contents of the pipeline to lower-case
//
// ToUpper: Converts the contents of the pipeline to upper-case
//
// Please note that it's currently *not* possible to use
// the template functions on the .external_name template variable.
// Example usages:
// TemplatedStringAsIdentifier("index_name", "/subscriptions/{{ .terraformProviderConfig.subscription }}/{{ .external_name }}")
// TemplatedStringAsIdentifier("index.name", "/resource/{{ .external_name }}/static")
// TemplatedStringAsIdentifier("index.name", "{{ .parameters.cluster_id }}:{{ .parameters.node_id }}:{{ .external_name }}")
//
// TemplatedStringAsIdentifier("index_name", "/subscriptions/{{ .setup.configuration.subscription }}/{{ .external_name }}")
//
// TemplatedStringAsIdentifier("index_name", "/resource/{{ .external_name }}/static")
//
// TemplatedStringAsIdentifier("index_name", "{{ .parameters.cluster_id }}:{{ .parameters.node_id }}:{{ .external_name }}")
//
// TemplatedStringAsIdentifier("", "arn:aws:network-firewall:{{ .setup.configuration.region }}:{{ .setup.client_metadata.account_id }}:{{ .parameters.type | ToLower }}-rulegroup/{{ .external_name }}")
func TemplatedStringAsIdentifier(nameFieldPath, tmpl string) ExternalName {
t, err := template.New("getid").Parse(tmpl)
t, err := template.New("getid").Funcs(template.FuncMap{
"ToLower": strings.ToLower,
"ToUpper": strings.ToUpper,
}).Parse(tmpl)
if err != nil {
panic(errors.Wrap(err, "cannot parse template"))
}
// Note(turkenh): If a parameter is used in the external name template,
// it is an identifier field.
var identifierFields []string
for _, node := range t.Root.Nodes {
if node.Type() == parse.NodeAction {
match := parameterPattern.FindStringSubmatch(node.String())
if len(match) == 2 {
identifierFields = append(identifierFields, match[1])
}
}
}
return ExternalName{
SetIdentifierArgumentFn: func(base map[string]any, externalName string) {
if nameFieldPath == "" {
@ -126,6 +159,7 @@ func TemplatedStringAsIdentifier(nameFieldPath, tmpl string) ExternalName {
}
return GetExternalNameFromTemplated(tmpl, id.(string))
},
IdentifierFields: identifierFields,
}
}
@ -175,3 +209,96 @@ func GetExternalNameFromTemplated(tmpl, val string) (string, error) { //nolint:g
}
return "", errors.Errorf("unhandled case with template %s and value %s", tmpl, val)
}
// ExternalNameFrom is an ExternalName configuration which uses a parent
// configuration as its base and modifies any of the GetIDFn,
// GetExternalNameFn or SetIdentifierArgumentsFn. This enables us to reuse
// the existing ExternalName configurations with modifications in their
// behaviors via compositions.
type ExternalNameFrom struct {
ExternalName
getIDFn func(GetIDFn, context.Context, string, map[string]any, map[string]any) (string, error)
getExternalNameFn func(GetExternalNameFn, map[string]any) (string, error)
setIdentifierArgumentFn func(SetIdentifierArgumentsFn, map[string]any, string)
}
// ExternalNameFromOption is an option that modifies the behavior of an
// ExternalNameFrom external-name configuration.
type ExternalNameFromOption func(from *ExternalNameFrom)
// WithGetIDFn sets the GetIDFn for the ExternalNameFrom configuration.
// The function parameter fn receives the parent ExternalName's GetIDFn, and
// implementations may invoke the parent's GetIDFn via this
// parameter. For the description of the rest of the parameters and return
// values, please see the documentation of GetIDFn.
func WithGetIDFn(fn func(fn GetIDFn, ctx context.Context, externalName string, parameters map[string]any, terraformProviderConfig map[string]any) (string, error)) ExternalNameFromOption {
return func(ec *ExternalNameFrom) {
ec.getIDFn = fn
}
}
// WithGetExternalNameFn sets the GetExternalNameFn for the ExternalNameFrom
// configuration. The function parameter fn receives the parent ExternalName's
// GetExternalNameFn, and implementations may invoke the parent's
// GetExternalNameFn via this parameter. For the description of the rest
// of the parameters and return values, please see the documentation of
// GetExternalNameFn.
func WithGetExternalNameFn(fn func(fn GetExternalNameFn, tfstate map[string]any) (string, error)) ExternalNameFromOption {
return func(ec *ExternalNameFrom) {
ec.getExternalNameFn = fn
}
}
// WithSetIdentifierArgumentsFn sets the SetIdentifierArgumentsFn for the
// ExternalNameFrom configuration. The function parameter fn receives the
// parent ExternalName's SetIdentifierArgumentsFn, and implementations may
// invoke the parent's SetIdentifierArgumentsFn via this
// parameter. For the description of the rest of the parameters and return
// values, please see the documentation of SetIdentifierArgumentsFn.
func WithSetIdentifierArgumentsFn(fn func(fn SetIdentifierArgumentsFn, base map[string]any, externalName string)) ExternalNameFromOption {
return func(ec *ExternalNameFrom) {
ec.setIdentifierArgumentFn = fn
}
}
// NewExternalNameFrom initializes a new ExternalNameFrom with the given parent
// and with the given options. An example configuration that uses a
// TemplatedStringAsIdentifier as its parent (base) and sets a default value
// for the external-name if the external-name is yet not populated is as
// follows:
//
// config.NewExternalNameFrom(config.TemplatedStringAsIdentifier("", "{{ .parameters.type }}/{{ .setup.client_metadata.account_id }}/{{ .external_name }}"),
//
// config.WithGetIDFn(func(fn config.GetIDFn, ctx context.Context, externalName string, parameters map[string]any, terraformProviderConfig map[string]any) (string, error) {
// if externalName == "" {
// externalName = "some random string"
// }
// return fn(ctx, externalName, parameters, terraformProviderConfig)
// }))
func NewExternalNameFrom(parent ExternalName, opts ...ExternalNameFromOption) ExternalName {
ec := &ExternalNameFrom{}
for _, o := range opts {
o(ec)
}
ec.ExternalName.GetIDFn = func(ctx context.Context, externalName string, parameters map[string]any, terraformProviderConfig map[string]any) (string, error) {
if ec.getIDFn == nil {
return parent.GetIDFn(ctx, externalName, parameters, terraformProviderConfig)
}
return ec.getIDFn(parent.GetIDFn, ctx, externalName, parameters, terraformProviderConfig)
}
ec.ExternalName.GetExternalNameFn = func(tfstate map[string]any) (string, error) {
if ec.getExternalNameFn == nil {
return parent.GetExternalNameFn(tfstate)
}
return ec.getExternalNameFn(parent.GetExternalNameFn, tfstate)
}
ec.ExternalName.SetIdentifierArgumentFn = func(base map[string]any, externalName string) {
if ec.setIdentifierArgumentFn == nil {
parent.SetIdentifierArgumentFn(base, externalName)
return
}
ec.setIdentifierArgumentFn(parent.SetIdentifierArgumentFn, base, externalName)
}
return ec.ExternalName
}

View File

@ -1,6 +1,6 @@
/*
Copyright 2021 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
@ -8,9 +8,8 @@ import (
"context"
"testing"
"github.com/crossplane/crossplane-runtime/pkg/test"
"github.com/crossplane/crossplane-runtime/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
)
@ -79,6 +78,16 @@ func TestGetExternalNameFromTemplated(t *testing.T) {
name: "myname",
},
},
"NoExternalNameInTemplate": {
reason: "Should return the ID intact if there's no {{ .external_name }} variable in the template",
args: args{
tmpl: "olala:{{ .another }}:omama:{{ .someOther }}",
val: "olala:val1:omama:val2",
},
want: want{
name: "olala:val1:omama:val2",
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
@ -213,6 +222,42 @@ func TestTemplatedGetIDFn(t *testing.T) {
id: "olala/paramval:myname/configval",
},
},
"TemplateFunctionToLower": {
reason: "Should work with a call of ToLower.",
args: args{
tmpl: "olala/{{ .parameters.ola | ToLower }}:{{ .external_name }}/{{ .setup.configuration.oma | ToLower }}",
externalName: "myname",
parameters: map[string]any{
"ola": "ALL_CAPITAL",
},
setup: map[string]any{
"configuration": map[string]any{
"oma": "CamelCase",
},
},
},
want: want{
id: "olala/all_capital:myname/camelcase",
},
},
"TemplateFunctionToUpper": {
reason: "Should work with a call of ToUpper.",
args: args{
tmpl: "olala/{{ .parameters.ola | ToUpper }}:{{ .external_name }}/{{ .setup.configuration.oma | ToUpper }}",
externalName: "myname",
parameters: map[string]any{
"ola": "all_small",
},
setup: map[string]any{
"configuration": map[string]any{
"oma": "CamelCase",
},
},
},
want: want{
id: "olala/ALL_SMALL:myname/CAMELCASE",
},
},
}
for n, tc := range cases {
t.Run(n, func(t *testing.T) {

View File

@ -1,18 +1,23 @@
/*
Copyright 2022 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"context"
"fmt"
"regexp"
tfjson "github.com/hashicorp/terraform-json"
fwprovider "github.com/hashicorp/terraform-plugin-framework/provider"
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/pkg/errors"
"github.com/upbound/upjet/pkg/registry"
conversiontfjson "github.com/upbound/upjet/pkg/types/conversion/tfjson"
"github.com/crossplane/upjet/v2/pkg/registry"
"github.com/crossplane/upjet/v2/pkg/schema/traverser"
conversiontfjson "github.com/crossplane/upjet/v2/pkg/types/conversion/tfjson"
)
// ResourceConfiguratorFn is a function that implements the ResourceConfigurator
@ -40,11 +45,15 @@ func (cc ResourceConfiguratorChain) Configure(r *Resource) {
}
}
// BasePackages keeps lists of base packages that needs to be registered as API
// BasePackages keeps lists of packages that need to be registered as APIs
// and controllers. Typically, we expect to see ProviderConfig packages here.
// These APIs and controllers belong to non-generated (manually maintained)
// resources.
type BasePackages struct {
APIVersion []string
// Deprecated: Use ControllerMap instead.
Controller []string
ControllerMap map[string]string
}
// Provider holds configuration for a provider to be generated with Upjet.
@ -57,8 +66,8 @@ type Provider struct {
TerraformResourcePrefix string
// RootGroup is the root group that all CRDs groups in the provider are based
// on, e.g. "aws.jet.crossplane.io".
// Defaults to "<TerraformResourcePrefix>.jet.crossplane.io".
// on, e.g. "aws.upbound.io".
// Defaults to "<TerraformResourcePrefix>.upbound.io".
RootGroup string
// ShortName is the short name of the provider. Typically, added as a CRD
@ -70,6 +79,10 @@ type Provider struct {
// "github.com/upbound/provider-aws"
ModulePath string
// FeaturesPackage is the relative package path for the features package to
// configure the features behind the feature gates.
FeaturesPackage string
// BasePackages keeps lists of base packages that need to be registered as
// API and controllers. Typically, we expect to see ProviderConfig packages
// here.
@ -85,17 +98,57 @@ type Provider struct {
// can add "aws_waf.*" to the list.
SkipList []string
// MainTemplate is the template string to be used to render the
// provider subpackage main program. If this is set, the generated provider
// is broken up into subpackage families partitioned across the API groups.
// A monolithic provider is also generated to
// ensure backwards-compatibility.
MainTemplate string
// skippedResourceNames is a list of Terraform resource names
// available in the Terraform provider schema, but
// not in the include list or in the skip list, meaning that
// the corresponding managed resources are not generated.
skippedResourceNames []string
// IncludeList is a list of regex for the Terraform resources to be
// included. For example, to include "aws_shield_protection_group" into
// included and reconciled via the Terraform CLI.
// For example, to include "aws_shield_protection_group" into
// the generated resources, one can add "aws_shield_protection_group$".
// To include whole aws waf group, one can add "aws_waf.*" to the list.
// Defaults to []string{".+"} which would include all resources.
IncludeList []string
// TerraformPluginSDKIncludeList is a list of regex for the Terraform resources
// implemented with Terraform Plugin SDKv2 to be included and reconciled
// in the no-fork architecture (without the Terraform CLI).
// For example, to include "aws_shield_protection_group" into
// the generated resources, one can add "aws_shield_protection_group$".
// To include whole aws waf group, one can add "aws_waf.*" to the list.
// Defaults to []string{".+"} which would include all resources.
TerraformPluginSDKIncludeList []string
// TerraformPluginFrameworkIncludeList is a list of regex for the Terraform
// resources implemented with Terraform Plugin Framework to be included and
// reconciled in the no-fork architecture (without the Terraform CLI).
// For example, to include "aws_shield_protection_group" into
// the generated resources, one can add "aws_shield_protection_group$".
// To include whole aws waf group, one can add "aws_waf.*" to the list.
// Defaults to []string{".+"} which would include all resources.
TerraformPluginFrameworkIncludeList []string
// Resources is a map holding resource configurations where key is Terraform
// resource name.
Resources map[string]*Resource
// TerraformProvider is the Terraform provider in Terraform Plugin SDKv2
// compatible format
TerraformProvider *schema.Provider
// TerraformPluginFrameworkProvider is the Terraform provider reference
// in Terraform Plugin Framework compatible format
TerraformPluginFrameworkProvider fwprovider.Provider
// refInjectors is an ordered list of `ReferenceInjector`s for
// injecting references across this Provider's resources.
refInjectors []ReferenceInjector
@ -103,6 +156,12 @@ type Provider struct {
// resourceConfigurators is a map holding resource configurators where key
// is Terraform resource name.
resourceConfigurators map[string]ResourceConfiguratorChain
// schemaTraversers is a chain of schema traversers to be used with
// this Provider configuration. Schema traversers can be used to inspect or
// modify the Provider configuration based on the underlying Terraform
// resource schemas.
schemaTraversers []traverser.SchemaTraverser
}
// ReferenceInjector injects cross-resource references across the resources
@ -135,6 +194,38 @@ func WithIncludeList(l []string) ProviderOption {
}
}
// WithTerraformPluginSDKIncludeList configures the TerraformPluginSDKIncludeList for this Provider,
// with the given Terraform Plugin SDKv2-based resource name list
func WithTerraformPluginSDKIncludeList(l []string) ProviderOption {
return func(p *Provider) {
p.TerraformPluginSDKIncludeList = l
}
}
// WithTerraformPluginFrameworkIncludeList configures the
// TerraformPluginFrameworkIncludeList for this Provider, with the given
// Terraform Plugin Framework-based resource name list
func WithTerraformPluginFrameworkIncludeList(l []string) ProviderOption {
return func(p *Provider) {
p.TerraformPluginFrameworkIncludeList = l
}
}
// WithTerraformProvider configures the TerraformProvider for this Provider.
func WithTerraformProvider(tp *schema.Provider) ProviderOption {
return func(p *Provider) {
p.TerraformProvider = tp
}
}
// WithTerraformPluginFrameworkProvider configures the
// TerraformPluginFrameworkProvider for this Provider.
func WithTerraformPluginFrameworkProvider(tp fwprovider.Provider) ProviderOption {
return func(p *Provider) {
p.TerraformPluginFrameworkProvider = tp
}
}
// WithSkipList configures SkipList for this Provider.
func WithSkipList(l []string) ProviderOption {
return func(p *Provider) {
@ -166,13 +257,39 @@ func WithReferenceInjectors(refInjectors []ReferenceInjector) ProviderOption {
}
}
// WithFeaturesPackage configures FeaturesPackage for this Provider.
func WithFeaturesPackage(s string) ProviderOption {
return func(p *Provider) {
p.FeaturesPackage = s
}
}
// WithMainTemplate configures the provider family main module file's path.
// This template file will be used to generate the main modules of the
// family's members.
func WithMainTemplate(template string) ProviderOption {
return func(p *Provider) {
p.MainTemplate = template
}
}
// WithSchemaTraversers configures a chain of schema traversers to be used with
// this Provider configuration. Schema traversers can be used to inspect or
// modify the Provider configuration based on the underlying Terraform
// resource schemas.
func WithSchemaTraversers(traversers ...traverser.SchemaTraverser) ProviderOption {
return func(p *Provider) {
p.schemaTraversers = traversers
}
}
// NewProvider builds and returns a new Provider from provider
// tfjson schema, that is generated using Terraform CLI with:
// `terraform providers schema --json`
func NewProvider(schema []byte, prefix string, modulePath string, metadata []byte, opts ...ProviderOption) *Provider { // nolint:gocyclo
func NewProvider(schema []byte, prefix string, modulePath string, metadata []byte, opts ...ProviderOption) *Provider { //nolint:gocyclo
ps := tfjson.ProviderSchemas{}
if err := ps.UnmarshalJSON(schema); err != nil {
panic(err)
panic(errors.Wrap(err, "failed to unmarshal the Terraform JSON schema"))
}
if len(ps.Schemas) != 1 {
panic(fmt.Sprintf("there should exactly be 1 provider schema but there are %d", len(ps.Schemas)))
@ -182,7 +299,6 @@ func NewProvider(schema []byte, prefix string, modulePath string, metadata []byt
rs = v.ResourceSchemas
break
}
resourceMap := conversiontfjson.GetV2ResourceMap(rs)
providerMetadata, err := registry.NewProviderMetadataFromFile(metadata)
if err != nil {
@ -192,8 +308,8 @@ func NewProvider(schema []byte, prefix string, modulePath string, metadata []byt
p := &Provider{
ModulePath: modulePath,
TerraformResourcePrefix: fmt.Sprintf("%s_", prefix),
RootGroup: fmt.Sprintf("%s.jet.crossplane.io", prefix),
ShortName: fmt.Sprintf("%sjet", prefix),
RootGroup: fmt.Sprintf("%s.upbound.io", prefix),
ShortName: prefix,
BasePackages: DefaultBasePackages,
IncludeList: []string{
// Include all Resources
@ -207,20 +323,59 @@ func NewProvider(schema []byte, prefix string, modulePath string, metadata []byt
o(p)
}
p.skippedResourceNames = make([]string, 0, len(resourceMap))
terraformPluginFrameworkResourceFunctionsMap := terraformPluginFrameworkResourceFunctionsMap(p.TerraformPluginFrameworkProvider)
for name, terraformResource := range resourceMap {
if len(terraformResource.Schema) == 0 {
// There are resources with no schema, that we will address later.
fmt.Printf("Skipping resource %s because it has no schema\n", name)
}
// A resource may appear in at most one of the include lists; overlaps are rejected below.
isTerraformPluginSDK := matches(name, p.TerraformPluginSDKIncludeList)
isPluginFrameworkResource := matches(name, p.TerraformPluginFrameworkIncludeList)
isCLIResource := matches(name, p.IncludeList)
if (isTerraformPluginSDK && isPluginFrameworkResource) || (isTerraformPluginSDK && isCLIResource) || (isPluginFrameworkResource && isCLIResource) {
panic(errors.Errorf(`resource %q is specified in more than one include list. It should appear in at most one of the lists "IncludeList", "TerraformPluginSDKIncludeList" or "TerraformPluginFrameworkIncludeList"`, name))
}
if len(terraformResource.Schema) == 0 || matches(name, p.SkipList) || (!matches(name, p.IncludeList) && !isTerraformPluginSDK && !isPluginFrameworkResource) {
p.skippedResourceNames = append(p.skippedResourceNames, name)
continue
}
if matches(name, p.SkipList) {
if isTerraformPluginSDK {
if p.TerraformProvider == nil || p.TerraformProvider.ResourcesMap[name] == nil {
panic(errors.Errorf("resource %q is configured to be reconciled with Terraform Plugin SDK "+
"but either config.Provider.TerraformProvider is not configured or the Go schema does not exist for the resource", name))
}
terraformResource = p.TerraformProvider.ResourcesMap[name]
if terraformResource.Schema == nil {
if terraformResource.SchemaFunc == nil {
p.skippedResourceNames = append(p.skippedResourceNames, name)
fmt.Printf("Skipping resource %s because it has no schema and no schema function\n", name)
continue
}
if !matches(name, p.IncludeList) {
continue
terraformResource.Schema = terraformResource.SchemaFunc()
}
}
p.Resources[name] = DefaultResource(name, terraformResource, providerMetadata.Resources[name], p.DefaultResourceOptions...)
var terraformPluginFrameworkResource fwresource.Resource
if isPluginFrameworkResource {
resourceFunc := terraformPluginFrameworkResourceFunctionsMap[name]
if p.TerraformPluginFrameworkProvider == nil || resourceFunc == nil {
panic(errors.Errorf("resource %q is configured to be reconciled with Terraform Plugin Framework "+
"but either config.Provider.TerraformPluginFrameworkProvider is not configured or the provider doesn't have the resource.", name))
}
terraformPluginFrameworkResource = resourceFunc()
}
p.Resources[name] = DefaultResource(name, terraformResource, terraformPluginFrameworkResource, providerMetadata.Resources[name], p.DefaultResourceOptions...)
p.Resources[name].useTerraformPluginSDKClient = isTerraformPluginSDK
p.Resources[name].useTerraformPluginFrameworkClient = isPluginFrameworkResource
// traverse the Terraform resource schema to initialize the upjet Resource
// configurations
if err := TraverseSchemas(name, p.Resources[name], p.schemaTraversers...); err != nil {
panic(errors.Wrap(err, "failed to execute the Terraform schema traverser chain"))
}
}
for i, refInjector := range p.refInjectors {
if err := refInjector.InjectReferences(p.Resources); err != nil {
@ -255,6 +410,14 @@ func (p *Provider) ConfigureResources() {
}
}
// GetSkippedResourceNames returns a list of Terraform resource names
// available in the Terraform provider schema, but
// not in the include list or in the skip list, meaning that
// the corresponding managed resources are not generated.
func (p *Provider) GetSkippedResourceNames() []string {
return p.skippedResourceNames
}
func matches(name string, regexList []string) bool {
for _, r := range regexList {
ok, err := regexp.MatchString(r, name)
@ -267,3 +430,30 @@ func matches(name string, regexList []string) bool {
}
return false
}
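The include/skip lists consumed by `matches` are regular expressions evaluated against Terraform resource names. A minimal, self-contained sketch of that selection logic (the list entries below are hypothetical; real provider configurations typically use anchored expressions such as `^aws_rds_cluster$`):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchesAny reports whether name matches any regular expression in the
// list, mirroring the include/skip-list check used while configuring
// resources. Errors from invalid patterns are skipped in this sketch.
func matchesAny(name string, regexList []string) bool {
	for _, r := range regexList {
		if ok, err := regexp.MatchString(r, name); err == nil && ok {
			return true
		}
	}
	return false
}

func main() {
	include := []string{"^aws_rds_.*$", "^aws_s3_bucket$"}
	fmt.Println(matchesAny("aws_rds_cluster", include)) // true
	fmt.Println(matchesAny("aws_iam_role", include))    // false
}
```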
func terraformPluginFrameworkResourceFunctionsMap(provider fwprovider.Provider) map[string]func() fwresource.Resource {
if provider == nil {
return make(map[string]func() fwresource.Resource, 0)
}
ctx := context.TODO()
resourceFunctions := provider.Resources(ctx)
resourceFunctionsMap := make(map[string]func() fwresource.Resource, len(resourceFunctions))
providerMetadata := fwprovider.MetadataResponse{}
provider.Metadata(ctx, fwprovider.MetadataRequest{}, &providerMetadata)
for _, resourceFunction := range resourceFunctions {
resource := resourceFunction()
resourceTypeNameReq := fwresource.MetadataRequest{
ProviderTypeName: providerMetadata.TypeName,
}
resourceTypeNameResp := fwresource.MetadataResponse{}
resource.Metadata(ctx, resourceTypeNameReq, &resourceTypeNameResp)
resourceFunctionsMap[resourceTypeNameResp.TypeName] = resourceFunction
}
return resourceFunctionsMap
}


@ -1,25 +1,80 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"context"
"fmt"
"strings"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
rschema "github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-go/tftypes"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/util/json"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/registry"
)
// A ListType is a type of list.
type ListType string
// Types of lists.
const (
// ListTypeAtomic means the entire list is replaced during merge. At any
// point in time, a single manager owns the list.
ListTypeAtomic ListType = "atomic"
// ListTypeSet can be granularly merged, and different managers can own
// different elements in the list. The list can include only scalar
// elements.
ListTypeSet ListType = "set"
// ListTypeMap can be granularly merged, and different managers can own
// different elements in the list. The list can include only nested types
// (i.e. objects).
ListTypeMap ListType = "map"
)
// A MapType is a type of map.
type MapType string
// Types of maps.
const (
// MapTypeAtomic means that the map can only be entirely replaced by a
// single manager.
MapTypeAtomic MapType = "atomic"
// MapTypeGranular means that the map supports separate managers updating
// individual fields.
MapTypeGranular MapType = "granular"
)
// A StructType is a type of struct.
type StructType string
// Struct types.
const (
// StructTypeAtomic means that the struct can only be entirely replaced by a
// single manager.
StructTypeAtomic StructType = "atomic"
// StructTypeGranular means that the struct supports separate managers
// updating individual fields.
StructTypeGranular StructType = "granular"
)
// SetIdentifierArgumentsFn sets the name of the resource in Terraform attributes map,
@ -104,17 +159,31 @@ type ExternalName struct {
// assigned by the provider, like AWS VPC where it gets vpc-21kn123 identifier
// and not let you name it.
DisableNameInitializer bool
// IdentifierFields are the fields that are used to construct the external
// resource identifier. These fields must be known regardless of the
// management policy, including Observe Only, unlike other
// (required) fields.
IdentifierFields []string
}
// References represents reference resolver configurations for the fields of a
// given resource. The key is the Terraform field path of the field to be referenced.
// Example: "vpc_id" or "forwarding_rule.certificate_name" in case of nested
// in another object.
type References map[string]Reference
// Reference represents the Crossplane options used to generate
// reference resolvers for fields.
type Reference struct {
// Type is the Go type name of the CRD if it is in the same package or
// <package-path>.<type-name> if it is in a different package.
// Deprecated: Type is deprecated in favor of TerraformName, which provides
// a more stable and less error-prone API compared to Type. TerraformName
// will automatically handle name & version configurations that will affect
// the generated cross-resource reference. This is crucial especially if the
// provider generates multiple versions for its MR APIs.
Type string
// TerraformName is the name of the Terraform resource
// which will be referenced. The supplied resource name is
@ -129,7 +198,7 @@ type Reference struct {
// <field-name>Ref or <field-name>Refs.
// Optional
RefFieldName string
// SelectorFieldName is the Go field name for the Selector field. Defaults to
// <field-name>Selector.
// Optional
SelectorFieldName string
@ -156,10 +225,20 @@ type LateInitializer struct {
// "block_device_mappings.ebs".
IgnoredFields []string
// ConditionalIgnoredFields are the field paths to be skipped during
// late-initialization if they are filled in spec.initProvider.
ConditionalIgnoredFields []string
// ignoredCanonicalFieldPaths are the Canonical field paths to be skipped
// during late-initialization. This is filled using the `IgnoredFields`
// field which keeps Terraform paths by converting them to Canonical paths.
ignoredCanonicalFieldPaths []string
// conditionalIgnoredCanonicalFieldPaths are the Canonical field paths to be
// skipped during late-initialization if they are filled in spec.initProvider.
// This is filled using the `ConditionalIgnoredFields` field which keeps
// Terraform paths by converting them to Canonical paths.
conditionalIgnoredCanonicalFieldPaths []string
}
// GetIgnoredCanonicalFields returns the ignoredCanonicalFields
@ -175,6 +254,19 @@ func (l *LateInitializer) AddIgnoredCanonicalFields(cf string) {
l.ignoredCanonicalFieldPaths = append(l.ignoredCanonicalFieldPaths, cf)
}
// GetConditionalIgnoredCanonicalFields returns the conditionalIgnoredCanonicalFieldPaths
func (l *LateInitializer) GetConditionalIgnoredCanonicalFields() []string {
return l.conditionalIgnoredCanonicalFieldPaths
}
// AddConditionalIgnoredCanonicalFields sets conditional ignored canonical fields
func (l *LateInitializer) AddConditionalIgnoredCanonicalFields(cf string) {
if l.conditionalIgnoredCanonicalFieldPaths == nil {
l.conditionalIgnoredCanonicalFieldPaths = make([]string, 0)
}
l.conditionalIgnoredCanonicalFieldPaths = append(l.conditionalIgnoredCanonicalFieldPaths, cf)
}
// GetFieldPaths returns the fieldPaths map for Sensitive
func (s *Sensitive) GetFieldPaths() map[string]string {
return s.fieldPaths
@ -219,6 +311,11 @@ func NewTagger(kube client.Client, fieldName string) *Tagger {
// Initialize is a custom initializer for setting external tags
func (t *Tagger) Initialize(ctx context.Context, mg xpresource.Managed) error {
if sets.New[xpv1.ManagementAction](mg.GetManagementPolicies()...).Equal(sets.New[xpv1.ManagementAction](xpv1.ManagementActionObserve)) {
// We don't want to add tags to the spec.forProvider if the resource is
// only being Observed.
return nil
}
paved, err := fieldpath.PaveObject(mg)
if err != nil {
return err
@ -238,9 +335,9 @@ func (t *Tagger) Initialize(ctx context.Context, mg xpresource.Managed) error {
func setExternalTagsWithPaved(externalTags map[string]string, paved *fieldpath.Paved, fieldName string) ([]byte, error) {
tags := map[string]*string{
xpresource.ExternalResourceTagKeyKind: ptr.To(externalTags[xpresource.ExternalResourceTagKeyKind]),
xpresource.ExternalResourceTagKeyName: ptr.To(externalTags[xpresource.ExternalResourceTagKeyName]),
xpresource.ExternalResourceTagKeyProvider: ptr.To(externalTags[xpresource.ExternalResourceTagKeyProvider]),
}
if err := paved.SetValue(fmt.Sprintf("spec.forProvider.%s", fieldName), tags); err != nil {
@ -253,6 +350,60 @@ func setExternalTagsWithPaved(externalTags map[string]string, paved *fieldpath.P
return pavedByte, nil
}
type InjectedKey struct {
Key string
DefaultValue string
}
// ListMapKeys is the list map keys when the server-side apply merge strategy
// is listType=map.
type ListMapKeys struct {
// InjectedKey can be used to inject the specified index key
// into the generated CRD schema for the list object when
// the SSA merge strategy for the parent list is `map`.
// If a non-zero `InjectedKey` is specified, then a field of type string with
// the specified name is injected into the Terraform schema and used as
// a list map key together with any other existing keys specified in `Keys`.
InjectedKey InjectedKey
// Keys is the set of list map keys to be used while SSA merges list items.
// If InjectedKey is non-zero, then it's automatically put into Keys and
// you must not specify the InjectedKey in Keys explicitly.
Keys []string
}
// ListMergeStrategy configures the corresponding field as list
// and configures its server-side apply merge strategy.
type ListMergeStrategy struct {
// ListMapKeys is the list map keys when the SSA merge strategy is
// `listType=map`. The keys specified here must be a set of scalar Terraform
// argument names to be used as the list map keys for the object list.
ListMapKeys ListMapKeys
// MergeStrategy is the SSA merge strategy for an object list. Valid values
// are: `atomic`, `set` and `map`
MergeStrategy ListType
}
// MergeStrategy configures the server-side apply merge strategy for the
// corresponding field. One and only one of the pointer members can be set
// and the specified merge strategy configuration must match the field's
// type, e.g., you cannot set MapMergeStrategy for a field of type list.
type MergeStrategy struct {
ListMergeStrategy ListMergeStrategy
MapMergeStrategy MapType
StructMergeStrategy StructType
}
// ServerSideApplyMergeStrategies configures the server-side apply merge strategy
// for the field at the specified path as the map key. The key is
// a Terraform configuration argument path such as a.b.c, without any
// index notation (i.e., array/map components do not need indices).
// It's an error to set a configuration option which does not match
// the object type at the specified path or to leave the corresponding
// configuration entry empty. For example, if the field at path a.b.c is
// a list, then ListMergeStrategy must be set and it should be the only
// configuration entry set.
type ServerSideApplyMergeStrategies map[string]MergeStrategy
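The constraint above (exactly one merge-strategy member set, matching the field's type) can be illustrated with a self-contained sketch that mirrors the configuration types minimally; the field path `a.b.c` and the `name` key are hypothetical:

```go
package main

import "fmt"

// Minimal mirrors of the configuration types, for illustration only.
type ListType string

type ListMapKeys struct{ Keys []string }

type ListMergeStrategy struct {
	ListMapKeys   ListMapKeys
	MergeStrategy ListType
}

type MergeStrategy struct{ ListMergeStrategy ListMergeStrategy }

type ServerSideApplyMergeStrategies map[string]MergeStrategy

func main() {
	// Mark the object list at "a.b.c" as granularly mergeable
	// (listType=map), with list items identified by their "name" key.
	ssa := ServerSideApplyMergeStrategies{
		"a.b.c": {ListMergeStrategy: ListMergeStrategy{
			MergeStrategy: ListType("map"),
			ListMapKeys:   ListMapKeys{Keys: []string{"name"}},
		}},
	}
	fmt.Println(ssa["a.b.c"].ListMergeStrategy.MergeStrategy) // map
}
```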
// Resource is the set of information that you can override at different steps
// of the code generation pipeline.
type Resource struct {
@ -260,9 +411,14 @@ type Resource struct {
// e.g. aws_rds_cluster.
Name string
// TerraformResource is the Terraform representation of the
// Terraform Plugin SDKv2 based resource.
TerraformResource *schema.Resource
// TerraformPluginFrameworkResource is the Terraform representation
// of the TF Plugin Framework based resource
TerraformPluginFrameworkResource fwresource.Resource
// ShortGroup is the short name of the API group of this CRD. The full
// CRD API group is calculated by adding the group suffix of the provider.
// For example, ShortGroup could be `ec2` where group suffix of the
@ -270,9 +426,24 @@ type Resource struct {
// be `ec2.aws.crossplane.io`
ShortGroup string
// Version is the API version being generated for the corresponding CRD.
Version string
// PreviousVersions is the list of API versions previously generated for this
// resource for multi-versioned managed resources. upjet will attempt to load
// the type definitions from these previous versions if configured.
PreviousVersions []string
// ControllerReconcileVersion is the CRD API version the associated
// controller will watch & reconcile. If left unspecified,
// defaults to the value of Version. This configuration parameter
// can be used to have a controller use an older
// API version of the generated CRD instead of the API version being
// generated. Because this configuration parameter's value defaults to
// the value of Version, by default the controllers will reconcile the
// currently generated API versions of their associated CRs.
ControllerReconcileVersion string
// Kind is the kind of the CRD.
Kind string
@ -281,6 +452,8 @@ type Resource struct {
// databases.
UseAsync bool
// InitializerFns specifies the initializer functions to be used
// for this Resource.
InitializerFns []NewInitializerFn
// OperationTimeouts allows configuring resource operation timeouts.
@ -301,4 +474,325 @@ type Resource struct {
// MetaResource is the metadata associated with the resource scraped from
// the Terraform registry.
MetaResource *registry.Resource
// Path is the resource path for the API server endpoint. It defaults to
// the plural name of the generated CRD. Overriding this sets both the
// path and the plural name for the generated CRD.
Path string
// SchemaElementOptions is a map from the schema element paths to
// SchemaElementOption for configuring options for schema elements.
SchemaElementOptions SchemaElementOptions
// crdStorageVersion is the CRD storage API version.
// Use Resource.CRDStorageVersion to read the configured storage version
// which implements a defaulting to the current version being generated
// for backwards compatibility. This field is not exported to enforce
// defaulting, which is needed for backwards-compatibility.
crdStorageVersion string
// crdHubVersion is the conversion hub API version for the generated CRD.
// Use Resource.CRDHubVersion to read the configured hub version
// which implements a defaulting to the current version being generated
// for backwards compatibility. This field is not exported to enforce
// the defaulting behavior, which is needed for backwards-compatibility.
crdHubVersion string
// listConversionPaths maps the Terraform field paths of embedded objects
// that need to be converted into singleton lists (lists of
// at most one element) at runtime, to the corresponding CRD paths.
// Such fields are lists in the Terraform schema, however upjet generates
// them as nested objects, so at runtime we need to convert them back
// into lists before passing them to the Terraform stack, and from lists
// back into embedded objects after reading the state from the Terraform stack.
listConversionPaths map[string]string
// TerraformConfigurationInjector allows a managed resource to inject
// configuration values in the Terraform configuration map obtained by
// deserializing its `spec.forProvider` value. Managed resources can
// use this resource configuration option to inject Terraform
// configuration parameters into their deserialized configuration maps,
// if the deserialization skips certain fields.
TerraformConfigurationInjector ConfigurationInjector
// TerraformCustomDiff allows a resource.Terraformed to customize how its
// Terraform InstanceDiff is computed during reconciliation.
TerraformCustomDiff CustomDiff
// TerraformPluginFrameworkIsStateEmptyFn allows customizing the logic
// for determining whether a Terraform Plugin Framework state value should
// be considered empty/nil for resource existence checks. If not set, the
// default behavior uses tfStateValue.IsNull().
TerraformPluginFrameworkIsStateEmptyFn TerraformPluginFrameworkIsStateEmptyFn
// ServerSideApplyMergeStrategies configures the server-side apply merge
// strategy for the fields at the given map keys. The map key is
// a Terraform configuration argument path such as a.b.c, without any
// index notation (i.e., array/map components do not need indices).
ServerSideApplyMergeStrategies ServerSideApplyMergeStrategies
// Conversions is the list of CRD API conversion functions to be invoked
// in-chain by the installed conversion Webhook for the generated CRD.
// The conversion.Conversion instances registered here are responsible for
// doing the conversions between the hub & spoke CRD API versions.
Conversions []conversion.Conversion
// TerraformConversions is the list of conversions to be invoked when passing
// data from the Crossplane layer to the Terraform layer and when reading
// data (state) from the Terraform layer to be used in the Crossplane layer.
TerraformConversions []TerraformConversion
// useTerraformPluginSDKClient indicates that a plugin SDK external client should
// be generated instead of the Terraform CLI-forking client.
useTerraformPluginSDKClient bool
// useTerraformPluginFrameworkClient indicates that a Terraform
// Plugin Framework external client should be generated instead of
// the Terraform Plugin SDKv2 client.
useTerraformPluginFrameworkClient bool
// OverrideFieldNames allows manually overriding the relevant field name to
// avoid possible Go struct name conflicts that may occur after Multiversion
// CRDs support. During field generation, there may be fields with the same
// struct name calculated in the same group. For example, let X and Y
// resources in the same API group have a field named Tag. This field is an
// object type and the name calculated for the struct to be generated is
// TagParameters (for spec) for both resources. To avoid this conflict, upjet
// looks at all previously created structs in the package during generation
// and if there is a conflict, it puts the Kind name of the related resource
// in front of the next one: YTagParameters.
// With Multiversion CRDs support, the above conflict scenario cannot be
// solved in the generator when the old API group is preserved and not
// regenerated, because the generator does not know the object names in the
// old version. For example, a new API version is generated for resource X. In
// this case, no generation is done for the old version of X and when Y is
// generated, the generator is not aware of the TagParameters in X and
// generates TagParameters instead of YTagParameters. Thus, two object types
// with the same name are generated in the same package. This can be overcome
// by using this configuration API.
// The key of the map indicates the name of the field that is generated and
// causes the conflict, while the value indicates the name used to avoid the
// conflict. By convention, also used in upjet, the field name is preceded by
// the value of the generated Kind, for example:
// "TagParameters": "ClusterTagParameters"
// Deprecated: OverrideFieldNames has been deprecated in favor of loading
// the already existing type names from the older versions of the MR APIs
// via the PreviousVersions API.
OverrideFieldNames map[string]string
// requiredFields are the fields that will be marked as required in the
// generated CRD schema, although they are not required in the TF schema.
requiredFields []string
// UpdateLoopPrevention is a mechanism to prevent infinite reconciliation
// loops. This is especially useful in cases where external services
// silently modify resource data without notifying the management layer
// (e.g., sanitized XML fields).
UpdateLoopPrevention UpdateLoopPrevention
}
// UpdateLoopPrevention is an interface that defines the behavior to prevent
// update loops. Implementations of this interface are responsible for analyzing
// diffs and determining whether an update should be blocked or allowed.
type UpdateLoopPrevention interface {
// UpdateLoopPreventionFunc analyzes a diff and decides whether the update
// should be blocked. It returns a result containing a reason for blocking
// the update if a loop is detected, or nil if the update can proceed.
//
// Parameters:
// - diff: The diff object representing changes between the desired and
// current state.
// - mg: The managed resource that is being reconciled.
//
// Returns:
// - *UpdateLoopPreventResult: Contains the reason for blocking the update
// if a loop is detected.
// - error: An error if there are issues analyzing the diff
// (e.g., invalid data).
UpdateLoopPreventionFunc(diff *terraform.InstanceDiff, mg xpresource.Managed) (*UpdateLoopPreventResult, error)
}
// UpdateLoopPreventResult provides the result of an update loop prevention
// check. If a loop is detected, it includes a reason explaining why the update
// was blocked.
type UpdateLoopPreventResult struct {
// Reason provides a human-readable explanation of why the update was
// blocked. This message can be displayed to the user or logged for
// debugging purposes.
Reason string
}
// RequiredFields returns the field paths marked as required in the
// generated CRD schema.
func (r *Resource) RequiredFields() []string {
return r.requiredFields
}
// ShouldUseTerraformPluginSDKClient returns whether to generate an SDKv2-based
// external client for this Resource.
func (r *Resource) ShouldUseTerraformPluginSDKClient() bool {
return r.useTerraformPluginSDKClient
}
// ShouldUseTerraformPluginFrameworkClient returns whether to generate a
// Terraform Plugin Framework-based external client for this Resource.
func (r *Resource) ShouldUseTerraformPluginFrameworkClient() bool {
return r.useTerraformPluginFrameworkClient
}
// CustomDiff customizes the computed Terraform InstanceDiff. This can be used
// in cases where, for example, changes in a certain argument should just be
// dismissed. The new InstanceDiff is returned along with any errors.
type CustomDiff func(diff *terraform.InstanceDiff, state *terraform.InstanceState, config *terraform.ResourceConfig) (*terraform.InstanceDiff, error)
// ConfigurationInjector is a function that injects Terraform configuration
// values from the specified managed resource into the specified configuration
// map. jsonMap is the map obtained by converting the `spec.forProvider` using
// the JSON tags and tfMap is obtained by using the TF tags.
type ConfigurationInjector func(jsonMap map[string]any, tfMap map[string]any) error
// TerraformPluginFrameworkIsStateEmptyFn is a function that determines whether
// a Terraform Plugin Framework state value should be considered empty/nil for the
// purpose of determining resource existence. This allows providers to implement
// custom logic to handle cases where the standard IsNull() check is insufficient,
// such as when provider interceptors add fields like region to all state values.
type TerraformPluginFrameworkIsStateEmptyFn func(ctx context.Context, tfStateValue tftypes.Value, resourceSchema rschema.Schema) (bool, error)
// SchemaElementOptions represents schema element options for the
// schema elements of a Resource.
type SchemaElementOptions map[string]*SchemaElementOption
// SetAddToObservation sets the AddToObservation for the specified key.
func (m SchemaElementOptions) SetAddToObservation(el string) {
if m[el] == nil {
m[el] = &SchemaElementOption{}
}
m[el].AddToObservation = true
}
// AddToObservation returns true if the schema element at the specified path
// should be added to the CRD type's Observation type.
func (m SchemaElementOptions) AddToObservation(el string) bool {
return m[el] != nil && m[el].AddToObservation
}
// TFListConversionPaths returns the Resource's runtime Terraform list
// conversion paths in fieldpath syntax.
func (r *Resource) TFListConversionPaths() []string {
l := make([]string, 0, len(r.listConversionPaths))
for k := range r.listConversionPaths {
l = append(l, k)
}
return l
}
// CRDListConversionPaths returns the Resource's runtime CRD list
// conversion paths in fieldpath syntax.
func (r *Resource) CRDListConversionPaths() []string {
l := make([]string, 0, len(r.listConversionPaths))
for _, v := range r.listConversionPaths {
l = append(l, v)
}
return l
}
// CRDStorageVersion returns the CRD storage version if configured. If not,
// returns the Version being generated as the default value.
func (r *Resource) CRDStorageVersion() string {
if r.crdStorageVersion != "" {
return r.crdStorageVersion
}
return r.Version
}
// SetCRDStorageVersion configures the CRD storage version for a Resource.
// If unset, the default storage version is the current Version
// being generated.
func (r *Resource) SetCRDStorageVersion(v string) {
r.crdStorageVersion = v
}
// CRDHubVersion returns the CRD hub version if configured. If not,
// returns the Version being generated as the default value.
func (r *Resource) CRDHubVersion() string {
if r.crdHubVersion != "" {
return r.crdHubVersion
}
return r.Version
}
// SetCRDHubVersion configures the CRD API conversion hub version
// for a Resource.
// If unset, the default hub version is the current Version
// being generated.
func (r *Resource) SetCRDHubVersion(v string) {
r.crdHubVersion = v
}
// AddSingletonListConversion configures the list at the specified Terraform
// field path and the specified CRD field path as an embedded object.
// crdPath is the field path expression for the CRD schema and tfPath is
// the field path expression for the Terraform schema corresponding to the
// singleton list to be converted to an embedded object.
// At runtime, upjet will convert such objects back and forth
// from/to singleton lists while communicating with the Terraform stack.
// The specified fieldpath expression must be a wildcard expression such as
// `conditions[*]` or a 0-indexed expression such as `conditions[0]`. Other
// index values are not allowed as this function deals with singleton lists.
func (r *Resource) AddSingletonListConversion(tfPath, crdPath string) {
// SchemaElementOptions.SetEmbeddedObject does not expect the indices and
// because we are dealing with singleton lists here, we only expect wildcards
// or the zero-index.
nPath := strings.ReplaceAll(tfPath, "[*]", "")
nPath = strings.ReplaceAll(nPath, "[0]", "")
r.SchemaElementOptions.SetEmbeddedObject(nPath)
r.listConversionPaths[tfPath] = crdPath
}
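The index-stripping done by AddSingletonListConversion (and mirrored by RemoveSingletonListConversion below) collapses both the wildcard and the zero-index forms of a singleton-list path to the same bare path. A self-contained sketch of that normalization:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize reproduces the path normalization used before recording the
// embedded-object option: singleton lists may be addressed with "[*]" or
// "[0]" segments, and both collapse to the bare Terraform field path.
func normalize(tfPath string) string {
	nPath := strings.ReplaceAll(tfPath, "[*]", "")
	return strings.ReplaceAll(nPath, "[0]", "")
}

func main() {
	fmt.Println(normalize("parent[*].singleton_list")) // parent.singleton_list
	fmt.Println(normalize("parent[0].singleton_list")) // parent.singleton_list
	fmt.Println(normalize("singleton_list"))           // singleton_list
}
```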
// RemoveSingletonListConversion removes the singleton list conversion
// for the specified Terraform configuration path. Also unsets the path's
// embedding mode. The specified fieldpath expression must be a Terraform
// field path with or without the wildcard segments. Returns true if
// the path has already been registered for singleton list conversion.
func (r *Resource) RemoveSingletonListConversion(tfPath string) bool {
nPath := strings.ReplaceAll(tfPath, "[*]", "")
nPath = strings.ReplaceAll(nPath, "[0]", "")
for p := range r.listConversionPaths {
n := strings.ReplaceAll(p, "[*]", "")
n = strings.ReplaceAll(n, "[0]", "")
if n == nPath {
delete(r.listConversionPaths, p)
if r.SchemaElementOptions[n] != nil {
r.SchemaElementOptions[n].EmbeddedObject = false
}
return true
}
}
return false
}
// SetEmbeddedObject sets the EmbeddedObject for the specified key.
// The key is a Terraform field path without the wildcard segments.
func (m SchemaElementOptions) SetEmbeddedObject(el string) {
if m[el] == nil {
m[el] = &SchemaElementOption{}
}
m[el].EmbeddedObject = true
}
// EmbeddedObject returns true if the schema element at the specified path
// should be generated as an embedded object.
func (m SchemaElementOptions) EmbeddedObject(el string) bool {
return m[el] != nil && m[el].EmbeddedObject
}
// SchemaElementOption represents configuration options on a schema element.
type SchemaElementOption struct {
// AddToObservation is set to true if the field represented by
// a schema element is to be added to the generated CRD type's
// Observation type.
AddToObservation bool
// EmbeddedObject is set to true if the field represented by
// a schema element is to be embedded into its parent instead of being
// generated as a single element list.
EmbeddedObject bool
}


@ -1,3 +1,7 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
@ -5,14 +9,13 @@ import (
"fmt"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/errors"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
@ -21,7 +24,7 @@ const (
provider = "ACoolProvider"
)
func TestTaggerInitialize(t *testing.T) {
errBoom := errors.New("boom")
type args struct {
@ -109,3 +112,187 @@ func TestSetExternalTagsWithPaved(t *testing.T) {
})
}
}
func TestAddSingletonListConversion(t *testing.T) {
type args struct {
r func() *Resource
tfPath string
crdPath string
}
type want struct {
r func() *Resource
}
cases := map[string]struct {
reason string
args
want
}{
"AddNonWildcardTFPath": {
reason: "A non-wildcard TF path of a singleton list should successfully be configured to be converted into an embedded object.",
args: args{
tfPath: "singleton_list",
crdPath: "singletonList",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("singleton_list", "singletonList")
return r
},
},
want: want{
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.SchemaElementOptions = SchemaElementOptions{}
r.SchemaElementOptions["singleton_list"] = &SchemaElementOption{
EmbeddedObject: true,
}
r.listConversionPaths["singleton_list"] = "singletonList"
return r
},
},
},
"AddWildcardTFPath": {
reason: "A wildcard TF path of a singleton list should successfully be configured to be converted into an embedded object.",
args: args{
tfPath: "parent[*].singleton_list",
crdPath: "parent[*].singletonList",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
want: want{
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.SchemaElementOptions = SchemaElementOptions{}
r.SchemaElementOptions["parent.singleton_list"] = &SchemaElementOption{
EmbeddedObject: true,
}
r.listConversionPaths["parent[*].singleton_list"] = "parent[*].singletonList"
return r
},
},
},
"AddIndexedTFPath": {
reason: "An indexed TF path of a singleton list should successfully be configured to be converted into an embedded object.",
args: args{
tfPath: "parent[0].singleton_list",
crdPath: "parent[0].singletonList",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[0].singleton_list", "parent[0].singletonList")
return r
},
},
want: want{
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.SchemaElementOptions = SchemaElementOptions{}
r.SchemaElementOptions["parent.singleton_list"] = &SchemaElementOption{
EmbeddedObject: true,
}
r.listConversionPaths["parent[0].singleton_list"] = "parent[0].singletonList"
return r
},
},
},
}
for n, tc := range cases {
t.Run(n, func(t *testing.T) {
r := tc.args.r()
r.AddSingletonListConversion(tc.args.tfPath, tc.args.crdPath)
wantR := tc.want.r()
if diff := cmp.Diff(wantR.listConversionPaths, r.listConversionPaths); diff != "" {
t.Errorf("%s\nAddSingletonListConversion(tfPath): -wantConversionPaths, +gotConversionPaths: \n%s", tc.reason, diff)
}
if diff := cmp.Diff(wantR.SchemaElementOptions, r.SchemaElementOptions); diff != "" {
t.Errorf("%s\nAddSingletonListConversion(tfPath): -wantSchemaElementOptions, +gotSchemaElementOptions: \n%s", tc.reason, diff)
}
})
}
}
func TestRemoveSingletonListConversion(t *testing.T) {
type args struct {
r func() *Resource
tfPath string
}
type want struct {
removed bool
r func() *Resource
}
cases := map[string]struct {
reason string
args
want
}{
"RemoveWildcardListConversion": {
reason: "An existing wildcard list conversion can successfully be removed.",
args: args{
tfPath: "parent[*].singleton_list",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
want: want{
removed: true,
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
return r
},
},
},
"RemoveIndexedListConversion": {
reason: "An existing indexed list conversion can successfully be removed.",
args: args{
tfPath: "parent[0].singleton_list",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[0].singleton_list", "parent[0].singletonList")
return r
},
},
want: want{
removed: true,
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
return r
},
},
},
"NonExistingListConversion": {
reason: "A list conversion path that does not exist cannot be removed.",
args: args{
tfPath: "non-existent",
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
want: want{
removed: false,
r: func() *Resource {
r := DefaultResource("test_resource", nil, nil, nil)
r.AddSingletonListConversion("parent[*].singleton_list", "parent[*].singletonList")
return r
},
},
},
}
for n, tc := range cases {
t.Run(n, func(t *testing.T) {
r := tc.args.r()
got := r.RemoveSingletonListConversion(tc.args.tfPath)
if diff := cmp.Diff(tc.want.removed, got); diff != "" {
t.Errorf("%s\nRemoveSingletonListConversion(tfPath): -wantRemoved, +gotRemoved: \n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.r().listConversionPaths, r.listConversionPaths); diff != "" {
t.Errorf("%s\nRemoveSingletonListConversion(tfPath): -wantConversionPaths, +gotConversionPaths: \n%s", tc.reason, diff)
}
})
}
}


@ -0,0 +1,77 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/pkg/errors"
"github.com/crossplane/upjet/v2/pkg/schema/traverser"
)
var _ ResourceSetter = &SingletonListEmbedder{}
// ResourceSetter allows the context Resource to be set for a traverser.
type ResourceSetter interface {
SetResource(r *Resource)
}
// ResourceSchema represents a provider's resource schema.
type ResourceSchema map[string]*Resource
// TraverseTFSchemas traverses the Terraform schemas of all the resources in
// the schema map `s` using the specified visitors and reports any errors
// encountered.
func (s ResourceSchema) TraverseTFSchemas(visitors ...traverser.SchemaTraverser) error {
for name, cfg := range s {
if err := TraverseSchemas(name, cfg, visitors...); err != nil {
return errors.Wrapf(err, "failed to traverse the schema of the Terraform resource with name %q", name)
}
}
return nil
}
// TraverseSchemas visits the specified schema belonging to the Terraform
// resource with the given name and given upjet resource configuration using
// the specified visitors. If any visitors report an error, traversal is
// stopped and the error is reported to the caller.
func TraverseSchemas(tfName string, r *Resource, visitors ...traverser.SchemaTraverser) error {
// set the upjet Resource configuration as context for the visitors that
// satisfy the ResourceSetter interface.
for _, v := range visitors {
if rs, ok := v.(ResourceSetter); ok {
rs.SetResource(r)
}
}
return traverser.Traverse(tfName, r.TerraformResource, visitors...)
}
type resourceContext struct {
r *Resource
}
func (rc *resourceContext) SetResource(r *Resource) {
rc.r = r
}
// SingletonListEmbedder is a schema traverser for embedding singleton lists
// in the Terraform schema as objects.
type SingletonListEmbedder struct {
resourceContext
traverser.NoopTraverser
}
// VisitResource marks lists and sets with a MaxItems constraint of 1 to be
// converted into embedded objects.
func (l *SingletonListEmbedder) VisitResource(r *traverser.ResourceNode) error {
// this visitor only works on sets and lists with the MaxItems constraint
// of 1.
if r.Schema.Type != schema.TypeList && r.Schema.Type != schema.TypeSet {
return nil
}
if r.Schema.MaxItems != 1 {
return nil
}
l.r.AddSingletonListConversion(traverser.FieldPathWithWildcard(r.TFPath), traverser.FieldPathWithWildcard(r.CRDPath))
return nil
}
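The embedding rule above (only lists or sets with `MaxItems == 1` qualify) can be illustrated with a minimal, self-contained walk over a toy schema. `node` and `collectSingletonLists` below are hypothetical stand-ins for the terraform-plugin-sdk schema types and the traverser, used only to show which paths would be marked:

```go
package main

import "fmt"

// node is a toy stand-in for a Terraform schema element (assumed shape,
// not the real terraform-plugin-sdk types).
type node struct {
	typ      string // "list", "set", "string", ...
	maxItems int
	elem     map[string]*node
}

// collectSingletonLists walks the schema and records every list or set with
// MaxItems == 1, mirroring the check in SingletonListEmbedder.VisitResource.
func collectSingletonLists(prefix string, s map[string]*node, out map[string]bool) {
	for name, n := range s {
		path := name
		if prefix != "" {
			// nested paths use the wildcard form, as in FieldPathWithWildcard
			path = prefix + "[*]." + name
		}
		if (n.typ == "list" || n.typ == "set") && n.maxItems == 1 {
			out[path] = true
		}
		if n.elem != nil {
			collectSingletonLists(path, n.elem, out)
		}
	}
}

func main() {
	schema := map[string]*node{
		"parent_list": {typ: "list", maxItems: 1, elem: map[string]*node{
			"child_list": {typ: "list", maxItems: 1, elem: map[string]*node{
				"element": {typ: "string"},
			}},
		}},
		"multi": {typ: "list", maxItems: 2},
	}
	found := map[string]bool{}
	collectSingletonLists("", schema, found)
	fmt.Println(found["parent_list"], found["parent_list[*].child_list"], found["multi"])
	// → true true false
}
```

Note how the multi-item list is skipped, matching the `NoEmbeddingForMultiItemList` test case below.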


@ -0,0 +1,173 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)
func TestSingletonListEmbedder(t *testing.T) {
type args struct {
resource *schema.Resource
name string
}
type want struct {
err error
schemaOpts SchemaElementOptions
conversionPaths map[string]string
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulRootLevelSingletonListEmbedding": {
reason: "Successfully embed a root-level singleton list in the resource schema.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"singleton_list": {
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{
"singleton_list": {
EmbeddedObject: true,
},
},
conversionPaths: map[string]string{
"singleton_list": "singletonList",
},
},
},
"NoEmbeddingForMultiItemList": {
reason: "Do not embed a list with a MaxItems constraint greater than 1.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"multiitem_list": {
Type: schema.TypeList,
MaxItems: 2,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{},
conversionPaths: map[string]string{},
},
},
"NoEmbeddingForNonList": {
reason: "Do not embed a non-list schema.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"invalid": {
Type: schema.TypeInvalid,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{},
conversionPaths: map[string]string{},
},
},
"SuccessfulNestedSingletonListEmbedding": {
reason: "Successfully embed a nested singleton list in the resource schema.",
args: args{
resource: &schema.Resource{
Schema: map[string]*schema.Schema{
"parent_list": {
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"child_list": {
Type: schema.TypeList,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"element": {
Type: schema.TypeString,
},
},
},
},
},
},
},
},
},
name: "test_resource",
},
want: want{
schemaOpts: map[string]*SchemaElementOption{
"parent_list": {
EmbeddedObject: true,
},
"parent_list.child_list": {
EmbeddedObject: true,
},
},
conversionPaths: map[string]string{
"parent_list": "parentList",
"parent_list[*].child_list": "parentList[*].childList",
},
},
},
}
for n, tt := range tests {
t.Run(n, func(t *testing.T) {
e := &SingletonListEmbedder{}
r := DefaultResource(tt.args.name, tt.args.resource, nil, nil)
s := ResourceSchema{
tt.args.name: r,
}
err := s.TraverseTFSchemas(e)
if diff := cmp.Diff(tt.want.err, err, test.EquateErrors()); diff != "" {
t.Fatalf("\n%s\ntraverseSchemas(name, schema, ...): -wantErr, +gotErr:\n%s", tt.reason, diff)
}
if diff := cmp.Diff(tt.want.schemaOpts, r.SchemaElementOptions); diff != "" {
t.Errorf("\n%s\ntraverseSchemas(name, schema, ...): -wantOptions, +gotOptions:\n%s", tt.reason, diff)
}
if diff := cmp.Diff(tt.want.conversionPaths, r.listConversionPaths); diff != "" {
t.Errorf("\n%s\ntraverseSchemas(name, schema, ...): -wantPaths, +gotPaths:\n%s", tt.reason, diff)
}
})
}
}


@ -0,0 +1,72 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package config
import (
"github.com/pkg/errors"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
)
// Mode denotes the mode of the runtime Terraform conversion, e.g.,
// conversion from Crossplane parameters to Terraform arguments, or
// conversion from Terraform state to Crossplane state.
type Mode int
const (
// ToTerraform is the mode for conversions from the Crossplane layer to the Terraform layer.
ToTerraform Mode = iota
// FromTerraform is the mode for conversions from the Terraform layer to the Crossplane layer.
FromTerraform
)
// String returns a string representation of the conversion mode.
func (m Mode) String() string {
switch m {
case ToTerraform:
return "toTerraform"
case FromTerraform:
return "fromTerraform"
default:
return "unknown"
}
}
// TerraformConversion is a runtime conversion applied to the data exchanged
// between the Crossplane and Terraform layers.
type TerraformConversion interface {
Convert(params map[string]any, r *Resource, mode Mode) (map[string]any, error)
}
// ApplyTFConversions applies the configured Terraform conversions on the
// specified params map in the given mode, i.e., from Crossplane layer to the
// Terraform layer or vice versa.
func (r *Resource) ApplyTFConversions(params map[string]any, mode Mode) (map[string]any, error) {
var err error
for _, c := range r.TerraformConversions {
params, err = c.Convert(params, r, mode)
if err != nil {
return nil, err
}
}
return params, nil
}
type singletonListConversion struct{}
// NewTFSingletonConversion initializes a new TerraformConversion to convert
// between singleton lists and embedded objects in the exchanged data
// at runtime between the Crossplane & Terraform layers.
func NewTFSingletonConversion() TerraformConversion {
return singletonListConversion{}
}
func (s singletonListConversion) Convert(params map[string]any, r *Resource, mode Mode) (map[string]any, error) {
var err error
var m map[string]any
switch mode {
case FromTerraform:
m, err = conversion.Convert(params, r.TFListConversionPaths(), conversion.ToEmbeddedObject, nil)
case ToTerraform:
m, err = conversion.Convert(params, r.TFListConversionPaths(), conversion.ToSingletonList, nil)
}
return m, errors.Wrapf(err, "failed to convert between Crossplane and Terraform layers in mode %q", mode)
}
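The conversion chain above (`ApplyTFConversions` iterating the configured `TerraformConversions` in a given `Mode`) can be sketched in a self-contained way. The names below (`conversion`, `applyConversions`, the `singleton` closure) are illustrative stand-ins, not the real upjet types:

```go
package main

import "fmt"

// Mode mirrors the conversion-direction enum above (illustrative copy).
type Mode int

const (
	ToTerraform Mode = iota
	FromTerraform
)

func (m Mode) String() string {
	switch m {
	case ToTerraform:
		return "toTerraform"
	case FromTerraform:
		return "fromTerraform"
	default:
		return "unknown"
	}
}

// conversion mimics TerraformConversion: each step may rewrite the params map.
type conversion func(params map[string]any, mode Mode) (map[string]any, error)

// applyConversions chains conversions the way ApplyTFConversions does:
// each step's output feeds the next, and the first error aborts the chain.
func applyConversions(params map[string]any, mode Mode, cs ...conversion) (map[string]any, error) {
	var err error
	for _, c := range cs {
		if params, err = c(params, mode); err != nil {
			return nil, err
		}
	}
	return params, nil
}

func main() {
	// A toy conversion: wrap a value in a singleton list on the way to
	// Terraform, and unwrap it on the way back.
	singleton := func(p map[string]any, m Mode) (map[string]any, error) {
		if m == ToTerraform {
			p["block"] = []any{p["block"]}
		} else {
			p["block"] = p["block"].([]any)[0]
		}
		return p, nil
	}
	out, _ := applyConversions(map[string]any{"block": map[string]any{"k": "v"}}, ToTerraform, singleton)
	fmt.Println(ToTerraform, out["block"])
}
```

The real implementation delegates the wrapping/unwrapping to `conversion.Convert` with the resource's registered list-conversion paths.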


@ -1,26 +1,14 @@
/*
Copyright 2021 Upbound Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
@ -28,14 +16,30 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
ctrl "sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
const (
errGetFmt = "cannot get resource %s/%s after an async %s"
errUpdateStatusFmt = "cannot update status of the resource %s/%s after an async %s"
errReconcileRequestFmt = "cannot request the reconciliation of the resource %s/%s after an async %s"
)
// crossplane-runtime error constants
const (
errXPReconcileCreate = "create failed"
errXPReconcileUpdate = "update failed"
errXPReconcileDelete = "delete failed"
)
const (
rateLimiterCallback = "asyncCallback"
)
var _ CallbackProvider = &APICallbacks{}
// APISecretClient is a client for getting k8s secrets
type APISecretClient struct {
kube client.Client
@ -59,47 +63,126 @@ func (a *APISecretClient) GetSecretValue(ctx context.Context, sel xpv1.SecretKey
return d[sel.Key], err
}
// APICallbacksOption represents a configurable option for the APICallbacks
type APICallbacksOption func(callbacks *APICallbacks)
// WithEventHandler sets the EventHandler for the APICallbacks so that
// the APICallbacks instance can requeue reconcile requests in the
// context of the asynchronous operations.
func WithEventHandler(e *handler.EventHandler) APICallbacksOption {
return func(callbacks *APICallbacks) {
callbacks.eventHandler = e
}
}
// WithStatusUpdates sets whether the LastAsyncOperation status condition
// is enabled. If set to false, APICallbacks will not use the
// LastAsyncOperation status condition for reporting ongoing async
// operations or errors. Error conditions will still be reported
// as usual in the `Synced` status condition.
func WithStatusUpdates(enabled bool) APICallbacksOption {
return func(callbacks *APICallbacks) {
callbacks.enableStatusUpdates = enabled
}
}
// NewAPICallbacks returns a new APICallbacks.
func NewAPICallbacks(m ctrl.Manager, of xpresource.ManagedKind, opts ...APICallbacksOption) *APICallbacks {
nt := func() resource.Terraformed {
return xpresource.MustCreateObject(schema.GroupVersionKind(of), m.GetScheme()).(resource.Terraformed)
}
cb := &APICallbacks{
kube: m.GetClient(),
newTerraformed: nt,
// the default behavior is to use the LastAsyncOperation
// status condition for backwards compatibility.
enableStatusUpdates: true,
}
for _, o := range opts {
o(cb)
}
return cb
}
// APICallbacks provides callbacks that work on API resources.
type APICallbacks struct {
eventHandler *handler.EventHandler
kube client.Client
newTerraformed func() resource.Terraformed
enableStatusUpdates bool
}
func (ac *APICallbacks) callbackFn(nn types.NamespacedName, op string) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
tr := ac.newTerraformed()
if kErr := ac.kube.Get(ctx, nn, tr); kErr != nil {
return errors.Wrapf(kErr, errGetFmt, tr.GetObjectKind().GroupVersionKind().String(), nn, op)
}
// For the no-fork architecture, we will need to be able to report
// reconciliation errors. The proper place is the `Synced`
// status condition but we need changes in the managed reconciler
// to do so. So we keep the `LastAsyncOperation` condition.
// TODO: move this to the `Synced` condition.
tr.SetConditions(resource.LastAsyncOperationCondition(err))
if err != nil {
wrapMsg := ""
switch op {
case "create":
wrapMsg = errXPReconcileCreate
case "update":
wrapMsg = errXPReconcileUpdate
case "destroy":
wrapMsg = errXPReconcileDelete
}
tr.SetConditions(xpv1.ReconcileError(errors.Wrap(err, wrapMsg)))
} else {
tr.SetConditions(xpv1.ReconcileSuccess())
}
if ac.enableStatusUpdates {
tr.SetConditions(resource.AsyncOperationFinishedCondition())
}
uErr := errors.Wrapf(ac.kube.Status().Update(ctx, tr), errUpdateStatusFmt, tr.GetObjectKind().GroupVersionKind().String(), nn, op)
if ac.eventHandler != nil {
rateLimiter := handler.NoRateLimiter
switch {
case err != nil:
rateLimiter = rateLimiterCallback
default:
ac.eventHandler.Forget(rateLimiterCallback, nn)
}
// TODO: use the errors.Join from
// github.com/crossplane/crossplane-runtime.
if ok := ac.eventHandler.RequestReconcile(rateLimiter, nn, nil); !ok {
return errors.Errorf(errReconcileRequestFmt, tr.GetObjectKind().GroupVersionKind().String(), nn, op)
}
}
return uErr
}
}
// Create makes sure the error is saved in async operation condition.
func (ac *APICallbacks) Create(name types.NamespacedName) terraform.CallbackFn {
// request will be requeued although the managed reconciler already
// requeues with exponential back-off during the creation phase
// because the upjet external client returns ResourceExists &
// ResourceUpToDate both set to true, if an async operation is
// in-progress immediately following a Create call. This will
// delay a reobservation of the resource (while being created)
// for the poll period.
return ac.callbackFn(name, "create")
}
// Update makes sure the error is saved in async operation condition.
func (ac *APICallbacks) Update(name types.NamespacedName) terraform.CallbackFn {
return ac.callbackFn(name, "update")
}
// Destroy makes sure the error is saved in async operation condition.
func (ac *APICallbacks) Destroy(name types.NamespacedName) terraform.CallbackFn {
// request will be requeued although the managed reconciler requeues
// with exponential back-off during the deletion phase because
// during the async deletion operation, external client's
// observe just returns success to the managed reconciler.
return ac.callbackFn(name, "destroy")
}
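`NewAPICallbacks` above uses the functional-options pattern: defaults are set first (`enableStatusUpdates: true` for backwards compatibility), then each `APICallbacksOption` mutates the instance. A minimal sketch of that construction order, with illustrative names (`callbacks`, `option`, `newCallbacks` are not upjet APIs):

```go
package main

import "fmt"

// callbacks is a toy stand-in for APICallbacks (assumed fields).
type callbacks struct {
	enableStatusUpdates bool
	hasEventHandler     bool
}

// option mirrors APICallbacksOption: a function that mutates the instance.
type option func(*callbacks)

// withStatusUpdates mirrors WithStatusUpdates.
func withStatusUpdates(enabled bool) option {
	return func(c *callbacks) { c.enableStatusUpdates = enabled }
}

// withEventHandler mirrors WithEventHandler (flag only, for brevity).
func withEventHandler() option {
	return func(c *callbacks) { c.hasEventHandler = true }
}

// newCallbacks applies defaults first, then the supplied options,
// following the same construction order as NewAPICallbacks.
func newCallbacks(opts ...option) *callbacks {
	c := &callbacks{enableStatusUpdates: true} // default, kept for backwards compatibility
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	c := newCallbacks(withStatusUpdates(false), withEventHandler())
	fmt.Println(c.enableStatusUpdates, c.hasEventHandler)
	// → false true
}
```

Because options run after the defaults, callers can opt out of the `LastAsyncOperation` status condition without breaking existing providers that pass no options.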


@ -1,18 +1,6 @@
/*
Copyright 2021 Upbound Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
@ -20,21 +8,21 @@ import (
"context"
"testing"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
ctrl "sigs.k8s.io/controller-runtime/pkg/manager"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
tjerrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
func TestAPICallbacksCreate(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
@ -48,17 +36,17 @@ func TestAPICallbacks_Apply(t *testing.T) {
args
want
}{
"CreateOperationFailed": {
reason: "It should update the condition with error if async apply failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.Terraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: test.NewMockGetFn(nil),
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewApplyFailed(nil)), got); diff != "" {
t.Errorf("\nCreate(...): -want error, +got error:\n%s", diff)
}
return nil
},
@ -68,17 +56,17 @@ func TestAPICallbacks_Apply(t *testing.T) {
err: tjerrors.NewApplyFailed(nil),
},
},
"CreateOperationSucceeded": {
reason: "It should update the condition with success if the apply operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.Terraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: test.NewMockGetFn(nil),
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nCreate(...): -want error, +got error:\n%s", diff)
}
return nil
},
@ -101,16 +89,98 @@ func TestAPICallbacks_Apply(t *testing.T) {
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=//name", "create"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Create(types.NamespacedName{Name: "name"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacksUpdate(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"UpdateOperationFailed": {
reason: "It should update the condition with error if async apply failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.Terraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: test.NewMockGetFn(nil),
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewApplyFailed(nil)), got); diff != "" {
t.Errorf("\nUpdate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.Terraformed{}),
},
err: tjerrors.NewApplyFailed(nil),
},
},
"UpdateOperationSucceeded": {
reason: "It should update the condition with success if the apply operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.Terraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: test.NewMockGetFn(nil),
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nUpdate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.Terraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.Terraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.Terraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=//name", "update"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Update(types.NamespacedName{Name: "name"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
@ -137,7 +207,7 @@ func TestAPICallbacks_Destroy(t *testing.T) {
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: test.NewMockGetFn(nil),
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewDestroyFailed(nil)), got); diff != "" {
t.Errorf("\nDestroy(...): -want error, +got error:\n%s", diff)
@ -157,7 +227,7 @@ func TestAPICallbacks_Destroy(t *testing.T) {
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: test.NewMockGetFn(nil),
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nDestroy(...): -want error, +got error:\n%s", diff)
@ -183,14 +253,290 @@ func TestAPICallbacks_Destroy(t *testing.T) {
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=//name", "destroy"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Destroy(types.NamespacedName{Name: "name"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDestroy(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacksCreate_namespaced(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"CreateOperationFailed": {
reason: "It should update the condition with error if async apply failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewApplyFailed(nil)), got); diff != "" {
t.Errorf("\nCreate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
err: tjerrors.NewApplyFailed(nil),
},
},
"CreateOperationSucceeded": {
reason: "It should update the condition with success if the apply operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nCreate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/foo-ns/name", "create"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Create(types.NamespacedName{Name: "name", Namespace: "foo-ns"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacksUpdate_namespaced(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"UpdateOperationFailed": {
reason: "It should update the condition with error if async apply failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewApplyFailed(nil)), got); diff != "" {
t.Errorf("\nUpdate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
err: tjerrors.NewApplyFailed(nil),
},
},
"UpdateOperationSucceeded": {
reason: "It should update the condition with success if the apply operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nUpdate(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/foo-ns/name", "update"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Update(types.NamespacedName{Name: "name", Namespace: "foo-ns"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}
func TestAPICallbacks_Destroy_namespaced(t *testing.T) {
type args struct {
mgr ctrl.Manager
mg xpresource.ManagedKind
err error
}
type want struct {
err error
}
cases := map[string]struct {
reason string
args
want
}{
"DestroyOperationFailed": {
reason: "It should update the condition with error if async destroy failed",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(tjerrors.NewDestroyFailed(nil)), got); diff != "" {
t.Errorf("\nDestroy(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
err: tjerrors.NewDestroyFailed(nil),
},
},
"DestroyOperationSucceeded": {
reason: "It should update the condition with success if the destroy operation does not report error",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, gotKey client.ObjectKey, _ client.Object) error {
if diff := cmp.Diff(client.ObjectKey{Name: "name", Namespace: "foo-ns"}, gotKey); diff != "" {
t.Errorf("\nGet(...): -want object key, +got object key:\n%s", diff)
}
return nil
},
MockStatusUpdate: func(ctx context.Context, obj client.Object, opts ...client.SubResourceUpdateOption) error {
got := obj.(resource.Terraformed).GetCondition(resource.TypeLastAsyncOperation)
if diff := cmp.Diff(resource.LastAsyncOperationCondition(nil), got); diff != "" {
t.Errorf("\nDestroy(...): -want error, +got error:\n%s", diff)
}
return nil
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
},
"CannotGet": {
reason: "It should return error if it cannot get the resource to update",
args: args{
mg: xpresource.ManagedKind(xpfake.GVK(&fake.ModernTerraformed{})),
mgr: &xpfake.Manager{
Client: &test.MockClient{
MockGet: func(_ context.Context, _ client.ObjectKey, _ client.Object) error {
return errBoom
},
},
Scheme: xpfake.SchemeWith(&fake.ModernTerraformed{}),
},
},
want: want{
err: errors.Wrapf(errBoom, errGetFmt, "", ", Kind=/foo-ns/name", "destroy"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := NewAPICallbacks(tc.args.mgr, tc.args.mg)
err := e.Destroy(types.NamespacedName{Name: "name", Namespace: "foo-ns"})(tc.args.err, context.TODO())
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDestroy(...): -want error, +got error:\n%s", tc.reason, diff)
}
})
}
}

@@ -0,0 +1,101 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/resource"
)
const (
errFmtPrioritizedManagedConversion = "cannot apply the PrioritizedManagedConversion for the %q object"
errFmtPavedConversion = "cannot apply the PavedConversion for the %q object"
errFmtManagedConversion = "cannot apply the ManagedConversion for the %q object"
errFmtGetGVK = "cannot get the GVK for the %s object of type %T"
)
// RoundTrip round-trips from `src` to `dst` via an unstructured map[string]any
// representation of the `src` object and applies the registered webhook
// conversion functions of this registry.
func (r *registry) RoundTrip(dst, src resource.Terraformed) error { //nolint:gocyclo // considered breaking this according to the converters and I did not like it
if dst.GetObjectKind().GroupVersionKind().Version == "" {
gvk, err := apiutil.GVKForObject(dst, r.scheme)
if err != nil && !runtime.IsNotRegisteredError(err) {
return errors.Wrapf(err, errFmtGetGVK, "destination", dst)
}
if err == nil {
dst.GetObjectKind().SetGroupVersionKind(gvk)
}
}
if src.GetObjectKind().GroupVersionKind().Version == "" {
gvk, err := apiutil.GVKForObject(src, r.scheme)
if err != nil && !runtime.IsNotRegisteredError(err) {
return errors.Wrapf(err, errFmtGetGVK, "source", src)
}
if err == nil {
src.GetObjectKind().SetGroupVersionKind(gvk)
}
}
// First, PrioritizedManagedConversions are run in their registration order.
for _, c := range r.GetConversions(dst) {
if pc, ok := c.(conversion.PrioritizedManagedConversion); ok {
if _, err := pc.ConvertManaged(src, dst); err != nil {
return errors.Wrapf(err, errFmtPrioritizedManagedConversion, dst.GetTerraformResourceType())
}
}
}
srcMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(src)
if err != nil {
return errors.Wrap(err, "cannot convert the conversion source object into the map[string]any representation")
}
// now we will try to run the registered webhook conversions
dstMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(dst)
if err != nil {
return errors.Wrap(err, "cannot convert the conversion destination object into the map[string]any representation")
}
srcPaved := fieldpath.Pave(srcMap)
dstPaved := fieldpath.Pave(dstMap)
// then run the PavedConversions
for _, c := range r.GetConversions(dst) {
if pc, ok := c.(conversion.PavedConversion); ok {
if _, err := pc.ConvertPaved(srcPaved, dstPaved); err != nil {
return errors.Wrapf(err, errFmtPavedConversion, dst.GetTerraformResourceType())
}
}
}
// convert the map[string]any representation of the conversion target back to
// the original type.
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(dstMap, dst); err != nil {
return errors.Wrap(err, "cannot convert the map[string]any representation of the conversion target back to the object itself")
}
// Finally, at the third stage, run the remaining ManagedConversions.
for _, c := range r.GetConversions(dst) {
if tc, ok := c.(conversion.ManagedConversion); ok {
if _, ok := tc.(conversion.PrioritizedManagedConversion); ok {
continue // already run in the first stage
}
if _, err := tc.ConvertManaged(src, dst); err != nil {
return errors.Wrapf(err, errFmtManagedConversion, dst.GetTerraformResourceType())
}
}
}
return nil
}
// RoundTrip round-trips from `src` to `dst` via an unstructured map[string]any
// representation of the `src` object and applies the registered webhook
// conversion functions.
func RoundTrip(dst, src resource.Terraformed) error {
return instance.RoundTrip(dst, src)
}
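The three-stage ordering that `RoundTrip` enforces — prioritized managed conversions first, then paved conversions over the unstructured form, then the remaining managed conversions — can be sketched without the upjet types. Everything below (`pipeline`, `conversionFn`, `runExample`) is an illustrative stand-in, not the upjet API:

```go
package main

import "fmt"

// conversionFn stands in for upjet's Conversion variants; every stage here
// operates on plain map[string]any "objects".
type conversionFn func(src, dst map[string]any) error

// pipeline mimics the three stages RoundTrip runs in order.
type pipeline struct {
	prioritized, paved, managed []conversionFn
}

// roundTrip runs each stage's conversions in registration order and stops
// at the first error, like the registry's RoundTrip.
func (p *pipeline) roundTrip(dst, src map[string]any) error {
	for _, stage := range [][]conversionFn{p.prioritized, p.paved, p.managed} {
		for _, c := range stage {
			if err := c(src, dst); err != nil {
				return err
			}
		}
	}
	return nil
}

// runExample copies all fields (an identity conversion) and then renames
// key1 to key2, mirroring the field-rename conversions in the tests.
func runExample() (map[string]any, error) {
	p := &pipeline{
		paved: []conversionFn{func(src, dst map[string]any) error {
			for k, v := range src { // identity: copy every field
				dst[k] = v
			}
			return nil
		}},
		managed: []conversionFn{func(_, dst map[string]any) error {
			dst["key2"] = dst["key1"] // rename key1 -> key2
			delete(dst, "key1")
			return nil
		}},
	}
	dst := map[string]any{}
	err := p.roundTrip(dst, map[string]any{"key1": "val1"})
	return dst, err
}

func main() {
	dst, err := runExample()
	fmt.Println(dst, err) // map[key2:val1] <nil>
}
```

The real implementation additionally round-trips through `runtime.DefaultUnstructuredConverter` so that paved conversions see the serialized form.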


@@ -0,0 +1,220 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"fmt"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
)
const (
key1 = "key1"
val1 = "val1"
key2 = "key2"
val2 = "val2"
commonKey = "commonKey"
commonVal = "commonVal"
errTest = "test error"
)
func TestRoundTrip(t *testing.T) {
type args struct {
dst resource.Terraformed
src resource.Terraformed
conversions []conversion.Conversion
}
type want struct {
err error
dst resource.Terraformed
}
tests := map[string]struct {
reason string
args args
want want
}{
"SuccessfulRoundTrip": {
reason: "Source object is successfully copied into the target object.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
conversions: []conversion.Conversion{conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil)},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
},
},
"SuccessfulRoundTripWithConversions": {
reason: "Source object is successfully converted into the target object with a set of conversions.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key1, val1))),
conversions: []conversion.Conversion{
conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, nil),
// Because the parameters of the fake.Terraformed are an unstructured
// map, all the fields of source (including key1) are successfully
// copied into dst by registry.RoundTrip.
// This conversion deletes the copied key "key1".
conversion.NewCustomConverter(conversion.AllVersions, conversion.AllVersions, func(_, target xpresource.Managed) error {
tr := target.(*fake.Terraformed)
delete(tr.Parameters, key1)
return nil
}),
conversion.NewFieldRenameConversion(conversion.AllVersions, fmt.Sprintf("parameterizable.parameters.%s", key1), conversion.AllVersions, fmt.Sprintf("parameterizable.parameters.%s", key2)),
},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key2, val1))),
},
},
"SuccessfulRoundTripWithNonWildcardConversions": {
reason: "Source object is successfully converted into the target object with a set of non-wildcard conversions.",
args: args{
dst: fake.NewTerraformed(fake.WithTypeMeta(metav1.TypeMeta{})),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key1, val1)), fake.WithTypeMeta(metav1.TypeMeta{})),
conversions: []conversion.Conversion{
conversion.NewIdentityConversionExpandPaths(fake.Version, fake.Version, nil),
// Because the parameters of the fake.Terraformed are an unstructured
// map, all the fields of source (including key1) are successfully
// copied into dst by registry.RoundTrip.
// This conversion deletes the copied key "key1".
conversion.NewCustomConverter(fake.Version, fake.Version, func(_, target xpresource.Managed) error {
tr := target.(*fake.Terraformed)
delete(tr.Parameters, key1)
return nil
}),
conversion.NewFieldRenameConversion(fake.Version, fmt.Sprintf("parameterizable.parameters.%s", key1), fake.Version, fmt.Sprintf("parameterizable.parameters.%s", key2)),
},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(commonKey, commonVal, key2, val1)), fake.WithTypeMeta(metav1.TypeMeta{
Kind: fake.Kind,
APIVersion: fake.GroupVersion.String(),
})),
},
},
"RoundTripFailedPrioritizedConversion": {
reason: "Should return an error if a PrioritizedConversion fails.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(),
conversions: []conversion.Conversion{failedPrioritizedConversion{}},
},
want: want{
err: errors.Wrapf(errors.New(errTest), errFmtPrioritizedManagedConversion, ""),
},
},
"RoundTripFailedPavedConversion": {
reason: "Should return an error if a PavedConversion fails.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(),
conversions: []conversion.Conversion{failedPavedConversion{}},
},
want: want{
err: errors.Wrapf(errors.New(errTest), errFmtPavedConversion, ""),
},
},
"RoundTripFailedManagedConversion": {
reason: "Should return an error if a ManagedConversion fails.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(),
conversions: []conversion.Conversion{failedManagedConversion{}},
},
want: want{
err: errors.Wrapf(errors.New(errTest), errFmtManagedConversion, ""),
},
},
"RoundTripWithExcludedFields": {
reason: "Source object is successfully copied into the target object with certain fields excluded.",
args: args{
dst: fake.NewTerraformed(),
src: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1, key2, val2))),
conversions: []conversion.Conversion{conversion.NewIdentityConversionExpandPaths(conversion.AllVersions, conversion.AllVersions, []string{"parameterizable.parameters"}, key2)},
},
want: want{
dst: fake.NewTerraformed(fake.WithParameters(fake.NewMap(key1, val1))),
},
},
}
s := runtime.NewScheme()
if err := fake.AddToScheme(s); err != nil {
t.Fatalf("Failed to register the fake.Terraformed object with the runtime scheme")
}
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
p := &config.Provider{
Resources: map[string]*config.Resource{
tc.args.dst.GetTerraformResourceType(): {
Conversions: tc.args.conversions,
},
},
}
r := &registry{
scheme: s,
}
if err := r.RegisterConversions(p, nil); err != nil {
t.Fatalf("\n%s\nRegisterConversions(p): Failed to register the conversions with the registry.\n", tc.reason)
}
err := r.RoundTrip(tc.args.dst, tc.args.src)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nRoundTrip(dst, src): -wantErr, +gotErr:\n%s", tc.reason, diff)
}
if tc.want.err != nil {
return
}
if diff := cmp.Diff(tc.want.dst, tc.args.dst); diff != "" {
t.Errorf("\n%s\nRoundTrip(dst, src): -wantDst, +gotDst:\n%s", tc.reason, diff)
}
})
}
}
type failedPrioritizedConversion struct{}
func (failedPrioritizedConversion) Applicable(_, _ runtime.Object) bool {
return true
}
func (failedPrioritizedConversion) ConvertManaged(_, _ xpresource.Managed) (bool, error) {
return false, errors.New(errTest)
}
func (failedPrioritizedConversion) Prioritized() {}
type failedPavedConversion struct{}
func (failedPavedConversion) Applicable(_, _ runtime.Object) bool {
return true
}
func (failedPavedConversion) ConvertPaved(_, _ *fieldpath.Paved) (bool, error) {
return false, errors.New(errTest)
}
type failedManagedConversion struct{}
func (failedManagedConversion) Applicable(_, _ runtime.Object) bool {
return true
}
func (failedManagedConversion) ConvertManaged(_, _ xpresource.Managed) (bool, error) {
return false, errors.New(errTest)
}
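The `failed*Conversion` test doubles rely on the same mechanism `RoundTrip` uses to pick a stage: optional interfaces probed via type assertions, with a marker method (`Prioritized`) promoting a conversion to the first stage. A minimal sketch of that pattern — the interface and type names here are illustrative, not the upjet API:

```go
package main

import "fmt"

// conversion is the base interface every converter satisfies.
type conversion interface{ Applicable() bool }

// prioritized is an optional interface: implementing the Prioritized
// marker method promotes a conversion to the first stage.
type prioritized interface {
	conversion
	Prioritized()
}

type basic struct{}

func (basic) Applicable() bool { return true }

// urgent embeds basic and adds the marker method.
type urgent struct{ basic }

func (urgent) Prioritized() {}

// stageOf probes the value with a type assertion, as RoundTrip does.
func stageOf(c conversion) string {
	if _, ok := c.(prioritized); ok {
		return "prioritized"
	}
	return "managed"
}

func main() {
	fmt.Println(stageOf(basic{}), stageOf(urgent{})) // managed prioritized
}
```

This is why `failedPrioritizedConversion` only needs an empty `Prioritized()` method to be routed to the first stage in the tests above.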


@@ -0,0 +1,76 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
"github.com/crossplane/upjet/v2/pkg/resource"
)
const (
errAlreadyRegistered = "conversion functions are already registered"
)
var instance *registry
// registry represents the conversion hook registry for a provider.
type registry struct {
providerCluster *config.Provider
providerNamespaced *config.Provider
scheme *runtime.Scheme
}
// RegisterConversions registers the API version conversions from the specified
// provider configurations with this registry.
func (r *registry) RegisterConversions(providerCluster, providerNamespaced *config.Provider) error {
if r.providerCluster != nil || r.providerNamespaced != nil {
return errors.New(errAlreadyRegistered)
}
r.providerCluster = providerCluster
r.providerNamespaced = providerNamespaced
return nil
}
// GetConversions returns the conversion.Conversions registered in this
// registry for the specified Terraformed resource.
func (r *registry) GetConversions(tr resource.Terraformed) []conversion.Conversion {
t := tr.GetTerraformResourceType()
p := r.providerCluster
if tr.GetNamespace() != "" {
p = r.providerNamespaced
}
if p == nil || p.Resources[t] == nil {
return nil
}
return p.Resources[t].Conversions
}
// GetConversions returns the conversion.Conversions registered for the
// specified Terraformed resource.
func GetConversions(tr resource.Terraformed) []conversion.Conversion {
return instance.GetConversions(tr)
}
// RegisterConversions registers the API version conversions from the specified
// provider configurations. The specified scheme should contain the
// registrations for the types whose versions are to be converted. If a
// registration for a Go type is not found in the specified scheme, RoundTrip
// does not error, but only wildcard conversions can be used with the registry.
func RegisterConversions(providerCluster, providerNamespaced *config.Provider, scheme *runtime.Scheme) error {
if instance != nil {
return errors.New(errAlreadyRegistered)
}
instance = &registry{
scheme: scheme,
}
return instance.RegisterConversions(providerCluster, providerNamespaced)
}
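`GetConversions` dispatches on scope: a non-empty namespace selects the namespaced provider configuration, otherwise the cluster-scoped one is used. A simplified sketch of that lookup — the `provider`/`registry` types below are stand-ins, not the upjet API:

```go
package main

import "fmt"

// provider holds conversions keyed by Terraform resource type, standing in
// for upjet's config.Provider.
type provider struct{ conversions map[string][]string }

// registry keeps separate provider configurations for cluster-scoped and
// namespaced managed resources, mirroring the registry above.
type registry struct {
	cluster, namespaced *provider
}

// conversionsFor picks the provider by the resource's namespace: namespaced
// resources have a non-empty namespace, cluster-scoped ones do not.
func (r *registry) conversionsFor(resourceType, namespace string) []string {
	p := r.cluster
	if namespace != "" {
		p = r.namespaced
	}
	if p == nil {
		return nil
	}
	return p.conversions[resourceType]
}

// newExampleRegistry builds a registry with hypothetical conversions for a
// single resource type.
func newExampleRegistry() *registry {
	return &registry{
		cluster:    &provider{conversions: map[string][]string{"aws_s3_bucket": {"identity"}}},
		namespaced: &provider{conversions: map[string][]string{"aws_s3_bucket": {"identity", "rename"}}},
	}
}

func main() {
	r := newExampleRegistry()
	fmt.Println(r.conversionsFor("aws_s3_bucket", ""))       // cluster-scoped
	fmt.Println(r.conversionsFor("aws_s3_bucket", "team-a")) // namespaced
}
```

Keeping two configurations lets a v2 provider register different conversion sets for its cluster-scoped and namespaced API groups while sharing one registry singleton.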


@@ -1,22 +1,29 @@
-/*
-Copyright 2021 Upbound Inc.
-*/
+// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
+//
+// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"time"
-xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
-"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
-xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
+xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
+"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
+"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
+xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"sigs.k8s.io/controller-runtime/pkg/client"
-"github.com/upbound/upjet/pkg/config"
-"github.com/upbound/upjet/pkg/resource"
-"github.com/upbound/upjet/pkg/resource/json"
-"github.com/upbound/upjet/pkg/terraform"
+"github.com/crossplane/upjet/v2/pkg/config"
+"github.com/crossplane/upjet/v2/pkg/controller/handler"
+"github.com/crossplane/upjet/v2/pkg/metrics"
+"github.com/crossplane/upjet/v2/pkg/resource"
+"github.com/crossplane/upjet/v2/pkg/resource/json"
+"github.com/crossplane/upjet/v2/pkg/terraform"
+tferrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
const (
@@ -24,12 +31,20 @@ const (
errGetTerraformSetup = "cannot get terraform setup"
errGetWorkspace = "cannot get a terraform workspace for resource"
errRefresh = "cannot run refresh"
errImport = "cannot run import"
errPlan = "cannot run plan"
errStartAsyncApply = "cannot start async apply"
errStartAsyncDestroy = "cannot start async destroy"
errApply = "cannot apply"
errDestroy = "cannot destroy"
errStatusUpdate = "cannot update status of custom resource"
errScheduleProvider = "cannot schedule native Terraform provider process, please consider increasing its TTL with the --provider-ttl command-line option"
errUpdateAnnotations = "cannot update managed resource annotations"
)
const (
rateLimiterScheduler = "scheduler"
rateLimiterStatus = "status"
retryLimit = 20
)
// Option allows you to configure Connector.
@@ -44,6 +59,21 @@ func WithCallbackProvider(ac CallbackProvider) Option {
}
}
// WithLogger configures a logger for the Connector.
func WithLogger(l logging.Logger) Option {
return func(c *Connector) {
c.logger = l
}
}
// WithConnectorEventHandler configures the EventHandler so that
// the external clients can requeue reconciliation requests.
func WithConnectorEventHandler(e *handler.EventHandler) Option {
return func(c *Connector) {
c.eventHandler = e
}
}
// NewConnector returns a new Connector object.
func NewConnector(kube client.Client, ws Store, sf terraform.SetupFn, cfg *config.Resource, opts ...Option) *Connector {
c := &Connector{
@@ -51,6 +81,7 @@ func NewConnector(kube client.Client, ws Store, sf terraform.SetupFn, cfg *confi
getTerraformSetup: sf,
store: ws,
config: cfg,
logger: logging.NewNopLogger(),
}
for _, f := range opts {
f(c)
@@ -66,6 +97,8 @@ type Connector struct {
getTerraformSetup terraform.SetupFn
config *config.Resource
callback CallbackProvider
eventHandler *handler.EventHandler
logger logging.Logger
}
// Connect makes sure the underlying client is ready to issue requests to the
@@ -81,15 +114,19 @@ func (c *Connector) Connect(ctx context.Context, mg xpresource.Managed) (managed
return nil, errors.Wrap(err, errGetTerraformSetup)
}
-tf, err := c.store.Workspace(ctx, &APISecretClient{kube: c.kube}, tr, ts, c.config)
+ws, err := c.store.Workspace(ctx, &APISecretClient{kube: c.kube}, tr, ts, c.config)
if err != nil {
return nil, errors.Wrap(err, errGetWorkspace)
}
return &external{
-workspace: tf,
+workspace: ws,
config: c.config,
callback: c.callback,
providerScheduler: ts.Scheduler,
providerHandle: ws.ProviderHandle,
eventHandler: c.eventHandler,
kube: c.kube,
logger: c.logger.WithValues("uid", mg.GetUID(), "namespace", mg.GetNamespace(), "name", mg.GetName(), "gvk", mg.GetObjectKind().GroupVersionKind().String()),
}, nil
}
@@ -97,6 +134,42 @@ type external struct {
workspace Workspace
config *config.Resource
callback CallbackProvider
providerScheduler terraform.ProviderScheduler
providerHandle terraform.ProviderHandle
eventHandler *handler.EventHandler
kube client.Client
logger logging.Logger
}
func (e *external) scheduleProvider(name types.NamespacedName) (bool, error) {
if e.providerScheduler == nil || e.workspace == nil {
return false, nil
}
inuse, attachmentConfig, err := e.providerScheduler.Start(e.providerHandle)
if err != nil {
retryLimit := retryLimit
if tferrors.IsRetryScheduleError(err) && (e.eventHandler != nil && e.eventHandler.RequestReconcile(rateLimiterScheduler, name, &retryLimit)) {
// the reconcile request has been requeued for a rate-limited retry
return true, nil
}
return false, errors.Wrap(err, errScheduleProvider)
}
if e.eventHandler != nil {
e.eventHandler.Forget(rateLimiterScheduler, name)
}
if ps, ok := e.workspace.(ProviderSharer); ok {
ps.UseProvider(inuse, attachmentConfig)
}
return false, nil
}
func (e *external) stopProvider() {
if e.providerScheduler == nil {
return
}
if err := e.providerScheduler.Stop(e.providerHandle); err != nil {
e.logger.Info("ExternalClient failed to stop the native provider", "error", err)
}
}
func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) { //nolint:gocyclo
@@ -104,16 +177,51 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
// and serial.
// TODO(muvaf): Look for ways to reduce the cyclomatic complexity without
// increasing the difficulty of understanding the flow.
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalObservation{}, errors.Wrapf(err, "cannot schedule a native provider during observe: %s", mg.GetUID())
}
if requeued {
// return a noop for Observe after requeuing the reconcile request
// for a retry.
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
}, nil
}
defer e.stopProvider()
tr, ok := mg.(resource.Terraformed)
if !ok {
return managed.ExternalObservation{}, errors.New(errUnexpectedObject)
}
policySet := sets.New[xpv1.ManagementAction](tr.GetManagementPolicies()...)
// Note(turkenh): We don't need to check if the management policies are
// enabled or not because the crossplane-runtime's managed reconciler already
// does that for us. In other words, if the management policies are set
// without management policies being enabled, the managed
// reconciler will error out before reaching this point.
// https://github.com/crossplane/crossplane-runtime/pull/384/files#diff-97300a2543f95f5a2ada3560bf47dd7334e237e27976574d15d1cddef2e66c01R696
// Note (lsviben) We are only using import instead of refresh if the
// management policies do not contain create or update as they need the
// required fields to be set, which is not the case for import.
if !policySet.HasAny(xpv1.ManagementActionCreate, xpv1.ManagementActionUpdate, xpv1.ManagementActionAll) {
return e.Import(ctx, tr)
}
res, err := e.workspace.Refresh(ctx)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, errRefresh)
}
switch {
-case res.IsApplying, res.IsDestroying:
+case res.ASyncInProgress:
mg.SetConditions(resource.AsyncOperationOngoingCondition())
return managed.ExternalObservation{
ResourceExists: true,
@@ -141,19 +249,40 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
return managed.ExternalObservation{}, errors.Wrap(err, "cannot set observation")
}
// NOTE(lsviben) although the annotations were supposed to be set and the
// managed resource updated during the Create step, we are checking and
// updating the annotations here due to the fact that in most cases, the
// Create step is done asynchronously and the managed resource is not
// updated with the annotations. That is why below we are prioritizing the
// annotations update before anything else. We are setting lateInitialized
// to true so that the reconciler updates the managed resource. This
// behavior conflicts with management policies in which LateInitialize is
// turned off. To circumvent this, we are checking if the management policy
// does not contain LateInitialize and if it does not, we are updating the
// annotations manually.
annotationsUpdated, err := resource.SetCriticalAnnotations(tr, e.config, tfstate, string(res.State.GetPrivateRaw()))
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot set critical annotations")
}
policyHasLateInit := policySet.HasAny(xpv1.ManagementActionLateInitialize, xpv1.ManagementActionAll)
if annotationsUpdated && !policyHasLateInit {
if err := e.kube.Update(ctx, mg); err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, errUpdateAnnotations)
}
annotationsUpdated = false
}
conn, err := resource.GetConnectionDetails(tfstate, tr, e.config)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
-lateInitedParams, err := tr.LateInitialize(res.State.GetAttributes())
+var lateInitedParams bool
if policyHasLateInit {
lateInitedParams, err = tr.LateInitialize(res.State.GetAttributes())
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot late initialize parameters")
}
}
markedAvailable := tr.GetCondition(xpv1.TypeReady).Equal(xpv1.Available())
// In the following switch block, before running a relatively costly
@@ -171,6 +300,7 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
switch {
// we prioritize critical annotation updates over status updates
case annotationsUpdated:
e.logger.Debug("Critical annotations have been updated.")
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
@@ -179,7 +309,16 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
}, nil
// we prioritize status updates over late-init'ed spec updates
case !markedAvailable:
addTTR(tr)
tr.SetConditions(xpv1.Available())
e.logger.Debug("Resource is marked as available.")
if e.eventHandler != nil {
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
e.eventHandler.RequestReconcile(rateLimiterStatus, name, nil)
}
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
@@ -188,6 +327,7 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
// with the least priority wrt critical annotation updates and status updates
// we allow a late-initialization before the Workspace.Plan call
case lateInitedParams:
e.logger.Debug("Resource is late-initialized.")
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
@@ -196,12 +336,24 @@ func (e *external) Observe(ctx context.Context, mg xpresource.Managed) (managed.
}, nil
// now we do a Workspace.Refresh
default:
if e.eventHandler != nil {
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
e.eventHandler.Forget(rateLimiterStatus, name)
}
// TODO(cem): Consider skipping diff calculation (terraform plan) to
// avoid potential config validation errors in the import path. See
// https://github.com/crossplane/upjet/pull/461
plan, err := e.workspace.Plan(ctx)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, errPlan)
}
resource.SetUpToDateCondition(mg, plan.UpToDate)
e.logger.Debug("Called plan on the resource.", "upToDate", plan.UpToDate)
return managed.ExternalObservation{
ResourceExists: true,
@@ -211,9 +363,26 @@
}
}
func addTTR(mg xpresource.Managed) {
gvk := mg.GetObjectKind().GroupVersionKind()
metrics.TTRMeasurements.WithLabelValues(gvk.Group, gvk.Version, gvk.Kind).Observe(time.Since(mg.GetCreationTimestamp().Time).Seconds())
}
func (e *external) Create(ctx context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) {
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalCreation{}, errors.Wrapf(err, "cannot schedule a native provider during create: %s", mg.GetUID())
}
if requeued {
return managed.ExternalCreation{}, nil
}
defer e.stopProvider()
if e.config.UseAsync {
-return managed.ExternalCreation{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Apply(mg.GetName())), errStartAsyncApply)
+return managed.ExternalCreation{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Create(name)), errStartAsyncApply)
}
tr, ok := mg.(resource.Terraformed)
if !ok {
@@ -239,8 +408,20 @@
}
func (e *external) Update(ctx context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrapf(err, "cannot schedule a native provider during update: %s", mg.GetUID())
}
if requeued {
return managed.ExternalUpdate{}, nil
}
defer e.stopProvider()
if e.config.UseAsync {
-return managed.ExternalUpdate{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Apply(mg.GetName())), errStartAsyncApply)
+return managed.ExternalUpdate{}, errors.Wrap(e.workspace.ApplyAsync(e.callback.Update(name)), errStartAsyncApply)
}
tr, ok := mg.(resource.Terraformed)
if !ok {
@@ -257,9 +438,74 @@ func (e *external) Update(ctx context.Context, mg xpresource.Managed) (managed.E
return managed.ExternalUpdate{}, errors.Wrap(tr.SetObservation(attr), "cannot set observation")
}
-func (e *external) Delete(ctx context.Context, mg xpresource.Managed) error {
-if e.config.UseAsync {
-return errors.Wrap(e.workspace.DestroyAsync(e.callback.Destroy(mg.GetName())), errStartAsyncDestroy)
+func (e *external) Delete(ctx context.Context, mg xpresource.Managed) (managed.ExternalDelete, error) {
+name := types.NamespacedName{
+Namespace: mg.GetNamespace(),
+Name: mg.GetName(),
+}
-return errors.Wrap(e.workspace.Destroy(ctx), errDestroy)
requeued, err := e.scheduleProvider(name)
if err != nil {
return managed.ExternalDelete{}, errors.Wrapf(err, "cannot schedule a native provider during delete: %s", mg.GetUID())
}
if requeued {
return managed.ExternalDelete{}, nil
}
defer e.stopProvider()
if e.config.UseAsync {
return managed.ExternalDelete{}, errors.Wrap(e.workspace.DestroyAsync(e.callback.Destroy(name)), errStartAsyncDestroy)
}
return managed.ExternalDelete{}, errors.Wrap(e.workspace.Destroy(ctx), errDestroy)
}
func (e *external) Disconnect(_ context.Context) error {
return nil
}
func (e *external) Import(ctx context.Context, tr resource.Terraformed) (managed.ExternalObservation, error) {
res, err := e.workspace.Import(ctx, tr)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, errImport)
}
// We normally don't expect apply/destroy to be in progress when the
// management policy is set to "ObserveOnly". However, this could happen
// if the policy is changed to "ObserveOnly" while an async operation is
// in progress. In that case, we want to wait for the operation to finish
// before we start observing.
if res.ASyncInProgress {
tr.SetConditions(resource.AsyncOperationOngoingCondition())
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
}, nil
}
// If the resource doesn't exist, we don't need to do anything else.
// We report it to the managed reconciler as a non-existent resource and
// it will take care of reporting it to the user as an error case for
// observe-only policy.
if !res.Exists {
return managed.ExternalObservation{
ResourceExists: false,
}, nil
}
// No operation was in progress, our observation completed successfully, and
// we have an observation to consume.
tfstate := map[string]any{}
if err := json.JSParser.Unmarshal(res.State.GetAttributes(), &tfstate); err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot unmarshal state attributes")
}
if err := tr.SetObservation(tfstate); err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot set observation")
}
conn, err := resource.GetConnectionDetails(tfstate, tr, e.config)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
tr.SetConditions(xpv1.Available())
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ConnectionDetails: conn,
}, nil
}
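The Import flow above reduces to a three-way decision: an in-flight async operation short-circuits first, then non-existence, then a successful observation. A minimal stand-alone sketch of that ordering, using simplified stand-in types rather than upjet's actual `ImportResult`/`ExternalObservation`:

```go
package main

import "fmt"

// importResult is a simplified stand-in for the workspace import result
// consumed above; the real type also carries Terraform state.
type importResult struct {
	AsyncInProgress bool
	Exists          bool
}

// observation mirrors the two boolean fields the Import method sets.
type observation struct {
	ResourceExists   bool
	ResourceUpToDate bool
}

// classify reproduces the decision order of the Import method: an ongoing
// apply/destroy is reported as "exists and up to date" so the reconciler
// waits, a missing resource is reported as non-existent, and otherwise the
// observation succeeds.
func classify(res importResult) observation {
	if res.AsyncInProgress {
		return observation{ResourceExists: true, ResourceUpToDate: true}
	}
	if !res.Exists {
		return observation{}
	}
	return observation{ResourceExists: true, ResourceUpToDate: true}
}

func main() {
	fmt.Println(classify(importResult{AsyncInProgress: true}))
	fmt.Println(classify(importResult{Exists: false}))
	fmt.Println(classify(importResult{Exists: true}))
}
```

The ordering matters: checking existence before the async flag would misreport a half-created resource as missing while an apply is still running.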


@@ -0,0 +1,280 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"fmt"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
tferrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
// TerraformPluginFrameworkAsyncConnector is a managed reconciler Connecter
// implementation for reconciling Terraform plugin framework based
// resources.
type TerraformPluginFrameworkAsyncConnector struct {
*TerraformPluginFrameworkConnector
callback CallbackProvider
eventHandler *handler.EventHandler
}
// TerraformPluginFrameworkAsyncOption represents a configuration option for
// a TerraformPluginFrameworkAsyncConnector object.
type TerraformPluginFrameworkAsyncOption func(connector *TerraformPluginFrameworkAsyncConnector)
// NewTerraformPluginFrameworkAsyncConnector initializes a new
// TerraformPluginFrameworkAsyncConnector.
func NewTerraformPluginFrameworkAsyncConnector(kube client.Client,
ots *OperationTrackerStore,
sf terraform.SetupFn,
cfg *config.Resource,
opts ...TerraformPluginFrameworkAsyncOption,
) *TerraformPluginFrameworkAsyncConnector {
nfac := &TerraformPluginFrameworkAsyncConnector{
TerraformPluginFrameworkConnector: NewTerraformPluginFrameworkConnector(kube, sf, cfg, ots),
}
for _, f := range opts {
f(nfac)
}
return nfac
}
func (c *TerraformPluginFrameworkAsyncConnector) Connect(ctx context.Context, mg xpresource.Managed) (managed.ExternalClient, error) {
ec, err := c.TerraformPluginFrameworkConnector.Connect(ctx, mg)
if err != nil {
return nil, errors.Wrap(err, "cannot initialize the Terraform Plugin Framework async external client")
}
return &terraformPluginFrameworkAsyncExternalClient{
terraformPluginFrameworkExternalClient: ec.(*terraformPluginFrameworkExternalClient),
callback: c.callback,
eventHandler: c.eventHandler,
}, nil
}
// WithTerraformPluginFrameworkAsyncConnectorEventHandler configures the EventHandler so that
// the Terraform Plugin Framework external clients can requeue reconciliation requests.
func WithTerraformPluginFrameworkAsyncConnectorEventHandler(e *handler.EventHandler) TerraformPluginFrameworkAsyncOption {
return func(c *TerraformPluginFrameworkAsyncConnector) {
c.eventHandler = e
}
}
// WithTerraformPluginFrameworkAsyncCallbackProvider configures the controller
// to use the async variants of the Terraform client functions and to run the
// given callbacks once those operations complete.
func WithTerraformPluginFrameworkAsyncCallbackProvider(ac CallbackProvider) TerraformPluginFrameworkAsyncOption {
return func(c *TerraformPluginFrameworkAsyncConnector) {
c.callback = ac
}
}
// WithTerraformPluginFrameworkAsyncLogger configures a logger for the TerraformPluginFrameworkAsyncConnector.
func WithTerraformPluginFrameworkAsyncLogger(l logging.Logger) TerraformPluginFrameworkAsyncOption {
return func(c *TerraformPluginFrameworkAsyncConnector) {
c.logger = l
}
}
// WithTerraformPluginFrameworkAsyncMetricRecorder configures a metrics.MetricRecorder for the
// TerraformPluginFrameworkAsyncConnector.
func WithTerraformPluginFrameworkAsyncMetricRecorder(r *metrics.MetricRecorder) TerraformPluginFrameworkAsyncOption {
return func(c *TerraformPluginFrameworkAsyncConnector) {
c.metricRecorder = r
}
}
// WithTerraformPluginFrameworkAsyncManagementPolicies configures whether the client should
// handle management policies.
func WithTerraformPluginFrameworkAsyncManagementPolicies(isManagementPoliciesEnabled bool) TerraformPluginFrameworkAsyncOption {
return func(c *TerraformPluginFrameworkAsyncConnector) {
c.isManagementPoliciesEnabled = isManagementPoliciesEnabled
}
}
type terraformPluginFrameworkAsyncExternalClient struct {
*terraformPluginFrameworkExternalClient
callback CallbackProvider
eventHandler *handler.EventHandler
}
func (n *terraformPluginFrameworkAsyncExternalClient) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) {
if n.opTracker.LastOperation.IsRunning() {
n.logger.WithValues("opType", n.opTracker.LastOperation.Type).Debug("ongoing async operation")
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
}, nil
}
n.opTracker.LastOperation.Clear(true)
o, err := n.terraformPluginFrameworkExternalClient.Observe(ctx, mg)
// clear any previously reported LastAsyncOperation error condition here,
// because there are no pending updates on the existing resource and it's
// not scheduled to be deleted.
if err == nil && o.ResourceExists && o.ResourceUpToDate && !meta.WasDeleted(mg) {
mg.(resource.Terraformed).SetConditions(resource.LastAsyncOperationCondition(nil))
mg.(resource.Terraformed).SetConditions(xpv1.ReconcileSuccess())
n.opTracker.LastOperation.Clear(false)
}
return o, err
}
// panicHandler wraps an error, so that deferred functions that will
// be executed on a panic can access the error more conveniently.
type panicHandler struct {
err error
}
// recoverIfPanic recovers from panics, if any. Upon recovery, the
// error is set to a recovery message. Otherwise, the error is left
// unmodified. Calls to this function should be deferred directly:
// `defer ph.recoverIfPanic()`. Panic recovery won't work if the call
// is wrapped in another function call, such as `defer func() {
// ph.recoverIfPanic() }()`. On recovery, API machinery panic handlers
// run. The implementation follows the outline of panic recovery
// mechanism in controller-runtime:
// https://github.com/kubernetes-sigs/controller-runtime/blob/v0.17.3/pkg/internal/controller/controller.go#L105-L112
func (ph *panicHandler) recoverIfPanic(ctx context.Context) {
if r := recover(); r != nil {
for _, fn := range utilruntime.PanicHandlers {
fn(ctx, r)
}
ph.err = fmt.Errorf("recovered from panic: %v", r)
}
}
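The "deferred directly" requirement in the comment above follows from Go's `recover` semantics: `recover` only stops a panic when it is called directly by a deferred function, not by a function the deferred function calls. A minimal sketch (dropping the `ctx` parameter and the apimachinery `PanicHandlers` for brevity):

```go
package main

import "fmt"

// panicHandler mirrors the pattern above: a deferred method stores the
// recovered panic as an error.
type panicHandler struct{ err error }

func (ph *panicHandler) recoverIfPanic() {
	if r := recover(); r != nil {
		ph.err = fmt.Errorf("recovered from panic: %v", r)
	}
}

// direct defers the method itself, so recover() runs directly inside a
// deferred function and stops the panic.
func direct() (ph panicHandler) {
	defer ph.recoverIfPanic()
	panic("boom")
}

// wrapped calls the method from inside another deferred closure; recover()
// then returns nil and the panic keeps unwinding, so an outer recover is
// needed here just to keep the example alive.
func wrapped() (ph panicHandler, escaped bool) {
	defer func() {
		if recover() != nil {
			escaped = true // the panic propagated past recoverIfPanic
		}
	}()
	defer func() { ph.recoverIfPanic() }()
	panic("boom")
}

func main() {
	d := direct()
	fmt.Println("direct recovered:", d.err != nil)
	w, escaped := wrapped()
	fmt.Println("wrapped recovered:", w.err != nil, "panic escaped:", escaped)
}
```

This is why the generated clients write `defer ph.recoverIfPanic(ctx)` rather than wrapping the call in an anonymous function.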
func (n *terraformPluginFrameworkAsyncExternalClient) Create(_ context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("create") {
return managed.ExternalCreation{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncCreateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async create ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Create(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async create callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async create starting...")
_, ph.err = n.terraformPluginFrameworkExternalClient.Create(ctx, mg)
}()
return managed.ExternalCreation{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginFrameworkAsyncExternalClient) Update(_ context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("update") {
return managed.ExternalUpdate{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncUpdateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async update ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Update(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async update callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async update starting...")
_, ph.err = n.terraformPluginFrameworkExternalClient.Update(ctx, mg)
}()
return managed.ExternalUpdate{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginFrameworkAsyncExternalClient) Delete(_ context.Context, mg xpresource.Managed) (managed.ExternalDelete, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
switch {
case n.opTracker.LastOperation.Type == "delete":
n.opTracker.logger.Debug("The previous delete operation is still ongoing")
return managed.ExternalDelete{}, nil
case !n.opTracker.LastOperation.MarkStart("delete"):
return managed.ExternalDelete{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncDeleteFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async delete ended.", "error", err)
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Destroy(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async delete callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async delete starting...")
_, ph.err = n.terraformPluginFrameworkExternalClient.Delete(ctx, mg)
}()
return managed.ExternalDelete{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginFrameworkAsyncExternalClient) Disconnect(_ context.Context) error {
return nil
}
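The `With...` helpers above are instances of Go's functional options pattern: each returns a closure that mutates the connector, and the constructor applies them in order over defaults. A generic minimal sketch under that assumption (the `connector` fields and option names here are illustrative, not upjet's):

```go
package main

import "fmt"

// connector is a stand-in for the async connector configured above.
type connector struct {
	loggerName string
	timeoutSec int
}

// option mutates a connector, mirroring the TerraformPluginFrameworkAsyncOption type.
type option func(*connector)

func withLoggerName(name string) option {
	return func(c *connector) { c.loggerName = name }
}

func withTimeout(seconds int) option {
	return func(c *connector) { c.timeoutSec = seconds }
}

// newConnector applies each option in order over defaults, exactly as
// NewTerraformPluginFrameworkAsyncConnector loops over its opts.
func newConnector(opts ...option) *connector {
	c := &connector{loggerName: "nop", timeoutSec: 30}
	for _, f := range opts {
		f(c)
	}
	return c
}

func main() {
	c := newConnector(withTimeout(60), withLoggerName("debug"))
	fmt.Println(c.loggerName, c.timeoutSec)
}
```

The pattern keeps the constructor signature stable as new knobs (event handlers, metric recorders, management policies) are added.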


@@ -0,0 +1,257 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/controller/handler"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
tferrors "github.com/crossplane/upjet/v2/pkg/terraform/errors"
)
var defaultAsyncTimeout = 1 * time.Hour
// TerraformPluginSDKAsyncConnector is a managed reconciler Connecter
// implementation for reconciling Terraform plugin SDK v2 based
// resources.
type TerraformPluginSDKAsyncConnector struct {
*TerraformPluginSDKConnector
callback CallbackProvider
eventHandler *handler.EventHandler
}
// TerraformPluginSDKAsyncOption represents a configuration option for
// a TerraformPluginSDKAsyncConnector object.
type TerraformPluginSDKAsyncOption func(connector *TerraformPluginSDKAsyncConnector)
// NewTerraformPluginSDKAsyncConnector initializes a new
// TerraformPluginSDKAsyncConnector.
func NewTerraformPluginSDKAsyncConnector(kube client.Client, ots *OperationTrackerStore, sf terraform.SetupFn, cfg *config.Resource, opts ...TerraformPluginSDKAsyncOption) *TerraformPluginSDKAsyncConnector {
nfac := &TerraformPluginSDKAsyncConnector{
TerraformPluginSDKConnector: NewTerraformPluginSDKConnector(kube, sf, cfg, ots),
}
for _, f := range opts {
f(nfac)
}
return nfac
}
func (c *TerraformPluginSDKAsyncConnector) Connect(ctx context.Context, mg xpresource.Managed) (managed.ExternalClient, error) {
ec, err := c.TerraformPluginSDKConnector.Connect(ctx, mg)
if err != nil {
return nil, errors.Wrap(err, "cannot initialize the Terraform plugin SDK async external client")
}
return &terraformPluginSDKAsyncExternal{
terraformPluginSDKExternal: ec.(*terraformPluginSDKExternal),
callback: c.callback,
eventHandler: c.eventHandler,
}, nil
}
// WithTerraformPluginSDKAsyncConnectorEventHandler configures the
// EventHandler so that the Terraform plugin SDK external clients can requeue
// reconciliation requests.
func WithTerraformPluginSDKAsyncConnectorEventHandler(e *handler.EventHandler) TerraformPluginSDKAsyncOption {
return func(c *TerraformPluginSDKAsyncConnector) {
c.eventHandler = e
}
}
// WithTerraformPluginSDKAsyncCallbackProvider configures the controller to use
// the async variants of the Terraform client functions and to run the given
// callbacks once those operations complete.
func WithTerraformPluginSDKAsyncCallbackProvider(ac CallbackProvider) TerraformPluginSDKAsyncOption {
return func(c *TerraformPluginSDKAsyncConnector) {
c.callback = ac
}
}
// WithTerraformPluginSDKAsyncLogger configures a logger for the
// TerraformPluginSDKAsyncConnector.
func WithTerraformPluginSDKAsyncLogger(l logging.Logger) TerraformPluginSDKAsyncOption {
return func(c *TerraformPluginSDKAsyncConnector) {
c.logger = l
}
}
// WithTerraformPluginSDKAsyncMetricRecorder configures a
// metrics.MetricRecorder for the TerraformPluginSDKAsyncConnector.
func WithTerraformPluginSDKAsyncMetricRecorder(r *metrics.MetricRecorder) TerraformPluginSDKAsyncOption {
return func(c *TerraformPluginSDKAsyncConnector) {
c.metricRecorder = r
}
}
// WithTerraformPluginSDKAsyncManagementPolicies configures whether the client
// should handle management policies.
func WithTerraformPluginSDKAsyncManagementPolicies(isManagementPoliciesEnabled bool) TerraformPluginSDKAsyncOption {
return func(c *TerraformPluginSDKAsyncConnector) {
c.isManagementPoliciesEnabled = isManagementPoliciesEnabled
}
}
type terraformPluginSDKAsyncExternal struct {
*terraformPluginSDKExternal
callback CallbackProvider
eventHandler *handler.EventHandler
}
type CallbackFn func(error, context.Context) error
func (n *terraformPluginSDKAsyncExternal) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) {
if n.opTracker.LastOperation.IsRunning() {
n.logger.WithValues("opType", n.opTracker.LastOperation.Type).Debug("ongoing async operation")
return managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
}, nil
}
n.opTracker.LastOperation.Clear(true)
o, err := n.terraformPluginSDKExternal.Observe(ctx, mg)
// clear any previously reported LastAsyncOperation error condition here,
// because there are no pending updates on the existing resource and it's
// not scheduled to be deleted.
if err == nil && o.ResourceExists && o.ResourceUpToDate && !meta.WasDeleted(mg) {
mg.(resource.Terraformed).SetConditions(resource.LastAsyncOperationCondition(nil))
mg.(resource.Terraformed).SetConditions(xpv1.ReconcileSuccess())
n.opTracker.LastOperation.Clear(false)
}
return o, err
}
func (n *terraformPluginSDKAsyncExternal) Create(_ context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("create") {
return managed.ExternalCreation{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncCreateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async create ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Create(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async create callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async create starting...", "tfID", n.opTracker.GetTfID())
_, ph.err = n.terraformPluginSDKExternal.Create(ctx, mg)
}()
return managed.ExternalCreation{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginSDKAsyncExternal) Update(_ context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
if !n.opTracker.LastOperation.MarkStart("update") {
return managed.ExternalUpdate{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncUpdateFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async update ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Update(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async update callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async update starting...", "tfID", n.opTracker.GetTfID())
_, ph.err = n.terraformPluginSDKExternal.Update(ctx, mg)
}()
return managed.ExternalUpdate{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginSDKAsyncExternal) Delete(_ context.Context, mg xpresource.Managed) (managed.ExternalDelete, error) { //nolint:contextcheck // we intentionally use a fresh context for the async operation
switch {
case n.opTracker.LastOperation.Type == "delete":
n.opTracker.logger.Debug("The previous delete operation is still ongoing", "tfID", n.opTracker.GetTfID())
return managed.ExternalDelete{}, nil
case !n.opTracker.LastOperation.MarkStart("delete"):
return managed.ExternalDelete{}, errors.Errorf("%s operation that started at %s is still running", n.opTracker.LastOperation.Type, n.opTracker.LastOperation.StartTime().String())
}
ctx, cancel := context.WithDeadline(context.Background(), n.opTracker.LastOperation.StartTime().Add(defaultAsyncTimeout))
go func() {
// The order of deferred functions, executed last-in-first-out, is
// significant. The context should be canceled last, because it is
// used by the finishing operations. Panic recovery should execute
// first, because the finishing operations report the panic error,
// if any.
var ph panicHandler
defer cancel()
defer func() { // Finishing operations
err := tferrors.NewAsyncDeleteFailed(ph.err)
n.opTracker.LastOperation.SetError(err)
n.opTracker.logger.Debug("Async delete ended.", "error", err, "tfID", n.opTracker.GetTfID())
n.opTracker.LastOperation.MarkEnd()
name := types.NamespacedName{
Namespace: mg.GetNamespace(),
Name: mg.GetName(),
}
if cErr := n.callback.Destroy(name)(err, ctx); cErr != nil {
n.opTracker.logger.Info("Async delete callback failed", "error", cErr.Error())
}
}()
defer ph.recoverIfPanic(ctx)
n.opTracker.logger.Debug("Async delete starting...", "tfID", n.opTracker.GetTfID())
_, ph.err = n.terraformPluginSDKExternal.Delete(ctx, mg)
}()
return managed.ExternalDelete{}, n.opTracker.LastOperation.Error()
}
func (n *terraformPluginSDKAsyncExternal) Disconnect(_ context.Context) error {
return nil
}


@@ -0,0 +1,337 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
tf "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
var (
cfgAsync = &config.Resource{
TerraformResource: &schema.Resource{
Timeouts: &schema.ResourceTimeout{
Create: &timeout,
Read: &timeout,
Update: &timeout,
Delete: &timeout,
},
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"id": {
Type: schema.TypeString,
Computed: true,
Required: false,
},
"map": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"list": {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
},
},
ExternalName: config.IdentifierFromProvider,
Sensitive: config.Sensitive{AdditionalConnectionDetailsFn: func(attr map[string]any) (map[string][]byte, error) {
return nil, nil
}},
}
objAsync = &fake.Terraformed{
Parameterizable: fake.Parameterizable{
Parameters: map[string]any{
"name": "example",
"map": map[string]any{
"key": "value",
},
"list": []any{"elem1", "elem2"},
},
},
Observable: fake.Observable{
Observation: map[string]any{},
},
}
)
func prepareTerraformPluginSDKAsyncExternal(r Resource, cfg *config.Resource, fns CallbackFns) *terraformPluginSDKAsyncExternal {
schemaBlock := cfg.TerraformResource.CoreConfigSchema()
rawConfig, err := schema.JSONMapToStateValue(map[string]any{"name": "example"}, schemaBlock)
if err != nil {
panic(err)
}
return &terraformPluginSDKAsyncExternal{
terraformPluginSDKExternal: &terraformPluginSDKExternal{
ts: terraform.Setup{},
resourceSchema: r,
config: cfg,
params: map[string]any{
"name": "example",
},
rawConfig: rawConfig,
logger: logTest,
opTracker: NewAsyncTracker(),
},
callback: fns,
}
}
func TestAsyncTerraformPluginSDKConnect(t *testing.T) {
type args struct {
setupFn terraform.SetupFn
cfg *config.Resource
ots *OperationTrackerStore
obj xpresource.Managed
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
setupFn: func(_ context.Context, _ client.Client, _ xpresource.Managed) (terraform.Setup, error) {
return terraform.Setup{}, nil
},
cfg: cfgAsync,
obj: objAsync,
ots: ots,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
c := NewTerraformPluginSDKAsyncConnector(nil, tc.args.ots, tc.args.setupFn, tc.args.cfg, WithTerraformPluginSDKAsyncLogger(logTest))
_, err := c.Connect(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nConnect(...): -want error, +got error:\n", diff)
}
})
}
}
func TestAsyncTerraformPluginSDKObserve(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj xpresource.Managed
}
type want struct {
obs managed.ExternalObservation
err error
}
cases := map[string]struct {
args
want
}{
"NotExists": {
args: args{
r: mockResource{
RefreshWithoutUpgradeFn: func(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return nil, nil
},
},
cfg: cfgAsync,
obj: objAsync,
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: false,
ResourceUpToDate: false,
ResourceLateInitialized: false,
ConnectionDetails: nil,
Diff: "",
},
},
},
"UpToDate": {
args: args{
r: mockResource{
RefreshWithoutUpgradeFn: func(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id", Attributes: map[string]string{"name": "example"}}, nil
},
},
cfg: cfgAsync,
obj: objAsync,
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ResourceLateInitialized: true,
ConnectionDetails: nil,
Diff: "",
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKAsyncExternal := prepareTerraformPluginSDKAsyncExternal(tc.args.r, tc.args.cfg, CallbackFns{})
observation, err := terraformPluginSDKAsyncExternal.Observe(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.obs, observation); diff != "" {
t.Errorf("\n%s\nObserve(...): -want observation, +got observation:\n", diff)
}
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nObserve(...): -want error, +got error:\n", diff)
}
})
}
}
func TestAsyncTerraformPluginSDKCreate(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj xpresource.Managed
fns CallbackFns
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id"}, nil
},
},
cfg: cfgAsync,
obj: objAsync,
fns: CallbackFns{
CreateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
return nil
}
},
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKAsyncExternal := prepareTerraformPluginSDKAsyncExternal(tc.args.r, tc.args.cfg, tc.args.fns)
_, err := terraformPluginSDKAsyncExternal.Create(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n", diff)
}
})
}
}
func TestAsyncTerraformPluginSDKUpdate(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj xpresource.Managed
fns CallbackFns
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id"}, nil
},
},
cfg: cfgAsync,
obj: objAsync,
fns: CallbackFns{
UpdateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
return nil
}
},
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKAsyncExternal := prepareTerraformPluginSDKAsyncExternal(tc.args.r, tc.args.cfg, tc.args.fns)
_, err := terraformPluginSDKAsyncExternal.Update(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n", diff)
}
})
}
}
func TestAsyncTerraformPluginSDKDelete(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj xpresource.Managed
fns CallbackFns
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id"}, nil
},
},
cfg: cfgAsync,
obj: objAsync,
fns: CallbackFns{
DestroyFn: func(nn types.NamespacedName) terraform.CallbackFn {
return func(err error, ctx context.Context) error {
return nil
}
},
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKAsyncExternal := prepareTerraformPluginSDKAsyncExternal(tc.args.r, tc.args.cfg, tc.args.fns)
_, err := terraformPluginSDKAsyncExternal.Delete(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\nDelete(...): -want error, +got error:\n%s", diff)
}
})
}
}


@ -1,6 +1,6 @@
/*
Copyright 2021 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
@ -8,22 +8,29 @@ import (
"context"
"testing"
xpv1 "github.com/crossplane/crossplane-runtime/apis/common/v1"
xpmeta "github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/pkg/test"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
xpmeta "github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
xpfake "github.com/crossplane/crossplane-runtime/v2/pkg/resource/fake"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
"github.com/pkg/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/upbound/upjet/pkg/config"
"github.com/upbound/upjet/pkg/resource"
"github.com/upbound/upjet/pkg/resource/fake"
"github.com/upbound/upjet/pkg/resource/json"
"github.com/upbound/upjet/pkg/terraform"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
const (
testPath = "test/path"
)
var (
@ -39,6 +46,10 @@ var (
},
},
}
exampleCriticalAnnotations = map[string]string{
resource.AnnotationKeyPrivateRawAttribute: "",
xpmeta.AnnotationKeyExternalName: "some-id",
}
)
type WorkspaceFns struct {
@ -47,6 +58,7 @@ type WorkspaceFns struct {
DestroyAsyncFn func(callback terraform.CallbackFn) error
DestroyFn func(ctx context.Context) error
RefreshFn func(ctx context.Context) (terraform.RefreshResult, error)
ImportFn func(ctx context.Context, tr resource.Terraformed) (terraform.ImportResult, error)
PlanFn func(ctx context.Context) (terraform.PlanResult, error)
}
@ -74,6 +86,10 @@ func (c WorkspaceFns) Plan(ctx context.Context) (terraform.PlanResult, error) {
return c.PlanFn(ctx)
}
func (c WorkspaceFns) Import(ctx context.Context, tr resource.Terraformed) (terraform.ImportResult, error) {
return c.ImportFn(ctx, tr)
}
type StoreFns struct {
WorkspaceFn func(ctx context.Context, c resource.SecretClient, tr resource.Terraformed, ts terraform.Setup, cfg *config.Resource) (*terraform.Workspace, error)
}
@ -83,15 +99,20 @@ func (s StoreFns) Workspace(ctx context.Context, c resource.SecretClient, tr res
}
type CallbackFns struct {
ApplyFn func(string) terraform.CallbackFn
DestroyFn func(string) terraform.CallbackFn
CreateFn func(types.NamespacedName) terraform.CallbackFn
UpdateFn func(types.NamespacedName) terraform.CallbackFn
DestroyFn func(types.NamespacedName) terraform.CallbackFn
}
func (c CallbackFns) Apply(name string) terraform.CallbackFn {
return c.ApplyFn(name)
func (c CallbackFns) Create(name types.NamespacedName) terraform.CallbackFn {
return c.CreateFn(name)
}
func (c CallbackFns) Destroy(name string) terraform.CallbackFn {
func (c CallbackFns) Update(name types.NamespacedName) terraform.CallbackFn {
return c.UpdateFn(name)
}
func (c CallbackFns) Destroy(name types.NamespacedName) terraform.CallbackFn {
return c.DestroyFn(name)
}
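The callbacks above are now keyed by types.NamespacedName instead of a bare name string, so namespaced managed resources with the same name in different namespaces get distinct async callbacks. A minimal self-contained sketch of that shape, using local stand-ins for types.NamespacedName and terraform.CallbackFn rather than the real apimachinery/upjet types:

```go
package main

import "fmt"

// NamespacedName is a local stand-in for k8s.io/apimachinery's
// types.NamespacedName, shown here only to illustrate the key change.
type NamespacedName struct {
	Namespace string
	Name      string
}

func (n NamespacedName) String() string { return n.Namespace + "/" + n.Name }

// CallbackFn is a hypothetical stub mirroring the shape of a
// terraform.CallbackFn factory result.
type CallbackFn func(err error) error

// CallbackFns registers per-operation callbacks keyed by NamespacedName,
// so "team-a/bucket" and "team-b/bucket" no longer collide.
type CallbackFns struct {
	CreateFn func(NamespacedName) CallbackFn
}

func main() {
	fns := CallbackFns{
		CreateFn: func(nn NamespacedName) CallbackFn {
			fmt.Println("registered create callback for", nn.String())
			return nil
		},
	}
	fns.CreateFn(NamespacedName{Namespace: "team-a", Name: "bucket"})
}
```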
@ -154,7 +175,7 @@ func TestConnect(t *testing.T) {
},
store: StoreFns{
WorkspaceFn: func(_ context.Context, _ resource.SecretClient, _ resource.Terraformed, _ terraform.Setup, _ *config.Resource) (*terraform.Workspace, error) {
return nil, nil
return terraform.NewWorkspace(testPath), nil
},
},
},
@ -175,9 +196,11 @@ func TestObserve(t *testing.T) {
type args struct {
w Workspace
obj xpresource.Managed
client client.Client
}
type want struct {
obs managed.ExternalObservation
condition *xpv1.Condition
err error
}
cases := map[string]struct {
@ -196,7 +219,13 @@ func TestObserve(t *testing.T) {
"RefreshFailed": {
reason: "It should return error if we cannot refresh",
args: args{
obj: &fake.Terraformed{},
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{}, errBoom
@ -210,7 +239,13 @@ func TestObserve(t *testing.T) {
"RefreshNotFound": {
reason: "It should not report error in case resource is not found",
args: args{
obj: &fake.Terraformed{},
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{Exists: false}, nil
@ -221,11 +256,17 @@ func TestObserve(t *testing.T) {
"RefreshInProgress": {
reason: "It should report exists and up-to-date if an operation is ongoing",
args: args{
obj: &fake.Terraformed{},
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{
IsApplying: true,
ASyncInProgress: true,
}, nil
},
},
@ -240,7 +281,19 @@ func TestObserve(t *testing.T) {
"TransitionToReady": {
reason: "We should mark the resource as ready if the refresh succeeds and there is no ongoing operation",
args: args{
obj: &fake.Terraformed{},
obj: &fake.Terraformed{
Managed: xpfake.Managed{
ConditionedStatus: xpv1.ConditionedStatus{
// empty
},
ObjectMeta: metav1.ObjectMeta{
Annotations: exampleCriticalAnnotations,
},
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{
@ -257,6 +310,7 @@ func TestObserve(t *testing.T) {
ConnectionDetails: nil,
ResourceLateInitialized: false,
},
condition: available(),
},
},
"PlanFailed": {
@ -265,13 +319,14 @@ func TestObserve(t *testing.T) {
obj: &fake.Terraformed{
Managed: xpfake.Managed{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
xpmeta.AnnotationKeyExternalName: "some-id",
},
Annotations: exampleCriticalAnnotations,
},
ConditionedStatus: xpv1.ConditionedStatus{
Conditions: []xpv1.Condition{xpv1.Available()},
},
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
},
},
w: WorkspaceFns{
@ -290,13 +345,17 @@ func TestObserve(t *testing.T) {
err: errors.Wrap(errBoom, errPlan),
},
},
"Success": {
"AnnotationsUpdated": {
reason: "We should update annotations if they are not up-to-date as a priority",
args: args{
obj: &fake.Terraformed{
Managed: xpfake.Managed{
ConditionedStatus: xpv1.ConditionedStatus{
Conditions: []xpv1.Condition{xpv1.Available()},
},
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
},
},
w: WorkspaceFns{
@ -306,6 +365,116 @@ func TestObserve(t *testing.T) {
State: exampleState,
}, nil
},
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ResourceLateInitialized: true,
},
},
},
"ObserveOnlyAsyncInProgress": {
reason: "We should report exists and up-to-date if an operation is ongoing and the policy is observe-only",
args: args{
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionObserve},
},
ConditionedStatus: xpv1.ConditionedStatus{
Conditions: []xpv1.Condition{xpv1.Available()},
},
},
},
w: WorkspaceFns{
ImportFn: func(ctx context.Context, tr resource.Terraformed) (terraform.ImportResult, error) {
return terraform.ImportResult{
ASyncInProgress: true,
}, nil
},
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
},
},
},
"ObserveOnlyImportFails": {
reason: "We should report an error if the import fails and the policy is observe-only",
args: args{
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionObserve},
},
ConditionedStatus: xpv1.ConditionedStatus{
Conditions: []xpv1.Condition{xpv1.Available()},
},
},
},
w: WorkspaceFns{
ImportFn: func(ctx context.Context, tr resource.Terraformed) (terraform.ImportResult, error) {
return terraform.ImportResult{}, errBoom
},
},
},
want: want{
err: errors.Wrap(errBoom, errImport),
},
},
"ObserveOnlyDoesNotExist": {
reason: "We should report if the resource does not exist and the policy is observe-only",
args: args{
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionObserve},
},
ConditionedStatus: xpv1.ConditionedStatus{
Conditions: []xpv1.Condition{xpv1.Available()},
},
},
},
w: WorkspaceFns{
ImportFn: func(ctx context.Context, tr resource.Terraformed) (terraform.ImportResult, error) {
return terraform.ImportResult{
Exists: false,
}, nil
},
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: false,
},
},
},
"ObserveOnlySuccess": {
reason: "We should successfully observe the resource if the import succeeds and the policy is observe-only",
args: args{
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionObserve},
},
ConditionedStatus: xpv1.ConditionedStatus{
// empty
},
ObjectMeta: metav1.ObjectMeta{
Annotations: exampleCriticalAnnotations,
},
},
},
w: WorkspaceFns{
ImportFn: func(ctx context.Context, tr resource.Terraformed) (terraform.ImportResult, error) {
return terraform.ImportResult{
Exists: true,
State: exampleState,
}, nil
},
PlanFn: func(_ context.Context) (terraform.PlanResult, error) {
return terraform.PlanResult{UpToDate: true}, nil
},
@ -315,22 +484,137 @@ func TestObserve(t *testing.T) {
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ResourceLateInitialized: true,
},
condition: available(),
},
},
"TransitionToReadyManagementPolicyDefault": {
reason: "We should mark the resource as ready if the refresh succeeds and there is no ongoing operation",
args: args{
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionAll},
},
ConditionedStatus: xpv1.ConditionedStatus{
// empty
},
ObjectMeta: metav1.ObjectMeta{
Annotations: exampleCriticalAnnotations,
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{
Exists: true,
State: exampleState,
}, nil
},
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ConnectionDetails: nil,
ResourceLateInitialized: false,
},
condition: available(),
},
},
"AnnotationsUpdatedManuallyManagementPolicyNoLateInit": {
reason: "We should update annotations manually if they are not up-to-date and the policy is not late-init",
args: args{
client: &test.MockClient{
MockUpdate: func(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error {
if diff := cmp.Diff(exampleCriticalAnnotations, obj.GetAnnotations()); diff != "" {
reason := "Critical annotations should be updated"
t.Errorf("\nReason: %s\n-want, +got:\n%s", reason, diff)
}
return nil
},
},
obj: &fake.Terraformed{
Managed: xpfake.Managed{
ConditionedStatus: xpv1.ConditionedStatus{
// empty
},
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionCreate},
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{
Exists: true,
State: exampleState,
}, nil
},
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
},
condition: available(),
},
},
"AnnotationsUpdatedManuallyManagementPolicyNoLateInitError": {
reason: "Should handle the error of updating annotations manually if they are not up-to-date and the policy is not late-init",
args: args{
client: &test.MockClient{
MockUpdate: func(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error {
return errBoom
},
},
obj: &fake.Terraformed{
Managed: xpfake.Managed{
Manageable: xpfake.Manageable{
Policy: xpv1.ManagementPolicies{xpv1.ManagementActionCreate},
},
},
},
w: WorkspaceFns{
RefreshFn: func(_ context.Context) (terraform.RefreshResult, error) {
return terraform.RefreshResult{
Exists: true,
State: exampleState,
}, nil
},
},
},
want: want{
err: errors.Wrap(errBoom, errUpdateAnnotations),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := &external{workspace: tc.w, config: config.DefaultResource("upjet_resource", nil, nil)}
_, err := e.Observe(context.TODO(), tc.args.obj)
e := &external{workspace: tc.w, config: config.DefaultResource("upjet_resource", nil, nil, nil), kube: tc.args.client, logger: logging.NewNopLogger()}
observation, err := e.Observe(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.obs, observation); diff != "" {
t.Errorf("\n%s\nObserve(...): -want observation, +got observation:\n%s", tc.reason, diff)
}
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nObserve(...): -want error, +got error:\n%s", tc.reason, diff)
}
if tc.want.condition != nil {
if diff := cmp.Diff(*tc.want.condition, tc.args.obj.GetCondition(tc.want.condition.Type), cmpopts.IgnoreTypes(metav1.Time{})); diff != "" {
t.Errorf("\n%s\nObserve(...): -want condition, +got condition:\n%s", tc.reason, diff)
}
}
})
}
}
func available() *xpv1.Condition {
c := xpv1.Available()
return &c
}
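The available() helper returns a pointer to a copy because Go does not allow taking the address of a function's return value directly (&xpv1.Available() does not compile). A minimal self-contained sketch of the idiom, with a local Condition stand-in rather than the real xpv1 type:

```go
package main

import "fmt"

// Condition is a local stand-in for xpv1.Condition.
type Condition struct{ Type, Status string }

// Available mimics a constructor that returns a value, not a pointer.
func Available() Condition { return Condition{Type: "Ready", Status: "True"} }

// available copies the result into a local variable first, because Go
// forbids &Available() on a function's return value.
func available() *Condition {
	c := Available()
	return &c
}

func main() {
	fmt.Println(available().Type)
}
```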
func TestCreate(t *testing.T) {
type args struct {
w Workspace
@ -362,7 +646,7 @@ func TestCreate(t *testing.T) {
UseAsync: true,
},
c: CallbackFns{
ApplyFn: func(s string) terraform.CallbackFn {
CreateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return nil
},
},
@ -428,14 +712,14 @@ func TestUpdate(t *testing.T) {
err: errors.New(errUnexpectedObject),
},
},
"AsyncFailed": {
"AsyncUpdateFailed": {
reason: "It should return error if it cannot trigger the async apply",
args: args{
cfg: &config.Resource{
UseAsync: true,
},
c: CallbackFns{
ApplyFn: func(s string) terraform.CallbackFn {
UpdateFn: func(nn types.NamespacedName) terraform.CallbackFn {
return nil
},
},
@ -450,7 +734,7 @@ func TestUpdate(t *testing.T) {
err: errors.Wrap(errBoom, errStartAsyncApply),
},
},
"SyncApplyFailed": {
"SyncUpdateFailed": {
reason: "It should return error if it cannot apply in sync mode",
args: args{
cfg: &config.Resource{},
@ -499,7 +783,7 @@ func TestDelete(t *testing.T) {
UseAsync: true,
},
c: CallbackFns{
DestroyFn: func(_ string) terraform.CallbackFn {
DestroyFn: func(_ types.NamespacedName) terraform.CallbackFn {
return nil
},
},
@ -533,7 +817,7 @@ func TestDelete(t *testing.T) {
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
e := &external{workspace: tc.w, callback: tc.c, config: tc.cfg}
err := e.Delete(context.TODO(), tc.args.obj)
_, err := e.Delete(context.TODO(), tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n%s", tc.reason, diff)
}


@ -0,0 +1,746 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"encoding/json"
"fmt"
"math"
"math/big"
"strings"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
fwdiag "github.com/hashicorp/terraform-plugin-framework/diag"
fwprovider "github.com/hashicorp/terraform-plugin-framework/provider"
"github.com/hashicorp/terraform-plugin-framework/providerserver"
fwresource "github.com/hashicorp/terraform-plugin-framework/resource"
rschema "github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-go/tfprotov5"
"github.com/hashicorp/terraform-plugin-go/tftypes"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/sets"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
upjson "github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// TerraformPluginFrameworkConnector is an external client, with credentials and
// other configuration parameters, for Terraform Plugin Framework resources. Use
// NewTerraformPluginFrameworkConnector to construct one.
type TerraformPluginFrameworkConnector struct {
getTerraformSetup terraform.SetupFn
kube client.Client
config *config.Resource
logger logging.Logger
metricRecorder *metrics.MetricRecorder
operationTrackerStore *OperationTrackerStore
isManagementPoliciesEnabled bool
}
// TerraformPluginFrameworkConnectorOption allows you to configure TerraformPluginFrameworkConnector.
type TerraformPluginFrameworkConnectorOption func(connector *TerraformPluginFrameworkConnector)
// WithTerraformPluginFrameworkLogger configures a logger for the TerraformPluginFrameworkConnector.
func WithTerraformPluginFrameworkLogger(l logging.Logger) TerraformPluginFrameworkConnectorOption {
return func(c *TerraformPluginFrameworkConnector) {
c.logger = l
}
}
// WithTerraformPluginFrameworkMetricRecorder configures a metrics.MetricRecorder for the
// TerraformPluginFrameworkConnector.
func WithTerraformPluginFrameworkMetricRecorder(r *metrics.MetricRecorder) TerraformPluginFrameworkConnectorOption {
return func(c *TerraformPluginFrameworkConnector) {
c.metricRecorder = r
}
}
// WithTerraformPluginFrameworkManagementPolicies configures whether the client should
// handle management policies.
func WithTerraformPluginFrameworkManagementPolicies(isManagementPoliciesEnabled bool) TerraformPluginFrameworkConnectorOption {
return func(c *TerraformPluginFrameworkConnector) {
c.isManagementPoliciesEnabled = isManagementPoliciesEnabled
}
}
// NewTerraformPluginFrameworkConnector creates a new
// TerraformPluginFrameworkConnector with given options.
func NewTerraformPluginFrameworkConnector(kube client.Client, sf terraform.SetupFn, cfg *config.Resource, ots *OperationTrackerStore, opts ...TerraformPluginFrameworkConnectorOption) *TerraformPluginFrameworkConnector {
connector := &TerraformPluginFrameworkConnector{
getTerraformSetup: sf,
kube: kube,
config: cfg,
operationTrackerStore: ots,
}
for _, f := range opts {
f(connector)
}
return connector
}
type terraformPluginFrameworkExternalClient struct {
ts terraform.Setup
config *config.Resource
logger logging.Logger
metricRecorder *metrics.MetricRecorder
opTracker *AsyncTracker
resource fwresource.Resource
server tfprotov5.ProviderServer
params map[string]any
planResponse *tfprotov5.PlanResourceChangeResponse
resourceSchema rschema.Schema
// the terraform value type associated with the resource schema
resourceValueTerraformType tftypes.Type
}
// Connect makes sure the underlying client is ready to issue requests to the
// provider API.
func (c *TerraformPluginFrameworkConnector) Connect(ctx context.Context, mg xpresource.Managed) (managed.ExternalClient, error) { //nolint:gocyclo
c.metricRecorder.ObserveReconcileDelay(mg.GetObjectKind().GroupVersionKind(), metrics.NameForManaged(mg))
logger := c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "namespace", mg.GetNamespace(), "gvk", mg.GetObjectKind().GroupVersionKind().String())
logger.Debug("Connecting to the service provider")
start := time.Now()
ts, err := c.getTerraformSetup(ctx, c.kube, mg)
metrics.ExternalAPITime.WithLabelValues("connect").Observe(time.Since(start).Seconds())
if err != nil {
return nil, errors.Wrap(err, errGetTerraformSetup)
}
tr := mg.(resource.Terraformed)
opTracker := c.operationTrackerStore.Tracker(tr)
externalName := meta.GetExternalName(tr)
params, err := getExtendedParameters(ctx, tr, externalName, c.config, ts, c.isManagementPoliciesEnabled, c.kube)
if err != nil {
return nil, errors.Wrapf(err, "failed to get the extended parameters for resource %q", client.ObjectKeyFromObject(mg))
}
resourceSchema, err := c.getResourceSchema(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve resource schema")
}
resourceTfValueType := resourceSchema.Type().TerraformType(ctx)
hasState := false
if opTracker.HasFrameworkTFState() {
tfStateValue, err := opTracker.GetFrameworkTFState().Unmarshal(resourceTfValueType)
if err != nil {
return nil, errors.Wrap(err, "cannot unmarshal TF state dynamic value during state existence check")
}
// err is guaranteed nil here due to the early return above.
hasState = !tfStateValue.IsNull()
}
if !hasState {
logger.Debug("Instance state not found in cache, reconstructing...")
tfState, err := tr.GetObservation()
if err != nil {
return nil, errors.Wrap(err, "failed to get the observation")
}
copyParams := len(tfState) == 0
if err = resource.GetSensitiveParameters(ctx, &APISecretClient{kube: c.kube}, tr, tfState, tr.GetConnectionDetailsMapping()); err != nil {
return nil, errors.Wrap(err, "cannot store sensitive parameters into tfState")
}
c.config.ExternalName.SetIdentifierArgumentFn(tfState, externalName)
tfState["id"] = params["id"]
if copyParams {
tfState = copyParameters(tfState, params)
}
tfStateDynamicValue, err := protov5DynamicValueFromMap(tfState, resourceTfValueType)
if err != nil {
return nil, errors.Wrap(err, "cannot construct dynamic value for TF state")
}
opTracker.SetFrameworkTFState(tfStateDynamicValue)
}
configuredProviderServer, err := c.configureProvider(ctx, ts)
if err != nil {
return nil, errors.Wrap(err, "could not configure provider server")
}
return &terraformPluginFrameworkExternalClient{
ts: ts,
config: c.config,
logger: logger,
metricRecorder: c.metricRecorder,
opTracker: opTracker,
resource: c.config.TerraformPluginFrameworkResource,
server: configuredProviderServer,
params: params,
resourceSchema: resourceSchema,
resourceValueTerraformType: resourceTfValueType,
}, nil
}
// getResourceSchema returns the Terraform Plugin Framework-style resource schema for the configured framework resource on the connector
func (c *TerraformPluginFrameworkConnector) getResourceSchema(ctx context.Context) (rschema.Schema, error) {
res := c.config.TerraformPluginFrameworkResource
schemaResp := &fwresource.SchemaResponse{}
res.Schema(ctx, fwresource.SchemaRequest{}, schemaResp)
if schemaResp.Diagnostics.HasError() {
fwErrors := frameworkDiagnosticsToString(schemaResp.Diagnostics)
return rschema.Schema{}, errors.Errorf("could not retrieve resource schema: %s", fwErrors)
}
return schemaResp.Schema, nil
}
// configureProvider returns a configured Terraform protocol v5 provider server
// using the preconfigured provider instance in the terraform setup.
// The provider instance should already be preconfigured at the terraform setup
// layer with the relevant provider meta, if needed by the provider
// implementation.
func (c *TerraformPluginFrameworkConnector) configureProvider(ctx context.Context, ts terraform.Setup) (tfprotov5.ProviderServer, error) {
if ts.FrameworkProvider == nil {
return nil, fmt.Errorf("cannot retrieve framework provider")
}
var schemaResp fwprovider.SchemaResponse
ts.FrameworkProvider.Schema(ctx, fwprovider.SchemaRequest{}, &schemaResp)
if schemaResp.Diagnostics.HasError() {
fwDiags := frameworkDiagnosticsToString(schemaResp.Diagnostics)
return nil, fmt.Errorf("cannot retrieve provider schema: %s", fwDiags)
}
providerServer := providerserver.NewProtocol5(ts.FrameworkProvider)()
providerConfigDynamicVal, err := protov5DynamicValueFromMap(ts.Configuration, schemaResp.Schema.Type().TerraformType(ctx))
if err != nil {
return nil, errors.Wrap(err, "cannot construct dynamic value for TF provider config")
}
configureProviderReq := &tfprotov5.ConfigureProviderRequest{
TerraformVersion: "crossTF000",
Config: providerConfigDynamicVal,
}
providerResp, err := providerServer.ConfigureProvider(ctx, configureProviderReq)
if err != nil {
return nil, errors.Wrap(err, "cannot configure framework provider")
}
if fatalDiags := getFatalDiagnostics(providerResp.Diagnostics); fatalDiags != nil {
return nil, errors.Wrap(fatalDiags, "provider configure request failed")
}
return providerServer, nil
}
// filteredDiffExists filters out diffs that have unknown plan values, which
// correspond to computed fields, and null plan values, which correspond to
// not-specified fields, and reports whether any diff remains. Without this
// filtering, unnecessary diffs would be detected when only computed
// attributes or not-specified arguments differ in the raw diff and no actual
// diff exists in the parametrizable attributes.
func (n *terraformPluginFrameworkExternalClient) filteredDiffExists(rawDiff []tftypes.ValueDiff) bool {
filteredDiff := make([]tftypes.ValueDiff, 0)
for _, diff := range rawDiff {
if diff.Value1 != nil && diff.Value1.IsKnown() && !diff.Value1.IsNull() {
filteredDiff = append(filteredDiff, diff)
}
}
return len(filteredDiff) > 0
}
// getDiffPlanResponse calls the underlying native TF provider's PlanResourceChange RPC,
// and returns the planned state and whether a diff exists.
// If the plan response contains a non-empty RequiresReplace (i.e. the resource
// needs to be recreated), an error is returned, as the Crossplane Resource
// Model (XRM) prohibits resource re-creations, so such a plan is rejected.
func (n *terraformPluginFrameworkExternalClient) getDiffPlanResponse(ctx context.Context,
tfStateValue tftypes.Value) (*tfprotov5.PlanResourceChangeResponse, bool, error) {
tfConfigDynamicVal, err := protov5DynamicValueFromMap(n.params, n.resourceValueTerraformType)
if err != nil {
return nil, false, errors.Wrap(err, "cannot construct dynamic value for TF Config")
}
tfPlannedStateDynamicVal, err := protov5DynamicValueFromMap(n.params, n.resourceValueTerraformType)
if err != nil {
return nil, false, errors.Wrap(err, "cannot construct dynamic value for TF Planned State")
}
prcReq := &tfprotov5.PlanResourceChangeRequest{
TypeName: n.config.Name,
PriorState: n.opTracker.GetFrameworkTFState(),
Config: tfConfigDynamicVal,
ProposedNewState: tfPlannedStateDynamicVal,
}
planResponse, err := n.server.PlanResourceChange(ctx, prcReq)
if err != nil {
return nil, false, errors.Wrap(err, "cannot plan change")
}
if fatalDiags := getFatalDiagnostics(planResponse.Diagnostics); fatalDiags != nil {
return nil, false, errors.Wrap(fatalDiags, "plan resource change request failed")
}
plannedStateValue, err := planResponse.PlannedState.Unmarshal(n.resourceValueTerraformType)
if err != nil {
return nil, false, errors.Wrap(err, "cannot unmarshal planned state")
}
rawDiff, err := plannedStateValue.Diff(tfStateValue)
if err != nil {
return nil, false, errors.Wrap(err, "cannot compare prior state and plan")
}
return planResponse, n.filteredDiffExists(rawDiff), nil
}
func (n *terraformPluginFrameworkExternalClient) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) { //nolint:gocyclo
n.logger.Debug("Observing the external resource")
if meta.WasDeleted(mg) && n.opTracker.IsDeleted() {
return managed.ExternalObservation{
ResourceExists: false,
}, nil
}
readRequest := &tfprotov5.ReadResourceRequest{
TypeName: n.config.Name,
CurrentState: n.opTracker.GetFrameworkTFState(),
}
readResponse, err := n.server.ReadResource(ctx, readRequest)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot read resource")
}
if fatalDiags := getFatalDiagnostics(readResponse.Diagnostics); fatalDiags != nil {
return managed.ExternalObservation{}, errors.Wrap(fatalDiags, "read resource request failed")
}
tfStateValue, err := readResponse.NewState.Unmarshal(n.resourceValueTerraformType)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot unmarshal state value")
}
n.opTracker.SetFrameworkTFState(readResponse.NewState)
// Determine if the resource exists based on Terraform state
var resourceExists bool
if !tfStateValue.IsNull() {
// Resource state is not null, assume it exists
resourceExists = true
// If a custom empty state check function is configured, use it to verify existence
if n.config.TerraformPluginFrameworkIsStateEmptyFn != nil {
isEmpty, err := n.config.TerraformPluginFrameworkIsStateEmptyFn(ctx, tfStateValue, n.resourceSchema)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot check if TF State is empty")
}
// Override existence based on custom check result
resourceExists = !isEmpty
// If custom check determines resource doesn't exist, reset state to nil
if !resourceExists {
nilTfValue := tftypes.NewValue(n.resourceValueTerraformType, nil)
nildynamicValue, err := tfprotov5.NewDynamicValue(n.resourceValueTerraformType, nilTfValue)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot create nil dynamic value")
}
n.opTracker.SetFrameworkTFState(&nildynamicValue)
}
}
}
var stateValueMap map[string]any
if resourceExists {
if conv, err := tfValueToGoValue(tfStateValue); err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot convert instance state to JSON map")
} else {
stateValueMap = conv.(map[string]any)
}
}
// TODO(cem): Consider skipping diff calculation to avoid potential config
// validation errors in the import path. See
// https://github.com/crossplane/upjet/pull/461
planResponse, hasDiff, err := n.getDiffPlanResponse(ctx, tfStateValue)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot calculate diff")
}
n.planResponse = planResponse
var connDetails managed.ConnectionDetails
if !resourceExists && mg.GetDeletionTimestamp() != nil {
gvk := mg.GetObjectKind().GroupVersionKind()
metrics.DeletionTime.WithLabelValues(gvk.Group, gvk.Version, gvk.Kind).Observe(time.Since(mg.GetDeletionTimestamp().Time).Seconds())
}
specUpdateRequired := false
if resourceExists {
if mg.GetCondition(xpv1.TypeReady).Status == corev1.ConditionUnknown ||
mg.GetCondition(xpv1.TypeReady).Status == corev1.ConditionFalse {
addTTR(mg)
}
mg.SetConditions(xpv1.Available())
buff, err := upjson.TFParser.Marshal(stateValueMap)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot marshal the attributes of the new state for late-initialization")
}
policySet := sets.New[xpv1.ManagementAction](mg.(resource.Terraformed).GetManagementPolicies()...)
policyHasLateInit := policySet.HasAny(xpv1.ManagementActionLateInitialize, xpv1.ManagementActionAll)
if policyHasLateInit {
specUpdateRequired, err = mg.(resource.Terraformed).LateInitialize(buff)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot late-initialize the managed resource")
}
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "could not set observation")
}
connDetails, err = resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
if !hasDiff {
n.metricRecorder.SetReconcileTime(metrics.NameForManaged(mg))
}
if !specUpdateRequired {
resource.SetUpToDateCondition(mg, !hasDiff)
}
if nameChanged, err := n.setExternalName(mg, stateValueMap); err != nil {
return managed.ExternalObservation{}, errors.Wrapf(err, "failed to set the external-name of the managed resource during observe")
} else {
specUpdateRequired = specUpdateRequired || nameChanged
}
}
return managed.ExternalObservation{
ResourceExists: resourceExists,
ResourceUpToDate: !hasDiff,
ConnectionDetails: connDetails,
ResourceLateInitialized: specUpdateRequired,
}, nil
}
func (n *terraformPluginFrameworkExternalClient) Create(ctx context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) {
n.logger.Debug("Creating the external resource")
tfConfigDynamicVal, err := protov5DynamicValueFromMap(n.params, n.resourceValueTerraformType)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot construct dynamic value for TF Config")
}
applyRequest := &tfprotov5.ApplyResourceChangeRequest{
TypeName: n.config.Name,
PriorState: n.opTracker.GetFrameworkTFState(),
PlannedState: n.planResponse.PlannedState,
Config: tfConfigDynamicVal,
}
start := time.Now()
applyResponse, err := n.server.ApplyResourceChange(ctx, applyRequest)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot create resource")
}
metrics.ExternalAPITime.WithLabelValues("create").Observe(time.Since(start).Seconds())
if fatalDiags := getFatalDiagnostics(applyResponse.Diagnostics); fatalDiags != nil {
return managed.ExternalCreation{}, errors.Wrap(fatalDiags, "resource creation call returned error diags")
}
newStateAfterApplyVal, err := applyResponse.NewState.Unmarshal(n.resourceValueTerraformType)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot unmarshal new state after creation")
}
if newStateAfterApplyVal.IsNull() {
return managed.ExternalCreation{}, errors.New("new state is empty after creation")
}
var stateValueMap map[string]any
if goval, err := tfValueToGoValue(newStateAfterApplyVal); err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot convert native state to go map")
} else {
stateValueMap = goval.(map[string]any)
}
n.opTracker.SetFrameworkTFState(applyResponse.NewState)
if _, err := n.setExternalName(mg, stateValueMap); err != nil {
return managed.ExternalCreation{}, errors.Wrapf(err, "failed to set the external-name of the managed resource during create")
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "could not set observation")
}
conn, err := resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot get connection details")
}
return managed.ExternalCreation{ConnectionDetails: conn}, nil
}
func (n *terraformPluginFrameworkExternalClient) planRequiresReplace() (bool, string) {
if n.planResponse == nil || len(n.planResponse.RequiresReplace) == 0 {
return false, ""
}
fields := make([]string, 0, len(n.planResponse.RequiresReplace))
for _, attrPath := range n.planResponse.RequiresReplace {
fields = append(fields, attrPath.String())
}
return true, strings.Join(fields, ", ")
}
func (n *terraformPluginFrameworkExternalClient) Update(ctx context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
n.logger.Debug("Updating the external resource")
// refuse plans that require replace for XRM compliance
if isReplace, fields := n.planRequiresReplace(); isReplace {
return managed.ExternalUpdate{}, errors.Errorf("diff contains fields that require resource replacement: %s", fields)
}
tfConfigDynamicVal, err := protov5DynamicValueFromMap(n.params, n.resourceValueTerraformType)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot construct dynamic value for TF Config")
}
applyRequest := &tfprotov5.ApplyResourceChangeRequest{
TypeName: n.config.Name,
PriorState: n.opTracker.GetFrameworkTFState(),
PlannedState: n.planResponse.PlannedState,
Config: tfConfigDynamicVal,
}
start := time.Now()
applyResponse, err := n.server.ApplyResourceChange(ctx, applyRequest)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot update resource")
}
metrics.ExternalAPITime.WithLabelValues("update").Observe(time.Since(start).Seconds())
if fatalDiags := getFatalDiagnostics(applyResponse.Diagnostics); fatalDiags != nil {
return managed.ExternalUpdate{}, errors.Wrap(fatalDiags, "resource update call returned error diags")
}
n.opTracker.SetFrameworkTFState(applyResponse.NewState)
newStateAfterApplyVal, err := applyResponse.NewState.Unmarshal(n.resourceValueTerraformType)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot unmarshal updated state")
}
if newStateAfterApplyVal.IsNull() {
return managed.ExternalUpdate{}, errors.New("new state is empty after update")
}
var stateValueMap map[string]any
if goval, err := tfValueToGoValue(newStateAfterApplyVal); err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot convert native state to go map")
} else {
stateValueMap = goval.(map[string]any)
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "could not set observation")
}
return managed.ExternalUpdate{}, nil
}
func (n *terraformPluginFrameworkExternalClient) Delete(ctx context.Context, _ xpresource.Managed) (managed.ExternalDelete, error) {
n.logger.Debug("Deleting the external resource")
tfConfigDynamicVal, err := protov5DynamicValueFromMap(n.params, n.resourceValueTerraformType)
if err != nil {
return managed.ExternalDelete{}, errors.Wrap(err, "cannot construct dynamic value for TF Config")
}
// apply with an empty (null) planned state; for Terraform this corresponds to deletion
plannedState, err := tfprotov5.NewDynamicValue(n.resourceValueTerraformType, tftypes.NewValue(n.resourceValueTerraformType, nil))
if err != nil {
return managed.ExternalDelete{}, errors.Wrap(err, "cannot set the planned state for deletion")
}
applyRequest := &tfprotov5.ApplyResourceChangeRequest{
TypeName: n.config.Name,
PriorState: n.opTracker.GetFrameworkTFState(),
PlannedState: &plannedState,
Config: tfConfigDynamicVal,
}
start := time.Now()
applyResponse, err := n.server.ApplyResourceChange(ctx, applyRequest)
if err != nil {
return managed.ExternalDelete{}, errors.Wrap(err, "cannot delete resource")
}
metrics.ExternalAPITime.WithLabelValues("delete").Observe(time.Since(start).Seconds())
if fatalDiags := getFatalDiagnostics(applyResponse.Diagnostics); fatalDiags != nil {
return managed.ExternalDelete{}, errors.Wrap(fatalDiags, "resource deletion call returned error diags")
}
n.opTracker.SetFrameworkTFState(applyResponse.NewState)
newStateAfterApplyVal, err := applyResponse.NewState.Unmarshal(n.resourceValueTerraformType)
if err != nil {
return managed.ExternalDelete{}, errors.Wrap(err, "cannot unmarshal state after deletion")
}
// mark the resource as logically deleted if the TF call clears the state
n.opTracker.SetDeleted(newStateAfterApplyVal.IsNull())
return managed.ExternalDelete{}, nil
}
func (n *terraformPluginFrameworkExternalClient) setExternalName(mg xpresource.Managed, stateValueMap map[string]interface{}) (bool, error) {
newName, err := n.config.ExternalName.GetExternalNameFn(stateValueMap)
if err != nil {
return false, errors.Wrap(err, "failed to compute the external-name from the state map")
}
oldName := meta.GetExternalName(mg)
// we have to make sure the newly set external-name is recorded
meta.SetExternalName(mg, newName)
return oldName != newName, nil
}
// tfValueToGoValue converts a given tftypes.Value to Go-native any type.
// Useful for converting terraform values of state to JSON or for setting
// observations at the MR.
// Nested values are recursively converted.
// Supported conversions:
// tftypes.Object, tftypes.Map => map[string]any
// tftypes.Set, tftypes.List, tftypes.Tuple => []any
// tftypes.Bool => bool
// tftypes.Number => int64, float64
// tftypes.String => string
// tftypes.DynamicPseudoType => conversion not supported and returns an error
func tfValueToGoValue(input tftypes.Value) (any, error) { //nolint:gocyclo
if !input.IsKnown() {
return nil, fmt.Errorf("cannot convert unknown value")
}
if input.IsNull() {
return nil, nil
}
valType := input.Type()
switch {
case valType.Is(tftypes.Object{}), valType.Is(tftypes.Map{}):
destInterim := make(map[string]tftypes.Value)
dest := make(map[string]any)
if err := input.As(&destInterim); err != nil {
return nil, err
}
for k, v := range destInterim {
res, err := tfValueToGoValue(v)
if err != nil {
return nil, err
}
dest[k] = res
}
return dest, nil
case valType.Is(tftypes.Set{}), valType.Is(tftypes.List{}), valType.Is(tftypes.Tuple{}):
destInterim := make([]tftypes.Value, 0)
if err := input.As(&destInterim); err != nil {
return nil, err
}
dest := make([]any, len(destInterim))
for i, v := range destInterim {
res, err := tfValueToGoValue(v)
if err != nil {
return nil, err
}
dest[i] = res
}
return dest, nil
case valType.Is(tftypes.Bool):
var x bool
return x, input.As(&x)
case valType.Is(tftypes.Number):
var valBigF big.Float
if err := input.As(&valBigF); err != nil {
return nil, err
}
// try to parse as integer
if valBigF.IsInt() {
intVal, accuracy := valBigF.Int64()
if accuracy != 0 {
return nil, fmt.Errorf("value %v cannot be represented as a 64-bit integer", valBigF)
}
return intVal, nil
}
// try to parse as float64
xf, accuracy := valBigF.Float64()
// Underflow
// Reference: https://pkg.go.dev/math/big#Float.Float64
if xf == 0 && accuracy != big.Exact {
return nil, fmt.Errorf("value %v cannot be represented as a 64-bit floating point", valBigF)
}
// Overflow
// Reference: https://pkg.go.dev/math/big#Float.Float64
if math.IsInf(xf, 0) {
return nil, fmt.Errorf("value %v cannot be represented as a 64-bit floating point", valBigF)
}
return xf, nil
case valType.Is(tftypes.String):
var x string
return x, input.As(&x)
case valType.Is(tftypes.DynamicPseudoType):
return nil, errors.New("DynamicPseudoType conversion is not supported")
default:
return nil, fmt.Errorf("input value has unknown type: %s", valType.String())
}
}
// getFatalDiagnostics traverses the given Terraform protov5 diagnostics type
// and constructs a Go error. If the provided diag slice is empty, returns nil.
func getFatalDiagnostics(diags []*tfprotov5.Diagnostic) error {
var errs error
var diagErrors []string
for _, tfdiag := range diags {
if tfdiag.Severity == tfprotov5.DiagnosticSeverityInvalid || tfdiag.Severity == tfprotov5.DiagnosticSeverityError {
diagErrors = append(diagErrors, fmt.Sprintf("%s: %s", tfdiag.Summary, tfdiag.Detail))
}
}
if len(diagErrors) > 0 {
errs = errors.New(strings.Join(diagErrors, "\n"))
}
return errs
}
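getFatalDiagnostics keeps only error-severity diagnostics and collapses them into a single Go error. A minimal stdlib sketch of the same filtering, where the local diag type is a hypothetical stand-in for tfprotov5.Diagnostic:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// diag is a hypothetical stand-in for tfprotov5.Diagnostic, carrying only
// the fields the filtering logic needs.
type diag struct {
	Severity string // "ERROR", "WARNING", ...
	Summary  string
	Detail   string
}

// fatalDiagnostics mirrors getFatalDiagnostics: keep error-severity entries,
// render each as "summary: detail", and join them into one error. Warnings
// alone yield nil.
func fatalDiagnostics(diags []diag) error {
	var msgs []string
	for _, d := range diags {
		if d.Severity == "ERROR" {
			msgs = append(msgs, fmt.Sprintf("%s: %s", d.Summary, d.Detail))
		}
	}
	if len(msgs) == 0 {
		return nil
	}
	return errors.New(strings.Join(msgs, "\n"))
}

func main() {
	err := fatalDiagnostics([]diag{
		{Severity: "WARNING", Summary: "deprecated", Detail: "field x"},
		{Severity: "ERROR", Summary: "invalid value", Detail: "field y"},
	})
	fmt.Println(err) // invalid value: field y
}
```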
// frameworkDiagnosticsToString constructs an error string from the provided
// Plugin Framework diagnostics instance. Only Error severity diagnostics are
// included.
func frameworkDiagnosticsToString(fwdiags fwdiag.Diagnostics) string {
frameworkErrorDiags := fwdiags.Errors()
diagErrors := make([]string, 0, len(frameworkErrorDiags))
for _, tfdiag := range frameworkErrorDiags {
diagErrors = append(diagErrors, fmt.Sprintf("%s: %s", tfdiag.Summary(), tfdiag.Detail()))
}
return strings.Join(diagErrors, "\n")
}
// protov5DynamicValueFromMap constructs a protov5 DynamicValue given the
// map[string]any using the terraform type as reference.
func protov5DynamicValueFromMap(data map[string]any, terraformType tftypes.Type) (*tfprotov5.DynamicValue, error) {
jsonBytes, err := json.Marshal(data)
if err != nil {
return nil, errors.Wrap(err, "cannot marshal json")
}
tfValue, err := tftypes.ValueFromJSONWithOpts(jsonBytes, terraformType, tftypes.ValueFromJSONOpts{IgnoreUndefinedAttributes: true})
if err != nil {
return nil, errors.Wrap(err, "cannot construct tf value from json")
}
dynamicValue, err := tfprotov5.NewDynamicValue(terraformType, tfValue)
if err != nil {
return nil, errors.Wrap(err, "cannot construct dynamic value from tf value")
}
return &dynamicValue, nil
}
func (n *terraformPluginFrameworkExternalClient) Disconnect(_ context.Context) error {
return nil
}

// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"testing"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/hashicorp/terraform-plugin-framework/provider"
"github.com/hashicorp/terraform-plugin-framework/resource"
rschema "github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/tfsdk"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/hashicorp/terraform-plugin-go/tfprotov5"
"github.com/hashicorp/terraform-plugin-go/tftypes"
"github.com/pkg/errors"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
func newBaseObject() fake.Terraformed {
return fake.Terraformed{
Parameterizable: fake.Parameterizable{
Parameters: map[string]any{
"name": "example",
"map": map[string]any{
"key": "value",
},
"list": []any{"elem1", "elem2"},
},
},
Observable: fake.Observable{
Observation: map[string]any{},
},
}
}
func newBaseSchema() rschema.Schema {
return rschema.Schema{
Attributes: map[string]rschema.Attribute{
"name": rschema.StringAttribute{
Required: true,
PlanModifiers: []planmodifier.String{
stringplanmodifier.UseStateForUnknown(),
},
},
"id": rschema.StringAttribute{
Computed: true,
PlanModifiers: []planmodifier.String{
stringplanmodifier.UseStateForUnknown(),
},
},
"map": rschema.MapAttribute{
Required: true,
ElementType: types.StringType,
},
"list": rschema.ListAttribute{
Required: true,
ElementType: types.StringType,
},
},
}
}
func newMockBaseTPFResource() *mockTPFResource {
return &mockTPFResource{
SchemaMethod: func(ctx context.Context, request resource.SchemaRequest, response *resource.SchemaResponse) {
response.Schema = newBaseSchema()
},
ReadMethod: func(ctx context.Context, request resource.ReadRequest, response *resource.ReadResponse) {
response.State = tfsdk.State{
Raw: tftypes.Value{},
Schema: nil,
}
},
}
}
func newBaseUpjetConfig() *config.Resource {
return &config.Resource{
TerraformPluginFrameworkResource: newMockBaseTPFResource(),
ExternalName: config.IdentifierFromProvider,
Sensitive: config.Sensitive{AdditionalConnectionDetailsFn: func(attr map[string]any) (map[string][]byte, error) {
return nil, nil
}},
}
}
type testConfiguration struct {
r resource.Resource
cfg *config.Resource
obj fake.Terraformed
params map[string]any
currentStateMap map[string]any
plannedStateMap map[string]any
newStateMap map[string]any
readErr error
readDiags []*tfprotov5.Diagnostic
applyErr error
applyDiags []*tfprotov5.Diagnostic
planErr error
planDiags []*tfprotov5.Diagnostic
}
func prepareTPFExternalWithTestConfig(testConfig testConfiguration) *terraformPluginFrameworkExternalClient {
testConfig.cfg.TerraformPluginFrameworkResource = testConfig.r
schemaResp := &resource.SchemaResponse{}
testConfig.r.Schema(context.TODO(), resource.SchemaRequest{}, schemaResp)
tfValueType := schemaResp.Schema.Type().TerraformType(context.TODO())
currentStateVal, err := protov5DynamicValueFromMap(testConfig.currentStateMap, tfValueType)
if err != nil {
panic("cannot prepare TPF")
}
plannedStateVal, err := protov5DynamicValueFromMap(testConfig.plannedStateMap, tfValueType)
if err != nil {
panic("cannot prepare TPF")
}
newStateAfterApplyVal, err := protov5DynamicValueFromMap(testConfig.newStateMap, tfValueType)
if err != nil {
panic("cannot prepare TPF")
}
return &terraformPluginFrameworkExternalClient{
ts: terraform.Setup{
FrameworkProvider: &mockTPFProvider{},
},
config: testConfig.cfg,
logger: logTest,
// metricRecorder: nil,
opTracker: NewAsyncTracker(),
resource: testConfig.r,
server: &mockTPFProviderServer{
ReadResourceFn: func(ctx context.Context, request *tfprotov5.ReadResourceRequest) (*tfprotov5.ReadResourceResponse, error) {
return &tfprotov5.ReadResourceResponse{
NewState: currentStateVal,
Diagnostics: testConfig.readDiags,
}, testConfig.readErr
},
PlanResourceChangeFn: func(ctx context.Context, request *tfprotov5.PlanResourceChangeRequest) (*tfprotov5.PlanResourceChangeResponse, error) {
return &tfprotov5.PlanResourceChangeResponse{
PlannedState: plannedStateVal,
Diagnostics: testConfig.planDiags,
}, testConfig.planErr
},
ApplyResourceChangeFn: func(ctx context.Context, request *tfprotov5.ApplyResourceChangeRequest) (*tfprotov5.ApplyResourceChangeResponse, error) {
return &tfprotov5.ApplyResourceChangeResponse{
NewState: newStateAfterApplyVal,
Diagnostics: testConfig.applyDiags,
}, testConfig.applyErr
},
},
params: testConfig.params,
planResponse: &tfprotov5.PlanResourceChangeResponse{PlannedState: plannedStateVal},
resourceSchema: schemaResp.Schema,
resourceValueTerraformType: tfValueType,
}
}
func TestTPFConnect(t *testing.T) {
type args struct {
setupFn terraform.SetupFn
cfg *config.Resource
ots *OperationTrackerStore
obj fake.Terraformed
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
setupFn: func(_ context.Context, _ client.Client, _ xpresource.Managed) (terraform.Setup, error) {
return terraform.Setup{
FrameworkProvider: &mockTPFProvider{},
}, nil
},
cfg: newBaseUpjetConfig(),
obj: newBaseObject(),
ots: ots,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
c := NewTerraformPluginFrameworkConnector(nil, tc.args.setupFn, tc.args.cfg, tc.args.ots, WithTerraformPluginFrameworkLogger(logTest))
_, err := c.Connect(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nConnect(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTPFObserve(t *testing.T) {
type want struct {
obs managed.ExternalObservation
err error
}
cases := map[string]struct {
testConfiguration
want
}{
"NotExists": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: obj,
currentStateMap: nil,
plannedStateMap: map[string]any{
"name": "example",
},
params: map[string]any{
"name": "example",
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: false,
ResourceUpToDate: false,
ResourceLateInitialized: false,
ConnectionDetails: nil,
Diff: "",
},
},
},
"UpToDate": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: newBaseObject(),
params: map[string]any{
"id": "example-id",
"name": "example",
},
currentStateMap: map[string]any{
"id": "example-id",
"name": "example",
},
plannedStateMap: map[string]any{
"id": "example-id",
"name": "example",
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ResourceLateInitialized: true,
ConnectionDetails: nil,
Diff: "",
},
},
},
"LateInitialize": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: fake.Terraformed{
Parameterizable: fake.Parameterizable{
Parameters: map[string]any{
"name": "example",
"map": map[string]any{
"key": "value",
},
"list": []any{"elem1", "elem2"},
},
InitParameters: map[string]any{
"list": []any{"elem1", "elem2", "elem3"},
},
},
Observable: fake.Observable{
Observation: map[string]any{},
},
},
params: map[string]any{
"id": "example-id",
},
currentStateMap: map[string]any{
"id": "example-id",
"name": "example2",
},
plannedStateMap: map[string]any{
"id": "example-id",
"name": "example2",
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ResourceLateInitialized: true,
ConnectionDetails: nil,
Diff: "",
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
tpfExternal := prepareTPFExternalWithTestConfig(tc.testConfiguration)
observation, err := tpfExternal.Observe(context.TODO(), &tc.testConfiguration.obj)
if diff := cmp.Diff(tc.want.obs, observation); diff != "" {
t.Errorf("\n%s\nObserve(...): -want observation, +got observation:\n", diff)
}
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nObserve(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTPFCreate(t *testing.T) {
type want struct {
err error
}
cases := map[string]struct {
testConfiguration
want
}{
"Successful": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: obj,
currentStateMap: nil,
plannedStateMap: map[string]any{
"name": "example",
},
params: map[string]any{
"name": "example",
},
newStateMap: map[string]any{
"name": "example",
"id": "example-id",
},
},
},
"EmptyStateAfterCreation": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: obj,
currentStateMap: nil,
plannedStateMap: map[string]any{
"name": "example",
},
params: map[string]any{
"name": "example",
},
newStateMap: nil,
},
want: want{
err: errors.New("new state is empty after creation"),
},
},
"ApplyWithError": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: obj,
currentStateMap: nil,
plannedStateMap: map[string]any{
"name": "example",
},
params: map[string]any{
"name": "example",
},
newStateMap: nil,
applyErr: errors.New("foo error"),
},
want: want{
err: errors.Wrap(errors.New("foo error"), "cannot create resource"),
},
},
"ApplyWithDiags": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: obj,
currentStateMap: nil,
plannedStateMap: map[string]any{
"name": "example",
},
params: map[string]any{
"name": "example",
},
newStateMap: nil,
applyDiags: []*tfprotov5.Diagnostic{
{
Severity: tfprotov5.DiagnosticSeverityError,
Summary: "foo summary",
Detail: "foo detail",
},
},
},
want: want{
err: errors.Wrap(errors.New("foo summary: foo detail"), "resource creation call returned error diags"),
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
tpfExternal := prepareTPFExternalWithTestConfig(tc.testConfiguration)
_, err := tpfExternal.Create(context.TODO(), &tc.testConfiguration.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTPFUpdate(t *testing.T) {
type want struct {
err error
}
cases := map[string]struct {
testConfiguration
want
}{
"Successful": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: newBaseObject(),
currentStateMap: map[string]any{
"name": "example",
"id": "example-id",
},
plannedStateMap: map[string]any{
"name": "example-updated",
"id": "example-id",
},
params: map[string]any{
"name": "example-updated",
},
newStateMap: map[string]any{
"name": "example-updated",
"id": "example-id",
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
tpfExternal := prepareTPFExternalWithTestConfig(tc.testConfiguration)
_, err := tpfExternal.Update(context.TODO(), &tc.testConfiguration.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTPFDelete(t *testing.T) {
type want struct {
err error
}
cases := map[string]struct {
testConfiguration
want
}{
"Successful": {
testConfiguration: testConfiguration{
r: newMockBaseTPFResource(),
cfg: newBaseUpjetConfig(),
obj: newBaseObject(),
currentStateMap: map[string]any{
"name": "example",
"id": "example-id",
},
plannedStateMap: nil,
params: map[string]any{
"name": "example",
},
newStateMap: nil,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
tpfExternal := prepareTPFExternalWithTestConfig(tc.testConfiguration)
_, err := tpfExternal.Delete(context.TODO(), &tc.testConfiguration.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDelete(...): -want error, +got error:\n", diff)
}
})
}
}
// Mocks
var _ resource.Resource = &mockTPFResource{}
var _ tfprotov5.ProviderServer = &mockTPFProviderServer{}
var _ provider.Provider = &mockTPFProvider{}
type mockTPFProviderServer struct {
GetMetadataFn func(ctx context.Context, request *tfprotov5.GetMetadataRequest) (*tfprotov5.GetMetadataResponse, error)
GetProviderSchemaFn func(ctx context.Context, request *tfprotov5.GetProviderSchemaRequest) (*tfprotov5.GetProviderSchemaResponse, error)
PrepareProviderConfigFn func(ctx context.Context, request *tfprotov5.PrepareProviderConfigRequest) (*tfprotov5.PrepareProviderConfigResponse, error)
ConfigureProviderFn func(ctx context.Context, request *tfprotov5.ConfigureProviderRequest) (*tfprotov5.ConfigureProviderResponse, error)
StopProviderFn func(ctx context.Context, request *tfprotov5.StopProviderRequest) (*tfprotov5.StopProviderResponse, error)
ValidateResourceTypeConfigFn func(ctx context.Context, request *tfprotov5.ValidateResourceTypeConfigRequest) (*tfprotov5.ValidateResourceTypeConfigResponse, error)
UpgradeResourceStateFn func(ctx context.Context, request *tfprotov5.UpgradeResourceStateRequest) (*tfprotov5.UpgradeResourceStateResponse, error)
ReadResourceFn func(ctx context.Context, request *tfprotov5.ReadResourceRequest) (*tfprotov5.ReadResourceResponse, error)
PlanResourceChangeFn func(ctx context.Context, request *tfprotov5.PlanResourceChangeRequest) (*tfprotov5.PlanResourceChangeResponse, error)
ApplyResourceChangeFn func(ctx context.Context, request *tfprotov5.ApplyResourceChangeRequest) (*tfprotov5.ApplyResourceChangeResponse, error)
ImportResourceStateFn func(ctx context.Context, request *tfprotov5.ImportResourceStateRequest) (*tfprotov5.ImportResourceStateResponse, error)
ValidateDataSourceConfigFn func(ctx context.Context, request *tfprotov5.ValidateDataSourceConfigRequest) (*tfprotov5.ValidateDataSourceConfigResponse, error)
ReadDataSourceFn func(ctx context.Context, request *tfprotov5.ReadDataSourceRequest) (*tfprotov5.ReadDataSourceResponse, error)
}
func (m *mockTPFProviderServer) UpgradeResourceIdentity(_ context.Context, _ *tfprotov5.UpgradeResourceIdentityRequest) (*tfprotov5.UpgradeResourceIdentityResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetResourceIdentitySchemas(_ context.Context, _ *tfprotov5.GetResourceIdentitySchemasRequest) (*tfprotov5.GetResourceIdentitySchemasResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) MoveResourceState(_ context.Context, _ *tfprotov5.MoveResourceStateRequest) (*tfprotov5.MoveResourceStateResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) CallFunction(_ context.Context, _ *tfprotov5.CallFunctionRequest) (*tfprotov5.CallFunctionResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetFunctions(_ context.Context, _ *tfprotov5.GetFunctionsRequest) (*tfprotov5.GetFunctionsResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ValidateEphemeralResourceConfig(_ context.Context, _ *tfprotov5.ValidateEphemeralResourceConfigRequest) (*tfprotov5.ValidateEphemeralResourceConfigResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) OpenEphemeralResource(_ context.Context, _ *tfprotov5.OpenEphemeralResourceRequest) (*tfprotov5.OpenEphemeralResourceResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) RenewEphemeralResource(_ context.Context, _ *tfprotov5.RenewEphemeralResourceRequest) (*tfprotov5.RenewEphemeralResourceResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) CloseEphemeralResource(_ context.Context, _ *tfprotov5.CloseEphemeralResourceRequest) (*tfprotov5.CloseEphemeralResourceResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetMetadata(_ context.Context, _ *tfprotov5.GetMetadataRequest) (*tfprotov5.GetMetadataResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) GetProviderSchema(_ context.Context, _ *tfprotov5.GetProviderSchemaRequest) (*tfprotov5.GetProviderSchemaResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) PrepareProviderConfig(_ context.Context, _ *tfprotov5.PrepareProviderConfigRequest) (*tfprotov5.PrepareProviderConfigResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ConfigureProvider(_ context.Context, _ *tfprotov5.ConfigureProviderRequest) (*tfprotov5.ConfigureProviderResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) StopProvider(_ context.Context, _ *tfprotov5.StopProviderRequest) (*tfprotov5.StopProviderResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ValidateResourceTypeConfig(_ context.Context, _ *tfprotov5.ValidateResourceTypeConfigRequest) (*tfprotov5.ValidateResourceTypeConfigResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) UpgradeResourceState(_ context.Context, _ *tfprotov5.UpgradeResourceStateRequest) (*tfprotov5.UpgradeResourceStateResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ReadResource(ctx context.Context, request *tfprotov5.ReadResourceRequest) (*tfprotov5.ReadResourceResponse, error) {
if m.ReadResourceFn == nil {
return nil, nil
}
return m.ReadResourceFn(ctx, request)
}
func (m *mockTPFProviderServer) PlanResourceChange(ctx context.Context, request *tfprotov5.PlanResourceChangeRequest) (*tfprotov5.PlanResourceChangeResponse, error) {
if m.PlanResourceChangeFn == nil {
return nil, nil
}
return m.PlanResourceChangeFn(ctx, request)
}
func (m *mockTPFProviderServer) ApplyResourceChange(ctx context.Context, request *tfprotov5.ApplyResourceChangeRequest) (*tfprotov5.ApplyResourceChangeResponse, error) {
if m.ApplyResourceChangeFn == nil {
return nil, nil
}
return m.ApplyResourceChangeFn(ctx, request)
}
func (m *mockTPFProviderServer) ImportResourceState(_ context.Context, _ *tfprotov5.ImportResourceStateRequest) (*tfprotov5.ImportResourceStateResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ValidateDataSourceConfig(_ context.Context, _ *tfprotov5.ValidateDataSourceConfigRequest) (*tfprotov5.ValidateDataSourceConfigResponse, error) {
// TODO implement me
panic("implement me")
}
func (m *mockTPFProviderServer) ReadDataSource(_ context.Context, _ *tfprotov5.ReadDataSourceRequest) (*tfprotov5.ReadDataSourceResponse, error) {
// TODO implement me
panic("implement me")
}
type mockTPFProvider struct {
// Provider interface methods
MetadataMethod func(context.Context, provider.MetadataRequest, *provider.MetadataResponse)
ConfigureMethod func(context.Context, provider.ConfigureRequest, *provider.ConfigureResponse)
SchemaMethod func(context.Context, provider.SchemaRequest, *provider.SchemaResponse)
DataSourcesMethod func(context.Context) []func() datasource.DataSource
ResourcesMethod func(context.Context) []func() resource.Resource
}
// Configure satisfies the provider.Provider interface.
func (p *mockTPFProvider) Configure(ctx context.Context, req provider.ConfigureRequest, resp *provider.ConfigureResponse) {
if p == nil || p.ConfigureMethod == nil {
return
}
p.ConfigureMethod(ctx, req, resp)
}
// DataSources satisfies the provider.Provider interface.
func (p *mockTPFProvider) DataSources(ctx context.Context) []func() datasource.DataSource {
if p == nil || p.DataSourcesMethod == nil {
return nil
}
return p.DataSourcesMethod(ctx)
}
// Metadata satisfies the provider.Provider interface.
func (p *mockTPFProvider) Metadata(ctx context.Context, req provider.MetadataRequest, resp *provider.MetadataResponse) {
if p == nil || p.MetadataMethod == nil {
return
}
p.MetadataMethod(ctx, req, resp)
}
// Schema satisfies the provider.Provider interface.
func (p *mockTPFProvider) Schema(ctx context.Context, req provider.SchemaRequest, resp *provider.SchemaResponse) {
if p == nil || p.SchemaMethod == nil {
return
}
p.SchemaMethod(ctx, req, resp)
}
// Resources satisfies the provider.Provider interface.
func (p *mockTPFProvider) Resources(ctx context.Context) []func() resource.Resource {
if p == nil || p.ResourcesMethod == nil {
return nil
}
return p.ResourcesMethod(ctx)
}
type mockTPFResource struct {
// Resource interface methods
MetadataMethod func(context.Context, resource.MetadataRequest, *resource.MetadataResponse)
SchemaMethod func(context.Context, resource.SchemaRequest, *resource.SchemaResponse)
CreateMethod func(context.Context, resource.CreateRequest, *resource.CreateResponse)
DeleteMethod func(context.Context, resource.DeleteRequest, *resource.DeleteResponse)
ReadMethod func(context.Context, resource.ReadRequest, *resource.ReadResponse)
UpdateMethod func(context.Context, resource.UpdateRequest, *resource.UpdateResponse)
}
// Metadata satisfies the resource.Resource interface.
func (r *mockTPFResource) Metadata(ctx context.Context, req resource.MetadataRequest, resp *resource.MetadataResponse) {
if r.MetadataMethod == nil {
return
}
r.MetadataMethod(ctx, req, resp)
}
// Schema satisfies the resource.Resource interface.
func (r *mockTPFResource) Schema(ctx context.Context, req resource.SchemaRequest, resp *resource.SchemaResponse) {
if r.SchemaMethod == nil {
return
}
r.SchemaMethod(ctx, req, resp)
}
// Create satisfies the resource.Resource interface.
func (r *mockTPFResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
if r.CreateMethod == nil {
return
}
r.CreateMethod(ctx, req, resp)
}
// Delete satisfies the resource.Resource interface.
func (r *mockTPFResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
if r.DeleteMethod == nil {
return
}
r.DeleteMethod(ctx, req, resp)
}
// Read satisfies the resource.Resource interface.
func (r *mockTPFResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
if r.ReadMethod == nil {
return
}
r.ReadMethod(ctx, req, resp)
}
// Update satisfies the resource.Resource interface.
func (r *mockTPFResource) Update(ctx context.Context, req resource.UpdateRequest, resp *resource.UpdateResponse) {
if r.UpdateMethod == nil {
return
}
r.UpdateMethod(ctx, req, resp)
}
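The mocks above all follow the same Go pattern: each interface method delegates to an optional function field and safely no-ops (or returns a zero value) when the field, or even the receiver, is nil, so tests only stub the methods they care about. A minimal, self-contained sketch of the pattern; the `Greeter` interface and all names here are illustrative, not part of upjet:

```go
package main

import "fmt"

// Greeter is a hypothetical interface to be mocked.
type Greeter interface {
	Greet(name string) string
}

// mockGreeter delegates each method to an optional function field.
type mockGreeter struct {
	GreetMethod func(name string) string
}

// Greet satisfies the Greeter interface. The nil checks make both a nil
// receiver and an unstubbed method safe to call.
func (m *mockGreeter) Greet(name string) string {
	if m == nil || m.GreetMethod == nil {
		return "" // zero value when the method is not stubbed
	}
	return m.GreetMethod(name)
}

func main() {
	var unstubbed *mockGreeter
	fmt.Println(unstubbed.Greet("a") == "") // nil receiver is safe: true

	stubbed := &mockGreeter{GreetMethod: func(name string) string {
		return "hello " + name
	}}
	fmt.Println(stubbed.Greet("world")) // hello world
}
```

Only the stubbed methods need bodies in a given test, which keeps large interfaces (like the TPF provider above) cheap to mock.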


@ -0,0 +1,773 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"fmt"
"strings"
"time"
xpv1 "github.com/crossplane/crossplane-runtime/v2/apis/common/v1"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/hashicorp/go-cty/cty"
tfdiag "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
tf "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/sets"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/metrics"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/resource/json"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
type TerraformPluginSDKConnector struct {
getTerraformSetup terraform.SetupFn
kube client.Client
config *config.Resource
logger logging.Logger
metricRecorder *metrics.MetricRecorder
operationTrackerStore *OperationTrackerStore
isManagementPoliciesEnabled bool
}
// TerraformPluginSDKOption allows you to configure TerraformPluginSDKConnector.
type TerraformPluginSDKOption func(connector *TerraformPluginSDKConnector)
// WithTerraformPluginSDKLogger configures a logger for the TerraformPluginSDKConnector.
func WithTerraformPluginSDKLogger(l logging.Logger) TerraformPluginSDKOption {
return func(c *TerraformPluginSDKConnector) {
c.logger = l
}
}
// WithTerraformPluginSDKMetricRecorder configures a metrics.MetricRecorder for the
// TerraformPluginSDKConnector.
func WithTerraformPluginSDKMetricRecorder(r *metrics.MetricRecorder) TerraformPluginSDKOption {
return func(c *TerraformPluginSDKConnector) {
c.metricRecorder = r
}
}
// WithTerraformPluginSDKManagementPolicies configures whether the client should
// handle management policies.
func WithTerraformPluginSDKManagementPolicies(isManagementPoliciesEnabled bool) TerraformPluginSDKOption {
return func(c *TerraformPluginSDKConnector) {
c.isManagementPoliciesEnabled = isManagementPoliciesEnabled
}
}
// NewTerraformPluginSDKConnector initializes a new TerraformPluginSDKConnector
func NewTerraformPluginSDKConnector(kube client.Client, sf terraform.SetupFn, cfg *config.Resource, ots *OperationTrackerStore, opts ...TerraformPluginSDKOption) *TerraformPluginSDKConnector {
nfc := &TerraformPluginSDKConnector{
kube: kube,
getTerraformSetup: sf,
config: cfg,
operationTrackerStore: ots,
}
for _, f := range opts {
f(nfc)
}
return nfc
}
func copyParameters(tfState, params map[string]any) map[string]any {
targetState := make(map[string]any, len(params))
for k, v := range params {
targetState[k] = v
}
for k, v := range tfState {
targetState[k] = v
}
return targetState
}
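Note the write order in copyParameters: desired parameters are copied first and Terraform state second, so on a key collision the state value wins. A stdlib-only sketch of that precedence (the key names are illustrative):

```go
package main

import "fmt"

// merge copies params first and tfState second, so on a key collision
// the Terraform state value overrides the desired parameter
// (mirroring copyParameters above).
func merge(tfState, params map[string]any) map[string]any {
	out := make(map[string]any, len(params))
	for k, v := range params {
		out[k] = v
	}
	for k, v := range tfState {
		out[k] = v
	}
	return out
}

func main() {
	got := merge(
		map[string]any{"id": "i-123", "name": "from-state"},
		map[string]any{"name": "from-spec", "region": "us-east-1"},
	)
	fmt.Println(got["id"], got["name"], got["region"]) // i-123 from-state us-east-1
}
```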
func getJSONMap(mg xpresource.Managed) (map[string]any, error) {
pv, err := fieldpath.PaveObject(mg)
if err != nil {
return nil, errors.Wrap(err, "cannot pave the managed resource")
}
v, err := pv.GetValue("spec.forProvider")
if err != nil {
return nil, errors.Wrap(err, "cannot get spec.forProvider value from paved object")
}
return v.(map[string]any), nil
}
type Resource interface {
Apply(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, tfdiag.Diagnostics)
RefreshWithoutUpgrade(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, tfdiag.Diagnostics)
}
type terraformPluginSDKExternal struct {
ts terraform.Setup
resourceSchema Resource
config *config.Resource
instanceDiff *tf.InstanceDiff
params map[string]any
rawConfig cty.Value
logger logging.Logger
metricRecorder *metrics.MetricRecorder
opTracker *AsyncTracker
isManagementPoliciesEnabled bool
}
func getExtendedParameters(ctx context.Context, tr resource.Terraformed, externalName string, cfg *config.Resource, ts terraform.Setup, initParamsMerged bool, kube client.Client) (map[string]any, error) {
params, err := tr.GetMergedParameters(initParamsMerged)
if err != nil {
return nil, errors.Wrap(err, "cannot get merged parameters")
}
params, err = cfg.ApplyTFConversions(params, config.ToTerraform)
if err != nil {
return nil, errors.Wrap(err, "cannot apply tf conversions")
}
if err = resource.GetSensitiveParameters(ctx, &APISecretClient{kube: kube}, tr, params, tr.GetConnectionDetailsMapping()); err != nil {
return nil, errors.Wrap(err, "cannot store sensitive parameters into params")
}
cfg.ExternalName.SetIdentifierArgumentFn(params, externalName)
if cfg.TerraformConfigurationInjector != nil {
m, err := getJSONMap(tr)
if err != nil {
return nil, errors.Wrap(err, "cannot get JSON map for the managed resource's spec.forProvider value")
}
if err := cfg.TerraformConfigurationInjector(m, params); err != nil {
return nil, errors.Wrap(err, "cannot invoke the configured TerraformConfigurationInjector")
}
}
tfID, err := cfg.ExternalName.GetIDFn(ctx, externalName, params, ts.Map())
if err != nil {
return nil, errors.Wrap(err, "cannot get ID")
}
params["id"] = tfID
// Not all providers define a `tags_all` attribute, so this should be
// parameterized per provider.
// TODO: the tags/tags_all implementation is AWS-specific.
// Consider making this logic independent of the provider.
if cfg.TerraformResource != nil {
if _, ok := cfg.TerraformResource.CoreConfigSchema().Attributes["tags_all"]; ok {
params["tags_all"] = params["tags"]
}
}
return params, nil
}
func (c *TerraformPluginSDKConnector) processParamsWithHCLParser(schemaMap map[string]*schema.Schema, params map[string]any) map[string]any {
if params == nil {
return params
}
for key, param := range params {
if sc, ok := schemaMap[key]; ok {
params[key] = c.applyHCLParserToParam(sc, param)
} else {
params[key] = param
}
}
return params
}
func (c *TerraformPluginSDKConnector) applyHCLParserToParam(sc *schema.Schema, param any) any { //nolint:gocyclo
if param == nil {
return param
}
switch sc.Type { //nolint:exhaustive
case schema.TypeMap:
if sc.Elem == nil {
return param
}
pmap, okParam := param.(map[string]any)
// TypeMap only supports schema in Elem
if mapSchema, ok := sc.Elem.(*schema.Schema); ok && okParam {
for pk, pv := range pmap {
pmap[pk] = c.applyHCLParserToParam(mapSchema, pv)
}
return pmap
}
case schema.TypeSet, schema.TypeList:
if sc.Elem == nil {
return param
}
pArray, okParam := param.([]any)
if setSchema, ok := sc.Elem.(*schema.Schema); ok && okParam {
for i, p := range pArray {
pArray[i] = c.applyHCLParserToParam(setSchema, p)
}
return pArray
} else if setResource, ok := sc.Elem.(*schema.Resource); ok {
for i, p := range pArray {
if resParam, okRParam := p.(map[string]any); okRParam {
pArray[i] = c.processParamsWithHCLParser(setResource.Schema, resParam)
}
}
}
case schema.TypeString:
// For string types, check whether the value is an HCL snippet and,
// if so, parse it.
if isHCLSnippetPattern.MatchString(param.(string)) {
hclProcessedParam, err := processHCLParam(param.(string))
if err != nil {
// log the schema rather than the raw parameter value, which may be sensitive
c.logger.Debug("could not process param, returning original", "schema", sc.GoString())
} else {
param = hclProcessedParam
}
}
return param
default:
return param
}
return param
}
func (c *TerraformPluginSDKConnector) Connect(ctx context.Context, mg xpresource.Managed) (managed.ExternalClient, error) { //nolint:gocyclo
c.metricRecorder.ObserveReconcileDelay(mg.GetObjectKind().GroupVersionKind(), metrics.NameForManaged(mg))
logger := c.logger.WithValues("uid", mg.GetUID(), "name", mg.GetName(), "namespace", mg.GetNamespace(), "gvk", mg.GetObjectKind().GroupVersionKind().String())
logger.Debug("Connecting to the service provider")
start := time.Now()
ts, err := c.getTerraformSetup(ctx, c.kube, mg)
metrics.ExternalAPITime.WithLabelValues("connect").Observe(time.Since(start).Seconds())
if err != nil {
return nil, errors.Wrap(err, errGetTerraformSetup)
}
// The resource diff can later be computed via n.resourceSchema.Diff(...).
tr := mg.(resource.Terraformed)
opTracker := c.operationTrackerStore.Tracker(tr)
externalName := meta.GetExternalName(tr)
params, err := getExtendedParameters(ctx, tr, externalName, c.config, ts, c.isManagementPoliciesEnabled, c.kube)
if err != nil {
return nil, errors.Wrapf(err, "failed to get the extended parameters for resource %q", client.ObjectKeyFromObject(mg))
}
params = c.processParamsWithHCLParser(c.config.TerraformResource.Schema, params)
schemaBlock := c.config.TerraformResource.CoreConfigSchema()
rawConfig, err := schema.JSONMapToStateValue(params, schemaBlock)
if err != nil {
return nil, errors.Wrap(err, "failed to convert params JSON map to cty.Value")
}
if !opTracker.HasState() {
logger.Debug("Instance state not found in cache, reconstructing...")
tfState, err := tr.GetObservation()
if err != nil {
return nil, errors.Wrap(err, "failed to get the observation")
}
tfState, err = c.config.ApplyTFConversions(tfState, config.ToTerraform)
if err != nil {
return nil, errors.Wrap(err, "failed to run the API converters on the Terraform state")
}
copyParams := len(tfState) == 0
if err = resource.GetSensitiveParameters(ctx, &APISecretClient{kube: c.kube}, tr, tfState, tr.GetConnectionDetailsMapping()); err != nil {
return nil, errors.Wrap(err, "cannot store sensitive parameters into tfState")
}
c.config.ExternalName.SetIdentifierArgumentFn(tfState, externalName)
tfState["id"] = params["id"]
if copyParams {
tfState = copyParameters(tfState, params)
}
tfStateCtyValue, err := schema.JSONMapToStateValue(tfState, schemaBlock)
if err != nil {
return nil, errors.Wrap(err, "cannot convert JSON map to state cty.Value")
}
s, err := c.config.TerraformResource.ShimInstanceStateFromValue(tfStateCtyValue)
if err != nil {
return nil, errors.Wrap(err, "failed to convert cty.Value to terraform.InstanceState")
}
s.RawPlan = tfStateCtyValue
s.RawConfig = rawConfig
timeouts := getTimeoutParameters(c.config)
if len(timeouts) > 0 {
if s == nil {
s = &tf.InstanceState{}
}
if s.Meta == nil {
s.Meta = make(map[string]interface{})
}
s.Meta[schema.TimeoutKey] = timeouts
}
opTracker.SetTfState(s)
}
return &terraformPluginSDKExternal{
ts: ts,
resourceSchema: c.config.TerraformResource,
config: c.config,
params: params,
rawConfig: rawConfig,
logger: logger,
metricRecorder: c.metricRecorder,
opTracker: opTracker,
isManagementPoliciesEnabled: c.isManagementPoliciesEnabled,
}, nil
}
func filterInitExclusiveDiffs(tr resource.Terraformed, instanceDiff *tf.InstanceDiff) error { //nolint:gocyclo
if instanceDiff == nil || instanceDiff.Empty() {
return nil
}
paramsForProvider, err := tr.GetParameters()
if err != nil {
return errors.Wrap(err, "cannot get spec.forProvider parameters")
}
paramsInitProvider, err := tr.GetInitParameters()
if err != nil {
return errors.Wrap(err, "cannot get spec.initProvider parameters")
}
initProviderExclusiveParamKeys := getTerraformIgnoreChanges(paramsForProvider, paramsInitProvider)
for _, keyToIgnore := range initProviderExclusiveParamKeys {
for attributeKey := range instanceDiff.Attributes {
keyToIgnoreAsPrefix := fmt.Sprintf("%s.", keyToIgnore)
if keyToIgnore != attributeKey && !strings.HasPrefix(attributeKey, keyToIgnoreAsPrefix) {
continue
}
delete(instanceDiff.Attributes, attributeKey)
// TODO: tags-tags_all implementation is AWS specific.
// Consider making this logic independent of provider.
keyComponents := strings.Split(attributeKey, ".")
if keyComponents[0] != "tags" {
continue
}
keyComponents[0] = "tags_all"
tagsAllAttributeKey := strings.Join(keyComponents, ".")
delete(instanceDiff.Attributes, tagsAllAttributeKey)
}
}
// Delete length keys, such as "tags.%" (schema.TypeMap) and
// "cidrBlocks.#" (schema.TypeSet), because of two reasons:
//
// 1. Diffs are applied successfully without them, except for
// schema.TypeList.
//
// 2. If only length keys remain in the diff, after ignored
// attributes are removed above, they cause diff to be considered
// non-empty, even though it is effectively empty, therefore causing
// an infinite update loop.
for _, keyToIgnore := range initProviderExclusiveParamKeys {
keyComponents := strings.Split(keyToIgnore, ".")
if len(keyComponents) < 2 {
continue
}
// TODO: Consider locating the schema corresponding to keyToIgnore
// and checking whether it's a collection, before attempting to
// delete its length key.
for _, lengthSymbol := range []string{"%", "#"} {
keyComponents[len(keyComponents)-1] = lengthSymbol
lengthKey := strings.Join(keyComponents, ".")
delete(instanceDiff.Attributes, lengthKey)
}
// TODO: tags-tags_all implementation is AWS specific.
// Consider making this logic independent of provider.
if keyComponents[0] == "tags" {
keyComponents[0] = "tags_all"
keyComponents[len(keyComponents)-1] = "%"
lengthKey := strings.Join(keyComponents, ".")
delete(instanceDiff.Attributes, lengthKey)
}
}
return nil
}
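The filtering above removes an ignored key, every nested key flattened under it with a `.` separator, and the map/set length keys (`tags.%`, `cidrBlocks.#`) that would otherwise keep the diff non-empty and cause an update loop. A stdlib-only sketch of the prefix-and-length-key deletion over a plain attribute map; the attribute names are illustrative:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// dropIgnored deletes keyToIgnore, every flattened child key under it,
// and its "%"/"#" length keys from attrs, mirroring the deletion logic
// in filterInitExclusiveDiffs above.
func dropIgnored(attrs map[string]string, keyToIgnore string) {
	prefix := keyToIgnore + "."
	for k := range attrs {
		if k == keyToIgnore || strings.HasPrefix(k, prefix) {
			delete(attrs, k)
		}
	}
	// length keys for maps ("%") and sets/lists ("#")
	for _, sym := range []string{"%", "#"} {
		delete(attrs, keyToIgnore+"."+sym)
	}
}

func main() {
	attrs := map[string]string{
		"tags.env": "dev",
		"tags.%":   "1",
		"name":     "example",
	}
	dropIgnored(attrs, "tags")
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	fmt.Println(keys) // [name]
}
```

Deleting the length key matters because a diff containing only `tags.%` is still reported as non-empty by the SDK, even though it is effectively a no-op.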
// getTimeoutParameters returns the effective resource operation timeouts:
// Terraform schema timeouts first, overridden by any non-zero upjet
// OperationTimeouts configuration.
func getTimeoutParameters(config *config.Resource) map[string]any { //nolint:gocyclo
timeouts := make(map[string]any)
// first use the timeout overrides specified in
// the Terraform resource schema
if config.TerraformResource.Timeouts != nil {
if config.TerraformResource.Timeouts.Create != nil && *config.TerraformResource.Timeouts.Create != 0 {
timeouts[schema.TimeoutCreate] = config.TerraformResource.Timeouts.Create.Nanoseconds()
}
if config.TerraformResource.Timeouts.Update != nil && *config.TerraformResource.Timeouts.Update != 0 {
timeouts[schema.TimeoutUpdate] = config.TerraformResource.Timeouts.Update.Nanoseconds()
}
if config.TerraformResource.Timeouts.Delete != nil && *config.TerraformResource.Timeouts.Delete != 0 {
timeouts[schema.TimeoutDelete] = config.TerraformResource.Timeouts.Delete.Nanoseconds()
}
if config.TerraformResource.Timeouts.Read != nil && *config.TerraformResource.Timeouts.Read != 0 {
timeouts[schema.TimeoutRead] = config.TerraformResource.Timeouts.Read.Nanoseconds()
}
}
// then, override any Terraform defaults using any upjet
// resource configuration overrides
if config.OperationTimeouts.Create != 0 {
timeouts[schema.TimeoutCreate] = config.OperationTimeouts.Create.Nanoseconds()
}
if config.OperationTimeouts.Update != 0 {
timeouts[schema.TimeoutUpdate] = config.OperationTimeouts.Update.Nanoseconds()
}
if config.OperationTimeouts.Delete != 0 {
timeouts[schema.TimeoutDelete] = config.OperationTimeouts.Delete.Nanoseconds()
}
if config.OperationTimeouts.Read != 0 {
timeouts[schema.TimeoutRead] = config.OperationTimeouts.Read.Nanoseconds()
}
return timeouts
}
func (n *terraformPluginSDKExternal) getResourceDataDiff(tr resource.Terraformed, ctx context.Context, s *tf.InstanceState, resourceExists bool) (*tf.InstanceDiff, error) { //nolint:gocyclo
resourceConfig := tf.NewResourceConfigRaw(n.params)
instanceDiff, err := schema.InternalMap(n.config.TerraformResource.Schema).Diff(ctx, s, resourceConfig, n.config.TerraformResource.CustomizeDiff, n.ts.Meta, false)
if err != nil {
return nil, errors.Wrap(err, "failed to get *terraform.InstanceDiff")
}
// Sanitize the Identity field in the diff; leaving it set causes a
// continuous diff loop.
if instanceDiff != nil {
instanceDiff.Identity = nil
}
if n.config.TerraformCustomDiff != nil {
instanceDiff, err = n.config.TerraformCustomDiff(instanceDiff, s, resourceConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to compute the customized terraform.InstanceDiff")
}
}
if resourceExists {
if err := filterInitExclusiveDiffs(tr, instanceDiff); err != nil {
return nil, errors.Wrap(err, "failed to filter the diffs exclusive to spec.initProvider in the terraform.InstanceDiff")
}
}
if instanceDiff != nil {
v := cty.EmptyObjectVal
v, err = instanceDiff.ApplyToValue(v, n.config.TerraformResource.CoreConfigSchema())
if err != nil {
return nil, errors.Wrap(err, "cannot apply Terraform instance diff to an empty value")
}
instanceDiff.RawPlan = v
}
if instanceDiff != nil && !instanceDiff.Empty() {
n.logger.Debug("Diff detected", "instanceDiff", instanceDiff.GoString())
// Assumption: Source of truth when applying diffs, for instance on updates, is instanceDiff.Attributes.
// Setting instanceDiff.RawConfig has no effect on diff application.
instanceDiff.RawConfig = n.rawConfig
}
timeouts := getTimeoutParameters(n.config)
if len(timeouts) > 0 {
if instanceDiff == nil {
instanceDiff = tf.NewInstanceDiff()
}
if instanceDiff.Meta == nil {
instanceDiff.Meta = make(map[string]interface{})
}
instanceDiff.Meta[schema.TimeoutKey] = timeouts
}
return instanceDiff, nil
}
func (n *terraformPluginSDKExternal) Observe(ctx context.Context, mg xpresource.Managed) (managed.ExternalObservation, error) { //nolint:gocyclo
var err error
n.logger.Debug("Observing the external resource")
if meta.WasDeleted(mg) && n.opTracker.IsDeleted() {
return managed.ExternalObservation{
ResourceExists: false,
}, nil
}
start := time.Now()
newState, diag := n.resourceSchema.RefreshWithoutUpgrade(ctx, n.opTracker.GetTfState(), n.ts.Meta)
metrics.ExternalAPITime.WithLabelValues("read").Observe(time.Since(start).Seconds())
if diag != nil && diag.HasError() {
return managed.ExternalObservation{}, errors.Errorf("failed to observe the resource: %v", diag)
}
diffState := n.opTracker.GetTfState()
n.opTracker.SetTfState(newState) // TODO: missing RawConfig & RawPlan here...
resourceExists := newState != nil && newState.ID != ""
var stateValueMap map[string]any
if resourceExists {
jsonMap, stateValue, err := n.fromInstanceStateToJSONMap(newState)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot convert instance state to JSON map")
}
stateValueMap = jsonMap
newState.RawPlan = stateValue
newState.RawConfig = n.rawConfig
diffState = newState
} else if diffState != nil {
diffState.Attributes = nil
diffState.ID = ""
}
n.instanceDiff = nil
policySet := sets.New[xpv1.ManagementAction](mg.(resource.Terraformed).GetManagementPolicies()...)
observeOnlyPolicy := sets.New(xpv1.ManagementActionObserve)
isObserveOnlyPolicy := policySet.Equal(observeOnlyPolicy)
if !isObserveOnlyPolicy || !n.isManagementPoliciesEnabled {
n.instanceDiff, err = n.getResourceDataDiff(mg.(resource.Terraformed), ctx, diffState, resourceExists)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot compute the instance diff")
}
}
if n.instanceDiff == nil {
n.instanceDiff = tf.NewInstanceDiff()
}
hasDiff := !n.instanceDiff.Empty()
if !resourceExists && mg.GetDeletionTimestamp() != nil {
gvk := mg.GetObjectKind().GroupVersionKind()
metrics.DeletionTime.WithLabelValues(gvk.Group, gvk.Version, gvk.Kind).Observe(time.Since(mg.GetDeletionTimestamp().Time).Seconds())
}
var connDetails managed.ConnectionDetails
specUpdateRequired := false
if resourceExists {
if mg.GetCondition(xpv1.TypeReady).Status == corev1.ConditionUnknown ||
mg.GetCondition(xpv1.TypeReady).Status == corev1.ConditionFalse {
addTTR(mg)
}
mg.SetConditions(xpv1.Available())
// we get the connection details from the observed state before
// the conversion because the sensitive paths assume the native Terraform
// schema.
connDetails, err = resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get connection details")
}
stateValueMap, err = n.config.ApplyTFConversions(stateValueMap, config.FromTerraform)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot convert the singleton lists in the observed state value map into embedded objects")
}
buff, err := json.TFParser.Marshal(stateValueMap)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot marshal the attributes of the new state for late-initialization")
}
policyHasLateInit := policySet.HasAny(xpv1.ManagementActionLateInitialize, xpv1.ManagementActionAll)
if policyHasLateInit {
specUpdateRequired, err = mg.(resource.Terraformed).LateInitialize(buff)
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot late-initialize the managed resource")
}
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalObservation{}, errors.Errorf("could not set observation: %v", err)
}
if !hasDiff {
n.metricRecorder.SetReconcileTime(metrics.NameForManaged(mg))
}
if !specUpdateRequired {
resource.SetUpToDateCondition(mg, !hasDiff)
}
// check for an external-name change
if nameChanged, err := n.setExternalName(mg, stateValueMap); err != nil {
return managed.ExternalObservation{}, errors.Wrapf(err, "failed to set the external-name of the managed resource during observe")
} else {
specUpdateRequired = specUpdateRequired || nameChanged
}
}
return managed.ExternalObservation{
ResourceExists: resourceExists,
ResourceUpToDate: !hasDiff,
ConnectionDetails: connDetails,
ResourceLateInitialized: specUpdateRequired,
}, nil
}
// setExternalName sets the external-name annotation on the MR from the
// observed state and reports whether the external-name has changed.
func (n *terraformPluginSDKExternal) setExternalName(mg xpresource.Managed, stateValueMap map[string]interface{}) (bool, error) {
id, ok := stateValueMap["id"]
if !ok || id.(string) == "" {
return false, nil
}
newName, err := n.config.ExternalName.GetExternalNameFn(stateValueMap)
if err != nil {
return false, errors.Wrapf(err, "failed to compute the external-name from the state map of the resource with the ID %s", id)
}
oldName := meta.GetExternalName(mg)
// we have to make sure the newly set external-name is recorded
meta.SetExternalName(mg, newName)
return oldName != newName, nil
}
func (n *terraformPluginSDKExternal) Create(ctx context.Context, mg xpresource.Managed) (managed.ExternalCreation, error) { //nolint:gocyclo // easier to follow as a unit
n.logger.Debug("Creating the external resource")
start := time.Now()
newState, diag := n.resourceSchema.Apply(ctx, n.opTracker.GetTfState(), n.instanceDiff, n.ts.Meta)
metrics.ExternalAPITime.WithLabelValues("create").Observe(time.Since(start).Seconds())
if diag != nil && diag.HasError() {
// we need to store the Terraform state from the downstream create call if
// one is available, even if the diagnostics have reported errors.
// The downstream create call comprises multiple external API calls such as
// the external resource create call, expected state assertion calls
// (external resource state reads) and external resource state refresh
// calls, etc. Any of these steps can fail and if the initial
// external resource create call succeeds, then the TF plugin SDK makes the
// state (together with the TF ID associated with the external resource)
// available reporting any encountered issues in the returned diagnostics.
// If we don't record the returned state from the successful create call,
// then we may hit issues for resources whose Crossplane identifiers cannot
// be computed solely from spec parameters and provider configs, i.e.,
// those that contain a random part generated by the CSP. Please see:
// https://github.com/upbound/provider-aws/issues/1010, or
// https://github.com/upbound/provider-aws/issues/1018, which both involve
// MRs with config.IdentifierFromProvider external-name configurations.
// NOTE: The safe (and thus the proper) thing to do in this situation from
// the Crossplane provider's perspective is to set the MR's
// `crossplane.io/external-create-failed` annotation because the provider
// does not know the exact state the external resource is in and a manual
// intervention may be required. But at the time we are introducing this
// fix, we believe associating the external-resource with the MR will just
// provide a better UX although the external resource may not be in the
// expected/desired state yet. We are also planning for improvements on the
// crossplane-runtime's managed reconciler to better support upjet's async
// operations in this regard.
if !n.opTracker.HasState() { // we do not expect a previous state here but just being defensive
n.opTracker.SetTfState(newState)
}
return managed.ExternalCreation{}, errors.Errorf("failed to create the resource: %v", diag)
}
if newState == nil || newState.ID == "" {
return managed.ExternalCreation{}, errors.New("failed to read the ID of the new resource")
}
n.opTracker.SetTfState(newState)
stateValueMap, _, err := n.fromInstanceStateToJSONMap(newState)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "failed to convert instance state to map")
}
if _, err := n.setExternalName(mg, stateValueMap); err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "failed to set the external-name of the managed resource during create")
}
// we get the connection details from the observed state before
// the conversion because the sensitive paths assume the native Terraform
// schema.
conn, err := resource.GetConnectionDetails(stateValueMap, mg.(resource.Terraformed), n.config)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot get connection details")
}
stateValueMap, err = n.config.ApplyTFConversions(stateValueMap, config.FromTerraform)
if err != nil {
return managed.ExternalCreation{}, errors.Wrap(err, "cannot convert the singleton lists in the state value map of the newly created resource into embedded objects")
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalCreation{}, errors.Errorf("could not set observation: %v", err)
}
return managed.ExternalCreation{ConnectionDetails: conn}, nil
}
func (n *terraformPluginSDKExternal) assertNoForceNew() error {
if n.instanceDiff == nil {
return nil
}
for k, ad := range n.instanceDiff.Attributes {
if ad == nil {
continue
}
// TODO: use a multi-error implementation to report changes to
// all `ForceNew` arguments.
if ad.RequiresNew {
if ad.Sensitive {
return errors.Errorf("cannot change the value of the argument %q", k)
}
return errors.Errorf("cannot change the value of the argument %q from %q to %q", k, ad.Old, ad.New)
}
}
return nil
}
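assertNoForceNew refuses an in-place update when any changed attribute is marked ForceNew, and redacts the old/new values for sensitive attributes. A stdlib-only sketch of the check; `attrDiff` is a simplified stand-in for `terraform.ResourceAttrDiff`, not the real type:

```go
package main

import "fmt"

// attrDiff is a simplified stand-in for terraform.ResourceAttrDiff.
type attrDiff struct {
	Old, New    string
	RequiresNew bool
	Sensitive   bool
}

// assertNoForceNew returns an error for the first attribute whose change
// would force replacing the resource, redacting values when the
// attribute is sensitive (mirroring the method above).
func assertNoForceNew(attrs map[string]*attrDiff) error {
	for k, ad := range attrs {
		if ad == nil || !ad.RequiresNew {
			continue
		}
		if ad.Sensitive {
			return fmt.Errorf("cannot change the value of the argument %q", k)
		}
		return fmt.Errorf("cannot change the value of the argument %q from %q to %q", k, ad.Old, ad.New)
	}
	return nil
}

func main() {
	// in-place change: allowed
	fmt.Println(assertNoForceNew(map[string]*attrDiff{
		"name": {Old: "a", New: "b"},
	})) // <nil>
	// ForceNew change: rejected
	fmt.Println(assertNoForceNew(map[string]*attrDiff{
		"zone": {Old: "a", New: "b", RequiresNew: true},
	}))
}
```

The guard runs before Apply so that an update that would silently destroy and recreate the external resource is surfaced as an error instead.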
func (n *terraformPluginSDKExternal) Update(ctx context.Context, mg xpresource.Managed) (managed.ExternalUpdate, error) {
if n.config.UpdateLoopPrevention != nil {
preventResult, err := n.config.UpdateLoopPrevention.UpdateLoopPreventionFunc(n.instanceDiff, mg)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrapf(err, "failed to apply the update loop prevention function for %s", n.config.Name)
}
if preventResult != nil {
return managed.ExternalUpdate{}, errors.Errorf("update operation was blocked because of a possible update loop: %s", preventResult.Reason)
}
}
n.logger.Debug("Updating the external resource")
if err := n.assertNoForceNew(); err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "refuse to update the external resource because the following update requires replacing it")
}
start := time.Now()
newState, diag := n.resourceSchema.Apply(ctx, n.opTracker.GetTfState(), n.instanceDiff, n.ts.Meta)
metrics.ExternalAPITime.WithLabelValues("update").Observe(time.Since(start).Seconds())
if diag != nil && diag.HasError() {
return managed.ExternalUpdate{}, errors.Errorf("failed to update the resource: %v", diag)
}
n.opTracker.SetTfState(newState)
stateValueMap, _, err := n.fromInstanceStateToJSONMap(newState)
if err != nil {
return managed.ExternalUpdate{}, err
}
stateValueMap, err = n.config.ApplyTFConversions(stateValueMap, config.FromTerraform)
if err != nil {
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot convert the singleton lists for the updated resource state value map into embedded objects")
}
err = mg.(resource.Terraformed).SetObservation(stateValueMap)
if err != nil {
return managed.ExternalUpdate{}, errors.Errorf("failed to set observation: %v", err)
}
return managed.ExternalUpdate{}, nil
}
func (n *terraformPluginSDKExternal) Delete(ctx context.Context, _ xpresource.Managed) (managed.ExternalDelete, error) {
n.logger.Debug("Deleting the external resource")
if n.instanceDiff == nil {
n.instanceDiff = tf.NewInstanceDiff()
}
n.instanceDiff.Destroy = true
start := time.Now()
newState, diag := n.resourceSchema.Apply(ctx, n.opTracker.GetTfState(), n.instanceDiff, n.ts.Meta)
metrics.ExternalAPITime.WithLabelValues("delete").Observe(time.Since(start).Seconds())
if diag != nil && diag.HasError() {
return managed.ExternalDelete{}, errors.Errorf("failed to delete the resource: %v", diag)
}
n.opTracker.SetTfState(newState)
// mark the resource as logically deleted if the TF call clears the state
n.opTracker.SetDeleted(newState == nil)
return managed.ExternalDelete{}, nil
}
func (n *terraformPluginSDKExternal) Disconnect(_ context.Context) error {
return nil
}
func (n *terraformPluginSDKExternal) fromInstanceStateToJSONMap(newState *tf.InstanceState) (map[string]interface{}, cty.Value, error) {
impliedType := n.config.TerraformResource.CoreConfigSchema().ImpliedType()
attrsAsCtyValue, err := newState.AttrsAsObjectValue(impliedType)
if err != nil {
return nil, cty.NilVal, errors.Wrap(err, "could not convert attrs to cty value")
}
stateValueMap, err := schema.StateValueToJSONMap(attrsAsCtyValue, impliedType)
if err != nil {
return nil, cty.NilVal, errors.Wrap(err, "could not convert instance state value to JSON")
}
return stateValueMap, attrsAsCtyValue, nil
}


@ -0,0 +1,405 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"testing"
"time"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"github.com/crossplane/crossplane-runtime/v2/pkg/reconciler/managed"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/crossplane/crossplane-runtime/v2/pkg/test"
"github.com/google/go-cmp/cmp"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
tf "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"github.com/pkg/errors"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource/fake"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
var (
zl = zap.New(zap.UseDevMode(true))
logTest = logging.NewLogrLogger(zl.WithName("provider-aws"))
ots = NewOperationStore(logTest)
timeout = 20 * time.Minute
cfg = &config.Resource{
TerraformResource: &schema.Resource{
Timeouts: &schema.ResourceTimeout{
Create: &timeout,
Read: &timeout,
Update: &timeout,
Delete: &timeout,
},
Schema: map[string]*schema.Schema{
"name": {
Type: schema.TypeString,
Required: true,
},
"id": {
Type: schema.TypeString,
Computed: true,
Required: false,
},
"map": {
Type: schema.TypeMap,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"list": {
Type: schema.TypeList,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
},
},
ExternalName: config.IdentifierFromProvider,
Sensitive: config.Sensitive{AdditionalConnectionDetailsFn: func(attr map[string]any) (map[string][]byte, error) {
return nil, nil
}},
}
obj = fake.Terraformed{
Parameterizable: fake.Parameterizable{
Parameters: map[string]any{
"name": "example",
"map": map[string]any{
"key": "value",
},
"list": []any{"elem1", "elem2"},
},
},
Observable: fake.Observable{
Observation: map[string]any{},
},
}
)
func prepareTerraformPluginSDKExternal(r Resource, cfg *config.Resource) *terraformPluginSDKExternal {
schemaBlock := cfg.TerraformResource.CoreConfigSchema()
rawConfig, err := schema.JSONMapToStateValue(map[string]any{"name": "example"}, schemaBlock)
if err != nil {
panic(err)
}
return &terraformPluginSDKExternal{
ts: terraform.Setup{},
resourceSchema: r,
config: cfg,
params: map[string]any{
"name": "example",
},
rawConfig: rawConfig,
logger: logTest,
opTracker: NewAsyncTracker(),
}
}
type mockResource struct {
ApplyFn func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics)
RefreshWithoutUpgradeFn func(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics)
}
func (m mockResource) Apply(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return m.ApplyFn(ctx, s, d, meta)
}
func (m mockResource) RefreshWithoutUpgrade(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return m.RefreshWithoutUpgradeFn(ctx, s, meta)
}
func TestTerraformPluginSDKConnect(t *testing.T) {
type args struct {
setupFn terraform.SetupFn
cfg *config.Resource
ots *OperationTrackerStore
obj fake.Terraformed
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
setupFn: func(_ context.Context, _ client.Client, _ xpresource.Managed) (terraform.Setup, error) {
return terraform.Setup{}, nil
},
cfg: cfg,
obj: obj,
ots: ots,
},
},
"HCL": {
args: args{
setupFn: func(_ context.Context, _ client.Client, _ xpresource.Managed) (terraform.Setup, error) {
return terraform.Setup{}, nil
},
cfg: cfg,
obj: fake.Terraformed{
Parameterizable: fake.Parameterizable{
Parameters: map[string]any{
"name": " ${jsonencode({\n type = \"object\"\n })}",
"map": map[string]any{
"key": "value",
},
"list": []any{"elem1", "elem2"},
},
},
Observable: fake.Observable{
Observation: map[string]any{},
},
},
ots: ots,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
c := NewTerraformPluginSDKConnector(nil, tc.args.setupFn, tc.args.cfg, tc.args.ots, WithTerraformPluginSDKLogger(logTest))
_, err := c.Connect(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nConnect(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTerraformPluginSDKObserve(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj fake.Terraformed
}
type want struct {
obs managed.ExternalObservation
err error
}
cases := map[string]struct {
args
want
}{
"NotExists": {
args: args{
r: mockResource{
RefreshWithoutUpgradeFn: func(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return nil, nil
},
},
cfg: cfg,
obj: obj,
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: false,
ResourceUpToDate: false,
ResourceLateInitialized: false,
ConnectionDetails: nil,
Diff: "",
},
},
},
"UpToDate": {
args: args{
r: mockResource{
RefreshWithoutUpgradeFn: func(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id", Attributes: map[string]string{"name": "example"}}, nil
},
},
cfg: cfg,
obj: obj,
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: true,
ResourceLateInitialized: true,
ConnectionDetails: nil,
Diff: "",
},
},
},
"InitProvider": {
args: args{
r: mockResource{
RefreshWithoutUpgradeFn: func(ctx context.Context, s *tf.InstanceState, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id", Attributes: map[string]string{"name": "example2"}}, nil
},
},
cfg: cfg,
obj: fake.Terraformed{
Parameterizable: fake.Parameterizable{
Parameters: map[string]any{
"name": "example",
"map": map[string]any{
"key": "value",
},
"list": []any{"elem1", "elem2"},
},
InitParameters: map[string]any{
"list": []any{"elem1", "elem2", "elem3"},
},
},
Observable: fake.Observable{
Observation: map[string]any{},
},
},
},
want: want{
obs: managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: false,
ResourceLateInitialized: true,
ConnectionDetails: nil,
Diff: "",
},
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKExternal := prepareTerraformPluginSDKExternal(tc.args.r, tc.args.cfg)
observation, err := terraformPluginSDKExternal.Observe(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.obs, observation); diff != "" {
t.Errorf("\n%s\nObserve(...): -want observation, +got observation:\n", diff)
}
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nObserve(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTerraformPluginSDKCreate(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj fake.Terraformed
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Unsuccessful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return nil, nil
},
},
cfg: cfg,
obj: obj,
},
want: want{
err: errors.New("failed to read the ID of the new resource"),
},
},
"Successful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id"}, nil
},
},
cfg: cfg,
obj: obj,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKExternal := prepareTerraformPluginSDKExternal(tc.args.r, tc.args.cfg)
_, err := terraformPluginSDKExternal.Create(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nCreate(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTerraformPluginSDKUpdate(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj fake.Terraformed
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id"}, nil
},
},
cfg: cfg,
obj: obj,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKExternal := prepareTerraformPluginSDKExternal(tc.args.r, tc.args.cfg)
_, err := terraformPluginSDKExternal.Update(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nUpdate(...): -want error, +got error:\n", diff)
}
})
}
}
func TestTerraformPluginSDKDelete(t *testing.T) {
type args struct {
r Resource
cfg *config.Resource
obj fake.Terraformed
}
type want struct {
err error
}
cases := map[string]struct {
args
want
}{
"Successful": {
args: args{
r: mockResource{
ApplyFn: func(ctx context.Context, s *tf.InstanceState, d *tf.InstanceDiff, meta interface{}) (*tf.InstanceState, diag.Diagnostics) {
return &tf.InstanceState{ID: "example-id"}, nil
},
},
cfg: cfg,
obj: obj,
},
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
terraformPluginSDKExternal := prepareTerraformPluginSDKExternal(tc.args.r, tc.args.cfg)
_, err := terraformPluginSDKExternal.Delete(context.TODO(), &tc.args.obj)
if diff := cmp.Diff(tc.want.err, err, test.EquateErrors()); diff != "" {
t.Errorf("\n%s\nDelete(...): -want error, +got error:\n", diff)
}
})
}
}


@ -0,0 +1,51 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
)
const (
errRemoveTracker = "cannot remove tracker from the store"
)
// TrackerCleaner is the interface for the common finalizer of both Terraform
// plugin SDK and framework managed resources.
type TrackerCleaner interface {
RemoveTracker(obj xpresource.Object) error
}
// NewOperationTrackerFinalizer returns a new OperationTrackerFinalizer.
func NewOperationTrackerFinalizer(tc TrackerCleaner, af xpresource.Finalizer) *OperationTrackerFinalizer {
return &OperationTrackerFinalizer{
Finalizer: af,
OperationStore: tc,
}
}
// OperationTrackerFinalizer removes the operation tracker from the store and
// only then calls RemoveFinalizer of the underlying Finalizer.
type OperationTrackerFinalizer struct {
xpresource.Finalizer
OperationStore TrackerCleaner
}
// AddFinalizer to the supplied Managed resource.
func (nf *OperationTrackerFinalizer) AddFinalizer(ctx context.Context, obj xpresource.Object) error {
return nf.Finalizer.AddFinalizer(ctx, obj)
}
// RemoveFinalizer removes the operation tracker from the store before removing
// the finalizer.
func (nf *OperationTrackerFinalizer) RemoveFinalizer(ctx context.Context, obj xpresource.Object) error {
if err := nf.OperationStore.RemoveTracker(obj); err != nil {
return errors.Wrap(err, errRemoveTracker)
}
return nf.Finalizer.RemoveFinalizer(ctx, obj)
}
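The wrapper above is a plain decorator: tear down the tracker first, then delegate to the embedded finalizer. A minimal stdlib sketch of the same pattern (the `Finalizer` interface and all names here are hypothetical stand-ins, not the upjet or crossplane-runtime API):

```go
package main

import "fmt"

// Finalizer is a minimal stand-in for xpresource.Finalizer.
type Finalizer interface {
	RemoveFinalizer(name string) error
}

type baseFinalizer struct{ removed []string }

func (b *baseFinalizer) RemoveFinalizer(name string) error {
	b.removed = append(b.removed, name)
	return nil
}

// trackerFinalizer removes the tracker entry before delegating,
// mirroring OperationTrackerFinalizer.RemoveFinalizer above.
type trackerFinalizer struct {
	Finalizer
	trackers map[string]bool
}

func (t *trackerFinalizer) RemoveFinalizer(name string) error {
	// Remove the tracker first; delete is a no-op for missing keys.
	delete(t.trackers, name)
	return t.Finalizer.RemoveFinalizer(name)
}

func main() {
	base := &baseFinalizer{}
	f := &trackerFinalizer{Finalizer: base, trackers: map[string]bool{"db-1": true}}
	_ = f.RemoveFinalizer("db-1")
	fmt.Println(len(f.trackers), base.removed) // 0 [db-1]
}
```

Because the tracker removal happens before the embedded call, a failure to clean up the store blocks finalizer removal, so the resource is not released with a stale tracker left behind.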


@ -0,0 +1,129 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package handler
import (
"context"
"sync"
"time"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/workqueue"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
const NoRateLimiter = ""
// EventHandler handles Kubernetes events by queueing reconcile requests for
// objects, and also allows upjet components to queue reconcile requests
// directly.
type EventHandler struct {
innerHandler handler.EventHandler
queue workqueue.TypedRateLimitingInterface[reconcile.Request]
rateLimiterMap map[string]workqueue.TypedRateLimiter[reconcile.Request]
logger logging.Logger
mu *sync.RWMutex
}
// Option configures an option for the EventHandler.
type Option func(eventHandler *EventHandler)
// WithLogger configures the logger for the EventHandler.
func WithLogger(logger logging.Logger) Option {
return func(eventHandler *EventHandler) {
eventHandler.logger = logger
}
}
// NewEventHandler initializes a new EventHandler instance.
func NewEventHandler(opts ...Option) *EventHandler {
eh := &EventHandler{
innerHandler: &handler.EnqueueRequestForObject{},
mu: &sync.RWMutex{},
rateLimiterMap: make(map[string]workqueue.TypedRateLimiter[reconcile.Request]),
}
for _, o := range opts {
o(eh)
}
return eh
}
// RequestReconcile requeues a reconciliation request for the specified name.
// Returns true if the reconcile request was successfully queued.
func (e *EventHandler) RequestReconcile(rateLimiterName string, name types.NamespacedName, failureLimit *int) bool {
e.mu.Lock()
defer e.mu.Unlock()
if e.queue == nil {
return false
}
logger := e.logger.WithValues("name", name)
item := reconcile.Request{
NamespacedName: name,
}
var when time.Duration = 0
if rateLimiterName != NoRateLimiter {
rateLimiter := e.rateLimiterMap[rateLimiterName]
if rateLimiter == nil {
rateLimiter = workqueue.DefaultTypedControllerRateLimiter[reconcile.Request]()
e.rateLimiterMap[rateLimiterName] = rateLimiter
}
if failureLimit != nil && rateLimiter.NumRequeues(item) > *failureLimit {
logger.Info("Failure limit has been exceeded.", "failureLimit", *failureLimit, "numRequeues", rateLimiter.NumRequeues(item))
return false
}
when = rateLimiter.When(item)
}
e.queue.AddAfter(item, when)
logger.Debug("Reconcile request has been requeued.", "rateLimiterName", rateLimiterName, "when", when)
return true
}
// Forget indicates that the reconcile retries are finished for
// the specified name.
func (e *EventHandler) Forget(rateLimiterName string, name types.NamespacedName) {
e.mu.RLock()
defer e.mu.RUnlock()
rateLimiter := e.rateLimiterMap[rateLimiterName]
if rateLimiter == nil {
return
}
rateLimiter.Forget(reconcile.Request{
NamespacedName: name,
})
}
func (e *EventHandler) setQueue(limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.mu.Lock()
defer e.mu.Unlock()
if e.queue == nil {
e.queue = limitingInterface
}
}
func (e *EventHandler) Create(ctx context.Context, ev event.CreateEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Create event.", "name", ev.Object.GetName(), "namespace", ev.Object.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Create(ctx, ev, limitingInterface)
}
func (e *EventHandler) Update(ctx context.Context, ev event.UpdateEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Update event.", "name", ev.ObjectOld.GetName(), "namespace", ev.ObjectOld.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Update(ctx, ev, limitingInterface)
}
func (e *EventHandler) Delete(ctx context.Context, ev event.DeleteEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Delete event.", "name", ev.Object.GetName(), "namespace", ev.Object.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Delete(ctx, ev, limitingInterface)
}
func (e *EventHandler) Generic(ctx context.Context, ev event.GenericEvent, limitingInterface workqueue.TypedRateLimitingInterface[reconcile.Request]) {
e.setQueue(limitingInterface)
e.logger.Debug("Calling the inner handler for Generic event.", "name", ev.Object.GetName(), "namespace", ev.Object.GetNamespace(), "queueLength", limitingInterface.Len())
e.innerHandler.Generic(ctx, ev, limitingInterface)
}

pkg/controller/hcl.go

@ -0,0 +1,163 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"encoding/base64"
"fmt"
"log"
"regexp"
"unicode/utf8"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/gohcl"
"github.com/hashicorp/hcl/v2/hclparse"
ctyyaml "github.com/zclconf/go-cty-yaml"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
ctyfuncstdlib "github.com/zclconf/go-cty/cty/function/stdlib"
)
var Base64DecodeFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
AllowMarked: true,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
str, strMarks := args[0].Unmark()
s := str.AsString()
sDec, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return cty.UnknownVal(cty.String), fmt.Errorf("failed to decode base64 data %s", s)
}
if !utf8.Valid(sDec) {
log.Printf("[DEBUG] the result of decoding the provided string is not valid UTF-8: %s", s)
return cty.UnknownVal(cty.String), fmt.Errorf("the result of decoding the provided string is not valid UTF-8")
}
return cty.StringVal(string(sDec)).WithMarks(strMarks), nil
},
})
var Base64EncodeFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
return cty.StringVal(base64.StdEncoding.EncodeToString([]byte(args[0].AsString()))), nil
},
})
// evalCtx registers the known functions for HCL processing.
// Variable interpolation is not supported, as it is irrelevant for our use case.
var evalCtx = &hcl.EvalContext{
Variables: map[string]cty.Value{},
Functions: map[string]function.Function{
"abs": ctyfuncstdlib.AbsoluteFunc,
"ceil": ctyfuncstdlib.CeilFunc,
"chomp": ctyfuncstdlib.ChompFunc,
"coalescelist": ctyfuncstdlib.CoalesceListFunc,
"compact": ctyfuncstdlib.CompactFunc,
"concat": ctyfuncstdlib.ConcatFunc,
"contains": ctyfuncstdlib.ContainsFunc,
"csvdecode": ctyfuncstdlib.CSVDecodeFunc,
"distinct": ctyfuncstdlib.DistinctFunc,
"element": ctyfuncstdlib.ElementFunc,
"chunklist": ctyfuncstdlib.ChunklistFunc,
"flatten": ctyfuncstdlib.FlattenFunc,
"floor": ctyfuncstdlib.FloorFunc,
"format": ctyfuncstdlib.FormatFunc,
"formatdate": ctyfuncstdlib.FormatDateFunc,
"formatlist": ctyfuncstdlib.FormatListFunc,
"indent": ctyfuncstdlib.IndentFunc,
"join": ctyfuncstdlib.JoinFunc,
"jsondecode": ctyfuncstdlib.JSONDecodeFunc,
"jsonencode": ctyfuncstdlib.JSONEncodeFunc,
"keys": ctyfuncstdlib.KeysFunc,
"log": ctyfuncstdlib.LogFunc,
"lower": ctyfuncstdlib.LowerFunc,
"max": ctyfuncstdlib.MaxFunc,
"merge": ctyfuncstdlib.MergeFunc,
"min": ctyfuncstdlib.MinFunc,
"parseint": ctyfuncstdlib.ParseIntFunc,
"pow": ctyfuncstdlib.PowFunc,
"range": ctyfuncstdlib.RangeFunc,
"regex": ctyfuncstdlib.RegexFunc,
"regexall": ctyfuncstdlib.RegexAllFunc,
"reverse": ctyfuncstdlib.ReverseListFunc,
"setintersection": ctyfuncstdlib.SetIntersectionFunc,
"setproduct": ctyfuncstdlib.SetProductFunc,
"setsubtract": ctyfuncstdlib.SetSubtractFunc,
"setunion": ctyfuncstdlib.SetUnionFunc,
"signum": ctyfuncstdlib.SignumFunc,
"slice": ctyfuncstdlib.SliceFunc,
"sort": ctyfuncstdlib.SortFunc,
"split": ctyfuncstdlib.SplitFunc,
"strrev": ctyfuncstdlib.ReverseFunc,
"substr": ctyfuncstdlib.SubstrFunc,
"timeadd": ctyfuncstdlib.TimeAddFunc,
"title": ctyfuncstdlib.TitleFunc,
"trim": ctyfuncstdlib.TrimFunc,
"trimprefix": ctyfuncstdlib.TrimPrefixFunc,
"trimspace": ctyfuncstdlib.TrimSpaceFunc,
"trimsuffix": ctyfuncstdlib.TrimSuffixFunc,
"upper": ctyfuncstdlib.UpperFunc,
"values": ctyfuncstdlib.ValuesFunc,
"zipmap": ctyfuncstdlib.ZipmapFunc,
"yamldecode": ctyyaml.YAMLDecodeFunc,
"yamlencode": ctyyaml.YAMLEncodeFunc,
"base64encode": Base64EncodeFunc,
"base64decode": Base64DecodeFunc,
},
}
// hclBlock is the target type for decoding the specially-crafted HCL document.
// We are only interested in processing HCL snippets for a single parameter.
type hclBlock struct {
Parameter string `hcl:"parameter"`
}
// isHCLSnippetPattern is the regex pattern for determining whether
// the param is an HCL template
var isHCLSnippetPattern = regexp.MustCompile(`\$\{\w+\s*\([\S\s]*\}`)
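In practice the pattern looks for a `${func(...)}`-style interpolation anywhere in the string: a `${`, a function name, an opening parenthesis, and eventually a closing `}`. A small stdlib check of what it does and does not match (the sample strings are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as isHCLSnippetPattern above: "${", a function name,
// an opening parenthesis, then anything up to a "}".
var hclSnippet = regexp.MustCompile(`\$\{\w+\s*\([\S\s]*\}`)

func main() {
	fmt.Println(hclSnippet.MatchString(`${jsonencode({ type = "object" })}`)) // true
	fmt.Println(hclSnippet.MatchString(`${base64encode("hi")}`))             // true
	fmt.Println(hclSnippet.MatchString(`plain string, no interpolation`))    // false
	fmt.Println(hclSnippet.MatchString(`${var.name}`))                       // false: no function call
}
```

Note that a bare variable reference like `${var.name}` does not match, which is consistent with evalCtx registering functions but no variables.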
// processHCLParam processes the given string parameter, which is in HCL
// format and may include HCL functions, coming from the managed resource
// spec parameters.
// It prepares a tailored HCL snippet that consists of only a single attribute:
// parameter = theGivenParameterValueInHCLSyntax
// It only operates on string parameters, and returns a string.
// The caller should ensure that the given parameter is an HCL snippet.
func processHCLParam(param string) (string, error) {
param = fmt.Sprintf("parameter = \"%s\"\n", param)
return processHCLParamBytes([]byte(param))
}
// processHCLParamBytes parses and decodes the HCL snippet
func processHCLParamBytes(paramValueBytes []byte) (string, error) {
hclParser := hclparse.NewParser()
// The filename argument is not important here; it is only used by the
// HCL parser library as a cache key and name reference.
hclFile, diag := hclParser.ParseHCL(paramValueBytes, "dummy.hcl")
if diag.HasErrors() {
return "", diag
}
var paramWrapper hclBlock
diags := gohcl.DecodeBody(hclFile.Body, evalCtx, &paramWrapper)
if diags.HasErrors() {
return "", diags
}
return paramWrapper.Parameter, nil
}


@ -0,0 +1,62 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import "fmt"
// getTerraformIgnoreChanges returns a sorted Terraform `ignore_changes`
// lifecycle meta-argument expression by looking for differences between
// the `initProvider` and `forProvider` maps. The ignored fields are the ones
// that are present in initProvider, but not in forProvider.
// TODO: This method is copy-pasted from `pkg/resource/ignored.go` and adapted.
// Consider merging this implementation with the original one.
func getTerraformIgnoreChanges(forProvider, initProvider map[string]any) []string {
ignored := getIgnoredFieldsMap("%s", forProvider, initProvider)
return ignored
}
// TODO: This method is copy-pasted from `pkg/resource/ignored.go` and adapted.
// Consider merging this implementation with the original one.
func getIgnoredFieldsMap(format string, forProvider, initProvider map[string]any) []string {
ignored := []string{}
for k := range initProvider {
if _, ok := forProvider[k]; !ok {
ignored = append(ignored, fmt.Sprintf(format, k))
} else {
// both are of the same type, so we don't need to check forProvider's type
if _, ok = initProvider[k].(map[string]any); ok {
ignored = append(ignored, getIgnoredFieldsMap(fmt.Sprintf(format, k)+".%v", forProvider[k].(map[string]any), initProvider[k].(map[string]any))...)
}
// if it's an array, we need to check whether it's an array of maps
if _, ok = initProvider[k].([]any); ok {
ignored = append(ignored, getIgnoredFieldsArray(fmt.Sprintf(format, k), forProvider[k].([]any), initProvider[k].([]any))...)
}
}
}
return ignored
}
// TODO: This method is copy-pasted from `pkg/resource/ignored.go` and adapted.
// Consider merging this implementation with the original one.
func getIgnoredFieldsArray(format string, forProvider, initProvider []any) []string {
ignored := []string{}
for i := range initProvider {
// Construct the full field path with array index and prefix.
fieldPath := fmt.Sprintf("%s.%d", format, i)
if i < len(forProvider) {
if _, ok := initProvider[i].(map[string]any); ok {
ignored = append(ignored, getIgnoredFieldsMap(fieldPath+".%s", forProvider[i].(map[string]any), initProvider[i].(map[string]any))...)
}
if _, ok := initProvider[i].([]any); ok {
ignored = append(ignored, getIgnoredFieldsArray(fieldPath, forProvider[i].([]any), initProvider[i].([]any))...)
}
} else {
ignored = append(ignored, fieldPath)
}
}
return ignored
}
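Concretely, keys present in `initProvider` but absent from `forProvider` become `ignore_changes` paths, at any depth, including trailing array elements. A self-contained sketch of the same presence-based recursion over plain maps (simplified, with hypothetical names; the real helpers above use a format-string prefix instead):

```go
package main

import (
	"fmt"
	"sort"
)

// ignoredFields mirrors getIgnoredFieldsMap/getIgnoredFieldsArray above:
// collect the paths of keys that exist in init but not in forP, recursing
// into nested maps and slices.
func ignoredFields(prefix string, forP, init map[string]any) []string {
	var out []string
	for k, v := range init {
		path := k
		if prefix != "" {
			path = prefix + "." + k
		}
		fv, ok := forP[k]
		if !ok {
			out = append(out, path)
			continue
		}
		switch iv := v.(type) {
		case map[string]any:
			out = append(out, ignoredFields(path, fv.(map[string]any), iv)...)
		case []any:
			fs := fv.([]any)
			for i, e := range iv {
				elemPath := fmt.Sprintf("%s.%d", path, i)
				if i >= len(fs) {
					// Extra trailing elements in init are ignored wholesale.
					out = append(out, elemPath)
				} else if m, ok := e.(map[string]any); ok {
					out = append(out, ignoredFields(elemPath, fs[i].(map[string]any), m)...)
				}
			}
		}
	}
	return out
}

func main() {
	forP := map[string]any{"name": "example", "list": []any{"a"}}
	init := map[string]any{
		"name": "example",
		"list": []any{"a", "b"},
		"tags": map[string]any{"env": "dev"},
	}
	got := ignoredFields("", forP, init)
	sort.Strings(got) // map iteration order is random
	fmt.Println(got)  // [list.1 tags]
}
```

The comparison is by key presence only, never by value, matching the originals: a field that exists in both maps is not ignored even if the values differ.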


@ -1,15 +1,17 @@
/*
Copyright 2021 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"context"
"github.com/upbound/upjet/pkg/config"
"github.com/upbound/upjet/pkg/resource"
"github.com/upbound/upjet/pkg/terraform"
"k8s.io/apimachinery/pkg/types"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// TODO(muvaf): It's a bit weird that the functions return the struct of a
@ -23,9 +25,15 @@ type Workspace interface {
DestroyAsync(terraform.CallbackFn) error
Destroy(context.Context) error
Refresh(context.Context) (terraform.RefreshResult, error)
Import(context.Context, resource.Terraformed) (terraform.ImportResult, error)
Plan(context.Context) (terraform.PlanResult, error)
}
// ProviderSharer shares a native provider process with the receiver.
type ProviderSharer interface {
UseProvider(inuse terraform.InUse, attachmentConfig string)
}
// Store is where we can get access to the Terraform workspace of given resource.
type Store interface {
Workspace(ctx context.Context, c resource.SecretClient, tr resource.Terraformed, ts terraform.Setup, cfg *config.Resource) (*terraform.Workspace, error)
@ -34,6 +42,7 @@ type Store interface {
// CallbackProvider provides functions that can be called with the result of
// async operations.
type CallbackProvider interface {
Apply(name string) terraform.CallbackFn
Destroy(name string) terraform.CallbackFn
Create(name types.NamespacedName) terraform.CallbackFn
Update(name types.NamespacedName) terraform.CallbackFn
Destroy(name types.NamespacedName) terraform.CallbackFn
}


@ -0,0 +1,206 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"sync"
"sync/atomic"
"github.com/crossplane/crossplane-runtime/v2/pkg/logging"
xpresource "github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/hashicorp/terraform-plugin-go/tfprotov5"
tfsdk "github.com/hashicorp/terraform-plugin-sdk/v2/terraform"
"k8s.io/apimachinery/pkg/types"
"github.com/crossplane/upjet/v2/pkg/resource"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// AsyncTracker holds information for a managed resource to track
// asynchronous Terraform operations and the Terraform state
// (TF SDKv2 or TF Plugin Framework) of the external resource.
//
// The typical usage is to instantiate an AsyncTracker for a managed resource
// and store it in a global OperationTrackerStore, to carry information between
// reconciliation scopes.
//
// When an asynchronous Terraform operation is started for the resource
// in a reconciliation (e.g. with a goroutine), consumers can mark the
// operation start on the LastOperation field, then access the operation
// status in the forthcoming reconciliation cycles and act upon it
// (e.g. hold further actions while there is an ongoing operation, mark the
// end when the underlying Terraform operation is completed, save the
// resulting Terraform state, etc.).
//
// When utilized without the LastOperation usage, it can act as a Terraform
// state cache for synchronous reconciliations.
type AsyncTracker struct {
// LastOperation holds information about the most recent operation.
// Consumers are responsible for managing the last operation by starting,
// ending and flushing it when done with processing the results.
// Designed to allow only one ongoing operation at a given time.
LastOperation *terraform.Operation
logger logging.Logger
mu *sync.Mutex
// TF Plugin SDKv2 instance state for TF Plugin SDKv2-based resources
tfState *tfsdk.InstanceState
// TF Plugin Framework instance state for TF Plugin Framework-based resources
fwState *tfprotov5.DynamicValue
// The lifecycles of certain external resources are bound to a parent
// resource's lifecycle, and they cannot be deleted without actually deleting
// the owning external resource (e.g., a database resource as the parent
// resource and a database configuration resource whose lifecycle is bound
// to it). For such resources, Terraform still removes their state
// after a successful delete call, either by resetting some defaults in
// the parent resource or by a no-op. We logically mark such resources as
// deleted after a successful delete call so that the next observe can
// tell the managed reconciler that the resource no longer "exists".
isDeleted atomic.Bool
}
type AsyncTrackerOption func(manager *AsyncTracker)
// WithAsyncTrackerLogger sets the logger of AsyncTracker.
func WithAsyncTrackerLogger(l logging.Logger) AsyncTrackerOption {
return func(w *AsyncTracker) {
w.logger = l
}
}
// NewAsyncTracker initializes an AsyncTracker with given options
func NewAsyncTracker(opts ...AsyncTrackerOption) *AsyncTracker {
w := &AsyncTracker{
LastOperation: &terraform.Operation{},
logger: logging.NewNopLogger(),
mu: &sync.Mutex{},
}
for _, f := range opts {
f(w)
}
return w
}
// GetTfState returns the stored Terraform Plugin SDKv2 InstanceState for
// SDKv2 Terraform resources
// MUST be only used for SDKv2 resources.
func (a *AsyncTracker) GetTfState() *tfsdk.InstanceState {
a.mu.Lock()
defer a.mu.Unlock()
return a.tfState
}
// HasState returns whether the AsyncTracker has a SDKv2 state stored.
// MUST be only used for SDKv2 resources.
func (a *AsyncTracker) HasState() bool {
a.mu.Lock()
defer a.mu.Unlock()
return a.tfState != nil && a.tfState.ID != ""
}
// SetTfState stores the given SDKv2 Terraform InstanceState into
// the AsyncTracker
// MUST be only used for SDKv2 resources.
func (a *AsyncTracker) SetTfState(state *tfsdk.InstanceState) {
a.mu.Lock()
defer a.mu.Unlock()
a.tfState = state
}
// GetTfID returns the Terraform ID of the external resource currently
// stored in this AsyncTracker's SDKv2 instance state.
// MUST be only used for SDKv2 resources.
func (a *AsyncTracker) GetTfID() string {
a.mu.Lock()
defer a.mu.Unlock()
if a.tfState == nil {
return ""
}
return a.tfState.ID
}
// IsDeleted returns whether the associated external resource
// has logically been deleted.
func (a *AsyncTracker) IsDeleted() bool {
return a.isDeleted.Load()
}
// SetDeleted sets the logical deletion status of
// the associated external resource.
func (a *AsyncTracker) SetDeleted(deleted bool) {
a.isDeleted.Store(deleted)
}
// GetFrameworkTFState returns the stored Terraform Plugin Framework external
// resource state in this AsyncTracker as *tfprotov5.DynamicValue
// MUST be used only for Terraform Plugin Framework resources
func (a *AsyncTracker) GetFrameworkTFState() *tfprotov5.DynamicValue {
a.mu.Lock()
defer a.mu.Unlock()
return a.fwState
}
// HasFrameworkTFState returns whether this AsyncTracker has a
// Terraform Plugin Framework state stored.
// MUST be used only for Terraform Plugin Framework resources
func (a *AsyncTracker) HasFrameworkTFState() bool {
a.mu.Lock()
defer a.mu.Unlock()
return a.fwState != nil
}
// SetFrameworkTFState stores the given *tfprotov5.DynamicValue Terraform Plugin Framework external
// resource state into this AsyncTracker's fwstate
// MUST be used only for Terraform Plugin Framework resources
func (a *AsyncTracker) SetFrameworkTFState(state *tfprotov5.DynamicValue) {
a.mu.Lock()
defer a.mu.Unlock()
a.fwState = state
}
// OperationTrackerStore stores the AsyncTracker instances associated with
// managed resource instances.
type OperationTrackerStore struct {
store map[types.UID]*AsyncTracker
logger logging.Logger
mu *sync.Mutex
}
// NewOperationStore returns a new OperationTrackerStore instance
func NewOperationStore(l logging.Logger) *OperationTrackerStore {
ops := &OperationTrackerStore{
store: map[types.UID]*AsyncTracker{},
logger: l,
mu: &sync.Mutex{},
}
return ops
}
// Tracker returns the associated *AsyncTracker stored in this
// OperationTrackerStore for the given managed resource.
// If no tracker was stored previously, a new AsyncTracker is created and
// stored for the specified managed resource. Subsequent calls with the same
// managed resource will return the previously instantiated and stored
// AsyncTracker for that managed resource.
func (ops *OperationTrackerStore) Tracker(tr resource.Terraformed) *AsyncTracker {
ops.mu.Lock()
defer ops.mu.Unlock()
tracker, ok := ops.store[tr.GetUID()]
if !ok {
l := ops.logger.WithValues("trackerUID", tr.GetUID(), "resourceName", tr.GetName(), "resourceNamespace", tr.GetNamespace(), "gvk", tr.GetObjectKind().GroupVersionKind().String())
ops.store[tr.GetUID()] = NewAsyncTracker(WithAsyncTrackerLogger(l))
tracker = ops.store[tr.GetUID()]
}
return tracker
}
// RemoveTracker will remove the stored AsyncTracker of the given managed
// resource from this OperationTrackerStore.
func (ops *OperationTrackerStore) RemoveTracker(obj xpresource.Object) error {
ops.mu.Lock()
defer ops.mu.Unlock()
delete(ops.store, obj.GetUID())
return nil
}
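Tracker's get-or-create semantics can be sketched without the upjet types (`tracker` and the string UID key are stand-ins for `AsyncTracker` and `types.UID`):

```go
package main

import (
	"fmt"
	"sync"
)

// tracker is an illustrative stand-in for upjet's AsyncTracker.
type tracker struct{ uid string }

// trackerStore sketches OperationTrackerStore's behavior: one tracker per
// resource UID, created lazily under a single mutex so concurrent callers
// for the same resource share one tracker instance.
type trackerStore struct {
	mu    sync.Mutex
	store map[string]*tracker
}

func newTrackerStore() *trackerStore {
	return &trackerStore{store: map[string]*tracker{}}
}

// Tracker returns the stored tracker for uid, creating one if needed.
func (s *trackerStore) Tracker(uid string) *tracker {
	s.mu.Lock()
	defer s.mu.Unlock()
	t, ok := s.store[uid]
	if !ok {
		t = &tracker{uid: uid}
		s.store[uid] = t
	}
	return t
}

// Remove drops the stored tracker for uid.
func (s *trackerStore) Remove(uid string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.store, uid)
}

func main() {
	s := newTrackerStore()
	a := s.Tracker("uid-1")
	b := s.Tracker("uid-1")
	fmt.Println(a == b) // same instance on subsequent calls
}
```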


@ -1,15 +1,16 @@
/*
Copyright 2022 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package controller
import (
"github.com/crossplane/crossplane-runtime/pkg/controller"
"k8s.io/apimachinery/pkg/runtime/schema"
"time"
"github.com/upbound/upjet/pkg/config"
"github.com/upbound/upjet/pkg/terraform"
"github.com/crossplane/crossplane-runtime/v2/pkg/controller"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/terraform"
)
// Options contains the configuration options for a given Upjet controller instance.
@ -25,12 +26,17 @@ type Options struct {
// instance should use.
WorkspaceStore *terraform.WorkspaceStore
OperationTrackerStore *OperationTrackerStore
// SetupFn contains the provider-specific initialization logic, such as
// preparing the auth token for Terraform CLI.
SetupFn terraform.SetupFn
// SecretStoreConfigGVK is the GroupVersionKind for the Secret StoreConfig
// resource. Setting this enables External Secret Stores for the controller
// by adding connection.DetailsManager as a ConnectionPublisher.
SecretStoreConfigGVK *schema.GroupVersionKind
// PollJitter adds the specified jitter to the configured reconcile period
// of the up-to-date resources in managed.Reconciler.
PollJitter time.Duration
// StartWebhooks enables starting of the conversion webhooks by the
// provider's controllerruntime.Manager.
StartWebhooks bool
}


@ -0,0 +1,179 @@
// SPDX-FileCopyrightText: 2024 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package conversion
import (
"bytes"
"fmt"
"io"
"log"
"os"
"path/filepath"
"strings"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
k8sschema "k8s.io/apimachinery/pkg/runtime/schema"
kyaml "k8s.io/apimachinery/pkg/util/yaml"
"sigs.k8s.io/yaml"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/config/conversion"
)
// ConvertSingletonListToEmbeddedObject generates example manifests for the
// APIs whose singleton lists have been converted to embedded objects in
// their new API versions. All manifests under `startPath` are scanned, and
// the license header at `licenseHeaderPath` is prepended to the converted
// example manifests.
func ConvertSingletonListToEmbeddedObject(pc *config.Provider, startPath, licenseHeaderPath string) error {
resourceRegistry := prepareResourceRegistry(pc)
var license string
var lErr error
if licenseHeaderPath != "" {
license, lErr = getLicenseHeader(licenseHeaderPath)
if lErr != nil {
return errors.Wrap(lErr, "failed to get license header")
}
}
err := filepath.Walk(startPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
return errors.Wrapf(err, "walk failed: %s", startPath)
}
var convertedFileContent string
if !info.IsDir() && strings.HasSuffix(info.Name(), ".yaml") {
log.Printf("Converting: %s\n", path)
content, err := os.ReadFile(filepath.Clean(path))
if err != nil {
return errors.Wrapf(err, "failed to read the %s file", path)
}
examples, err := decodeExamples(string(content))
if err != nil {
return errors.Wrap(err, "failed to decode examples")
}
// guard against empty manifest files before indexing examples[0]
if len(examples) == 0 {
return nil
}
rootResource := resourceRegistry[fmt.Sprintf("%s/%s", examples[0].GroupVersionKind().Kind, examples[0].GroupVersionKind().Group)]
if rootResource == nil {
log.Printf("Warning: Skipping %s because the corresponding resource could not be found in the provider", path)
return nil
}
newPath := strings.ReplaceAll(path, examples[0].GroupVersionKind().Version, rootResource.Version)
if path == newPath {
return nil
}
annotationValue := strings.ToLower(fmt.Sprintf("%s/%s/%s", rootResource.ShortGroup, rootResource.Version, rootResource.Kind))
for _, e := range examples {
if resource, ok := resourceRegistry[fmt.Sprintf("%s/%s", e.GroupVersionKind().Kind, e.GroupVersionKind().Group)]; ok {
conversionPaths := resource.CRDListConversionPaths()
if conversionPaths != nil && e.GroupVersionKind().Version != resource.Version {
for i, cp := range conversionPaths {
// Here, only the `forProvider` field of the manifests is
// converted, assuming the `initProvider` field is empty in
// the spec.
conversionPaths[i] = "spec.forProvider." + cp
}
converted, err := conversion.Convert(e.Object, conversionPaths, conversion.ToEmbeddedObject, nil)
if err != nil {
return errors.Wrapf(err, "failed to convert example to embedded object in manifest %s", path)
}
e.Object = converted
e.SetGroupVersionKind(k8sschema.GroupVersionKind{
Group: e.GroupVersionKind().Group,
Version: resource.Version,
Kind: e.GetKind(),
})
}
annotations := e.GetAnnotations()
if annotations == nil {
annotations = make(map[string]string)
log.Printf("Missing annotations: %s", path)
}
annotations["meta.upbound.io/example-id"] = annotationValue
e.SetAnnotations(annotations)
}
}
convertedFileContent = license + "\n\n"
if err := writeExampleContent(path, convertedFileContent, examples, newPath); err != nil {
return errors.Wrap(err, "failed to write example content")
}
}
return nil
})
if err != nil {
log.Printf("Error walking the path %q: %v\n", startPath, err)
}
return nil
}
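The heart of the conversion, replacing a singleton list with its sole element at a field path, can be sketched without `conversion.Convert`'s wildcard and multi-path handling (`toEmbeddedObject` is an illustrative stand-in, not upjet's API):

```go
package main

import "fmt"

// toEmbeddedObject replaces a single-element list at the given field path
// with its sole element. The real conversion.Convert handles wildcard and
// multiple paths; this minimal sketch walks plain field paths only.
func toEmbeddedObject(obj map[string]any, path []string) {
	for _, k := range path[:len(path)-1] {
		m, ok := obj[k].(map[string]any)
		if !ok {
			return // path does not exist; nothing to convert
		}
		obj = m
	}
	last := path[len(path)-1]
	if l, ok := obj[last].([]any); ok && len(l) == 1 {
		obj[last] = l[0] // embed the singleton list's element
	}
}

func main() {
	manifest := map[string]any{
		"spec": map[string]any{
			"forProvider": map[string]any{
				"networkInterface": []any{map[string]any{"subnetId": "s-1"}},
			},
		},
	}
	toEmbeddedObject(manifest, []string{"spec", "forProvider", "networkInterface"})
	fp := manifest["spec"].(map[string]any)["forProvider"].(map[string]any)
	fmt.Println(fp["networkInterface"])
}
```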
func writeExampleContent(path string, convertedFileContent string, examples []*unstructured.Unstructured, newPath string) error {
for i, e := range examples {
var convertedData []byte
e := e
convertedData, err := yaml.Marshal(&e)
if err != nil {
return errors.Wrap(err, "failed to marshal example to yaml")
}
if i == len(examples)-1 {
convertedFileContent += string(convertedData)
} else {
convertedFileContent += string(convertedData) + "\n---\n\n"
}
}
dir := filepath.Dir(newPath)
// Create all necessary directories if they do not exist
if err := os.MkdirAll(dir, os.ModePerm); err != nil {
return errors.Wrap(err, "failed to create directory")
}
f, err := os.Create(filepath.Clean(newPath))
if err != nil {
return errors.Wrap(err, "failed to create file")
}
if _, err := f.WriteString(convertedFileContent); err != nil {
return errors.Wrap(err, "failed to write to file")
}
log.Printf("Converted: %s\n", path)
return nil
}
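The separator handling above special-cases the last document; `strings.Join` expresses the same "no trailing `---`" rule more directly (`joinManifests` is a sketch, not part of upjet):

```go
package main

import (
	"fmt"
	"strings"
)

// joinManifests joins YAML documents with the "---" separator used in
// writeExampleContent, without emitting a dangling trailing separator
// (the case StoreExamples otherwise trims with bytes.TrimSuffix).
func joinManifests(docs []string) string {
	return strings.Join(docs, "\n---\n\n")
}

func main() {
	out := joinManifests([]string{"kind: A", "kind: B"})
	fmt.Println(strings.HasSuffix(out, "kind: B")) // no trailing separator
}
```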
func getLicenseHeader(licensePath string) (string, error) {
licenseData, err := os.ReadFile(licensePath)
if err != nil {
return "", errors.Wrapf(err, "failed to read license file: %s", licensePath)
}
return string(licenseData), nil
}
func prepareResourceRegistry(pc *config.Provider) map[string]*config.Resource {
reg := map[string]*config.Resource{}
for _, r := range pc.Resources {
reg[fmt.Sprintf("%s/%s.%s", r.Kind, r.ShortGroup, pc.RootGroup)] = r
}
return reg
}
func decodeExamples(content string) ([]*unstructured.Unstructured, error) {
var manifests []*unstructured.Unstructured
decoder := kyaml.NewYAMLOrJSONDecoder(bytes.NewBufferString(content), 1024)
for {
u := &unstructured.Unstructured{}
if err := decoder.Decode(&u); err != nil {
if errors.Is(err, io.EOF) {
break
}
return nil, errors.Wrap(err, "cannot decode manifest")
}
if u != nil {
manifests = append(manifests, u)
}
}
return manifests, nil
}
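decodeExamples leans on kyaml's YAMLOrJSONDecoder to split the stream on standalone `---` lines before parsing each document; a dependency-free sketch of just that splitting step (parsing omitted):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// splitYAMLDocs splits a multi-document YAML stream on standalone "---"
// separator lines, dropping empty documents. The real YAMLOrJSONDecoder
// also accepts JSON and decodes each document; this stand-in only splits.
func splitYAMLDocs(content string) []string {
	var docs []string
	var b strings.Builder
	flush := func() {
		if s := strings.TrimSpace(b.String()); s != "" {
			docs = append(docs, s)
		}
		b.Reset()
	}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		if strings.TrimSpace(sc.Text()) == "---" {
			flush()
			continue
		}
		b.WriteString(sc.Text())
		b.WriteString("\n")
	}
	flush()
	return docs
}

func main() {
	docs := splitYAMLDocs("kind: A\n---\nkind: B\n")
	fmt.Println(len(docs))
}
```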


@ -1,6 +1,6 @@
/*
Copyright 2022 Upbound Inc.
*/
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package examples
@ -8,24 +8,23 @@ import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"github.com/crossplane/crossplane-runtime/pkg/fieldpath"
xpmeta "github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/v2/pkg/fieldpath"
xpmeta "github.com/crossplane/crossplane-runtime/v2/pkg/meta"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/pkg/errors"
"sigs.k8s.io/yaml"
"github.com/upbound/upjet/pkg/config"
"github.com/upbound/upjet/pkg/registry/reference"
"github.com/upbound/upjet/pkg/resource/json"
tjtypes "github.com/upbound/upjet/pkg/types"
"github.com/upbound/upjet/pkg/types/name"
"github.com/crossplane/upjet/v2/pkg/config"
"github.com/crossplane/upjet/v2/pkg/registry/reference"
"github.com/crossplane/upjet/v2/pkg/resource/json"
tjtypes "github.com/crossplane/upjet/v2/pkg/types"
"github.com/crossplane/upjet/v2/pkg/types/name"
)
var (
@ -43,27 +42,53 @@ const (
// Generator generates example manifests for Terraform resources under examples-generated.
type Generator struct {
reference.Injector
rootDir string
exampleDir string
configResources map[string]*config.Resource
resources map[string]*reference.PavedWithManifest
exampleNamespace string
localSecretRefs bool
}
type GeneratorOption func(*Generator)
// WithLocalSecretRefs configures the example generator to
// generate examples with local secret references,
// i.e. no namespace specified.
func WithLocalSecretRefs() GeneratorOption {
return func(g *Generator) {
g.localSecretRefs = true
}
}
// WithNamespacedExamples configures the example generator to
// generate examples in the default namespace.
func WithNamespacedExamples() GeneratorOption {
return func(g *Generator) {
g.exampleNamespace = defaultNamespace
}
}
// NewGenerator returns a configured Generator
func NewGenerator(rootDir, modulePath, shortName string, configResources map[string]*config.Resource) *Generator {
return &Generator{
func NewGenerator(exampleDir, apisModulePath, shortName string, configResources map[string]*config.Resource, opts ...GeneratorOption) *Generator {
g := &Generator{
Injector: reference.Injector{
ModulePath: modulePath,
ModulePath: apisModulePath,
ProviderShortName: shortName,
},
rootDir: rootDir,
exampleDir: exampleDir,
configResources: configResources,
resources: make(map[string]*reference.PavedWithManifest),
}
for _, opt := range opts {
opt(g)
}
return g
}
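The GeneratorOption helpers above are the standard functional-options pattern; a minimal self-contained version (the `generator` type here is illustrative, not upjet's):

```go
package main

import "fmt"

// generator is an illustrative stand-in for the example Generator.
type generator struct {
	exampleNamespace string
	localSecretRefs  bool
}

// option mutates a generator during construction, mirroring GeneratorOption.
type option func(*generator)

func withNamespace(ns string) option {
	return func(g *generator) { g.exampleNamespace = ns }
}

func withLocalSecretRefs() option {
	return func(g *generator) { g.localSecretRefs = true }
}

// newGenerator applies each option to a zero-valued generator.
func newGenerator(opts ...option) *generator {
	g := &generator{}
	for _, o := range opts {
		o(g)
	}
	return g
}

func main() {
	g := newGenerator(withNamespace("default"), withLocalSecretRefs())
	fmt.Println(g.exampleNamespace, g.localSecretRefs)
}
```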
// StoreExamples stores the generated example manifests under examples-generated in
// their respective API groups.
func (eg *Generator) StoreExamples() error { // nolint:gocyclo
func (eg *Generator) StoreExamples() error { //nolint:gocyclo
for rn, pm := range eg.resources {
manifestDir := filepath.Dir(pm.ManifestPath)
if err := os.MkdirAll(manifestDir, 0750); err != nil {
@ -99,24 +124,27 @@ func (eg *Generator) StoreExamples() error { // nolint:gocyclo
// e.g. meta.upbound.io/example-id: ec2/v1beta1/instance
eGroup := fmt.Sprintf("%s/%s/%s", strings.ToLower(r.ShortGroup), r.Version, strings.ToLower(r.Kind))
pmd := paveCRManifest(exampleParams, dr.Config,
reference.NewRefPartsFromResourceName(dn).ExampleName, dr.Group, dr.Version, eGroup)
reference.NewRefPartsFromResourceName(dn).ExampleName, dr.Group, dr.Version, eGroup, eg.exampleNamespace, eg.localSecretRefs)
if err := eg.writeManifest(&buff, pmd, context); err != nil {
return errors.Wrapf(err, "cannot store example manifest for %s dependency: %s", rn, dn)
}
}
}
newBuff := bytes.TrimSuffix(buff.Bytes(), []byte("\n---\n\n"))
// no sensitive info in the example manifest
if err := ioutil.WriteFile(pm.ManifestPath, buff.Bytes(), 0600); err != nil {
if err := os.WriteFile(pm.ManifestPath, newBuff, 0600); err != nil {
return errors.Wrapf(err, "cannot write example manifest file %s for resource %s", pm.ManifestPath, rn)
}
}
return nil
}
func paveCRManifest(exampleParams map[string]any, r *config.Resource, eName, group, version, eGroup string) *reference.PavedWithManifest {
func paveCRManifest(exampleParams map[string]any, r *config.Resource, eName, group, version, eGroup, namespace string, localSecretRefs bool) *reference.PavedWithManifest {
delete(exampleParams, "depends_on")
delete(exampleParams, "lifecycle")
transformFields(r, exampleParams, r.ExternalName.OmittedFields, "")
transformFields(r, exampleParams, r.ExternalName.OmittedFields, "", localSecretRefs)
metadata := map[string]any{
"labels": map[string]string{
labelExampleName: eName,
@ -125,6 +153,9 @@ func paveCRManifest(exampleParams map[string]any, r *config.Resource, eName, gro
annotationExampleGroup: eGroup,
},
}
if namespace != "" {
metadata["namespace"] = namespace
}
example := map[string]any{
"apiVersion": fmt.Sprintf("%s/%s", group, version),
"kind": r.Kind,
@ -182,8 +213,8 @@ func (eg *Generator) Generate(group, version string, r *config.Resource) error {
groupPrefix := strings.ToLower(strings.Split(group, ".")[0])
// e.g. gvk = ec2/v1beta1/instance
gvk := fmt.Sprintf("%s/%s/%s", groupPrefix, version, strings.ToLower(r.Kind))
pm := paveCRManifest(rm.Examples[0].Paved.UnstructuredContent(), r, rm.Examples[0].Name, group, version, gvk)
manifestDir := filepath.Join(eg.rootDir, "examples-generated", groupPrefix)
pm := paveCRManifest(rm.Examples[0].Paved.UnstructuredContent(), r, rm.Examples[0].Name, group, version, gvk, eg.exampleNamespace, eg.localSecretRefs)
manifestDir := filepath.Join(eg.exampleDir, groupPrefix, r.Version)
pm.ManifestPath = filepath.Join(manifestDir, fmt.Sprintf("%s.yaml", strings.ToLower(r.Kind)))
eg.resources[fmt.Sprintf("%s.%s", r.Name, reference.Wildcard)] = pm
return nil
@ -204,7 +235,7 @@ func isStatus(r *config.Resource, attr string) bool {
return tjtypes.IsObservation(s)
}
func transformFields(r *config.Resource, params map[string]any, omittedFields []string, namePrefix string) { // nolint:gocyclo
func transformFields(r *config.Resource, params map[string]any, omittedFields []string, namePrefix string, localSecretRefs bool) { //nolint:gocyclo
for n := range params {
hName := getHierarchicalName(namePrefix, n)
if isStatus(r, hName) {
@ -222,7 +253,7 @@ func transformFields(r *config.Resource, params map[string]any, omittedFields []
for n, v := range params {
switch pT := v.(type) {
case map[string]any:
transformFields(r, pT, omittedFields, getHierarchicalName(namePrefix, n))
transformFields(r, pT, omittedFields, getHierarchicalName(namePrefix, n), localSecretRefs)
case []any:
for _, e := range pT {
@ -230,7 +261,7 @@ func transformFields(r *config.Resource, params map[string]any, omittedFields []
if !ok {
continue
}
transformFields(r, eM, omittedFields, getHierarchicalName(namePrefix, n))
transformFields(r, eM, omittedFields, getHierarchicalName(namePrefix, n), localSecretRefs)
}
}
}
@ -248,11 +279,14 @@ func transformFields(r *config.Resource, params map[string]any, omittedFields []
switch {
case sch.Sensitive:
secretName, secretKey := getSecretRef(v)
params[fn.LowerCamelComputed+"SecretRef"] = getRefField(v, map[string]any{
ref := map[string]any{
"name": secretName,
"namespace": defaultNamespace,
"key": secretKey,
})
}
if !localSecretRefs {
ref["namespace"] = defaultNamespace
}
params[fn.LowerCamelComputed+"SecretRef"] = getRefField(v, ref)
case r.References[fieldPath] != config.Reference{}:
switch v.(type) {
case []any:
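The sensitive-field branch above builds a secret reference whose namespace is omitted when local references are requested; a stand-alone sketch (`buildSecretRef` and its `defaultNS` parameter are illustrative, upjet builds the map inline with its `defaultNamespace` constant):

```go
package main

import "fmt"

// buildSecretRef mirrors the sensitive-field branch: the reference always
// carries the secret name and key, and a namespace only when local
// references are not requested (i.e. for cluster-scoped examples).
func buildSecretRef(name, key, defaultNS string, local bool) map[string]any {
	ref := map[string]any{"name": name, "key": key}
	if !local {
		ref["namespace"] = defaultNS
	}
	return ref
}

func main() {
	_, hasNS := buildSecretRef("creds", "password", "crossplane-system", true)["namespace"]
	fmt.Println(hasNS) // local refs omit the namespace
}
```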


@ -1,16 +1,6 @@
// Copyright 2021 Upbound Inc.
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// SPDX-License-Identifier: Apache-2.0
//go:build generate
// +build generate

pkg/metrics/metrics.go (new file, 200 lines)

@ -0,0 +1,200 @@
// SPDX-FileCopyrightText: 2023 The Crossplane Authors <https://crossplane.io>
//
// SPDX-License-Identifier: Apache-2.0
package metrics
import (
"context"
"fmt"
"sync"
"time"
"github.com/crossplane/crossplane-runtime/v2/pkg/resource"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/tools/cache"
"sigs.k8s.io/controller-runtime/pkg/cluster"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/metrics"
)
const (
promNSUpjet = "upjet"
promSysTF = "terraform"
promSysResource = "resource"
)
var (
// CLITime is the Terraform CLI execution times histogram.
CLITime = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Namespace: promNSUpjet,
Subsystem: promSysTF,
Name: "cli_duration",
Help: "Measures in seconds how long it takes a Terraform CLI invocation to complete",
Buckets: []float64{1.0, 3, 5, 10, 15, 30, 60, 120, 300},
}, []string{"subcommand", "mode"})
// ExternalAPITime is the SDK processing times histogram.
ExternalAPITime = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Namespace: promNSUpjet,
Subsystem: promSysResource,
Name: "ext_api_duration",
Help: "Measures in seconds how long it takes a Cloud SDK call to complete",
Buckets: []float64{1, 5, 10, 15, 30, 60, 120, 300, 600, 1800, 3600},
}, []string{"operation"})
// ExternalAPICalls is a counter metric of the number of external
// API calls. "service" and "operation" labels could be used to
// classify calls into a two-level hierarchy, in which calls are
// "operations" that belong to a "service". Users should beware of
// performance implications of high cardinality that could occur
// when there are many services and operations. See:
// https://prometheus.io/docs/practices/naming/#labels
ExternalAPICalls = prometheus.NewCounterVec(prometheus.CounterOpts{
Namespace: promNSUpjet,
Subsystem: promSysResource,
Name: "external_api_calls_total",
Help: "The number of external API calls.",
}, []string{"service", "operation"})
// DeletionTime is the histogram metric for collecting statistics on the
// intervals between the deletion timestamp and the moment when
// the resource is observed to be missing (actually deleted).
DeletionTime = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Namespace: promNSUpjet,
Subsystem: promSysResource,
Name: "deletion_seconds",
Help: "Measures in seconds how long it takes for a resource to be deleted",
Buckets: []float64{1, 5, 10, 15, 30, 60, 120, 300, 600, 1800, 3600},
}, []string{"group", "version", "kind"})
// ReconcileDelay is the histogram metric for collecting statistics on the
// delays between when the expected reconciles of an up-to-date resource
// should happen and when the resource is actually reconciled. Only
// delays from the expected reconcile times are considered.
ReconcileDelay = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Namespace: promNSUpjet,
Subsystem: promSysResource,
Name: "reconcile_delay_seconds",
Help: "Measures in seconds how long the reconciles for a resource have been delayed from the configured poll periods",
Buckets: []float64{1, 5, 10, 15, 30, 60, 120, 300, 600, 1800, 3600},
}, []string{"group", "version", "kind"})
// CLIExecutions are the active number of terraform CLI invocations.
CLIExecutions = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: promNSUpjet,
Subsystem: promSysTF,
Name: "active_cli_invocations",
Help: "The number of active (running) Terraform CLI invocations",
}, []string{"subcommand", "mode"})
// TFProcesses are the active number of
// terraform CLI & Terraform provider processes running.
TFProcesses = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: promNSUpjet,
Subsystem: promSysTF,
Name: "running_processes",
Help: "The number of running Terraform CLI and Terraform provider processes",
}, []string{"type"})
// TTRMeasurements are the time-to-readiness measurements for
// the managed resources.
TTRMeasurements = prometheus.NewHistogramVec(prometheus.HistogramOpts{
Namespace: promNSUpjet,
Subsystem: promSysResource,
Name: "ttr",
Help: "Measures in seconds the time-to-readiness (TTR) for managed resources",
Buckets: []float64{1, 5, 10, 15, 30, 60, 120, 300, 600, 1800, 3600},
}, []string{"group", "version", "kind"})
)
var _ manager.Runnable = &MetricRecorder{}
// MetricRecorder records the expected reconcile times of the managed
// resources of a given GVK and, as a manager.Runnable, cleans up the
// recorded observations when resources are deleted.
type MetricRecorder struct {
observations sync.Map
gvk schema.GroupVersionKind
cluster cluster.Cluster
pollInterval time.Duration
}
// Observations holds the expected reconcile time of a managed resource
// and whether its reconcile delay is still to be observed.
type Observations struct {
expectedReconcileTime *time.Time
observeReconcileDelay bool
}
// NameForManaged returns the observation key of a managed resource:
// its name for cluster-scoped resources, or "namespace/name" for
// namespaced ones.
func NameForManaged(mg resource.Managed) string {
if mg.GetNamespace() == "" {
return mg.GetName()
}
return fmt.Sprintf("%s/%s", mg.GetNamespace(), mg.GetName())
}
// NewMetricRecorder returns a new MetricRecorder for the given GVK,
// computing the expected reconcile times from pollInterval.
func NewMetricRecorder(gvk schema.GroupVersionKind, c cluster.Cluster, pollInterval time.Duration) *MetricRecorder {
return &MetricRecorder{
gvk: gvk,
cluster: c,
pollInterval: pollInterval,
}
}
// SetReconcileTime records when the named resource is next expected to
// be reconciled and arms the reconcile-delay observation for it.
func (r *MetricRecorder) SetReconcileTime(name string) {
if r == nil {
return
}
o, ok := r.observations.Load(name)
if !ok {
o = &Observations{}
r.observations.Store(name, o)
}
t := time.Now().Add(r.pollInterval)
o.(*Observations).expectedReconcileTime = &t
o.(*Observations).observeReconcileDelay = true
}
// ObserveReconcileDelay records on the ReconcileDelay histogram how late
// the named resource's reconcile is relative to its expected time; early
// reconciles are recorded as zero delay.
func (r *MetricRecorder) ObserveReconcileDelay(gvk schema.GroupVersionKind, name string) {
if r == nil {
return
}
o, _ := r.observations.Load(name)
if o == nil || !o.(*Observations).observeReconcileDelay || o.(*Observations).expectedReconcileTime == nil {
return
}
d := time.Since(*o.(*Observations).expectedReconcileTime)
if d < 0 {
d = 0
}
ReconcileDelay.WithLabelValues(gvk.Group, gvk.Version, gvk.Kind).Observe(d.Seconds())
o.(*Observations).observeReconcileDelay = false
}
// Start registers a delete event handler that drops the observations of
// deleted resources and blocks until the given context is canceled,
// implementing manager.Runnable.
func (r *MetricRecorder) Start(ctx context.Context) error {
inf, err := r.cluster.GetCache().GetInformerForKind(ctx, r.gvk)
if err != nil {
return errors.Wrapf(err, "cannot get informer for metric recorder for resource %s", r.gvk)
}
registered, err := inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
DeleteFunc: func(obj interface{}) {
if final, ok := obj.(cache.DeletedFinalStateUnknown); ok {
obj = final.Obj
}
managed := obj.(resource.Managed)
r.observations.Delete(NameForManaged(managed))
},
})
if err != nil {
return errors.Wrap(err, "cannot add delete event handler to informer for metric recorder")
}
defer inf.RemoveEventHandler(registered) //nolint:errcheck // this happens on destruction. We cannot do anything anyway.
<-ctx.Done()
return nil
}
func init() {
metrics.Registry.MustRegister(CLITime, CLIExecutions, TFProcesses, TTRMeasurements, ExternalAPITime, ExternalAPICalls, DeletionTime, ReconcileDelay)
}

Some files were not shown because too many files have changed in this diff.