Pin controller-runtime to main branch commit `67b27f2` due to a
breaking change in Kubernetes 1.30 `client-go/tools/leaderelection`.
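For reference, a pin like this is typically applied with `go get` at the commit hash, which the Go toolchain resolves to a pseudo-version in `go.mod` (the exact pseudo-version string is generated by the toolchain, so it isn't reproduced here):

```sh
go get sigs.k8s.io/controller-runtime@67b27f2
```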
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
- bump `k8s.io` packages to v0.28.4
- bump `sigs.k8s.io/kustomize` to v5.2.1
- bump `sigs.k8s.io/controller-runtime` to v0.16.3
- bump `sigs.k8s.io/yaml` to v1.4.0
- migrate from `google/gnostic` to `google/gnostic-models`
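
A hedged sketch of what the `gnostic` migration looks like at a typical import site (the exact packages touched in this change may differ):

```diff
-import openapi_v2 "github.com/google/gnostic/openapiv2"
+import openapi_v2 "github.com/google/gnostic-models/openapiv2"
```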
Signed-off-by: Stefan Prodan <stefan.prodan@gmail.com>
This change updates the set-inventory logic to retain objects that
failed to reconcile. This ensures that if you run the applier/destroyer
multiple times, an object that is failing to reconcile will be retained
in the inventory. Before this change, an object failing to reconcile
could be dropped from the inventory after multiple attempts (e.g.
multiple destroys).
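
A minimal sketch of the retention rule, assuming the inventory is a set of object IDs (group/kind/namespace/name strings); the names here are illustrative, not the actual cli-utils API:

```go
// nextInventory computes the inventory to persist after a run. Objects
// that failed to reconcile are kept alongside the reconciled ones, so
// repeated applier/destroyer runs do not lose track of them.
func nextInventory(reconciled, failed []string) map[string]struct{} {
	inv := make(map[string]struct{}, len(reconciled)+len(failed))
	for _, id := range reconciled {
		inv[id] = struct{}{}
	}
	for _, id := range failed {
		// Previously, objects that failed to reconcile were dropped here
		// and could vanish from the inventory after repeated destroys.
		inv[id] = struct{}{}
	}
	return inv
}
```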
Prior to this change, the inventory was always deleted at the end of a
Destroy event. This happened even in the case of a pruning failure,
resulting in objects being removed from the inventory without being
deleted from the cluster. This change makes it so that the inventory is
only deleted once all objects have been pruned successfully.
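
Sketched below, assuming the destroyer collects prune failures; `deleteInv` and `replaceInv` are illustrative stand-ins for the real inventory client calls:

```go
// finalizeDestroy decides what happens to the inventory at the end of a
// Destroy, given the objects that failed to prune.
func finalizeDestroy(pruneFailures []string,
	deleteInv func() error, replaceInv func([]string) error) error {
	if len(pruneFailures) == 0 {
		// Nothing is left in the cluster, so the inventory can go away.
		return deleteInv()
	}
	// Some objects survived pruning; keep them tracked for the next run.
	return replaceInv(pruneFailures)
}
```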
The stress tests create 1000 copies of this deployment on the kind
cluster, which means these deployments contend for resources with the
kind control plane and the test suite itself. Reducing the size of these
test deployments should help the stress test run when fewer resources
are available, and will hopefully get the presubmits passing.
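
For illustration, the kind of change this implies in the test manifest; the values below are hypothetical, not the actual numbers used:

```yaml
# Hypothetical: shrink per-pod requests so 1000 replicas can be
# scheduled alongside the kind control plane and the test suite.
resources:
  requests:
    cpu: 10m
    memory: 16Mi
```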
During pruning, the code filters objects with `CurrentUIDFilter`:
> // CurrentUIDFilter implements ValidationFilter interface to determine
> // if an object should not be pruned (deleted) because it has recently
> // been applied.
This must mean that the object has ended up being tracked in the inventory by more than one reference, and that the reference flagged by the filter should be removed from the inventory. Otherwise it gets stuck in the inventory indefinitely, and every invocation of prune logs it as skipped.
A good example of this is the `Ingress` kind, which exists in both the `extensions` and `networking.k8s.io` groups in Kubernetes 1.19 and 1.20. When you rename the API group in your local YAML to the new recommended group, the reference with the old group name gets permanently stuck in the inventory.
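
A minimal sketch of the proposed behaviour, with illustrative names rather than the real cli-utils prune loop:

```go
// pruneLoop walks the inventory references scheduled for pruning. When
// CurrentUIDFilter flags a reference (its UID matches a just-applied
// object), the live object is tracked under another reference, so the
// stale one is removed from the inventory instead of being skipped
// forever on every subsequent prune.
func pruneLoop(refs []string, flaggedByUIDFilter func(string) bool,
	removeFromInventory, prune func(string)) {
	for _, ref := range refs {
		if flaggedByUIDFilter(ref) {
			// e.g. extensions/Ingress vs. networking.k8s.io/Ingress
			removeFromInventory(ref)
			continue
		}
		prune(ref)
	}
}
```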
There was a bug where an error from an apply filter wasn't propagated to the applyFailed event, which led to events being printed without an error message set on them:
> {"group":"rbac.authorization.k8s.io","kind":"RoleBinding","name":"redacted","namespace":"redacted","status":"Failed","timestamp":"2022-12-21T12:43:46Z","type":"apply"}