This adds a .status.lastObservedTime field which records when the
HelmRelease was last observed by the controller. This allows the user
to observe whether the spec.interval setting and the reconcileAt
annotation trigger reconciliation attempts as desired.
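A minimal sketch of how such a field might be declared on the API
type, assuming the standard apimachinery metav1.Time type and a
kubebuilder-style API package; everything besides the
lastObservedTime name is an assumption for illustration:

    package v2alpha1

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // HelmReleaseStatus is shown with only the new field; all other
    // status fields are elided.
    type HelmReleaseStatus struct {
        // LastObservedTime records when the HelmRelease was last
        // observed by the controller, regardless of whether the
        // observation resulted in any action.
        // +optional
        LastObservedTime metav1.Time `json:"lastObservedTime,omitempty"`
    }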
- Ensure an upgrade actually occurs if the known state was not
  reached for any reason other than an install failure.
- After transient failures not tied to applying new state, prevent
  spurious upgrades and ensure the ready state is reached again, by
  remembering that the known state was already applied successfully
  (see the sketch after this list).
- Reset failure counts after success so they do not go stale.
- Only look up the post-deployment release revision on remediation,
  since we already have it otherwise.
- Push a status update after finding new state so the user can
  observe it.
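A rough sketch of the bookkeeping described above; the field and
function names here are purely illustrative and not part of the
actual API:

    package v2alpha1

    // releaseBookkeeping holds illustrative status fields for this
    // sketch; the controller's real field set is not shown here.
    type releaseBookkeeping struct {
        LastAppliedRevision string
        Failures            int64
        InstallFailures     int64
        UpgradeFailures     int64
    }

    // recordSuccess remembers the successfully applied state, so a
    // later transient failure does not trigger a spurious upgrade,
    // and resets the failure counts so they do not go stale.
    func recordSuccess(b *releaseBookkeeping, appliedRevision string) {
        b.LastAppliedRevision = appliedRevision
        b.Failures = 0
        b.InstallFailures = 0
        b.UpgradeFailures = 0
    }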
This commit adds support for conditional remediation, enabling the user
to:
* configure whether test failures should be ignored
* configure what action should be taken when a Helm install or
  upgrade action fails (e.g. rollback, uninstall)
* configure whether a failed Helm action should be retried
* configure whether a failed release should be kept for debugging
  purposes (see the sketch below)
The previous behaviour, where failed Helm tests did not mark the
`HelmRelease` as not `Ready`, has changed: test failures now mark
the release as failed by default.
Co-authored-by: Hidde Beydals <hello@hidde.co>
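A sketch of what the remediation-related spec fields could look
like; the exact names, types, and defaults are assumptions made for
illustration, not the change's actual API:

    package v2alpha1

    // Test configures the Helm test action.
    type Test struct {
        // IgnoreFailures configures whether test failures should be
        // ignored, i.e. not mark the HelmRelease as failed.
        IgnoreFailures bool `json:"ignoreFailures,omitempty"`
    }

    // UpgradeRemediation configures what happens when a Helm upgrade
    // action fails.
    type UpgradeRemediation struct {
        // Strategy is the action taken on failure, e.g. "rollback"
        // or "uninstall".
        Strategy string `json:"strategy,omitempty"`
        // Retries is the number of times a failed Helm action is
        // retried; zero disables retries.
        Retries int `json:"retries,omitempty"`
        // KeepFailedRelease keeps a failed release in place for
        // debugging purposes instead of remediating it.
        KeepFailedRelease bool `json:"keepFailedRelease,omitempty"`
    }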
To allow the Helm Operator's and helm-controller's HelmRelease
resources to co-exist in the cluster while being handled by separate
controllers, e.g. during a migration period.
This is not possible without using another domain due to how Custom
Resource Definitions work: newer API versions are seen as a
replacement for older versions and are expected to be handled by a
single controller.
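For illustration, the separation could be expressed with a distinct
API group in a kubebuilder-style groupversion file; the group and
version names below are assumptions:

    package v2alpha1

    import (
        "k8s.io/apimachinery/pkg/runtime/schema"
        "sigs.k8s.io/controller-runtime/pkg/scheme"
    )

    var (
        // GroupVersion places the helm-controller HelmRelease under
        // its own domain, distinct from the Helm Operator's
        // helm.fluxcd.io group, so both CRDs can co-exist.
        GroupVersion = schema.GroupVersion{
            Group:   "helm.toolkit.fluxcd.io",
            Version: "v2alpha1",
        }

        // SchemeBuilder collects the types to add to a scheme.
        SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

        // AddToScheme registers the types in this group-version.
        AddToScheme = SchemeBuilder.AddToScheme
    )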
As "condition type names should describe the current observed state of
the resource, rather than describing the current state transitions".
Described by the draft convention for update conditions in
kubernetes/community#4521.
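For example, condition types following this convention name observed
states rather than transitions; the constants below are illustrative,
not the change's actual set:

    package v2alpha1

    const (
        // ReadyCondition reflects the observed readiness of the
        // HelmRelease as a whole.
        ReadyCondition string = "Ready"

        // ReleasedCondition reflects whether the Helm release for
        // the current state has been deployed; compare a
        // transition-style name such as "Deploying", which the
        // convention advises against.
        ReleasedCondition string = "Released"
    )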