This is backwards compatible, as it only changes the type without
imposing further requirements on the YAML declaration.
Signed-off-by: Hidde Beydals <hidde@hhh.computer>
Formalises the API requirements around TargetPath and ValuesKey,
which were the two fields missing validation within ValuesReference.
In both cases the validation is introduced at CRD level, so that the
apiserver will enforce it.
ValuesKey must be a valid data key, so the same logic used by upstream
Kubernetes is reused here to ensure a valid key is set. For TargetPath,
a loose regular expression largely representing the expected format is
used, and a maximum length of 250 characters is now enforced.
This is a breaking change, as invalid TargetPath and ValuesKey will now
be rejected by the apiserver, instead of being accepted and potentially
failing at reconciliation time.
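As a sketch, a `valuesFrom` entry subject to these rules could look as
follows (resource names are illustrative):

```yaml
spec:
  valuesFrom:
    - kind: ConfigMap
      name: prod-values      # hypothetical ConfigMap in the same namespace
      valuesKey: values.yaml # must be a valid data key, enforced at CRD level
      targetPath: image.tag  # loosely validated by regex, max 250 characters
```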
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
When the flag --default-service-account was added, it slightly changed
the behaviour of the spec.KubeConfig field: impersonation now always
takes place, either via the contents of spec.ServiceAccountName or via
its fallback at controller level.
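As a sketch of the resulting behaviour (flag value and names are made
up, field names as in the v2beta1 API):

```yaml
# Controller started with: --default-service-account=default-reconciler
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: apps
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  # No serviceAccountName set: impersonation still takes place, falling
  # back to the "default-reconciler" account configured at controller level.
```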
Signed-off-by: Paulo Gomes <paulo.gomes@weave.works>
This includes an update of the source-controller to v0.22.0, to pull in
the v1beta2 API which makes use of the same packages.
Signed-off-by: Sunny <darkowlzz@protonmail.com>
This commit changes the default behavior of the Helm uninstall action
to wait for all resources to be deleted, and introduces a
`.spec.uninstall.disableWait` flag to disable this behavior.
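A minimal sketch of the new field:

```yaml
spec:
  uninstall:
    disableWait: true  # restore the previous behaviour of not waiting
```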
Signed-off-by: Samuel Torres <samuelpirestorres@gmail.com>
This adds support for creating the target release namespace if it is
not present, which can be useful in certain scenarios.
Note that when the release is uninstalled, the namespace is not
removed and remains on the cluster, so managing your namespace
_outside_ of the HelmRelease is advised.
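A sketch of how this is expressed, assuming the option lives under
`.spec.install` as `createNamespace` (namespace name is illustrative):

```yaml
spec:
  targetNamespace: apps    # hypothetical target namespace
  install:
    createNamespace: true  # create "apps" if it does not exist
```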
Signed-off-by: Hidde Beydals <hello@hidde.co>
Changes the condition type to the metav1.Condition type introduced in
Kubernetes 1.19, and uses the newly introduced helpers in place of the
old pkg/apis/meta types.
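Serialized in the status object, a condition of the new type carries
the following fields (values are illustrative):

```yaml
status:
  conditions:
    - type: Ready
      status: "True"
      reason: ReconciliationSucceeded
      message: Release reconciliation succeeded
      lastTransitionTime: "2021-01-01T00:00:00Z"
      observedGeneration: 1
```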
Signed-off-by: Hidde Beydals <hello@hidde.co>
This is an initial implementation for cross-cluster Helm release
support that relies on a KubeConfig secret, and a reference to it in
the HelmRelease resource.
If set, all actions taken by the Helm runner are executed using the
KubeConfig from the secret. The Helm storage is kept on the remote
cluster, in a namespace equal to the namespace of the HelmRelease in
the managing cluster; the release itself is made in either this
namespace or the configured TargetNamespace. In both cases, the
namespace is expected to exist or to be created beforehand.
Other references to Kubernetes resources in the HelmRelease, like
ValuesReference resources, are expected to exist on the managing
cluster.
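A minimal sketch of the wiring, assuming the secret holds a plain
kubeconfig under a `value` key (all names and the API version are
illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-kubeconfig
  namespace: default
stringData:
  value: |            # kubeconfig for the remote cluster
    apiVersion: v1
    kind: Config
    # clusters, contexts and users omitted
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default  # Helm storage ends up in "default" on the remote cluster
spec:
  kubeConfig:
    secretRef:
      name: prod-kubeconfig
  # interval, chart, etc. omitted for brevity
```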
This makes it possible for e.g. the GOTK CLI to observe if the
controller has handled the resource since the manual reconciliation
request was made. It replaces the `LastObservedTime` status field,
as this was prone to time skew issues and offered little additional
value over the timestamps of the conditions.
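For illustration, the handshake could look as follows (annotation and
field names are assumptions, timestamps made up); when the annotation
value and the status field match, the request has been handled:

```yaml
metadata:
  annotations:
    fluxcd.io/reconcileAt: "2021-03-01T10:00:00Z"  # stamped by the client
status:
  lastHandledReconcileAt: "2021-03-01T10:00:00Z"   # echoed by the controller
```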
This commit adds support for optional values references, as discussions
have brought to light valid use cases in which having this option is
desirable; for example, to allow a user to extend behaviour at a later
date with their own repository, without modifying the HelmRelease
object.
When a values reference is marked as optional, "not found" errors for
the referenced resource are ignored, but any ValuesKey, TargetPath or
transient error will still result in a reconciliation failure.
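A sketch of an optional reference (names are illustrative):

```yaml
spec:
  valuesFrom:
    - kind: ConfigMap
      name: user-overrides  # hypothetical; may be created at a later date
      optional: true        # a missing ConfigMap no longer fails the release
```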
This also removes the sorting from the `HelmChartWatcher`, as with
the current `HelmChartTemplateSpec` a chart is only used by a single
`HelmRelease`, rendering the sorting obsolete.
This removes:
- Installed, Upgraded, RolledBack, and Uninstalled status conditions
since they did not represent current state, but rather actions
taken, which are already recorded by events.
- status.observedStateReconciled since it solved the problem of
remembering past release (install/upgrade/test) success, but not
past release failures, after other subsequent failures such as
dependency failures, k8s API failures, etc.
This adds:
- Remediated status condition which records whether the release is
currently in a remediated state. It is used to prevent release retries
after remediation failures. We were previously not doing this for
rollback failures.
- Released status condition which records whether the current state
has been successfully released (install/upgrade/test). This is used to
remember the last release attempt status, regardless of any subsequent
other failures such as dependency failures, k8s API failures, etc.
This renames:
- Tested > TestsSuccess status condition, for forward compatibility
with interval-based Helm tests.
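As a sketch, a release whose upgrade failed and was then successfully
rolled back could report (condition set and values are illustrative):

```yaml
status:
  conditions:
    - type: Released
      status: "False"
      reason: UpgradeFailed     # last release attempt, kept despite remediation
    - type: Remediated
      status: "True"
      reason: RollbackSucceeded # release is currently in a remediated state
```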
This adds a .status.lastObservedTime field which records when the
HelmRelease was last observed by the controller. This allows the user
to observe whether the spec.interval and reconcileAt annotations are
triggering reconciliation attempts as desired.
- Ensure upgrade actually occurs if known state was not reached
for any reason (other than install failure).
- After transient failures not tied to new state application, ensure
spurious upgrades do not occur and ready state is again reached,
by remembering that the known state was already successfully applied.
- Reset failure counts after success so they're not stale.
- Only look up the post-deployment release revision on remediation,
since otherwise we already have it.
- Push status update after finding new state so user can observe.
This commit adds support for conditional remediation, enabling the user
to:
* configure if test failures should be ignored
* configure what action should be taken when a Helm install or upgrade
action fails (e.g. rollback, uninstall)
* configure if a failed Helm action should be retried
* configure if a failed release should be kept for debugging purposes
The previous behaviour, in which failed Helm tests did not mark the
`HelmRelease` as not `Ready`, has changed: failed tests now mark the
release as failed by default.
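A sketch of the knobs this introduces, with field names from the
current API assumed here:

```yaml
spec:
  test:
    enable: true
    ignoreFailures: true  # do not remediate when Helm tests fail
  install:
    remediation:
      retries: 3          # retry a failed install up to three times
  upgrade:
    remediation:
      retries: 2
      strategy: rollback  # remediation action: "rollback" or "uninstall"
      remediateLastFailure: true
```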
Co-authored-by: Hidde Beydals <hello@hidde.co>
To allow the Helm Operator and helm-controller HelmReleases to
co-exist in the cluster, while being handled by separate controllers
during e.g. the migration period.
This is not possible without using another domain due to how Custom
Resource Definitions work, as newer API versions are seen as a
replacement of older versions, and are expected to be handled by a
single controller.