The new Registry field allows prepending a common registry to the
image URLs of the embedded ManagedOSVersion resources.
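A minimal sketch of the prefixing idea, assuming a plain path join; the helper name and the way image references are stored are illustrative, not the operator's actual API:

```go
package channelsync

import "path"

// prependRegistry is an illustrative helper (not the operator's actual code):
// when a registry prefix is set on the channel, it is joined in front of each
// embedded image reference, otherwise the reference is left untouched.
func prependRegistry(registry, image string) string {
	if registry == "" {
		return image
	}
	return path.Join(registry, image)
}
```

For example, prependRegistry("registry.example.com", "rancher/elemental/os:1.0") would yield "registry.example.com/rancher/elemental/os:1.0".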
Fixes #549
Signed-off-by: Francesco Giudici <francesco.giudici@suse.com>
* operator: fix ManagedOSVersionChannel sync
After the initial sync, all subsequent channel syncs fail.
Fixed.
fixes: https://github.com/rancher/elemental-operator/issues/766
* operator: don't assume ManagedOSVersion resources have annotations
Older resources may not have annotations yet: initialize the field.
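A minimal sketch of the fix, assuming annotations are written through a small helper (the helper itself is hypothetical):

```go
package channelsync

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ensureAnnotation is a hypothetical helper: older ManagedOSVersion objects
// may carry a nil Annotations map, so it is initialized before writing to it.
func ensureAnnotation(meta *metav1.ObjectMeta, key, value string) {
	if meta.Annotations == nil {
		meta.Annotations = map[string]string{}
	}
	meta.Annotations[key] = value
}
```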
Fixes: https://github.com/rancher/elemental-operator/issues/767
---------
Signed-off-by: Francesco Giudici <francesco.giudici@suse.com>
* Update system-upgrade-controller API
Signed-off-by: Andrea Mazzotti <andrea.mazzotti@suse.com>
* Update Fleet API
Signed-off-by: Andrea Mazzotti <andrea.mazzotti@suse.com>
* Sanitize dependencies
Signed-off-by: Andrea Mazzotti <andrea.mazzotti@suse.com>
---------
Signed-off-by: Andrea Mazzotti <andrea.mazzotti@suse.com>
Ensure we keep checking the synchronization pod status once
the synchronization has started. This is to recover in case
a pod phase change was missed.
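A sketch of that recovery path, assuming the reconciler requeues itself until the pod reaches a final phase; the function name and the requeue interval are assumptions:

```go
package channelsync

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
)

// watchSyncerPod keeps requeueing while a synchronization is in progress, so
// the controller re-checks the pod phase even if a watch event was missed.
func watchSyncerPod(pod *corev1.Pod) ctrl.Result {
	switch pod.Status.Phase {
	case corev1.PodSucceeded, corev1.PodFailed:
		// Final phase reached, nothing left to poll.
		return ctrl.Result{}
	default:
		// Still pending or running: check again shortly.
		return ctrl.Result{RequeueAfter: 30 * time.Second}
	}
}
```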
Signed-off-by: David Cassany <dcassany@suse.com>
* Add a sync failure counter
This commit adds a channel sync failure counter to count the
number of consecutive sync failures. This logic prevents
creating and deleting a pod in an infinite loop when errors
occur (e.g. an unreachable download URL). After several failed
attempts to synchronize, the controller gives up until the next
scheduled synchronization.
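A minimal sketch of such a counter; the threshold and the names are assumptions, not the operator's actual values:

```go
package channelsync

// maxConsecutiveFailures is an assumed threshold, not the operator's value.
const maxConsecutiveFailures = 4

// syncTracker illustrates the consecutive failure counter described above.
type syncTracker struct {
	failures int
}

// recordFailure bumps the counter and reports whether another immediate retry
// is allowed; once the threshold is hit the caller waits for the next
// scheduled synchronization.
func (t *syncTracker) recordFailure() bool {
	t.failures++
	return t.failures < maxConsecutiveFailures
}

// recordSuccess resets the counter after a successful synchronization.
func (t *syncTracker) recordSuccess() {
	t.failures = 0
}
```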
* Add syncedGeneration in status
This commit adds the generation of the last synchronization
attempt to the ManagedOSVersionChannel status. This prevents
spurious reconciles from triggering an unexpected sync and also
forces an immediate resync in case of a channel update.
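Since metadata.generation only changes on spec updates, the check can be a simple comparison; a sketch, with names assumed:

```go
package channelsync

// needsSync illustrates the generation check: status-only updates leave the
// generation untouched and therefore do not trigger a new synchronization,
// while a channel spec change bumps the generation and forces an immediate
// resync.
func needsSync(generation, syncedGeneration int64) bool {
	return generation != syncedGeneration
}
```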
Signed-off-by: David Cassany <dcassany@suse.com>
During upgrades there is a race condition for the
ManagedOSVersionChannels.
This commit checks whether a ManagedOSVersionChannel syncer pod was
created with an image other than the provided elemental-operator one
and, if so, deletes the old pod and recreates the syncer pod.
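A sketch of that check; assuming the syncer image sits in the first container of the pod (which container carries it is an assumption), the outdated pod is simply deleted so it can be recreated with the current image:

```go
package channelsync

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteIfImageOutdated removes a syncer pod created by a previous operator
// version so that a fresh pod with the current operator image gets recreated.
func deleteIfImageOutdated(ctx context.Context, c client.Client, pod *corev1.Pod, operatorImage string) error {
	if len(pod.Spec.Containers) > 0 && pod.Spec.Containers[0].Image == operatorImage {
		// Pod already runs the expected image, nothing to do.
		return nil
	}
	return c.Delete(ctx, pod)
}
```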
Signed-off-by: Fredrik Lönnegren <fredrik.lonnegren@suse.com>
This commit moves the synchronization logic to always happen in a Pod,
regardless of whether it is a Custom or JSON syncer. This allows simpler Pod
lifecycle management as part of the channel controller logic.
In addition, syncer pod logs are read in the succeeded state instead of
the running state to simplify Pod lifecycle management.
As a result, channel updates trigger a new channel synchronization
without having to wait for the next scheduled sync.
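A sketch of reading the syncer output only once the pod has succeeded, using the standard client-go log request; the function and parameter names are assumptions:

```go
package channelsync

import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// readSyncerOutput collects the syncer pod output only once the pod reached
// the Succeeded phase, keeping the pod lifecycle handling simple
// (create, wait for a final phase, read, delete).
func readSyncerOutput(ctx context.Context, kcl kubernetes.Interface, pod *corev1.Pod) ([]byte, error) {
	if pod.Status.Phase != corev1.PodSucceeded {
		return nil, nil // Not done yet; the caller requeues and retries later.
	}
	req := kcl.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &corev1.PodLogOptions{})
	stream, err := req.Stream(ctx)
	if err != nil {
		return nil, err
	}
	defer stream.Close()
	return io.ReadAll(stream)
}
```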
Signed-off-by: David Cassany <dcassany@suse.com>
This commit checks on each reconcile loop whether the service
account token secret is missing despite the resource being in
ready state.
In addition, it adds optimistic locking for patch calls. The
motivation is to prevent concurrent controllers from modifying
outdated data.
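For the optimistic locking part, controller-runtime can embed the resourceVersion in a merge patch; a sketch, with the wrapper name assumed:

```go
package channelsync

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// patchStatusWithLock illustrates an optimistic-locking status patch: the
// resourceVersion is embedded in the patch, so the call fails with a conflict
// if another controller modified the object in the meantime.
func patchStatusWithLock(ctx context.Context, c client.Client, original, updated client.Object) error {
	patch := client.MergeFromWithOptions(original, client.MergeFromWithOptimisticLock{})
	return c.Status().Patch(ctx, updated, patch)
}
```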
Signed-off-by: David Cassany <dcassany@suse.com>
This commit adds a rate limiter to the ManagedOSVersionChannel controller to prevent
stacking reconcile loops over the same resource at a fast rate (which makes no sense for a
ManagedOSVersionChannel). By default the controller runtime already includes an
equivalent rate limiter, but it starts in the range of milliseconds; starting the exponential
rate limiter in the range of seconds is more than enough in this context.
In addition, it drops the failures counter in the resource. This counter was supposed to
be used to limit the number of sync attempts in case of failure. This was a bad design:
status should not keep a counter like this, as any change in status triggers a new
immediate reconcile loop, hence the counter was reaching its maximum as fast as the
controller runtime was executing reconcile loops without any rate limiter (the rate limiter
applies only when there are no changes, including status).
For now I think we can just live without setting any maximum for failures. If we ever
need it, I believe it should be coded and tracked within the controller itself, not in each
resource, as this prevents the reconcile loop from being idempotent. Alternatively we could
avoid triggering the reconcile loop on status changes, however this prevents
reconciling if any third party (or a user through the kubectl client) changes a resource status.
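A sketch of wiring such a limiter, assuming a controller-runtime version whose controller.Options.RateLimiter accepts a client-go workqueue rate limiter; the base and maximum delays, the receiver, and the watched object are illustrative:

```go
package channelsync

import (
	"context"
	"time"

	"k8s.io/client-go/util/workqueue"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller"
)

// Reconciler is a placeholder for the ManagedOSVersionChannel reconciler.
type Reconciler struct{}

// Reconcile is a stub; the real logic lives in the channel controller.
func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}

// SetupWithManager wires the controller with an exponential per-item rate
// limiter that starts in the range of seconds instead of milliseconds.
func (r *Reconciler) SetupWithManager(mgr ctrl.Manager, obj client.Object) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(obj).
		WithOptions(controller.Options{
			// 1s base delay doubling up to 1m (illustrative values).
			RateLimiter: workqueue.NewItemExponentialFailureRateLimiter(time.Second, time.Minute),
		}).
		Complete(r)
}
```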
Fixes #257
Part of #240
Signed-off-by: David Cassany <dcassany@suse.com>
This commit adds a few changes to the syncer logic:
* Makes use of the ManagedOSVersionChannel status reason to track whether there
is an ongoing synchronization, rather than polling for the existence of a synchronization pod (see the sketch after this list).
* Adds logic to stop trying to synchronize after 4 consecutive failed attempts.
If it exceeds the maximum, it just schedules the next re-sync after the given sync
interval instead of retrying immediately.
* Adds some logging and comments here and there.
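A sketch of the status-reason check mentioned in the first item; the condition type and reason strings are illustrative, not the operator's actual constants:

```go
package channelsync

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative condition type and reason, not the operator's real constants.
const (
	readyCondition = "Ready"
	syncingReason  = "SyncingChannel"
)

// syncInProgress checks the status condition reason to know whether a
// synchronization is ongoing, instead of polling for a syncer pod.
func syncInProgress(conditions []metav1.Condition) bool {
	cond := meta.FindStatusCondition(conditions, readyCondition)
	return cond != nil && cond.Reason == syncingReason
}
```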
Signed-off-by: David Cassany <dcassany@suse.com>
* Implement syncer logic as part of the ManagedOSVersionChannel controller
This commit adds the logic to synchronize managedosversionchannels
within the already existing controller.
* make generate
* make build-manifests
* Update chart
* update e2e tests
Signed-off-by: David Cassany <dcassany@suse.com>
* Update vendor
* Run generation tasks
* Minor fixes in Makefile
* Remove old code
* Add remaining controllers
* Minor e2e tests improvements
* Switch osversionchannel syncer to controller runtime
* Minor fixes in controllers
* Fix unit tests