Enabled was being set to the string 'false' instead of the boolean false, but this
was being masked by the execution of initScheduledClusterScan. When the
KDM values weren't present, initScheduledClusterScan was exiting early
and no longer masking the bad 'false' default value.
This further improves the earlier solution for rancher/rancher#26996.
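For context, a minimal sketch of why the string 'false' misbehaves (the `normalizeEnabled` helper is illustrative, not the actual code):

```js
// The string 'false' is truthy, so any boolean check silently passes.
const scan = { enabled: 'false' };

if (scan.enabled) {
  // This branch runs even though the intent was "disabled".
}

// Hypothetical normalization along the lines of the fix.
function normalizeEnabled(value) {
  return value === true || value === 'true';
}

console.log(normalizeEnabled('false')); // false
console.log(normalizeEnabled(true));    // true
```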
It appears that sometimes the cisscanconfig schema isn't present (I haven't
been able to reproduce this consistently), which causes the UI to prevent the user
from saving changes to their cluster. Though I think this could be fixed
in the backend, I'd like to stop the bleeding and have the UI handle it
better.
So there are two things here to help out.
1. Added a default value to the initial empty object in case it never gets set
elsewhere.
2. If an exception is thrown while creating the scanConfig,
we now prevent the schedule from being set, since it wouldn't work
anyway.
Either of these should allow saving to proceed.
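A rough sketch of the two guards (`createScanConfig` stands in for the real store call and is hypothetical):

```js
// Hypothetical stand-in for the call that fails when the cisscanconfig
// schema is absent from the store.
function createScanConfig() {
  throw new Error('schema cisscanconfig not found');
}

// 1) Default value on the initial empty object so `enabled` is never unset.
let scanConfig = { enabled: false };
let scheduleAllowed = true;

// 2) If creating the real scanConfig throws, skip offering a schedule at all.
try {
  scanConfig = createScanConfig();
} catch (e) {
  scheduleAllowed = false;
}

console.log(scanConfig.enabled, scheduleAllowed); // false false
```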
rancher/rancher#26996
I converted the enabled value back to a boolean instead of it being a string,
but I forgot to switch the three disabled fields to expect a boolean.
rancher/rancher#26245
When editing a cluster, the value of scheduledClusterScan.enabled comes
back as a string rather than a boolean. I just ensure the value is a
boolean so it is displayed on the radio buttons as expected.
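A minimal sketch of the coercion (helper name is illustrative):

```js
// Radio bindings compare strictly against true/false, so coerce any
// persisted string form back to a real boolean first.
function toBoolean(value) {
  return typeof value === 'string' ? value === 'true' : !!value;
}

console.log(toBoolean('false')); // false
console.log(toBoolean('true'));  // true
console.log(toBoolean(true));    // true
```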
rancher/rancher#26245
The default rancherKubernetesEngineConfig is only created for new
clusters. This now creates a default upgradeStrategy object when one
doesn't exist in rancherKubernetesEngineConfig.
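Roughly (function name and the empty default are illustrative; the real defaults come from the schema):

```js
// Ensure existing clusters get an upgradeStrategy too, not just new ones.
function ensureUpgradeStrategy(rkeConfig) {
  if (!rkeConfig.upgradeStrategy) {
    rkeConfig.upgradeStrategy = {}; // populated with schema defaults
  }
  return rkeConfig;
}

console.log(ensureUpgradeStrategy({ kubernetesVersion: 'v1.17.4' }));
// => { kubernetesVersion: 'v1.17.4', upgradeStrategy: {} }
```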
rancher/rancher#25951
This will now allow KDM to drive what the benchmark options are and what
the default option is for profile selection given a kubernetes version.
The 'cisConfig' type is what we look up for this information.
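Something along these lines, though the field names here are assumptions about the KDM data rather than its actual shape:

```js
// Assumed shape of KDM-driven cisConfig entries, keyed by kube minor version.
const cisConfigs = [
  { id: 'default', params: { benchmarkVersion: 'cis-1.4' } },
  { id: 'v1.15',   params: { benchmarkVersion: 'cis-1.5' } },
];

// Pick the entry for the cluster's kube version, falling back to 'default'.
function cisConfigFor(kubeVersion) {
  const minor = kubeVersion.split('.').slice(0, 2).join('.');
  return cisConfigs.find((c) => c.id === `v${ minor }`) ||
         cisConfigs.find((c) => c.id === 'default');
}

console.log(cisConfigFor('1.15.4').params.benchmarkVersion); // cis-1.5
```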
rancher/rancher#25888
- Make table sorting work with scheduled scans
- Make the cis table fit on laptop screen
- Add an appropriate placeholder for the scheduled scans cron field
rancher/rancher#25937 rancher/rancher#25939
The profile helper methods were attached to the cluster model.
Unfortunately, the cluster isn't available when creating a new rke
template.
To resolve this I moved all of the cis helpers out of the cluster model
and utils and moved them into a cisHelpers service so they could be
used without access to the cluster itself.
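The extraction looks roughly like this (method names are illustrative, not the service's actual API):

```js
import Service from '@ember/service';

// Helpers live on a service so anything with container access, including
// RKE template creation, can use them without a cluster model in hand.
export default Service.extend({
  benchmarkOptions(kubernetesVersion) {
    // ...look up the benchmark choices for this kube version...
  },

  defaultProfileOption(kubernetesVersion) {
    // ...look up the default profile for this kube version...
  },
});
```

Consumers can then inject the service wherever it's needed instead of reaching for the cluster.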
- Added Set Alert button
- This will set the appropriate options for cis
- Added Set Schedule button
- This will scroll the settings into view (see the sketch after this list)
- Added a modal so profiles can be picked
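The "Set Schedule" scroll behavior is just the standard DOM call; the selector here is hypothetical:

```js
// Bring the scheduled-scan settings into view when "Set Schedule" is clicked.
const settings = document.querySelector('.scheduled-scan-settings');

if (settings) {
  settings.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
```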
Was previously using the presence of nodeDrainInput to determine
the value of drain. Drain is now part of the backend, so I'm using
that value instead of inferring it.
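The difference, sketched:

```js
const upgradeStrategy = { drain: false, nodeDrainInput: { timeout: 60 } };

// Before: inferred from presence, which reads true here even though
// drain is actually off.
const inferredDrain = !!upgradeStrategy.nodeDrainInput; // true

// After: the backend persists drain itself, so read it directly.
const drain = upgradeStrategy.drain; // false
```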
rancher/rancher#25732
- Adds an empty name to the cluster model on creation so the field isn't missing
if the user opens the YAML editor and knows they should fill it in
- Removes incorrect next usage
- Adds logic to handle overriding the name in name-desc when updateYaml is called
- Updates the form-name-description model observer to watch the two props it
actually cares about
- Fixes a bug in the removeEmpty util which removed excludedKeys during the
filter phase (see the sketch below)
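A hedged sketch of a removeEmpty-style util with the excludedKeys behavior fixed (illustrative, not the repo's implementation):

```js
// Strip empty values recursively, but leave excluded keys untouched; the
// bug was dropping excludedKeys entries during the filter phase instead.
function removeEmpty(obj, excludedKeys = []) {
  return Object.keys(obj).reduce((out, key) => {
    const value = obj[key];

    if (excludedKeys.includes(key)) {
      out[key] = value; // preserved even when empty
    } else if (value !== null && value !== undefined && value !== '') {
      out[key] = (typeof value === 'object' && !Array.isArray(value))
        ? removeEmpty(value, excludedKeys)
        : value;
    }

    return out;
  }, {});
}

console.log(removeEmpty({ name: '', answers: {} }, ['answers']));
// => { answers: {} }
```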
rancher/rancher#24971
This required refactoring the drain modal into a reusable component
since these fields were going to be used in more than one place.
rancher/rancher#24110
I originally tried to fix rancher/rancher#24704 without completely
special casing. Unfortunately, that led to other issues:
rancher/rancher#24745 rancher/rancher#24794 rancher/rancher#24814
I decided to revert all of the related changes and just special
case this one instance. Ultimately I think removeEmpty is the
culprit, but it requires backend changes to fix properly,
and those changes are not happening right now.
We were erroneously adding cloud_provider.awsCloudProvider on
DigitalOcean etc. due to rancher/rancher#24515.
This change assumes that the presence of
config.rancher_kubernetes_engine_config.cloud_provider.name
implies that the cloud_provider should be present. If that
nested field isn't present, we remove cloud_provider.
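Roughly (function name is illustrative):

```js
// Drop an orphaned cloud_provider when its name was never set.
function stripOrphanedCloudProvider(config) {
  const rke = config.rancher_kubernetes_engine_config;
  const cp = rke && rke.cloud_provider;

  if (cp && !cp.name) {
    delete rke.cloud_provider;
  }

  return config;
}

const config = {
  rancher_kubernetes_engine_config: {
    cloud_provider: { awsCloudProvider: {} }, // added erroneously, no name
  },
};

console.log(stripOrphanedCloudProvider(config).rancher_kubernetes_engine_config);
// => {}
```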
rancher/rancher#24745
While editing a cluster, properly support .x kube version comparisons when
filtering out cluster template revisions.
Coercing a .x version converts it to a .0, which made the revision look like
it was a kube downgrade. By using .satisfies when the revision
kube version ends with '.x', we're now better able to check whether the
kube version is a downgrade and filter appropriately.
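With the semver package, the difference looks like this:

```js
const semver = require('semver');

const current  = '1.16.3';
const revision = '1.16.x';

// Coercion turns '1.16.x' into '1.16.0', which looks like a downgrade.
console.log(semver.lt(semver.coerce(revision).version, current)); // true

// satisfies treats '1.16.x' as the whole 1.16 line: not a downgrade.
console.log(semver.satisfies(current, revision)); // true
```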
rancher/rancher#23489
The check-override-allowed component did not know how to deal with the k8s
version question because of its tri-state and how we handle the patch version
that is an override but not really an override. I added a check: if the
mode is view and we have the param, display the param directly. That way we
don't initialize the form-version component, which has logic to inject the
current version into its versions dropdown, but only when we're new, editing,
or cloning.
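The guard amounts to something like this (names are illustrative):

```js
// In view mode with a param present, show the raw param and skip
// initializing form-version entirely.
function shouldDisplayParamOnly(mode, param) {
  return mode === 'view' && param !== undefined && param !== null;
}

console.log(shouldDisplayParamOnly('view', 'v1.16.x')); // true
console.log(shouldDisplayParamOnly('edit', 'v1.16.x')); // false
```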
rancher/rancher#23478 rancher/rancher#23465
The current kubernetes version wasn't being shown if it was no
longer part of the supported versions when in view mode. Instead,
the latest version was being displayed even if that wasn't what was
deployed. To resolve this, we include the current version as one of
the choices when it's not already present.
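In essence (function name is illustrative):

```js
// Keep the deployed version visible even when it's no longer supported.
function versionChoices(supportedVersions, currentVersion) {
  return supportedVersions.includes(currentVersion)
    ? supportedVersions
    : [currentVersion, ...supportedVersions];
}

console.log(versionChoices(['v1.17.4', 'v1.16.8'], 'v1.15.11'));
// => ['v1.15.11', 'v1.17.4', 'v1.16.8']
```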
rancher/rancher#23465
When cloning an RKE template revision with overrides, the values of the overrides
in the form were not reflected in the overrides section at the bottom of the
page because the alias on the question was never created.
rancher/rancher#23056
`Custom Cluster Overrides` was originally designed when we allowed users to
create custom overrides for items not in the UI. Since that was removed, the
template consumer will only ever see overrides for sections we have built in,
and when launching we never display the overrides for ones with UI components.
When consuming a cluster template, selecting the template id from the dropdown
didn't change the cluster template revision because the UI component was
using the readOnly value. Attached the selection to the correct value so the
action floats up (DDAU).
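DDAU in a nutshell, with illustrative names: the dropdown invokes an action handed down by the owner, and the owner mutates its own state.

```js
// The dropdown never mutates the revision itself; it calls up.
const owner = {
  model: { clusterTemplateRevisionId: null },

  revisionChanged(revisionId) { // action passed down to the dropdown
    this.model.clusterTemplateRevisionId = revisionId;
  },
};

owner.revisionChanged('ctr-abc123');
console.log(owner.model.clusterTemplateRevisionId); // 'ctr-abc123'
```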
rancher/rancher#22977
When editing a cluster that was created with a cluster template,
the cluster template revision couldn't be saved.
The revisionId was stored as a component member variable instead
of as part of the model. It needed to be stored on the
model in order for NewOrEdit to see the changes and save
them. I went ahead and referenced the model directly everywhere
in the component and removed the component member variable.
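Sketched in Ember terms (property names are illustrative):

```js
import Component from '@ember/component';
import { set } from '@ember/object';

export default Component.extend({
  actions: {
    selectRevision(revisionId) {
      // Before: set(this, 'revisionId', revisionId); // invisible to NewOrEdit
      // After: write onto the model so NewOrEdit picks it up and saves it.
      set(this, 'model.clusterTemplateRevisionId', revisionId);
    },
  },
});
```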
rancher/rancher#22920