The cluster version can only deviate from the node group versions by one minor version, so if any node group is more than one version behind, the cluster cannot upgrade until the node groups are upgraded.
New node groups should have their version set to that of the cluster, not that of the other node groups.
rancher/rancher#28968 rancher/rancher#29166
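A minimal sketch of the skew rule, with illustrative TypeScript names (`minorDelta`, `canUpgradeCluster`) rather than the actual Rancher UI code:

```ts
interface NodeGroup {
  name: string;
  version: string; // e.g. '1.17'
}

// Minor-version distance between two Kubernetes versions like '1.16'/'1.17'.
function minorDelta(a: string, b: string): number {
  const minor = (v: string): number => parseInt(v.split('.')[1], 10);
  return Math.abs(minor(a) - minor(b));
}

// The cluster may only upgrade while every node group stays within one
// minor version of the target; any group further behind blocks the upgrade.
function canUpgradeCluster(targetVersion: string, nodeGroups: NodeGroup[]): boolean {
  return nodeGroups.every((ng) => minorDelta(targetVersion, ng.version) <= 1);
}

// New node groups inherit the cluster's version, not a sibling group's.
function defaultVersionForNewNodeGroup(clusterVersion: string): string {
  return clusterVersion;
}
```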
This brings the node group upgrade UX closer to the UX of the AWS console.
Removes the node group version select and replaces it with a radio.
Adds logic to set a new node group's version on edit to the version of the existing node groups.
Adds logic to prevent a node group version upgrade while the cluster version is being upgraded.
rancher/rancher#28965 rancher/rancher#28966 rancher/rancher#28968
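Roughly the edit-time rules this introduces, sketched with assumed names (not the real component code):

```ts
interface UpgradeChoice {
  canUpgrade: boolean;   // whether the radio is enabled at all
  targetVersion: string; // the only version a node group may move to
}

// The radio offers a single yes/no choice: upgrade the node group to the
// cluster's version, or leave it alone. No free version select remains.
function nodeGroupUpgradeChoice(
  savedClusterVersion: string,
  desiredClusterVersion: string,
  nodeGroupVersion: string
): UpgradeChoice {
  const clusterUpgrading = desiredClusterVersion !== savedClusterVersion;
  return {
    // Block node group upgrades while the cluster itself is mid-upgrade,
    // and don't offer one when the group already matches the cluster.
    canUpgrade: !clusterUpgrading && nodeGroupVersion !== savedClusterVersion,
    targetVersion: savedClusterVersion,
  };
}

// Per this commit, a node group added on edit defaults to the version the
// existing groups run (later changed to the cluster version, as above).
function defaultVersionForAddedNodeGroup(existingGroupVersions: string[], clusterVersion: string): string {
  return existingGroupVersions[0] ?? clusterVersion;
}
```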
- The entire tag collection needs to be sent if it changed.
- I noticed that when editing a tag key, each keystroke added a new tag, so I resolved that as well.
rancher/rancher#28949
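A hedged sketch of both tag fixes; `tagsToSave` and `renameTag` are illustrative helpers, not the real component code:

```ts
type Tags = Record<string, string>;

// EKS replaces tags as a whole map, so any change means resending the
// entire collection rather than a diff.
function tagsToSave(original: Tags, edited: Tags): Tags | undefined {
  const keys = new Set([...Object.keys(original), ...Object.keys(edited)]);
  const changed = [...keys].some((k) => original[k] !== edited[k]);
  return changed ? { ...edited } : undefined;
}

// Renaming a key must remove the old entry; mutating per keystroke without
// deleting leaves one stale partial key behind for every letter typed.
function renameTag(tags: Tags, oldKey: string, newKey: string): Tags {
  const { [oldKey]: value, ...rest } = tags;
  return { ...rest, [newKey]: value };
}
```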
Adds both a tooltip warning on the cluster row and an alert warning on the cluster dashboard.
The version can only be updated on edit when the cluster version is different from the node group version;
at all other times we set the node group version to the cluster's version.
rancher/rancher#28335
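Sketched with an assumed cluster shape, the single condition that drives both warnings, plus the edit-time pinning:

```ts
interface EksCluster {
  version: string;
  nodeGroups: { version: string }[];
}

// One flag drives both warnings: the tooltip on the cluster row and the
// alert banner on the cluster dashboard.
function hasOutdatedNodeGroups(cluster: EksCluster): boolean {
  return cluster.nodeGroups.some((ng) => ng.version !== cluster.version);
}

// On edit, a node group's version is only selectable when it differs from
// the cluster's; otherwise it is pinned to the cluster version.
function nodeGroupVersionOnEdit(
  cluster: EksCluster,
  ngVersion: string
): { version: string; editable: boolean } {
  const editable = ngVersion !== cluster.version;
  return { version: editable ? ngVersion : cluster.version, editable };
}
```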
It turns out that the minimum value the backend accepts won't allow
upgrades to complete. This switches the value to the default value to
mitigate that issue.
rancher/rancher#27333
error handler
wip - dev for aws cloud creds
wip - refactor aws login
wip - kms key
wip - encrypt secrets
wip - private access & vpcs
wip - translations and formatting
wip - more cleanup
remove unneeded code
wip - node groups
add ability to disable value label
fix double import, fix double-negative disabled add button, and expose addButtonEnabled
clean up public access and node groups
cleanup variable names and default config
eks regions
wip - eks v2 select
differentiate v1/2 amazon eks providers
clean up node groups, translations
unionize top errors
Consolidate aws util statics
WIP - Import/Cloud Creds
tweak cloud cred events for eks driver
WIP new import selector
drop unneeded variables in eks import
kms keys cleanup
allow the user to enter a KMS key if the KMS keys call fails
firefox styles
drop vpc selection and group subnets by vpcs
fall back to a cluster name input on import if allClusters fails to load from eks
make eks import a bit more dynamic
fix bug in driver eks for default subnet
more imported-cluster cleanup
eks v2 edit
eks v2 vendors
remove use-cloud-creds temp branches
push current version to version choices if it doesn't exist
eks v2 rename cloudcred param
fix eks v2 versions
clean up for PR
When upgrading Rancher, the old 1.4 CIS profiles were removed, which prevented
the currently selected profile for scheduled scans from being selected.
This adds the selected profile to the options if it's not present, allowing the
profile to be selected by default. If a different profile is selected and saved,
the option will no longer be present. This option will also not be present in
the modal for manually running a scan.
rancher/rancher#27867 rancher/rancher#27866 rancher/rancher#27374
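A sketch of the fallback, assuming a simple option shape; the real profile options are built elsewhere:

```ts
interface ProfileOption {
  value: string;
  label: string;
}

// Keep the saved profile selectable even if the upgrade removed it from
// the list of available profiles; once the user saves a different one,
// the stale option naturally disappears.
function profileOptions(available: ProfileOption[], selected?: string): ProfileOption[] {
  if (!selected || available.some((o) => o.value === selected)) {
    return available;
  }
  return [{ value: selected, label: selected }, ...available];
}
```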
Enabled was being set to the string 'false' instead of the boolean false, but this
was masked by the execution of initScheduledClusterScan. When the
KDM values weren't present, initScheduledClusterScan did
an early exit and no longer masked the bad 'false' default value.
This will further improve the earlier solution for rancher/rancher#26996.
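An illustration of why the string default misbehaved; the function below is a sketch, not the real initScheduledClusterScan:

```ts
// A truthy string was standing in for a boolean:
const bad = 'false'; // Boolean('false') === true
const good = false;

// Why the bug stayed hidden: normal initialization overwrote the default,
// but a missing-KDM early exit left it in place.
function initScheduledClusterScan(kdmValuesPresent: boolean, config: { enabled: boolean | string }): void {
  if (!kdmValuesPresent) {
    return; // early exit: the bad 'false' default leaks through untouched
  }
  config.enabled = config.enabled === true; // normalization that masked the bug
}
```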
It appears that sometimes the cisscanconfig schema isn't present (I haven't
been able to reproduce this consistently), which causes the UI to prevent the user
from saving changes to their cluster. Though I think this could be fixed
on the backend, I'd like to stop the bleeding and have the UI handle this
better.
So there are two things here to help out:
1. I added a default value to the initial empty object, just in case it never gets set
elsewhere.
2. If an exception is thrown while creating the scanConfig,
we now prevent the schedule from being set, since it wouldn't work
anyway.
Either of these should allow saving to proceed.
rancher/rancher#26996
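Both guards sketched together, with assumed names (`buildScheduledScan`, a `schemas` map); the real component logic differs:

```ts
function buildScheduledScan(
  schemas: Record<string, unknown>,
  schedule: string
): { schedule: string; scanConfig: Record<string, unknown> } | null {
  // Guard 1: a default so scanConfig can never be left undefined.
  let scanConfig: Record<string, unknown> = {};

  try {
    if (!schemas['cisscanconfig']) {
      throw new Error('cisscanconfig schema missing');
    }
    scanConfig = {}; // ...built from the schema in the real code
  } catch {
    // Guard 2: without a valid scanConfig the schedule couldn't work
    // anyway, so drop it instead of blocking the cluster save.
    return null;
  }

  return { schedule, scanConfig };
}
```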