If the imported cluster is on a release channel, we can limit the versions offered to the user in upgrade scenarios to those that exist in that release channel.
rancher/rancher#32360
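A minimal sketch of that filtering, assuming node-semver and hypothetical field names (`validVersions`, `currentVersion`):

```typescript
import * as semver from 'semver';

interface ReleaseChannel {
  name: string;
  validVersions: string[]; // versions published in this channel (hypothetical field name)
}

// Offer only versions that exist in the cluster's release channel and are
// newer than what the cluster is currently running.
function upgradeChoices(channel: ReleaseChannel, currentVersion: string): string[] {
  return channel.validVersions.filter((v) => semver.gt(v, currentVersion));
}
```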
If a user launches a new GKE cluster but chooses a combination of options that fails before the cluster ever starts provisioning,
we now allow them to edit the cluster and all of its fields.
rancher/rancher#32207
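A rough sketch of the guard this implies; the shape and flag below are assumptions, not the actual cluster model:

```typescript
// Hypothetical cluster shape for illustration.
interface GkeCluster {
  state: string;                 // e.g. 'error', 'provisioning', 'active'
  isAlreadyProvisioned: boolean; // whether GKE ever started creating the cluster (assumed flag)
}

// If the failure happened before GKE ever began provisioning, nothing on
// the cluster is locked in yet, so every field stays editable.
function allFieldsEditable(cluster: GkeCluster): boolean {
  return cluster.state === 'error' && !cluster.isAlreadyProvisioned;
}
```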
The default nodePool object was not being cloned properly when creating a new record, so the second node pool object was just a reference to the first one.
Also renamed model to nodePool, which I find less confusing.
rancher/rancher#32167
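The class of bug looks like the sketch below (illustrative shapes, not the actual Ember models); the fix is to deep-clone the default per record:

```typescript
interface NodePool {
  name: string;
  initialNodeCount: number;
  config: { machineType: string };
}

const DEFAULT_NODE_POOL: NodePool = {
  name: 'default-pool',
  initialNodeCount: 3,
  config: { machineType: 'n1-standard-2' },
};

// Buggy: both entries point at the same object, so editing the second
// pool silently mutates the first.
const buggyPools = [DEFAULT_NODE_POOL, DEFAULT_NODE_POOL];

// Fixed: deep-clone the default when creating each record so every pool
// owns its own state. (A shallow spread would still share `config`.)
function newNodePool(): NodePool {
  return structuredClone(DEFAULT_NODE_POOL);
}

const pools = [newNodePool(), newNodePool()];
```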
Reworked the parseCloudProviderVersion choices for a more robust and native implementation.
Coerced versions are discarded and the raw versions are passed around instead, since semver can handle them directly.
Also adds an "include prerelease" input.
This fixes an issue where the version choices would recompute and add anything less than the selected version to the can't-downgrade list before the cluster was saved.
rancher/rancher#32135
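A sketch of how raw versions and the prerelease toggle can flow through node-semver; the function and parameter names here are illustrative:

```typescript
import * as semver from 'semver';

// Build upgrade choices from the raw versions the provider returns.
// semver understands suffixes like '-gke.1210' directly, so no coercion
// is needed; `includePrerelease` mirrors the new input and is forwarded
// to semver's range matching.
function upgradeChoices(
  rawVersions: string[],
  currentVersion: string,
  includePrerelease: boolean,
): string[] {
  const range = `>=${currentVersion}`; // anything below would be a downgrade
  return rawVersions.filter((v) =>
    semver.satisfies(v, range, { includePrerelease }),
  );
}
```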
rancher/rancher#31221
Squashed commits for the GKE v2 driver work:
- initial creation of the GKE driver
- create a shared Google service for the v1/v2 drivers
- move all Google fetch methods to the shared service
- cluster/Kubernetes options
- ipPolicy conditionals
- private nodes observer so ipAliases can be forced to true
- subnetwork logic
- cru-private-cluster component for GKE
- initial values
- GKE node pools
- default configs and service cleanup
- node group changes
- subnetwork work
- hide the private nodes config if not enabled
- useIpAliases work
- master authorized network component
- logging
- no initial master version
- GKE node pool fixups
- input-cidr component and validation (see the sketch after this list)
- new node pool logic
- node pool updates
- fixups from launching
- edit mode changes
- more edit updates
- more fixups
- node pool edits
- import GKE
- reset auto-scale
- implement cloud credentials in GKE v2
- cloud credential changes for GKE nice-to-haves
- implement fetch clusters
- implement shared subnets
- cleanup
- import private cluster work and other fixes
- private cluster changes
- more import/register changes
- null values and node pool version changes
- GKE private networks warning
- fixups
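For the input-cidr item above, a sketch of the kind of check such a component needs; this is illustrative, not the component's actual code:

```typescript
// Accepts IPv4 CIDR notation: four octets in 0-255 plus a /0-/32 prefix.
function isValidIpv4Cidr(value: string): boolean {
  const match = /^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})\/(\d{1,2})$/.exec(value);
  if (!match) {
    return false;
  }
  const octets = match.slice(1, 5).map(Number);
  const prefix = Number(match[5]);
  return octets.every((o) => o >= 0 && o <= 255) && prefix >= 0 && prefix <= 32;
}

// isValidIpv4Cidr('10.0.0.0/8')  -> true
// isValidIpv4Cidr('10.0.0.0/33') -> false
// isValidIpv4Cidr('10.0.0/8')    -> false
```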
K3s clusters that do not have a config should not allow editing of items on the config.
A null K3s config indicates a Docker-installed cluster, so I've added extra logic
to check whether the config exists; if it does not, we don't show the manage-import-cluster info component.
I also added logic to the final save method in cru-cluster to only show the registration step if the cluster is pending
and the K3s config is not null.
rancher/rancher#30977
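A condensed sketch of the two guards, with assumed shapes rather than the real models:

```typescript
// A null k3sConfig means the node was brought up via the Docker install
// rather than provisioned by Rancher (assumed shape).
interface K3sCluster {
  state: string;            // e.g. 'pending', 'active'
  k3sConfig: object | null;
}

// Only show the import cluster info component when a config exists.
function showImportClusterInfo(cluster: K3sCluster): boolean {
  return cluster.k3sConfig !== null;
}

// Only show the registration step when the cluster is pending AND the
// config is not null.
function showRegistrationStep(cluster: K3sCluster): boolean {
  return cluster.state === 'pending' && cluster.k3sConfig !== null;
}
```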
Since the env vars have to be saved on the cluster, there is no easy way to dynamically update the cluster command without splitting the string and doing all kinds of nasty things.
I refactored the logic so that, for both Custom and Import clusters, the command is only shown after the cluster has been saved. This ensures the user is required to save the cluster before fetching the command on an edit action.
rancher/rancher#31529
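The gist of the refactor, sketched with assumed names:

```typescript
interface PendingCluster {
  id: string | null; // null until the first save succeeds (assumed)
}

// The registration command embeds env vars that are persisted with the
// cluster, so rather than patching a command string client-side, it is
// only fetched once the cluster has been saved.
async function registrationCommand(
  cluster: PendingCluster,
  fetchCommand: (id: string) => Promise<string>,
): Promise<string | null> {
  if (cluster.id === null) {
    return null; // nothing to show until the user saves
  }
  return fetchCommand(cluster.id);
}
```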
After some discussion we decided to simplify things, since all that's actually needed are key/value pairs.
I made the changes within the existing component instead of switching to keyValue, to reduce risk at this stage of the release.
rancher/rancher#31545
Since the reference types can't exist until the cluster is created, I added an option to disable all reference types and use it during RKE create.
Even though the backend doesn't indicate that the variable name is required, we're marking it as such because omitting it causes the cluster to fail. No validation will be done at this time.
rancher/rancher#31528
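An illustrative validation along those lines (assumed shapes and messages):

```typescript
interface EnvVarSpec {
  name: string;
  valueFrom?: { kind: 'configMap' | 'secret'; ref: string };
}

// Reference types can't resolve until the cluster exists, so they are
// rejected during RKE create; the name is treated as required even though
// the backend schema doesn't flag it.
function validateEnvVar(spec: EnvVarSpec, isCreate: boolean): string[] {
  const errors: string[] = [];
  if (!spec.name) {
    errors.push('Variable name is required');
  }
  if (isCreate && spec.valueFrom) {
    errors.push('Reference types are not available until the cluster is created');
  }
  return errors;
}
```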
- Fixed issues with determining the provider when editing clusters that are still provisioning.
- Fixed problems caused by empty values.
- Fixed an issue where the default value wasn't being set properly for configmaps and secrets when selecting the type and nothing else.
https://github.com/rancher/rancher/issues/31023#issuecomment-784717552
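For the default-value fix, a sketch of the idea (assumed types, not the actual component code):

```typescript
type SourceType = 'configMap' | 'secret' | 'plain';

interface ValueSource {
  type: SourceType;
  value: string;
}

// Previously only `type` was set when the user picked a source type, so
// downstream code saw an undefined value; seeding an explicit default
// alongside the type avoids that.
function onTypeSelected(type: SourceType): ValueSource {
  return { type, value: '' };
}
```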