WIP - Initial creation of AKS v2 cluster
add cloud cred component to aks
location and availability zone changes aks v2
remove tenant id and fix missing driver bug for new aks
Cloud creds proxy calls and wip auth networks
update input cidr to move error to tooltip
this accounts for help text below the input
aks v2 networking and use input-cidr
convert tooltip around input cidr to info icon tooltip
fixups
Initial aks node pools
system node pool removal and radio button logic
hide primary/user logic for now
markup and node availability zones
sync aks config
import aks
state logic for authorizedIpRanges and Private Cluster AKS v2
kubenet and azure cni networking setup
advanced networking
default version aks v2
aks edit modes
windows profiles changes
new proxy calls for region, vm size, and clusters aks v2
fix ups
import mode edit and failed cluster edit
add disabled options to searchable select
network policy fixups
node pool versions
save fixes
match import link order with v2 create order on select page
aks v2 set resourceGroup from proxy call not input
testing fixes
editing state steps for aks v2
translations and defaults
version bugs
If a cluster errors out and the user tries to save again, they'll see an error that they need at least one owner role. Moving to destroy ensures we're leaving the route.
rancher/rancher#32447
If the imported cluster is on a release channel, we can then limit the versions available to the user for upgrade scenarios to those that exist in the release channel (see the sketch below).
rancher/rancher#32360
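A minimal sketch of that filtering, assuming the release channel data exposes a list of valid versions; the `ReleaseChannel` shape and `upgradeChoices` helper are illustrative, not the actual UI code:

```ts
// Illustrative sketch: limit upgrade choices for an imported cluster to the
// versions published in its release channel (shapes and names are hypothetical).
interface ReleaseChannel {
  channel: string;          // e.g. "regular"
  validVersions: string[];  // versions available in that channel
}

function upgradeChoices(allVersions: string[], channel?: ReleaseChannel): string[] {
  if (!channel) {
    return allVersions;     // no channel info: fall back to the full list
  }
  const allowed = new Set(channel.validVersions);
  return allVersions.filter((v) => allowed.has(v));
}
```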
If a user launches a new GKE cluster but chooses a combination of options that fails before the cluster ever starts provisioning,
we now allow them to edit the cluster and all its fields.
rancher/rancher#32207
We would spuriously skip the command step because we skipped post save, and occasionally the cluster would have its state set to active, which is an indication to skip the command; the corrected guard is sketched below.
rancher/rancher#32242
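A rough sketch of that guard in a hedged form; `ClusterLike` and `shouldShowCommandStep` are hypothetical names, not the actual cru-cluster code:

```ts
// Hypothetical guard: skipping post save should no longer skip the command
// step as a side effect; only a genuinely active cluster needs no command.
interface ClusterLike {
  state: string;
}

function shouldShowCommandStep(cluster: ClusterLike): boolean {
  // An active cluster is the only indication that registration already happened.
  return cluster.state !== 'active';
}
```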
The default nodePool object was not being cloned properly when creating a new record, so the second node pool object was just referencing the first one (sketched below).
Renamed model to nodePool because I find it less confusing.
rancher/rancher#32167
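A small sketch of the reference-versus-clone issue; the default node pool shape here is illustrative:

```ts
// Illustrative sketch of the cloning bug: reusing the same default object means
// every node pool shares one reference, so editing the second pool mutates the first.
const defaultNodePool = { name: '', count: 3, vmSize: 'Standard_DS2_v2' };

// Buggy: both entries reference the same object, so a change to one shows up in both.
const poolsBuggy = [defaultNodePool, defaultNodePool];

// Fixed: clone the default so each node pool gets its own copy.
const poolsFixed = [{ ...defaultNodePool }, { ...defaultNodePool }];
poolsFixed[1].count = 5;   // only the second pool changes
```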
Versioned the parseCloudProviderVersion choices for a more robust and native implementation.
Discards coerced versions and just passes the raw versions around, because semver can handle this.
Adds an include-prerelease input as well.
This also fixes an issue that caused the version choices to recompute and add anything less than the selected version to the can't-downgrade list before the cluster was saved; see the sketch below.
rancher/rancher#32135
rancher/rancher#31221
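A minimal sketch of working with raw versions via the semver package; the GKE-style version strings and variable names are only illustrative:

```ts
// Minimal sketch of handling raw version strings directly with semver
// (no coercion); the versions here are only examples.
import * as semver from 'semver';

const selected = '1.20.6-gke.1000';
const choices = ['1.19.10-gke.1600', '1.20.6-gke.1000', '1.21.1-gke.1800'];

// Anything lower than the selected version lands in the can't-downgrade list.
const cantDowngrade = choices.filter((v) => semver.lt(v, selected));

// Range checks honor prerelease versions when the "include prerelease" input is on.
const includePrerelease = true;
const upgradeable = choices.filter((v) =>
  semver.satisfies(v, `>${selected}`, { includePrerelease })
);
```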
initial creation of gke driver
create shared google service for v1/v2 driver
wip
all fetch methods for google to shared service
cluster/kubernetes options
ippolicy conditionals
private nodes observer so can force ipaliases true
subnetwork logic
cru-private-cluster for gke
initial values
gke node pools
default configs and service cleanup
node group changes
subnet work
hide private nodes config if not enabled
useIpAliases work
master authorized network component
logging
no initial master version
gke node pool fixups
input-cidr component and validation
wip - new np logic
node pool updates
fix ups from launching
edit mode changes
more edit updates
more fix ups
node pool edits
import gke
reset auto-scale
implement cloud credentials in gke v2
Cloud cred changes for gke nice to haves
implement fetch clusters
Implement Shared Subnets
cleanup
Import private cluster work and other fixes
private cluster changes
More import/register changes
Null values and node pool version changes
gke private networks warning
fixups
k3s clusters that do not have a config should not allow editing of items on the config.
A null k3s config indicates a docker-installed cluster, so I've added extra logic
to check whether the config exists; if it does not, we don't show the manage import cluster info component.
I also added logic to the final save method in cru-cluster to only show the registration step if the cluster is pending
and does not have a null k3s config (see the sketch below).
rancher/rancher#30977
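A hypothetical sketch of the checks described above; the `K3sCluster` shape and helper names are illustrative, not the real component code:

```ts
// Hypothetical sketch: a null k3sConfig means the cluster was installed via
// Docker, so there is nothing on the config to manage or edit.
interface K3sCluster {
  state: string;
  k3sConfig?: { kubernetesVersion?: string } | null;
}

function showManageImportInfo(cluster: K3sCluster): boolean {
  return !!cluster.k3sConfig;
}

// The registration step is only shown for a pending cluster that actually has a config.
function showRegistrationStep(cluster: K3sCluster): boolean {
  return cluster.state === 'pending' && !!cluster.k3sConfig;
}
```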
Since the env vars have to be saved on a cluster, there is no easy way of dynamically updating the cluster command without splitting the string and doing all kinds of nasty things.
I refactored the logic so the command for both Custom and Import clusters is only shown after the cluster has been saved. This ensures the user is required to save the cluster before fetching the command on an edit action (see the sketch below).
rancher/rancher#31529
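A rough sketch of the save-then-fetch ordering; `saveThenFetchCommand` and `fetchCommand` are hypothetical names standing in for the real save flow:

```ts
// Hypothetical sketch: fetch the registration command only after the cluster
// record has been saved, so env vars and other fields are persisted first.
interface ClusterRecord {
  id: string;
  save(): Promise<ClusterRecord>;
}

async function saveThenFetchCommand(
  cluster: ClusterRecord,
  fetchCommand: (clusterId: string) => Promise<string>
): Promise<string> {
  const saved = await cluster.save();   // persist the cluster (and its env vars) first
  return fetchCommand(saved.id);        // only now is the generated command accurate
}
```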