Updated the spot instance UI to make spot instance types required and added a default row.
During the process I also found that the node groups data was being reset on a failure, so I added some wrappers around the observer for this.
rancher/rancher#31213
rancher/rancher#31288
rancher/rancher#30613
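A minimal sketch of the observer wrapper, with hypothetical property names (the real flag and paths may differ); the point is to bail out instead of clobbering the user's node group data when a save has failed or the component is tearing down:

```js
import Component from '@ember/component';
import { observer, set } from '@ember/object';

export default Component.extend({
  saveFailed: false, // hypothetical flag, set by the save error handler

  nodeGroupsDidChange: observer('cluster.eksConfig.nodeGroups.[]', function() {
    // Don't reset what the user has entered after a failed save or mid-teardown.
    if (this.saveFailed || this.isDestroyed || this.isDestroying) {
      return;
    }

    set(this, 'nodeGroups', this.get('cluster.eksConfig.nodeGroups') || []);
  }),
});
```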
Initial separation of the node instance info and group info for EKS v2
Template versions
inputs enabled for eks launch templates
build files
Resource instance tags
resource template cleanup
lint fix
don't disable GPU or spot during create
launch template pr feedback
Add example user data and removal on save logic
* Add LKE string constants
Supersedes https://github.com/rancher/ui/pull/4298
* New cluster driver boilerplate
* Pull assets from https://github.com/linode/ui-cluster-driver-lke
* Convert from skeleton format
* Add LKE to build_in_ui
- refactored to have the two locations match
* Replace " with '
* Translations for labels
* Language was moved to translations file
* Linter errors
* replace multiple sets with setProperties
* Move bare strings to translations
* Swap this.setProperties for ember/object's version
* Remove custom language change reactions
* remove dollar sign in favor of format-number
* Convert node pool list to a sortable table
Co-authored-by: Tamal Saha <tamal@appscode.com>
Custom config has a type of `map[json]`, and when we process this for the edit YAML view we can mangle some of the keys in the arbitrary JSON.
Specifically, the `Resources` key gets dasherized and lower-cased, which breaks the cluster on save.
rancher/rancher#30555
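A sketch of the kind of fix this implies, assuming a hypothetical `normalizeKeys` helper: subtrees of arbitrary JSON (the `map[json]` custom config) are passed through verbatim instead of being dasherized:

```js
import { dasherize } from '@ember/string';

// Hypothetical helper: normalize keys for the edit YAML view, but leave
// arbitrary-JSON subtrees (e.g. customConfig) untouched so keys like
// `Resources` are not dasherized/lower-cased.
function normalizeKeys(value, skip = false) {
  if (typeof value !== 'object' || value === null) {
    return value;
  }

  if (Array.isArray(value)) {
    return value.map((item) => normalizeKeys(item, skip));
  }

  const out = {};

  Object.keys(value).forEach((key) => {
    // Once inside customConfig, every descendant key is preserved verbatim.
    const skipChildren = skip || key === 'customConfig';

    out[skip ? key : dasherize(key)] = normalizeKeys(value[key], skipChildren);
  });

  return out;
}
```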
Exposes the kubeApi secrets config enabled property, which allows users to rotate keys.
Disable the rotate action when the prop is not set, we are rotating, or the cluster is transitioning.
rancher/rancher#30077
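A sketch of the gating, with assumed property paths (the secrets config path mirrors the Norman API shape but isn't confirmed here):

```js
import Component from '@ember/component';
import { computed } from '@ember/object';

export default Component.extend({
  rotating: false,

  canRotate: computed('rotating', 'cluster.transitioning',
    'cluster.rancherKubernetesEngineConfig.services.kubeApi.secretsEncryptionConfig.enabled',
    function() {
      const enabled = this.get('cluster.rancherKubernetesEngineConfig.services.kubeApi.secretsEncryptionConfig.enabled');

      // Disabled when the prop is not set, we are rotating, or the cluster is transitioning.
      return !!enabled && !this.rotating && this.get('cluster.transitioning') !== 'yes';
    }),
});
```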
Previously we prevented users from changing the choice of standard or custom VPC, which also loads the security groups; this adds the same logic back in for EKS v2.
rancher/rancher#30301
For this field anything other than 'standard' is the same as 'basic'.
Update the UI default to 'basic'. The UI should always send a value rather than an empty string, but for clusters created outside the UI we will display 'basic' if the value is not 'standard'.
rancher/rancher#29908
The default on the backend was changed from `standard` to `""` to alleviate issues with upgrading clusters that didn't have an LB SKU before.
rancher/rancher#29908
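A sketch of the normalization these two commits describe, assuming the AKS `loadBalancerSku` field:

```js
import { set } from '@ember/object';

// Anything other than 'standard' (including the backend's new '' default)
// is treated as 'basic'; the UI always sends a concrete value.
function normalizeLoadBalancerSku(config) {
  set(config, 'loadBalancerSku', config.loadBalancerSku === 'standard' ? 'standard' : 'basic');
}
```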
For imported or custom clusters that never had a host register there was
never a way to get back to the registration command. This exposes a new
modal and button on the cluster dashboard that allows the user to fetch
this command if the cluster doesn't have any nodes.
rancher/rancher#28548
Add an import-command component for the import command on imported clusters
this is the first step to exposing the command as an action in the modal for show command
Create CustomCommand component for modal show command
add custom input to show command modal
translations
This issue cropped up after a large dependency upgrade and I believe it has to do with some underlying Ember changes. Basically we'd hit a race condition where we'd click next before the observer had a chance to update the project id.
I found an additional issue where the observer on zone change would cause all the fetches to fire again after the cluster had saved, because we merge the results. I added checks to see if we'd saved, since `saving` was already set back to false but we hadn't started destroying yet.
rancher/rancher#29646
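A sketch of the guard, with hypothetical names; `saving` is already false by the time the zone observer fires, so a separate `saved` flag is checked alongside the destroy state:

```js
import Component from '@ember/component';
import { observer } from '@ember/object';

export default Component.extend({
  saved: false,

  zoneDidChange: observer('config.zone', function() {
    // Bail once we've saved or started tearing down, otherwise the merged
    // results trigger every fetch again.
    if (this.saved || this.isDestroyed || this.isDestroying) {
      return;
    }

    this.fetchMachineTypes(); // hypothetical fetch helpers
    this.fetchNetworks();
  }),

  didSave() {
    this.set('saved', true); // assumed save hook

    return this._super(...arguments);
  },
});
```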
There are changes required for the ember upgrade but additional changes
for individual libs are also present. Commit has been squashed, see
individual commits if needed.
There are a bunch of HBS changes as well. These are to deal with a
couple of new rules, and their impact is low. The button one has bitten us a
few times so this seemed great to add IMO.
update ember 3.12.4
ember 3.13
fixes for new eslint rules
ember 3.13->3.14
ember3.14->3.16
ember3.16->3.20
3.20 lint rules
yarn upgrade
update ember-optional-feature
update deps that can go to patch versions
upgrade major versions that are possible
update ansi_up
only reset term var if we're not destroying (see the sketch after this list)
prevents new ember 'same computation' error
update async
upgrade dot-object
new-catalog - add set to deal with new warning
marked-down - drop call into next to ensure it's called at the correct time
upgrade ember-assign
update ember-cli-clipboard
remove unused & deprecated ember-cli-release
remove unused drag-drop lib
use set on tracked prop
update ember-flatpickr && cli-test-loader
upgrade ember-href-to
update filesaver
update liquid-fire and ipaddr
upgrade jsondiffpatch
upgrade marked
upgrade semver
update xterm
Update ember-basic-dropdown
the library has changed quite a bit and no longer provides an addon for the content-item where we handled the click event to close the dropdown,
so click events must be added manually to the items being clicked in order to close the dropdown.
update dompurify
fix page header project styles
Bump ember api store, remove npm-run-all
Autofix button types from hbs linting
this change looks large but only adds `type="button"` to any buttons that don't have a type, which should help reduce weird side effects
more hbs lint changes for no-negate-condition
turned off a couple rules that could be too much to test right now
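As an example, the 'only reset term var if we're not destroying' change from the list above might look like this sketch (hypothetical component shape):

```js
import Component from '@ember/component';
import { set } from '@ember/object';

export default Component.extend({
  term: null, // the xterm instance

  disconnect() {
    // Only reset the property if we're not already tearing down; setting it
    // mid-destroy is what trips Ember's new 'same computation' assertion.
    if (!this.isDestroying && !this.isDestroyed) {
      set(this, 'term', null);
    }
  },
});
```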
Cluster version can only deviate from node group versions by 1, so if any node group version is more than 1 behind, the cluster cannot upgrade until the node groups upgrade.
New node groups should have their version set to that of the cluster, not that of the other node groups.
rancher/rancher#28968
rancher/rancher#29166
This makes the node group upgrade UX closer to the UX of the AWS console.
Removes the node group version select, replaces it with a radio.
Adds logic to set the new node group version on edit to the version of other node groups.
Adds logic to prevent node group version upgrades while the cluster version is being upgraded.
rancher/rancher#28965
rancher/rancher#28966
rancher/rancher#28968
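A sketch of the version-skew rules from these two commits, using semver minors (function names hypothetical):

```js
import Semver from 'semver';

// A node group may only upgrade when it trails the cluster's minor by
// exactly one, and never while the cluster itself is upgrading.
function canUpgradeNodeGroup(clusterVersion, nodeGroupVersion, clusterUpgrading) {
  if (clusterUpgrading) {
    return false;
  }

  const clusterMinor = Semver.minor(Semver.coerce(clusterVersion));
  const groupMinor = Semver.minor(Semver.coerce(nodeGroupVersion));

  return clusterMinor - groupMinor === 1;
}

// The cluster can't upgrade while any node group is more than one minor behind.
function canUpgradeCluster(clusterVersion, nodeGroupVersions) {
  const clusterMinor = Semver.minor(Semver.coerce(clusterVersion));

  return nodeGroupVersions.every((v) => clusterMinor - Semver.minor(Semver.coerce(v)) <= 1);
}
```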
- The entire tag collection needs to be sent if it changed
- I noticed that when editing a tag key each letter was being added, so I resolved that as well
rancher/rancher#28949
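A sketch of sending the whole collection (hypothetical shape): rebuild the tag map from the edited rows and set it in one shot, rather than mutating per keystroke:

```js
import { set } from '@ember/object';

function tagsChanged(rows, config) {
  const tags = {};

  rows.forEach(({ key, value }) => {
    if (key) {
      tags[key] = value;
    }
  });

  // The entire tag collection is sent whenever anything changed.
  set(config, 'tags', tags);
}
```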
adds both a tooltip warning on the cluster row and an alert warning on the cluster dashboard
The version can only be updated on edit when the cluster version is different from the node group version;
all other times we set the node group version to the cluster's version.
rancher/rancher#28335
Turns out that the min value that the backend accepts won't allow
upgrades to complete. This switches the value to the default value to
mitigate that issue.
rancher/rancher#27333
error handler
wip - dev for aws cloud creds
wip - refactor aws login
wip - kms key
wip - encrypt secrets
wip - private access & vpcs
wip - translations and formatting
wip - more cleanup
remove unneeded code
wip - node groups
add ability to disable value label
fix double import, fix double-negative disable on the add button, and expose addButtonEnabled
clean up public access and node groups
cleanup variable names and default config
eks regions
wip - eks v2 select
differentiate v1/2 amazon eks providers
clean up node groups, translations
unionize top errors
Consolidate aws util statics
WIP - Import/Cloud Creds
tweak cloud cred events for eks driver
WIP new import selector
drop unneeded variables eks import
kms keys cleanup
allow the user to enter a key manually if the KMS keys call fails
firefox styles
drop vpc selection and group subnets by vpcs
show a cluster name input on import if allClusters fails to load from EKS
make eks import a bit more dynamic
fix bug in driver eks for default subnet
more imported cleanup
eks v2 edit
eks v2 vendors
remove use of cloud creds temp branches
push the current version to version choices if it doesn't exist
eks v2 rename cloudcred param
fix eks v2 versions
clean up for pr
When upgrading Rancher the old 1.4 CIS profiles were removed, which prevented
the currently selected profile for scheduled scans from being selected.
This adds the selected profile to the options if it's not present, allowing the
profile to be selected by default. If a different profile is selected and saved,
the option will no longer be present. This option will also not be present in
the manual run scan modal.
rancher/rancher#27867
rancher/rancher#27866
rancher/rancher#27374
Enabled was being set to 'false' instead of false, but this was being
masked by the execution of initScheduledClusterScan. When the
KDM values weren't present, initScheduledClusterScan was doing
an early exit and no longer masking the poor 'false' default value.
This will further improve the earlier solution for rancher/rancher#26996.
It appears that sometimes the cisscanconfig schema isn't present (I haven't
been able to consistently reproduce this), which causes the UI to prevent the user
from saving changes to their cluster. Though I think this could be fixed
by the backend, I'd like to stop the bleeding and have the UI handle this
better.
So there are two things here to help out.
1. I added a default value to the initial empty object just in case it never gets set
elsewhere.
2. If an exception is thrown while creating the scanConfig
we now just prevent the schedule from being set since it wouldn't work
anyway.
Either of these should allow saving to proceed.
rancher/rancher#26996
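A sketch of the two mitigations together; the helper and record type names are assumptions, not the PR's actual code:

```js
function buildScheduledClusterScan(store) {
  // 1. Default value on the initial object in case it never gets set elsewhere.
  const scan = { enabled: false, scanConfig: { cisScanConfig: {} } };

  try {
    // 2. If creating the real scanConfig throws (e.g. the cisscanconfig
    // schema is missing), skip the schedule instead of blocking the save.
    scan.scanConfig = store.createRecord({ type: 'clusterScanConfig' });
  } catch (err) {
    delete scan.scanConfig;
  }

  return scan;
}
```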
Adds a new service which parses versions from the various cloud provider version
lists. I moved this to a new service rather than using form-versions because
form-versions is already fairly complicated with how it has to deal with RKE
Templates and unknown patch versions. It was simpler, cleaner, and faster to
move the CP cluster version parsing to a service and use a new select because the
versions coming down do not include unknown patch versions. Additionally, going
this route allows us to not have to test all clusters for regressions, only CP ones.
rancher/rancher#26255
I converted the enabled value back to a boolean instead of it being a string
but I forgot to switch the three disabled fields to expect a boolean.
rancher/rancher#26245
When editing a cluster the value of scheduledClusterScan.enabled comes
back as a string rather than a boolean. I just ensure the value will be a
boolean to make sure the value is displayed on the radio buttons as
expected.
rancher/rancher#26245
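A sketch of the coercion (the property path comes from the description above):

```js
import { get, set } from '@ember/object';

// The API hands back 'true'/'false' strings on edit, while the radio
// buttons compare against real booleans.
function coerceScanEnabled(cluster) {
  const raw = get(cluster, 'scheduledClusterScan.enabled');

  set(cluster, 'scheduledClusterScan.enabled', raw === true || raw === 'true');
}
```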
The default rancherKubernetesEngineConfig is only created for new
clusters. This now creates a default upgradeStrategy object when one
doesn't exist in rancherKubernetesEngineConfig.
rancher/rancher#25951
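A sketch of the backfill; the default field values here are illustrative assumptions:

```js
import { get, set } from '@ember/object';

function ensureUpgradeStrategy(cluster) {
  const config = get(cluster, 'rancherKubernetesEngineConfig');

  // Edit mode gets no default rancherKubernetesEngineConfig contents,
  // so create upgradeStrategy when it's missing.
  if (config && !get(config, 'upgradeStrategy')) {
    set(config, 'upgradeStrategy', {
      drain:                      false,
      maxUnavailableWorker:       '10%',
      maxUnavailableControlplane: '1',
    });
  }
}
```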
This will now allow KDM to drive what the benchmark options are and what
the default option is for profile selection given a kubernetes version.
The 'cisConfig' type is what we look up for this information.
rancher/rancher#25888
Without the value in place, when the user visited this page with only one node
and we auto-selected it, the dropdown wouldn't have an initialized value.
rancher/rancher#25966
- Make table sorting work with scheduled scans
- Make the cis table fit on laptop screen
- Add an appropriate placeholder for the scheduled scans cron field
rancher/rancher#25937
rancher/rancher#25939
The profile helper methods were attached to the cluster model.
Unfortunately, the cluster isn't available when creating a new rke
template.
To resolve this I moved all of the cis helpers out of the cluster model
and utils and moved them into a cisHelpers service so they could be
used without access to the cluster itself.
- Added Set Alert button
- This will set the appropriate options for cis
- Added Set Schedule button
- This will scroll the settings into view
- Added a modal so profiles can be picked
Was previously using the presence of nodeDrainInput to determine
the value of drain. Drain is now a part of the backend, so I'm using
that value instead of inferring it.
rancher/rancher#25732
Adds an empty name to the cluster model creation so it isn't missing if the user
opens the YAML editor, and so they know they should input it
Removes incorrect next usage
Adds logic to handle overriding the name in name-desc when updateYaml is called
Updates form-name-description model observer to watch the two props it actually
cares about
Fixes bug in removeEmpty util which would remove excludedKeys during filter phase
rancher/rancher#24971
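A simplified sketch of the corrected util (not the repo's removeEmpty verbatim): excluded keys are now honored during the filter phase, so they survive even when empty:

```js
function removeEmpty(obj, excludedKeys = []) {
  const out = {};

  Object.keys(obj).forEach((key) => {
    const value = obj[key];

    if (excludedKeys.includes(key)) {
      out[key] = value; // never filtered, even when empty

      return;
    }

    if (value === undefined || value === null || value === '') {
      return;
    }

    out[key] = (typeof value === 'object' && !Array.isArray(value))
      ? removeEmpty(value, excludedKeys)
      : value;
  });

  return out;
}
```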
This required refactoring the drain modal into a reusable component
since these fields were going to be used in more than one place.
rancher/rancher#24110
I originally tried to fix rancher/rancher#24704 without completely
special-casing. Unfortunately that led to other issues:
rancher/rancher#24745
rancher/rancher#24794
rancher/rancher#24814
I decided to revert all of the related changes and to just special
case this one instance. Ultimately I think the removeEmpty is the
culprit but it requires backend changes in order to properly fix
and those changes are not happening right now.
We were erroneously adding cloud_provider.awsCloudProvider on
Digital Ocean etc. due to rancher/rancher#24515.
This change assumes that the presence of
config.rancher_kubernetes_engine_config.cloud_provider.name
implies that the cloud_provider should be present. If that
nested field isn't present we remove cloud_provider.
rancher/rancher#24745
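A sketch of the special case as described:

```js
// Presence of cloud_provider.name implies the block is real configuration;
// otherwise drop it so e.g. Digital Ocean clusters don't pick up a stray
// awsCloudProvider.
function stripEmptyCloudProvider(config) {
  const rkeConfig = config.rancher_kubernetes_engine_config || {};
  const cloudProvider = rkeConfig.cloud_provider;

  if (cloudProvider && !cloudProvider.name) {
    delete rkeConfig.cloud_provider;
  }
}
```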
While working on a ticket to provide the ability to de-select subnetworks when
using the create subnetwork option, I discovered the options were all messed up
and allowed you to misconfigure yourself into a hole.
I've moved ipalias and related network settings out of advanced because,
depending on what you select for your subnetwork, the ability to choose ipalias
and the other settings changes.
This change allows you to deselect a node subnet so you can create a subnetwork
automatically.
rancher/rancher#21079
After pressing the 'create' button of the EKS driver the user was being
transitioned back to step 3 (VPC & Subnet) rather than waiting on the
final page until the save completes and returning to the cluster page.
An observer was being triggered by the save process which subsequently
set the step back to 3. To resolve this we will only enter the branch if
there are initialized values that need to be set back to default.
rancher/rancher#23493
While editing a cluster properly support .x kube version comparisons when
filtering out cluster template revisions.
Coercing a .x version converts it to a .0 which made the revision look like
it was a kube downgrade. By making use of .satisfies when the revision
kube version ends with a '.x' we're now better able to check if the
kube version is a downgrade and filter appropriately.
rancher/rancher#23489
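A sketch of the comparison (function name hypothetical): coercing '1.17.x' yields 1.17.0, which falsely reads as a downgrade from, say, 1.17.4, so '.x' revisions are treated as semver ranges instead:

```js
import Semver from 'semver';

function isKubeDowngrade(revisionVersion, currentVersion) {
  const current = Semver.coerce(currentVersion);

  if (revisionVersion.endsWith('.x')) {
    // Not a downgrade if the current version satisfies the '.x' range...
    if (Semver.satisfies(current, revisionVersion)) {
      return false;
    }

    // ...otherwise it only is one when current sits above the whole range.
    return Semver.gtr(current, revisionVersion);
  }

  return Semver.lt(Semver.coerce(revisionVersion), current);
}
```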
The check-override-allowed component did not know how to deal with the k8s
version question because of its tri-state and how we deal with the patch version
that is an override but not really an override. I added a check: if the
mode is view and we have the param, then display the param so we don't initialize the
form-version component, which has logic to inject the current version into its
versions dropdown, but only if we're new, editing, or cloning.
rancher/rancher#23478
rancher/rancher#23465
The current kubernetes version wasn't being shown if it was no
longer a part of the supported versions when in view mode. Instead
the latest version was being displayed even if that wasn't what was
deployed. To resolve this we include the current version as one of
the choices if it's not present.
rancher/rancher#23465
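A sketch of including the deployed version in the choices (hypothetical names):

```js
// In view mode, make sure whatever is actually deployed still shows up
// after it drops out of the supported list.
function versionChoices(supportedVersions, currentVersion) {
  const choices = supportedVersions.slice();

  if (currentVersion && !choices.includes(currentVersion)) {
    choices.unshift(currentVersion);
  }

  return choices;
}
```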
- Moved from Ember.$() to importing jquery.
- Moved from fn().on() to on(fn())
- Moved from fn().observes() to observer(fn())
This got /g/clusters from 27 warnings to 5 warnings for me.
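An illustrative before/after of those moves (not code from the PR):

```js
import Component from '@ember/component';
import $ from 'jquery';
import { observer } from '@ember/object';
import { on } from '@ember/object/evented';

export default Component.extend({
  // before: modelChanged: function() { ... }.observes('model.id')
  modelChanged: observer('model.id', function() {
    // react to model.id changes
  }),

  // before: setup: function() { ... }.on('init'), using Ember.$()
  setup: on('init', function() {
    $('body').addClass('ready');
  }),
});
```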
When cloning an RKE template revision with overrides, the values of the override
in the form were not reflected in the overrides section at the bottom of the
page because the alias on the question was never created.
rancher/rancher#23056
`Custom Cluster Overrides` was originally designed when we allowed users to
create custom overrides for items not in the UI, but since that was removed the
template consumer will only ever see overrides for sections we have built in,
and when launching we never display the overrides for ones with UI components.
rancher/rancher#23069
When consuming a cluster template, selection of the template id from the drop
down didn't change the cluster template revision because the UI component was
using the readOnly value. Attached the selection to the correct value so the
action floats up (DDAU).
rancher/rancher#22977
When editing a cluster that was created with a cluster template,
the cluster template revision couldn't be saved.
The revisionId was stored as a component member variable instead
of as a part of the model. It needed to be stored as part of the
model in order for the NewOrEdit to see the changes and save
them. I went ahead and referenced the model directly everywhere
in the component and removed the component member variable.
rancher/rancher#22920
We want the user to be able to see the security options that were
selected even if they can't be edited when editing the cluster.
We had to extract and infer the selected options given the oauthScopes.
It would be better if our API could more closely reflect our fields.
rancher/rancher#19070