* disable scale down to zero
* scaling down to zero restrictions for RKE1
* removed extra optional chaining
* removed wrong condition for minus button
* added unit tests
* combined rke1 and rke2 scale down modals + more tests + minor refactor
* delete unused file ScaleRke1NodeDownDialog.vue
* moved logic from created to data + used nameDisplay for both rke1 and rke2
---------
Co-authored-by: Mo Mesgin <mmesgin@Mos-M2-MacBook-Pro.local>
* add logic to support new table cols via an extensions hook on the resource table (instead of the type-map), so we can capture tables with locally defined headers, like the nodes list
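A hedged sketch of what registering a column through this hook could look like from an extension; `addTableColumn` and `TableColumnLocation` follow the extensions API, but the column fields, keys, and resource here are illustrative:
```
import { IPlugin, TableColumnLocation } from '@shell/core/types';

export default function(plugin: IPlugin): void {
  // add a column to the nodes list, a table whose headers are defined locally
  plugin.addTableColumn(
    TableColumnLocation.RESOURCE,
    { resource: ['node'] },               // where: any resource table for 'node'
    {
      name:     'internal-ip',            // hypothetical column name
      labelKey: 'myExt.nodes.internalIp', // hypothetical i18n key
      getValue: (row: any) => row.internalIp,
      width:    150
    }
  );
}
```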
* cleanup
* address PR comments
---------
Co-authored-by: Alexandre Alves <aalves@Alexandres-MacBook-Pro.local>
Co-authored-by: Alexandre Alves <aalves@Alexandres-MBP.lan>
* Extension support for custom provisioning
* Fix lint issues
* Discovery / Tweaks
- fix issue where namespacesOverride was lost
- tidy up PROVIDER
- try to handle missing provider=type url param (could be missing extension-params)
- added a few comments to come back to
* Names and typings
- change param --> customParam to make it clearer it's not url params
- add labels-annotations to shell types
* Wire in provider detailTabs
- as per original readme this should be made generic (extension point working directly with ResourceTabs)
* Update IClusterProvisioner & docs
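As a rough sketch only: a custom provisioner registered by an extension could implement the interface along these lines; the exact member set is defined by the updated IClusterProvisioner typings and docs, and everything below is illustrative:
```
import { IClusterProvisioner } from '@shell/core/types';

// illustrative implementation; member names follow the docs, values are made up
export class ExampleProvisioner implements IClusterProvisioner {
  static ID = 'example-provider'; // matches the ?provider= query param

  get id(): string {
    return ExampleProvisioner.ID;
  }

  get label(): string {
    return 'Example Provider';
  }

  // cluster hooks are optional; the cluster is passed in when they run
  async createMachinePoolMachineConfig(idx: number, pools: any[], cluster: any): Promise<any> {
    return { clusterName: cluster?.metadata?.name };
  }
}
```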
* Improvements / Changes to support proving out DO extension
- cluster hooks optional
- buff up save hook (and pass in cluster when calling apply fn)
- move normalizeName into generic place
- bring back async create machine config
- hack for DO extension (map example provider to DO provider)
* updates, add optional saveCluster, add missing kube file
- saveCluster complements hooks, doesn't skip handling of addons, etc
- ensure register hooks take the `this` context in all worlds
* Adding docs
* Updates
- location config based changes
- change customParams to context
- add query param
- add new extension point to add tabs to the cluster config section of cluster create
- fixed some typing
- fixed issue where cluster was not passed to before / after hooks (only important if 'this' changes)
* Changes following review, fix `t` in plugins
* Fix linting
* Docs updates, pass through more edit/view things
* Conditionally show the namespace grouping in the cluster list
- means users can differentiate between clusters with the same name in different namespaces
- useful when clusters are created via the extension provisioner, where the ns can be selected
* docs tweaks, actually include the provisioning page in docs
---------
Co-authored-by: Richard Cox <richard.cox@suse.com>
* Adds form fields to edit the autoscaling behavior in the HPA edit form
* Adds autoscaling behavior information on the HPA detail page
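For reference, the `behavior` block these form fields and the detail page map to is the standard autoscaling/v2 shape (values below are examples only):
```
// standard autoscaling/v2 HPA behavior shape; example values
const behavior = {
  scaleDown: {
    stabilizationWindowSeconds: 300, // wait 5m of stable metrics before shrinking
    selectPolicy:               'Min',
    policies:                   [
      { type: 'Pods', value: 1, periodSeconds: 60 } // at most 1 pod per minute
    ]
  },
  scaleUp: {
    stabilizationWindowSeconds: 0,
    selectPolicy:               'Max',
    policies:                   [
      { type: 'Percent', value: 100, periodSeconds: 15 } // may double every 15s
    ]
  }
};
```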
Fixes https://github.com/rancher/dashboard/issues/9032
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
This embeds the project monitoring Grafana metric dashboards if they are present and user has no access to cluster monitoring.
Fixes https://github.com/rancher/dashboard/issues/7286
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
If Alertmanager is deactivated, the Prometheus links should not be disabled. Previously all links were disabled as soon as Alertmanager was disabled, because the namespace was picked from the Alertmanager URL, which is empty when it is disabled.
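A minimal sketch of the idea behind the fix, with hypothetical names; the namespace is derived from whichever monitoring URL is actually populated rather than only the Alertmanager one:
```
// hypothetical helper; Rancher proxy URLs embed the namespace after /namespaces/
function monitoringNamespace(alertmanagerUrl?: string, prometheusUrl?: string): string | undefined {
  const fromUrl = (url?: string) => url?.split('/namespaces/')[1]?.split('/')[0];

  // previously only alertmanagerUrl was consulted, so disabling Alertmanager
  // (empty URL) killed every link, including the Prometheus ones
  return fromUrl(alertmanagerUrl) || fromUrl(prometheusUrl);
}
```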
Fixes https://github.com/rancher/dashboard/issues/8350
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
This adds the following functionality to the violations list on the OPA Gatekeeper constraint detail page:
* Add a namespace column to the violations
* Make the violations list searchable
* Allow downloading the violations as a CSV, similar to CIS scanner violations
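A minimal sketch of the CSV export, mirroring the CIS scanner approach; the violation fields used here are illustrative:
```
function violationsToCsv(violations: { namespace?: string; name: string; message: string }[]): string {
  const escape = (v: string) => `"${ (v || '').replace(/"/g, '""') }"`; // quote + escape cells
  const header = ['Namespace', 'Name', 'Message'].join(',');
  const rows = violations.map((v) => [v.namespace || '', v.name, v.message].map(escape).join(','));

  return [header, ...rows].join('\n');
}
```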
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
* change condition to check if buttons for scale up/scale down deployment are enabled
* revert changes
* change condition to check if buttons for scale up/scale down deployment are enabled
* `canUpdate` exists in inherited `resource-class` class
- I've confirmed this change is ok with Alex
---------
Co-authored-by: Alexandre Alves <aalves@Alexandres-MBP.lan>
Co-authored-by: Richard Cox <ricox@suse.com>
* Handle nodeGroups undefined for manually imported RKE cluster.
* Forward-port changes from #8222
---------
Co-authored-by: Neil MacDougall <nmacdougall@suse.com>
- fix issue where the machines list fails to update when the real/fake machine split changes but the total stays the same
- state 1 - X machines + Y fake machines = total
- state 2 - X+1 machines + Y-1 fake machines = same total
- same total meant the sortable table `arrangedRows` value wasn't updating
- fix is to ensure the sort generation changes so `arrangedRows` doesn't return the cached rows (see the sketch below)
- this is the same method used for the project/namespace list
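A minimal sketch of that fix; the function name is illustrative, the point is that the generation key reflects the real/fake split rather than the total:
```
// X + Y and (X+1) + (Y-1) share a total, so the total alone can't bust the
// sortable table's cached arrangedRows; key on the split itself instead
function sortGenerationFn(realMachines: number, fakeMachines: number): string {
  return `machines-${ realMachines }-${ fakeMachines }`;
}
```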
* update fakeMachines naming matching function
* update code based on PR comment
* Final tweak
- isElementalCluster will always result in the same `machinePoolInfName.includes(machineFullName)` outcome, so exit early with it (see the sketch below)
- tidy up var names
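A minimal sketch of the early exit (all names and the non-elemental path are illustrative):
```
function machineMatchesPool(isElementalCluster: boolean, machinePoolInfName: string, machineFullName: string): boolean {
  // for elemental clusters the outcome only depends on this check,
  // so return it immediately instead of falling through further logic
  if (isElementalCluster) {
    return machinePoolInfName.includes(machineFullName);
  }

  return machinePoolInfName === machineFullName; // hypothetical non-elemental path
}
```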
---------
Co-authored-by: Alexandre Alves <aalves@Alexandres-MBP.lan>
Co-authored-by: Alexandre Alves <aalves@Alexandres-MacBook-Pro.local>
Co-authored-by: Richard Cox <ricox@suse.com>
* getting elemental changes on cluster provisioning back to rancher dashboard
* code cleanup
* apiVersion created from machineConfig schema attributes
* add machine-config loader to load it from an extension
* fix issue where elemental cluster details could not be displayed + minor changes and fixes
* fix bug where elemental infrastructureRef.name starts with nc- and was therefore generating a fake machine when it shouldn't + cleanup prov cluster model
* prevent code change
* getting k8s file back up to master state to avoid complex merge conflicts
* getting k8s file back up to master state to avoid complex merge conflicts
* applying changes to cluster.x-k8s.io.machinedeployment
* Address PR feedback
---------
Co-authored-by: Alexandre Alves <aalves@Alexandres-MBP.lan>
Co-authored-by: Alexandre Alves <aalves@Alexandres-MacBook-Pro.local>
Co-authored-by: Neil MacDougall <nmacdougall@suse.com>
* Moves sockets into the advanced worker
* worker can die peacefully now, making switching between clusters work.
* Make waitFor generic, wire in to waitForTestFn
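A minimal sketch of what a generic waitFor can look like, assuming a poll-until-truthy contract (timeout and interval values are illustrative):
```
function waitFor<T>(testFn: () => T | undefined, timeoutMs = 30000, intervalMs = 100): Promise<T> {
  return new Promise((resolve, reject) => {
    const started = Date.now();
    const timer = setInterval(() => {
      const res = testFn(); // resolve with the first truthy result

      if (res) {
        clearInterval(timer);
        resolve(res);
      } else if (Date.now() - started > timeoutMs) {
        clearInterval(timer);
        reject(new Error('waitFor: timed out'));
      }
    }, intervalMs);
  });
}
```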
* General Changes
- Fixes for switching cluster
- includes using common getPerformanceSetting
- avoid new code to unsub before socket disconnect
- handle `watch` `stop` requests
- lots of TODO's (questions, work, checks, test, etc)
- use common
* Switch socket fixes
- isAdvancedWorker should only be true for cluster store
- advancedWorker to be wired in
* Fix socket id for cluster workers
- sockets use an incremented local var for id
- when we nuke the socket file within the worker this resets, so they all have id of 1
- work around this by applying the unix time
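A minimal sketch of the workaround described above:
```
// module-local counter restarts at 1 whenever the worker reloads the socket
// file, so mix in wall-clock time to keep ids unique across reloads
let socketCounter = 0;

function nextSocketId(): string {
  return `${ Date.now() }-${ ++socketCounter }`;
}
```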
* Fix handling of new partial counts response
- seen in dex cluster explorer dashboard
- count cards would be removed when partial counts response received
* Make resourceWatcher the sole location for watch state
- getters canWatch and watchStarted are now worked around (they look at state in the UI thread)
- we now don't call resource.stop or restart.start in subscription
- tidied up `forgetType`
- moved clearFromQueue from steve mutations into subscription mutations (better location)
- added and removed some TODOs
- fixed watch (stop handler should be higher up, include force watch handling)
* pushes the csrf value into worker and adds it to fetch request headers.
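A minimal sketch of the flow, with illustrative message and header names (Rancher's API expects the CSRF token in a request header):
```
let csrf: string | undefined;

// UI thread posts the token in during worker init (message shape illustrative)
onmessage = (e: MessageEvent) => {
  if (e.data?.initWorker) {
    csrf = e.data.initWorker.csrf;
  }
};

// every fetch made from the worker carries the token
function workerFetch(url: string, init: RequestInit = {}): Promise<Response> {
  return fetch(url, {
    ...init,
    headers: { ...(init.headers || {}), 'x-api-csrf': csrf || '' }
  });
}
```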
* refactors batchChanges to address ref concerns and be more performant
* Maintain schema reference whilst updating
- This change mutates input in a function, which is bad...
- but ensures the reference isn't broken, which is needed to maintain similar functionality as before
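A minimal sketch of the pattern: mutate the existing object in place so every holder of the reference sees the update:
```
// mutating an argument is normally avoided, but here the shared
// reference is exactly what must survive the update
function updateInPlace(existing: Record<string, any>, neu: Record<string, any>): void {
  Object.keys(existing).forEach((k) => delete existing[k]); // drop stale keys
  Object.assign(existing, neu);                             // copy new values in
}
```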
* Fix waitForTestFn
- Seen when creating or viewing clusters
* On unwatch ensure any pending watch requests are removed from the queue
- this probably would have been a problem if the worker wasn't nuked
- however, as the code's there, let's make it safe
Also added a `trace` feature in the advanced worker; will probably bring it out to other places as well
* Fix navigation from cluster manager world to any cluster
- Ensure that we handle the case where the advanced worker was created but the resource watcher wasn't
- ... but fix case where this was happening (aka ensure that a blank cluster context is ignored)
* Tidy some TODOs
* Add perf settings page
- This will help test normal flow (when advanced worker is disabled)
- Note - setting is now in a bag. This may help us better support further settings (enable client side pagination, etc)
```
advancedWorker: { enabled: false },
```
* FIX - Nav from cluster dashboard --> specific event --> cluster dashboard and events not re-subbed
- Ensure we block default handling of resource.start (keep state in resource watcher)
* Tidying up some TODOs
* Adds in a cache and uses it to validate SCHEMA messages before batching.
* Forgot to actually save CSRF to the resourceWatcher when instantiated.
* an empty resource in a batchChange to signal remove
* Move addSchemaIndexFields to and created removeSchemaIndexFields in new file
- this avoids bringing class files into the worker
* Fix disconnect/reconnect
- Remove `syncWatch` (do the watch/unwatch straight away)
- Test/Fix re-sub on reconnect
- Test/Fix growls on disconnect
* Tidying up some TODO's
- including clean of workerQueue on resource.stop (this is SUPER defensive)
* batchChanges will now handle aliases
* Fix pods list - WIP
- ensure podsByNamespace is updated on batchChange
TODO
- the final update to the pod is ignored
- removing a namespace cleans the cache correctly
- disabling advanced worker still works
* Fix pods list - fixes
- ensure podsByNamespace is updated on batchChange
Tested / Fixed
- the final update to the pod is ignored
- removing a namespace cleans the cache correctly
- disabling advanced worker still works
* Tidying TODOs
* Remove default same-origin header
- https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
* Fixed TODO description
* Refactor subscribe, make it clear which vuex feature relates to what
* Lots of Fixes
- batchChanges fixes
- fix index is 0 issues (!/!!index), see the sketch after this list
- only `set` if we have to
- ensure we set the correct index after pushing to list
- ensure map is updated after reducing list size with limit
- podsByNamespace fixes
- ensure when we replace... we don't use the same referenced object
- general service resource fixes
- ensure service's pods list stays up to date with store
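A minimal sketch of the index-is-0 bug class fixed above (names illustrative):
```
function upsert(list: { id: string }[], resource: { id: string }): void {
  const index = list.findIndex((r) => r.id === resource.id);

  // the bug was the truthiness form: `if (!index)` is also true when
  // index === 0, so the element at position 0 was treated as "not found"
  if (index === -1) {
    list.push(resource);    // genuinely not found: append
  } else {
    list[index] = resource; // found, including at index 0: replace
  }
}
```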
* Multiple improvements/fixes
- resourceCache - store the hash instead of the whole object. This means longer load times but reduces the memory footprint (see the sketch after this list)
- resourceWatcher
- don't re-sub on socket reconnect if watcher is in error
- don't sub if watcher is in error
- don't unwatch for 'failed to find schema' and 'too old' errors
- this would clear the error, and we want to keep it to ensure we don't watch again
- Remove #5997 comments, follow on work #7917
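A minimal sketch of the hash-only cache (the hash function is illustrative):
```
const resourceCache = new Map<string, number>(); // resource id -> content hash

// cheap string hash; computing it per message costs load time,
// but the cache no longer pins whole resource objects in memory
function hashObj(obj: unknown): number {
  const str = JSON.stringify(obj);
  let hash = 0;

  for (let i = 0; i < str.length; i++) {
    hash = ((hash << 5) - hash + str.charCodeAt(i)) | 0;
  }

  return hash;
}

function hasChanged(id: string, resource: unknown): boolean {
  const hash = hashObj(resource);
  const changed = resourceCache.get(id) !== hash;

  resourceCache.set(id, hash);

  return changed;
}
```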
* toggle debug, remap alias types, cleaned up comments and console
* Unit tests for batchChanges
Much more scope for some crazy content
* Logging tweaks
- disable logging by default
- initWorker comes in too late to affect initial trace, so just rely on the `debug` to toggle at runtime
Co-authored-by: Richard Cox <richard.cox@suse.com>
* Added prompt in machinedeployment
* Save users promptConfirmation in cookies
* Changed prompt size
* Added comments to the code, replaced mounted function with created
* Fixed review comments
* Removed cookies, added scale pool prompt variable in prefs file
* Corrected pref variable name format and update comments
* Added confirmation prompt option in pref page
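A minimal sketch of what the preference entry can look like, following the pattern in the prefs file (the `create` helper is defined there; the key name is illustrative):
```
// in the prefs file, alongside the other definitions
export const SCALE_POOL_PROMPT = create('scale-pool-prompt', null);

// consumers then read/write it through the prefs store, e.g.:
//   this.$store.getters['prefs/get'](SCALE_POOL_PROMPT)
//   this.$store.dispatch('prefs/set', { key: SCALE_POOL_PROMPT, value: true })
```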
The catalog.cattle.io.clusterrepos/{repo} resource doesn't subscribe to updates. We now force a full request in the areas where it's appropriate to do so.
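A minimal sketch of forcing the fetch; `opt.force` follows the dashboard's findAll options, and the dispatch context is illustrative:
```
// inside a component/model with store access; bypass the cached list,
// since no socket update will ever refresh it
await this.$store.dispatch('management/findAll', {
  type: 'catalog.cattle.io.clusterrepo',
  opt:  { force: true }
});
```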
fixes #7668