We didn't actually need to call redirect from the methods, since the `loadingError` store action did the redirect for us.
We still need to exit early from the middleware, so instead of having callers check and act on a return value (which was a boolean, so couldn't be executed anyway) we now just throw an exception.
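A minimal sketch of the pattern, with hypothetical names (the real middleware, action, and error class differ):

```ts
// Hypothetical names for illustration; the dashboard's real middleware differs.
class MiddlewareRedirect extends Error {
  constructor(public readonly route: string) {
    super(`Redirecting to ${ route }`);
  }
}

async function authenticated(ctx: { store: { dispatch(action: string): Promise<unknown> } }) {
  const user = await ctx.store.dispatch('auth/findMe');

  if (!user) {
    // The store action already issued the redirect; throwing just aborts
    // the rest of the middleware chain, rather than returning a boolean
    // that every caller would have to check (and couldn't execute).
    throw new MiddlewareRedirect('/auth/login');
  }

  // ...remaining middleware runs only for authenticated users
}
```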
- Test count of settings requests
- during setup flow
- login page
- home page
- login page --> home page
In addition:
- Fix issue where the now-populated fromHeader can be `false`
- Don't mix up CATTLE_BOOTSTRAP_PASSWORD and TEST_PASSWORD; this causes issues in the setup flow when they differ
- Refactor setup test, re-enable check at end
- when findAll runs we cache running results with key `JSON.stringify(headers) + method + opt.url`
- if a request is made with a matching key we return the first result as the second result (avoiding duplicate requests)
- when mgmt settings were fetched with a hardcoded url ... url contains singular `setting`
- when mgmt settings are fetched without hardcoded url ... url contains plural `settings`
- therefore the second request was not using the cached first request
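A sketch of the de-duplication described above; the `opt` shape and helper names are assumptions:

```ts
// Sketch only; the real findAll cache differs. `opt` shape is an assumption.
interface RequestOpt { url: string; method?: string; headers?: Record<string, string> }

const inFlight = new Map<string, Promise<unknown>>();

function dedupedRequest(opt: RequestOpt, doRequest: (o: RequestOpt) => Promise<unknown>): Promise<unknown> {
  // Key matches the description above: headers + method + url.
  const key = JSON.stringify(opt.headers || {}) + (opt.method || 'get') + opt.url;

  let req = inFlight.get(key);

  if (!req) {
    req = doRequest(opt).finally(() => inFlight.delete(key));
    inFlight.set(key, req);
  }

  // Any matching request that arrives while the first is in flight gets
  // the same promise back, avoiding a duplicate network call.
  return req;
}
```

Because the url is part of the key, the singular `setting` and plural `settings` urls produced different keys, which is why the second request missed the cache.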
Also
- replaced findBy with .find
* resource edit AS yaml
* fix cruresource (yaml from form)
- lazy load the schemaDefinitions when needed; this avoids syncing everything to createYaml before we have an async chance to fetch schemaDefinitions (sketch below)
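A rough sketch of the lazy-load pattern, with hypothetical names:

```ts
// Hypothetical sketch: fetch schema definitions on demand instead of
// requiring them synchronously before createYaml runs.
let schemaDefinitions: Record<string, unknown> | null = null;

async function ensureSchemaDefinitions(
  fetchDefinitions: () => Promise<Record<string, unknown>>
): Promise<Record<string, unknown>> {
  if (!schemaDefinitions) {
    schemaDefinitions = await fetchDefinitions();
  }

  return schemaDefinitions;
}
```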
* Fix questions
- there are only four places we use questions, none of which use schema; this is just to be safe
* cluster scan, plugins/fieldsForDriver, defaultFor, validationErrors
* pathExistsInSchema
- used to optionally show conditions tab/list in resource detail view
- lots of things in ingress list/edit
* createPopulated / defaultFor
defaultFor requires resourceFields; it's only used by createPopulated in one place, to support machine configs without components
* wip
* WIP MONITORING.SPOOFED
- these aren't spoofed types, but secondary schemas
- testing fix blocked, primary schemas have resourceFields
* Move steve specific (resourceField) code to steve models
- create models for steve schemas and apply to cluster and management stores
- move resourceField-based validation to steve model
- move pathExistsInSchema to steve store getter
- don't fetch schemaDefinitions on start up when saving prefs (not needed and blocking)
* comments / improvements
* (untested) refactoring
* Fix alertmanager definitions, add retry definition fetch
* Fix pathExistsInSchema for path length > 2
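A sketch of the kind of recursive walk involved; the shapes and names here are assumptions, not the dashboard's actual getter:

```ts
// Assumed shapes; the real steve store getter differs.
interface Schema { resourceFields?: Record<string, { type: string }> }

function pathExistsInSchema(
  schema: Schema,
  path: string[],
  schemaFor: (type: string) => Schema | undefined
): boolean {
  const [field, ...rest] = path;
  const resourceField = field ? schema.resourceFields?.[field] : undefined;

  if (!resourceField) {
    return false;
  }

  if (!rest.length) {
    return true;
  }

  // Recurse into the field's own schema so paths longer than two segments
  // (e.g. spec.template.spec) are followed all the way down.
  const next = schemaFor(resourceField.type);

  return next ? pathExistsInSchema(next, rest, schemaFor) : false;
}
```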
* Fix questions that accept schemas
- tested by adding Questions to a random page and the node schema
* Fix to saving configmap part 1
- the save works but doesn't show data; the yaml is the same as before. Debug info added
* Validation by resourceFields is a norman-specific thing, so make it such
* small refactor
* Tidying up
* Remove rebase junk
* fix linting and unit tests
* fix unit tests
* fix linting from fix for test....
* Tidying up, fix alertmanagerconfig
* Remove unit test todos
* add unit tests for resource fields
* Add unit tests for pathExistsInSchema
* JS --> TS
* Store schemas in local singleton cache to avoid hitting store
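Roughly the shape of such a cache (illustrative names only; per the review notes below, it later became per-store and resettable):

```ts
// Illustrative only: a module-level cache keyed by store name, so repeated
// schema lookups avoid hitting the Vuex store.
const schemaDefinitionCache: Record<string, Map<string, unknown>> = {};

function cacheFor(storeName: string): Map<string, unknown> {
  if (!schemaDefinitionCache[storeName]) {
    schemaDefinitionCache[storeName] = new Map();
  }

  return schemaDefinitionCache[storeName];
}

function resetCacheFor(storeName: string): void {
  // Reset per store (e.g. on logout).
  delete schemaDefinitionCache[storeName];
}
```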
* fix minor changes from review
* cruresource changes following review
- improvement - remove spurious canDiff
- createResourceYaml - pass in the resource to use instead of calculating it in code
* WIP changes to parseType
* Fix generic cloud credential and node driver forms
* handle missing reactivity given schema definitions not in store
* fix and add unit tests for `parseType`
* Fix create-yaml test
* Changes following review
- improved comments
- SchemaDefinitionCache is now per store (and is reset as such)
- typeRef now uses parseType
* Fix dep loop by moving route-based helpers in auth out to a utils file
* fix unit tests
* Changes following review
- if entering the product, loading cluster, etc fails extensions can now throw a `RedirectToError` error
- this avoids the user being redirected to the dashboard home or fail-whale page
- example - epinio auth fails and we want to return user to the epinio cluster list
In theory we should be able to use instanceof with something that extends Error and has a `name`;
unfortunately this does not work, so we've gone for something more manual.
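A sketch of the manual, name-based check (the real `RedirectToError` implementation may differ):

```ts
// Sketch of the name-based check described above.
class RedirectToError extends Error {
  name = 'RedirectToError';

  constructor(message: string, public readonly url: string) {
    super(message);
  }
}

function isRedirectToError(err: unknown): err is RedirectToError {
  // instanceof fails when the class comes from another bundle (e.g. an
  // extension built against its own copy of shell), so match on name.
  return !!err && typeof err === 'object' && (err as Error).name === 'RedirectToError';
}
```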
Also bump shell so we can publish a new version
* check mgmt store and norman store before throwing resource not found
* Final attempt at fixing resource check in authentication middleware
- Some resource pages don't use the product's store. This happens when...
- the product changes the store type of a resource via product config `typeStoreMap`
- explorer project, cluster binding and project binding pages
- the resource detail / list overrides the store directly with store-override
- create api key (this isn't in the product / type / id world though)
- cloud credential create / edit / view
- To fix this
- ensure we use the correct getter to fetch the store a resource might be in (see the sketch after this list)
- covers the typeStoreMap case
- avoid using the `store-override` param for resources in the product / type / id world
- covers the cloud cred world (uses the correct typeStoreMap instead)
- also make sure we use `resource-override` to get the correct store for a resource
- I'm trying to make sure we support the generic case to avoid breaking extensions which would use the generic inStore toolset
- If this fix doesn't work, we should remove all checks for resources from authentication and instead return to checking for the resource type in resource list & detail components
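A minimal illustration of the getter-based store resolution; the names are assumptions:

```ts
// Hypothetical helper mirroring the fix: resolve a resource's store via the
// product's typeStoreMap rather than a blanket `store-override` param.
interface Product { inStore: string; typeStoreMap?: Record<string, string> }

function storeFor(product: Product, type: string): string {
  return product.typeStoreMap?.[type] || product.inStore;
}

// e.g. the explorer product maps cluster/project binding types to a
// different store, so resource checks look in the right place.
```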
Tested as a user with a project role in a downstream cluster
- cluster instance create / view
- cluster project list & view (saves will fail due to permissions, but screens are ok)
- cluster membership tab (and add page), project membership tab
- cloud creds list, create (and cancel), edit
- places that use resource-override
- create api key, auth config, monitoring resources
- refresh on all of above
---------
Co-authored-by: Richard Cox <richard.cox@suse.com>
- We don't need to check management and rancher stores, just the current store
- We need to ensure that the product has been correctly set to get its current store
- Also check that schemaFor exists, to cover extension cases
- replace workaround for workload type with a check for virtual types
- note - spoofed types have schemas
- add additional e2e tests for other troublesome areas
* working on making sure we show a 404 page with a proper error
* code cleanup + add logic to capture 404s for resource instance details
* add e2e tests
* address PR comments + adjust e2e tests
* cover 404 on cluster for dynamic plugins
* address PR comments
* catching bogus resources on authenticated middleware with redirect to 404 page
* fix lint issue
* address PR comments + fix issue with e2e tests
* Fix l10n
- Ensure error messages don't reference 'list' when not on a list page
- The new way the feature works means going to a list with an unknown resource results in the generic message, but this is preferable to the above
* fix e2e tests
---------
Co-authored-by: Alexandre Alves <aalves@Alexandres-MacBook-Pro.local>
Co-authored-by: Alexandre Alves <aalves@Alexandres-MBP.lan>
Co-authored-by: Richard Cox <richard.cox@suse.com>
Remove resources from the store if they meet certain criteria. This will reduce the memory footprint of the dashboard and load on the backend (less watchers for large collections)
- GC is disabled by default and can be enabled via the Global Settings --> Performance tab
- User can configure
- The age in milliseconds that a resource must exceed in order to be gc'd
- The count that a resource must exceed in order to be gc'd
- GC occurs in stores that have it enabled
- ATM this is just the `cluster` store... but it could be enabled for dashboard stores such as the harvester one (a one-liner, plus an optional `gcIgnoreTypes` override for ignoring types)
- GC will be kicked off in two cases
- Route Change for a logged in state
- At a given interval
- Resource type _not_ GC'd if
- The store is ignoring the type
- For example the `cluster` store doesn't want to gc things like `schema` and `count`
- We're going to a page for that resource (list, detail, etc)
- For example don't GC pods if we're going to a pods page
- The resource was accessed recently
- We store the resource's last-accessed time by hooking into actions and getters
- Setting the last accessed time will cause watchers of that type to trigger (only an issue for duplicate watchers)... but importantly not watchers of other types
- The resource is being used in the current page/context
- We store the route changed time and compare it to the resource accessed time
- There are too few resources
- We might as well keep them to avoid a network request to re-populate
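Putting those criteria together, a GC pass might look roughly like this (all names here are assumptions, not the dashboard's actual implementation):

```ts
// Illustrative GC pass under the criteria above; names are assumptions.
interface GcContext {
  ignoreTypes: Set<string>;          // e.g. schema, count in the cluster store
  routeTypes: Set<string>;           // types the current route shows
  lastAccessed: Map<string, number>; // type -> epoch ms of last access
  countOf: (type: string) => number;
  forget: (type: string) => void;    // unwatch + drop from store
}

function garbageCollect(ctx: GcContext, opts: { maxAge: number; minCount: number }) {
  const now = Date.now();

  for (const [type, accessed] of ctx.lastAccessed) {
    if (ctx.ignoreTypes.has(type)) continue;          // store ignores this type
    if (ctx.routeTypes.has(type)) continue;           // current page needs it
    if (now - accessed < opts.maxAge) continue;       // accessed recently
    if (ctx.countOf(type) <= opts.minCount) continue; // too few to bother

    ctx.forget(type);
  }
}
```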
// TODO:
- Should additional features be added to preferences
- if GC on route change is enabled
- if GC on interval is enabled, and how often it runs
- Sensible default preferences
- Remove some logging
- my custom way no longer worked
- for some reason checkClusterChanging wasn't firing quickly enough anymore
- the order of things looked correct (it should have disabled the outlets via clusterChanging before the auth middleware runs)
- however the old explorer page errored when navigating to something outside of the explorer product
- beef up the simpler route change handler
- add something to try to catch the change of product as well
- uses a similar approach; don't show anything until the store is up to date with the route
- harvester store is a steve store that now lives in the plugin
- harvester `loadVirtual` replaced with a shortened `loadCluster` in its own store
- Also fix xterm css import
- previously this combined the requests for user and principal, instead of making them sequentially
- this should have been safe (both calls will fail or succeed given auth state)...
- ... but might not be given the way requests are handled (I chickened out)
Home Page
- Don't block whole page on loading of mgmt and prov clusters
- Use table `loading` indicator when clusters are loading
- Use correct cluster count (with harv cluster filter) - To confirm
Cluster Dashboard
- EventsTable - use standard table loading indicator
- Don't block on fetch at all (or show page loading indicator)
- Remove fetch for nodeTemplates and rke1NodePools. I went through a lot of code and don't think these are needed
- Remaining calls for Node and Metrics can happen at the same time
- Forget additional resource types when leaving page
- Optimise fetch of management nodes
Pre-Page optimisations
- Authentication Mixin
- if applicable, fetch `principal` 'me' same time as `user` 'me'
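A sketch of fetching both 'me' records in parallel; the action names are assumptions:

```ts
// Sketch of the optimisation above: fetch the user and principal 'me'
// records in parallel rather than sequentially. Action names are assumed.
async function fetchIdentity(dispatch: (action: string) => Promise<unknown>) {
  const [user, principal] = await Promise.all([
    dispatch('auth/getUser'),
    dispatch('rancher/findPrincipalMe')
  ]);

  return { user, principal };
}
```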
Other tweaks
- Don't show AwsComplianceBanner or AzureWarning until management store ready
- This was broken by https://github.com/rancher/dashboard/pull/6261
- The `activeNamespaceCache` depends on the product (fleet requires workspaces, everything else namespaces)
- This needs updating when going to or from fleet
NOTE - On `head` (but not `ui-dashboard-index` `latest`) refreshing on the explorer pods page does not show the correct namespace filtered pods
Small tidyup for `activeNamespaceCache` and `activeNamespaceFilters` getters