* generic changes from https://github.com/rancher/dashboard/pull/14492/files
* VERY WIP
- move out watch event stuff into own file
- improve typing
- start resolving core unwatch & unwatchIncompatible: side nav cluster watches (which don't go into the store) were being seen as incompatible with the find cluster action that does go into the store
* very wip, vaguely working
* starting to tidy up
* more tidying
* wip - pre-pivot
* maybe...
instead of an upfront unwatch with lots of complicated logic... only do it if we hit the issue (i.e. no page entry into page)
* tidying up
* Remove now unneeded sideNavCache
* big refactor, untested
* tidying up
* tidying
* more tidying
* more tidying
* add some super basic unit tests, fix another
* remove debug logger and last todos
* more unit test fixes
* Fix two bugs, and fix e2e tests
- we don't always have the mgmt cluster, so take this into account
- I've checked all usages of `['management/byId'](MANAGEMENT.CLUSTER`
* more e2e fixes
* Tidying up following review
- more comments
- rename of method to something more sensible
* Disable support for ssp in side bar and clusters
* wip
* tests - wip
* wip - tidying
* wip
* wip
* tweaks
* fix and expand sub tests
* backoff tests
* tidying up
* more tidying up
* WIP
* address backoff find wiping findPage result after navigating from detail to list
* add more docs
* fixes
* comments and logs updates
* remove debug
- when we calculate the value of a filter we remove entries that no longer exist
- ensure that this new version of the filter is persisted
- this means whilst on vai supported lists it correctly kicks off an http request
- this also avoids removed namespaces being used in filters if the list changes
Also
- improved some comments
- labelSelector has two primary applicators (see the sketch after this list)
- matching utils function
- this should normally NOT be cached and not receive updates over sockets
- this is done by stipulating transient = true
- findLabelSelector
- this should normally be cached and receive updates
- this is done by stipulating transient = false
- when applicable (workload and services detail page) we want live updates whilst we're on the page
- we don't when we leave, so unwatch
- fix and align these two and their usages
- ensure each input (transient) is correct for context
- ensure response is in the correct format and handled correctly
- improved typing
Additionally
- Improve labelling for network policy ingress/egress label selectors
- Replace empty table with 'no details' in cis report detail page's list's sub row
- On services page handle the very weird use case of no visibility on pods
- Fix issue where extension catalog was not showing when refreshing on extension catalog page list
- Fixed an issue where we would ALWAYS show a false positive invalid field warning in console
- Bump up default page size from 10k to 100k
- this is for requests we make to the new vai cache outside of pages
- it matches the default they use when proxying requests to target kube cluster
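A minimal sketch of the matching vs findLabelSelector split referenced above; payload shape, option flag and store path are assumptions from these notes, not the verbatim dashboard API.
```
// Hedged sketch: `transient` decides caching and socket updates.
type Store = { dispatch: (action: string, payload?: any) => Promise<any> };

const labelSelector = { matchLabels: { app: 'nginx' } };

async function fetchMatchingPods(store: Store, POD: string) {
  // matching-style lookup: transient, NOT cached, no socket updates
  const oneOff = await store.dispatch('cluster/findLabelSelector', {
    type:     POD,
    matching: { namespace: 'default', labelSelector },
    opt:      { transient: true }
  });

  // cached lookup: persisted to the store and kept live over sockets
  // (workload/service detail pages); callers unwatch on page leave
  const live = await store.dispatch('cluster/findLabelSelector', {
    type:     POD,
    matching: { namespace: 'default', labelSelector },
    opt:      { transient: false }
  });

  return { oneOff, live };
}
```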
* port of wip
* Update vai / sql cache based api filtering to match latest changes
* Revert "Pin Rancher version to v2.11-2053ce644a31cd8053d1f58e2487154b0b8513b6-head for e2e tests"
This reverts commit 60f62107e7.
* fix dynamic hide local cluster changes
* improvements
* Working through todo's/tidys
* Remove debug / tidy up
* resolving todos
* remove some debug
* Tidying up #1
* Make manual refresh and auto-refresh visible on perf setting, disabled by default
* Remove dev stuff
* Fix some e2e tests
* Updated comment
* Wire in resource.changes debounce, clearer label for feature `listAutoRefreshToggle`
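A minimal sketch of the resource.changes debounce; `commitBatch` and the 1000ms delay are illustrative assumptions.
```
// Coalesce bursts of resource.changes into a single store update.
let pending: unknown[] = [];
let timer: ReturnType<typeof setTimeout> | null = null;

function onResourceChanges(msg: unknown, commitBatch: (batch: unknown[]) => void) {
  pending.push(msg);

  if (!timer) {
    timer = setTimeout(() => {
      const batch = pending;

      pending = [];
      timer = null;
      commitBatch(batch); // one update per burst instead of one per event
    }, 1000);
  }
}
```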
* Temporarily disable all watches when vai enabled
- remove once rancher/rancher#40773 is resolved, which will then finally fix #12734
* Revert "Temporarily disable all watches when vai enable"
This reverts commit c708f468e4.
* Fix nextResourceVersion
- Ensure it handles resource revisions (in both LIST or individual resources) that are strings
- add unit tests
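A sketch of the string-safe handling described above; input shapes are assumed (kube resourceVersion values are opaque strings that are numeric in practice).
```
// Revisions appear as strings in both LIST responses and individual
// resources; coerce before comparing so "9" vs "10" sorts numerically.
type Res = { metadata?: { resourceVersion?: string | number } };

function nextResourceVersion(listRevision?: string | number, resources: Res[] = []): number | null {
  let revision = Number(listRevision) || 0;

  for (const r of resources) {
    revision = Math.max(revision, Number(r.metadata?.resourceVersion) || 0);
  }

  return revision > 0 ? revision : null;
}
```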
* Update after review
* Server-side pagination for home page clusters list and side bar clusters
- Functional Changes
- SSP now works after vue3 bump
- Home Page Clusters list now uses server-side pagination
- Side Bar clusters list now uses server-side pagination
- Wire in now supported sorting / filtering by id and name used for table columns
- Allow pagination to be enabled given a specific context
- Call findPage without persisting to store
- New Pagination Tools
- PaginatedResourceTable - Convenience Component, wraps ResourceTable with pagination specific props
- PaginationWrapper - Convenience class to handle requests for resources and updates to them (avoiding store; sketched below)
- Regressions
- Side Nav menu ready state was `mgmtCluster.isReady && !pCluster?.hasError`, now ???
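A hypothetical usage sketch of the PaginationWrapper described above; constructor and request arguments are illustrative, not the final API.
```
// Fetch a page of clusters without persisting to the store, with
// updates delivered via a callback.
declare const PaginationWrapper: any; // API assumed from the notes above
declare const $store: any;

async function loadClustersPage() {
  const wrapper = new PaginationWrapper({
    $store,
    onChange: (clusters: any[]) => updateRows(clusters), // socket-driven updates, no store writes
  });

  return wrapper.request({
    type:       'management.cattle.io.cluster',
    pagination: {
      page:     1,
      pageSize: 10,
      sort:     [{ field: 'metadata.name', asc: true }],
    },
  });
}

function updateRows(clusters: any[]) { /* render */ }
```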
* Iteration
Note - prov clusters is broken (only fetches local) due to a blocking PR. Breaks:
- notPinned list
* Fix dupe inStore
- remove from resource list, put in resource-fetch (used also by pag res table)
* Two fixes
- changing namespaces kicked off side nav cluster requests (it thought pinnedIds had changed)
- fix generic lists re-fetching given ns filter changes (they don't have namespaced arg)
* remove comment, backport fix
* test fixes
* E2E: Ensure we wait for cluster entries to exist before clicking on them
* backport fix for local/api filtering
* Remove debug code
* Changes after review
* e2e fixes / debugging
* More e2e fixes
* More e2e fixes
* More e2e fixes
* Fix generic pages that filter on pagination
* Attempt to fix flaky vai test
* Fix after merge from master
* Updates following new indexed files
* Fix lint and test
* Changes given real cluster tests
- general fixes
- correct issue where prov clusters were sorted on mgmt cluster props (issue in master as well...)
- bit the bullet, we now don't fetch all mgmt clusters on dashboard visit.
- there could be knock-on effects, but we'd need to remove it sometime in 2.11....
* Fix issues with displaying rke1 data in home page
- includes https://github.com/rancher/dashboard/pull/12881
* Fix unit tests
* Changes for new design
- New visuals
- Pagination controls --> load more
- finished testing of label select with pagination off
# Conflicts:
# shell/edit/provisioning.cattle.io.cluster/__tests__/Basics.tests.ts
* Changes following review
* Update Node list to support server-side pagination
- Setup pagination headers for the node type
- Define a pattern for fetching custom list secondary resources
- Major improvements to the way pagination settings are defined and created
- Lots of docs improvements
- Handle calling fetch again once fetch is in progress (nuxt caches running request)
- Validate filter fields (not all are supported by the vai cache)
- General pagination fixes
* Lint / test / fixes
* Improvements to configmap e2e test & Improve pagination disabled
* Beef up validation
* Fix missing name column in non-server-side paginated node list
* Fix PR automation actions
- fix syntax
- catch scenario where a pr has no fixed issue
> There's duplication between files, see https://github.com/rancher/dashboard/pull/10534
* CI bump
* Fixes post merge
* Wire in 2.9.0 settings for server-side pagination
- Everything is gated on `on-disk-steve-cache` feature flag
- There's a backend in progress item to resolve a `revision` issue, until then disable watching a resource given it
- Global Settings - Performance
- Added new setting to enable server side pagination
- this is incompatible with two other performance settings
* Integrate pagination with configmaps in cis clusterscanbenchmark edit form
Also
- improved labeled select pagination
- gate label select pagination functionality on steve cache being enabled
* - harvester machine-config
- project monitoring (and bug fixes)
* Disable workload screen if vai cache is on
- temp step until we get new overview
* TODOs and TEST
* Conditionally remove fetch of all secrets from SelectOrCreateAuthSecret
* TODOs and TEST
* Update SimpleSecretSelector
- only used in monitoring.coreos.com.alertmanagerconfig context
* View and Edit ingress - secrets
* node detail page - pods list
* Backup/Restore: Secrets (WIP)
* Backup/Restore: Secrets, and other usages of SimpleSecretSelector / SelectOrCreateAuthSecret
* Edit: Service account
* Add comments for remaining items
* Paginate Secret selection for logging providers
- Allow `None` option in Paginated LabelSelect
- Optionally classify pagination response
* WIP
* fixes after merge
* Don't suggest container names, not practical
- previously all pods were fetched... and we scraped all container names from them
- this is a scaling nightmare, the user now must just enter the name(s) to match
* Avoid findAll secrets in SimpleSecretSelector
* tidying up
* Move LabeledSelect/index.vue back to LabeledSelect.vue to not break extensions
* changes after self review... 1
* changes after self review... 2
* changes after self review... 3
* fix formatting
* Link new paginated label select with pagination setting
* Work around failing kubewarden unit tests in check-plugins gate
* Fix backup.spec e2e test
* fix formatting, paginationUtils.isSteveCacheEnabled --> paginationUtils.isEnabled
* Don't fetch all secrets on cloud creds page
* Fix backup.spec e2e test
* TODO tidying / tracking
* don't fetch ALL workloads for a hacky way to get a link to a service's workload
* Fix bad merge
* Create a convenience wrapper called ResourceLabelSelector that hides most of the complexity
* fix unit test
* Updates following review
* changes following self review
* Fix bottom bar of edit backup, edit restore pages
* revert temp change
* changes following self review
* Workaround for kubewarden unit tests in check plugin gate
* bump
* Fix e2e
- 9318936c72
- BUG 1
- Navigating from nodes list to a node detail page unwatches nodes list but doesn't watch new resources
- Node's list destroy has a forgetType node
- this removes entries from store and unwatches nodes list watch
- we clear the fact we're watching the node list once we receive a resource.stop from socket
- There's a race condition, the node we're going to is still in the store... but the find action for this doesn't kick off a watch for the resource
- This was resolved by the change in the find action
- BUG 2
- Refreshing on the detail page results in a watch for that specific node
- Navigating to the list starts a watch for all nodes, but doesn't stop the individual watch
- This was resolved by the change in subscribe
- HOWEVER
- These fixes could impact what we watch in other cases where we might call find all and find specific in the same context
- Safer to address later
1. Switching from a detail page with a watch on a specific resource to the list page where we watch all resources did not unwatch on the specific one
2. Switching to a detail page of a resource that's already in the store should ensure we're watching it
- examples metrics.k8s.io.podmetrics, metrics.k8s.io.nodemetrics, componentstatus
- change 1
- when these are watched the BE now sends an error... which we ignore and try to watch again
- so handle the error
- change 2
- avoid this scenario though by stopping watches that don't have the watch verb
- because of this change 1 can only be tested by changing code
- caused because
- resource.stop is sent alongside resource.error
- on resource.error we set the state of the watch to errored
- this was just using the root type as key and nothing else
- on resource.stop we call resource.start
- on resource.start we check for inError and bail if so
- this didn't happen, as we checked the inError using more than just root type
- fix is to make sure setInError uses the same key process as the inError getter (sketched below)
- this follows other places it's used
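A sketch of that key fix; the key shape and names are assumptions — the point is that the mutation and the getter must derive the key identically.
```
type WatchParams = { type: string; id?: string; namespace?: string; selector?: string };

const inErrorState: Record<string, string> = {};

function keyForSubscribe({ type, id, namespace, selector }: WatchParams): string {
  return [type, id, namespace, selector].map((p) => p || '').join('/');
}

// mutation: previously this keyed on the root type only...
function setInError(msg: WatchParams, reason: string) {
  inErrorState[keyForSubscribe(msg)] = reason;
}

// ...so this getter, built from the full params, never found the flag
// and resource.start didn't bail
function inError(params: WatchParams): string | undefined {
  return inErrorState[keyForSubscribe(params)];
}
```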
- This occurred because..
- The user tries to access resource with 401 --> onLogout dispatched
- onLogout unsubscribes all stores
- stores are waiting for mgmt stores to be ready
- user still redirected to log in page
- user logs in
- mgmt stores ready
- stores now unblocked from waiting for mgmt --> continue in original onLogout action
- Fix is to ensure we exit the wait early if we get the call to unsubscribe
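A minimal sketch of that early exit, assuming a polled wait helper (real helper names differ).
```
// Poll for readiness but resolve early once unsubscribe is requested,
// instead of hanging until after the next login.
async function waitForOrBail(
  ready: () => boolean,
  bail: () => boolean,
  intervalMs = 100
): Promise<boolean> {
  while (!ready()) {
    if (bail()) {
      return false; // unsubscribe arrived; abandon the wait
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }

  return true;
}

// in onLogout (illustrative):
// const proceed = await waitForOrBail(() => mgmtReady, () => unsubscribing);
// if (!proceed) { return; }
```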
- Given forced filtering is now not resource dependent have a high level utils function to determine if enabled
- This should have opened up the door to setting a nicer default than ALL_USER, however it's actually initially applied somewhere other than the ns filter
- Fixed a bug where the all option [] was valid
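A sketch of that high-level check, assuming the performance setting bag from these notes (property names are illustrative).
```
// Forced filtering is no longer resource dependent, so a single check
// decides it.
type PerfSetting = { forceNsFilterV2?: { enabled?: boolean } };

function forcedNsFilterEnabled(perf?: PerfSetting): boolean {
  return !!perf?.forceNsFilterV2?.enabled;
}
```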
> This uses a new endpoint that has yet to merge. See https://github.com/rancher/rancher/issues/40140
WIP
- Contains console.warns (via custom logger, can be disabled)
- Waiting for final BE endpoint changes to merge
- Contains TODOs to resolve once final endpoint changes are delivered
Pertinent Points
- Incompatible with incremental loading / manual refresh
- Harder to get counts (need to sum up from different namespaces)
- Requires use of new steve pagination
- Enforced NS threshold has been removed
- The threshold only applies to the primary resource. This has issues when loading a low count primary (daemon sets) which depends on a very high count secondary (pods)
- Fixing this would involve knowing all secondary resources a list uses, which isn't currently possible (each resource is requested individually, need to know them all first)
- There is no way to subscribe to multiple namespaces (one or all)
- We mock this in subscribe by only persisting changes to resources from within target namespaces (sketched below)
- Everything should work with Advanced Worker enabled
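A sketch of the subscribe-side mock mentioned above; helper name and shapes are assumptions.
```
// The watch itself covers one-or-all namespaces only, so socket changes
// are filtered before being persisted.
function shouldPersistChange(
  resource: { metadata?: { namespace?: string } },
  targetNamespaces: string[]
): boolean {
  if (!targetNamespaces.length) {
    return true; // empty filter == all namespaces
  }

  return targetNamespaces.includes(resource.metadata?.namespace || '');
}
```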
* cluster and rancher stores now use advanced worker
* Addressing feedback
* addressing PR feedback and unit tests
* Fixed sticky footer in Helm Chart Install Yaml (#8497)
* Remove dependency: nuxt
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Remove dependency @nuxt/types
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* fix lint
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Remove dependency: @nuxt/typescript-build
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Remove dependency: @nuxtjs/eslint-module
Remove dependency: @nuxtjs/proxy
Remove dependency: @nuxtjs/style-resources
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* align package.log
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Storybook, add vue-loader dependency
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Added placeholder in url input (#8036)
* Added placeholder in url input
* Added format checker for Url input
* Update test
* Fixed lint
* Fixed breaking for other providers
* Fixed lint
* Update vue options to preserve whitespace (#8742)
* Update vue options to preserve whitespace
* Bug fix with tidy-up
* Fix response times for multiple clusters (#8720)
* Update extensions publish workflow for assets in gh-pages branch (#8618)
* Update extensions workflow to move assets into gh-pages branch
Fix gh-pages check
* Move bundle execution - add default basename and ext version
* Fix basename op
* PR changes
* [v2.7] Update Chinese translation (#8731)
* Upgrade to Vue 2.7 (#8654)
* upgrade @vue/cli-xxx
* Update Vue to latest 2.7
* Update eslint-plugin-vue
* Disable new linting rules
* Remove linting issue
* Pin Dom purify library version
* Add resolution to avoid conflicts with packages
* Update yarn/lock after the enforced resolution
* Exclude node 16 types resolution
* Fixed extra space bug in the generic cluster-page (#8626)
* Fixed extra space bug in the generic cluster-page
* Fixed space issue in banner
* Revert changes
* Removed extra div
* Fixed selected rows counter (#8419)
* Fixed selected rows counter
* Fixed lint
* Fixed counter in selection.js
* Small fix in toAdd condition
* Lints
* Fixed condition for selected row counter
* Changes in focusAdjacent function
* Fixed lints
* Improve OPA Gatekeeper constraint detail page (#8586)
This adds the following functionality to the violations list on the OPA Gatekeeper constraint detail page:
* Add a namespace column to the violations
* Make the violations list searchable
* Allow downloading the violations as a CSV, similar to CIS scanner violations
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
* Increase memory on build (#8751)
* Remove extension autoLoad functionality (#8700)
* fix(BrandImage): remove async fetch method (#8432)
* Removed hasClass method (#8752)
* Removed hasClass method
* Clean code
* Two minor tweaks
- supported store types now a const
- WORKERMODES --> WORKER_MODES
* Fix two bugs
- Ensure advanced worker runs in cluster manager with no current cluster context
- Ensure `resource.stop` from advanced worker successfully kicks off watch
Also
- make rancher, management and cluster store names a const
* Tweaks
- Fix some comments (jsdoc --> standard, todos)
- Made the web worker redispatch requirement clearer
* Fix unit tests
* Fix resource.stop / resource.error TOO_OLD bugs
- Persist watcher error state after resource.stop
- this will ensure the resource.stop given alongside the resource.error TOO_OLD is ignored
- Ensure any errors are cleared when we successfully connect (given above)
- Should fix the next resource.stop being ignored after recovering from TOO_OLD
- Fix resync watch params
- these weren't correct, so the resync after TOO_OLD never worked
- the format could be slimmer, but I think it matches how other sockets' data works
Note - I manufactured the TOO_OLD by setting a revision of 1 in subscribe resource.stop if it came from the advanced worker
---------
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
Co-authored-by: richa <richa.bisht@suse.com>
Co-authored-by: Francesco Torchia <francesco.torchia@suse.com>
Co-authored-by: Neil MacDougall <nwmac@users.noreply.github.com>
Co-authored-by: Jordon Leach <40806497+jordojordo@users.noreply.github.com>
Co-authored-by: vickyhella <vickyhella@hotmail.com>
Co-authored-by: Giuseppe Leo <giuseppe.leo@suse.com>
Co-authored-by: Bastian Hofmann <mail@bastianhofmann.de>
Co-authored-by: LiuYan <361112237@qq.com>
Co-authored-by: Richard Cox <richard.cox@suse.com>
- when we receive a `too old` socket watch error we kick off a resync which will watch with a valid revision
- we'll get a resource.stop event following the previous error. The socket is in error though, so we correctly abort
- the error for this was misleading
Can be triggered with a fake revision on cluster list
In subscription `watch`
```
if (!trigger && type === 'management.cattle.io.fleetworkspace') {
  trigger = true;
  revision = 1;
}
```
- changes cover create, change and remove
- resource.stop events happen
- we unsub
- after socket errors (that rancher sends, like revision `too old`)
- after resource type permissions change
- there would be a gap between resource.stop (fetch latest revision, wait 5 seconds) and resource.start
- this could lead to missed resource changes and stale info on screen
Linking a couple of pertinent changes
- forceWatch partially implemented - 14862b2924 (diff-42632b5ed3c30e60abade8a67748b16d45e0778091713dd71a46d4bbe9211d2c)
- too old originally removed https://github.com/rancher/dashboard/pull/3743/files
- this was implemented before the backend fixed their spam
Note - resource.stop can be forced with CATTLE_WATCH_TIMEOUT_SECONDS=300 (on v1 will resource.stop every 5 mins)
Note - Too old can be forced by editing resource.stop with
```
// const revision = type === '' ? undefined : 1;
// dispatch('watch', { ...obj, revision });
```
- fix issue where ..
- state 1 - X machines + Y fake machines = total
- state 2 - X+1 machines + Y-1 fake machines = same total
- same total meant sortable table `arrangedRows` value wasn't updating
- fix is to ensure the sort generation changes so `arrangedRows` doesn't return the cached rows (sketched below)
- this is the same method used for the project/namespace list
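A sketch of that generation idea, assuming the table caches `arrangedRows` against a key like this (property names are illustrative).
```
// Folding a change counter into the key busts the cached arrangedRows
// even when the row count is unchanged.
function sortGeneration(sortBy: string, descending: boolean, rowCount: number, changeCounter: number): string {
  return `${ sortBy }/${ descending }/${ rowCount }/${ changeCounter }`;
}
```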
- Alternative fix to https://github.com/rancher/dashboard/pull/8064
- Assign the steve worker creator to the store via plugin
- This avoids package build errors (in harvester) due to the package build process missing web worker specific build config
- On the downside this means rancher/steve specific stuff is applied at a more global level
* Moves sockets into the advanced worker
* worker can die peacefully now, making switching between clusters work.
* Make waitFor generic, wire in to waitForTestFn
* General Changes
- Fixes for switching cluster
- includes using common getPerformanceSetting
- avoid new code to unsub before socket disconnect
- handle `watch` `stop` requests
- lots of TODO's (questions, work, checks, test, etc)
- use common
* Switch socket fixes
- isAdvancedWorker should only be true for cluster store
- advancedWorker to be wired in
* Fix socket id for cluster workers
- sockets use an incremented local var for id
- when we nuke the socket file within the worker this resets, so they all have an id of 1
- work around this by applying the unix time
* Fix handling of new partial counts response
- seen in dex cluster explorer dashboard
- count cards would be removed when partial counts response received
* Make resourceWatcher the sole location for watch state
- getters canWatch, watchStarted now are worked around (they look at state in the UI thread)
- we now don't call resource.stop or resource.start in subscription
- tidied up `forgetType`
- moved clearFromQueue from steve mutations into subscription mutations (better location)
- added and removed some TODOs
- fixed watch (stop handler should be higher up, include force watch handling)
* pushes the csrf value into worker and adds it to fetch request headers.
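A sketch of the worker side of this; the message shape is assumed, and while `x-api-csrf` is the header Rancher's API expects for CSRF, treat the wiring as illustrative.
```
// The UI thread posts the token once; the worker reuses it on every fetch.
let csrf = '';

self.onmessage = ({ data }: MessageEvent) => {
  if (data?.initWorker?.csrf) {
    csrf = data.initWorker.csrf; // saved once at init
  }
};

function workerFetch(url: string, init: RequestInit = {}): Promise<Response> {
  return fetch(url, {
    ...init,
    headers: { ...(init.headers || {}), 'x-api-csrf': csrf },
  });
}
```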
* refactors batchChanges to address ref concerns and be more performant
* Maintain schema reference whilst updating
- This change mutates input in a function, which is bad...
- but ensures the reference isn't broken, which is needed to maintain similar functionality as before
* Fix waitForTestFn
- Seen when creating or viewing clusters
* On unwatch ensure any pending watch requests are removed from the queue
- this probably would have been a problem if the worker wasn't nuked
- however as the code's there let's make it safe
Also added `trace` feature in advanced worker, will probably bring out to other places as well
* Fix navigation from cluster manager world to any cluster
- Ensure that we handle the case where the advanced worker was created but the resource watcher wasn't
- ... but fix case where this was happening (aka ensure that a blank cluster context is ignored)
* Tidy some TODOs
* Add perf settings page
- This will help test normal flow (when advanced worker is disabled)
- Note - setting is now in a bag. This may help us better support further settings (enable client side pagination, etc)
```
advancedWorker: { enabled: false },
```
* FIX - Nav from cluster dashboard --> specific event --> cluster dashboard and events not re-subbed
- Ensure we block default handling of resource.start (keep state in resource watcher)
* Tidying up some TODOs
* Adds in a cache and uses it to validate SCHEMA messages before batching.
* Forgot to actually save CSRF to the resourceWatcher when instantiated.
* an empty resource in a batchChange to signal remove
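A sketch of that convention; the helper callbacks are assumptions.
```
// Within a batchChange map, an empty object for an id signals removal.
function applyBatch(
  changes: Record<string, object | null>,
  upsert: (id: string, resource: object) => void,
  remove: (id: string) => void
) {
  for (const [id, resource] of Object.entries(changes)) {
    if (!resource || Object.keys(resource).length === 0) {
      remove(id); // empty resource --> drop it from the cache/store
    } else {
      upsert(id, resource);
    }
  }
}
```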
* Move addSchemaIndexFields to and created removeSchemaIndexFields in new file
- this avoids bringing class files into the worker
* Fix disconnect/reconnect
- Remove `syncWatch` (do the watch/unwatch straight away)
- Test/Fix re-sub on reconnect
- Test/Fix growls on disconnect
* Tidying up some TODO's
- including cleaning the workerQueue on resource.stop (this is SUPER defensive)
* batchChanges will now handle aliases
* Fix pods list - WIP
- ensure podsByNamespace is updated on batchChange
TODO
- the final update to the pod is ignored
- removing a namespace cleans the cache correctly
- disabling advanced worker still works
* Fix pods list - fixes
- ensure podsByNamespace is updated on batchChange
Tested / Fixed
- the final update to the pod is ignored
- removing a namespace cleans the cache correctly
- disabling advanced worker still works
* Tidying TODOs
* Remove default same-origin header
- https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
* Fixed TODO description
* Refactor subscribe, make it clear which vuex feature relates to what
* Lots of Fixes
- batchChanges fixes
- fix index is 0 issues (!/!!index)
- only `set` if we have to
- ensure we set the correct index after pushing to list
- ensure map is updated after reducing list size with limit
- podsByNamespace fixes
- ensure when we replace... we don't use the same referenced object
- general service resource fixes
- ensure service's pods list stays up to date with store
* Multiple improvements/fixes
- resourceCache - store the hash instead of the whole object. This means longer load times but a reduced memory footprint
- resourceWatcher
- don't re-sub on socket reconnect if watcher is in error
- don't sub if watcher is in error
- don't unwatch for 'failed to find schema' and 'too old' errors
- unwatching would clear the error; we want to keep it to ensure we don't watch
- Remove #5997 comments, follow on work #7917
* toggle debug, remap alias types, cleaned up comments and console
* Unit tests for batchChanges
Much more scope for some crazy content
* Logging tweaks
- disable logging by default
- initWorker comes in too late to affect initial trace, so just rely on the `debug` to toggle at runtime
Co-authored-by: Richard Cox <richard.cox@suse.com>
- Steve socket times out watches every 30 minutes and we get a `resource.stop` event
- Previously we attempted to re-watch with a dodgy revision causing a `too old` error and the dashboard then fetching all resources for that type
- Avoid this by tracking latest revision which we should be up to date with
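A sketch of that revision tracking; key and message shapes are assumptions.
```
// Track the highest revision seen per watch so the re-watch after
// resource.stop resumes from a point the server still has.
const latestRevision: Record<string, number> = {};

function onResourceChange(watchKey: string, msg: { revision?: string }) {
  const rev = Number(msg.revision) || 0;

  latestRevision[watchKey] = Math.max(latestRevision[watchKey] || 0, rev);
}

// on resource.stop: re-watch from the tracked revision rather than a
// dodgy one that triggers `too old` and a full re-fetch
function revisionForRewatch(watchKey: string): number | undefined {
  return latestRevision[watchKey] || undefined;
}
```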
* Convert Rancher settings into Typescript and add interface
* Allow Rancher settings to be validated
* Add Rancher Settings min password length validation
* Replace settings number input with labeled input of type number for validation; Added missing required, focus, locale and labels attributes
* Add min/max/between value/length cases to global validation cases
* Correct validation syntax
* Add tests for the global settings
* Correct naming and assign directly rules to the inputs
* Create initial tests for CreateEditView
* Prevent Settings view from breaking if no setting is found for given ID
* Add max password length validation
* Add i18n to settings validation
* Add form validation to the CRUD component
* Prevent form from failing for resource types without validation
* Add test for no validation cases
* Remove form validation in favor of local view logic, due to complexity issues
* Correct validator linting issue
* Correct i18n; Add types; Correct min/max/between validations i18n and combine the last
* Add translation type
* Correct validation translation types and definitions
* Replace custom validations with predefined rules
* Reintroduce form validation in abstracted configuration to pass settings through
* Add tests for new generic form validations
* Correct between values and length validation
* Split tests to use pre-existing rulesets due to complexity and different cases
* Cleanup jsdoc in form validation
* Cleanup form validation mixin
* Add global settings test for generating rules from config
* Replace value.value with value for validating the resource
* Correct validation call and test instantiation
* Add note about value.value exception
* Disable faulty test due to lack of information
* Replace min/max value validation with between
* Add missing type for settings getter
* Move type folder within shell
* Move settings logic from config to utils