- Add a new tab, next to events, in the cluster dashboard
- Tab shows a list of all secrets of cert type
- List allows users to see which certs are expiring soon, how long they've lived, etc. (expiry derivation sketched below)
- Tab also shows a notification if certs are expiring or have expired
- Related fixes
- plumb in option list paging params (so we can show X of Y Certificates in pagination controls)
- fix usages of `from = from || now`
- Count requests to kube steve as steve requests (alongside local steve requests)
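A minimal sketch of how the expiry state for a listed cert could be derived (the helper name, the 30 day warning window and the shape are illustrative assumptions, not the dashboard's actual model code):

```
// Hypothetical helper: derive expiry info for a cert-type secret.
// `notAfter` would come from the certificate parsed out of the secret's tls.crt.
interface CertExpiry {
  daysRemaining: number;
  expired: boolean;
  expiringSoon: boolean; // within the warning window
}

function certExpiry(notAfter: Date, warnDays = 30, now: Date = new Date()): CertExpiry {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysRemaining = Math.floor((notAfter.getTime() - now.getTime()) / msPerDay);

  return {
    daysRemaining,
    expired:      daysRemaining < 0,
    expiringSoon: daysRemaining >= 0 && daysRemaining <= warnDays,
  };
}
```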
* add growl message to display warning messages from response headers
* update growl styling based on feedback from Eden
* Many updates
- Move functionality into its own file
- Add unit tests
- Fix issue where YAML PUTs weren't included in check (bug in master)
- Fix issue with separating warnings (bug in master)
- previously warnings were split on `,`, however that character appears in the messages themselves
- add configuration which allows the separator to be customised (sketched below)
- Add configuration which would allow us to disable or expand how growls are shown
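A rough sketch of the separator-aware splitting described above (function name and default separator are illustrative, not the actual implementation):

```
// Illustrative only: split a warnings header value into individual messages
// using a configurable separator, since a plain `,` can appear inside messages.
function splitWarnings(headerValue: string, separator = ';;'): string[] {
  return headerValue
    .split(separator)
    .map((w) => w.trim())
    .filter((w) => w.length > 0);
}

// e.g. splitWarnings('disk pressure on node-1;;cert expires in 3 days')
//  --> ['disk pressure on node-1', 'cert expires in 3 days']
```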
---------
Co-authored-by: Alexandre Alves <aalves@Alexandres-MBP.lan>
Co-authored-by: Richard Cox <richard.cox@suse.com>
* add "exludeFields" option on findAll, unit tests.
* fix e2e
* lint fix and e2e test
* updated action unit test and fixed cy.intercepts
* deleted urloptions and test, converted unit tests to ts, removed lodash
* deleted unused urloptions and test
* converted unit tests to ts
* removed lodash
* lint fix
* Added comment with issue for updating query parameters for steve
* Updated based on feedback
* added partial flag to mutations
* pulled out partial flag, excludeOptions logic now entirely within getter
* fixed tests after ripping out partial flag
* Fixed e2e test as 'find' action behavior changed but is still valid
- This occurred because:
- The user tries to access resource with 401 --> onLogout dispatched
- onLogout unsubscribes all stores
- stores are waiting for mgmt stores to be ready
- user still redirected to log in page
- user logs in
- mgmt stores ready
- stores now unblocked from waiting for mgmt --> continue in original onLogout action
- Fix is to ensure we exit the wait early if we get the call to unsubscribe
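A sketch of the shape of that fix (assumed names; the real store code differs): race the "mgmt ready" wait against an "unsubscribed" signal so the original onLogout flow isn't left blocked.

```
// Assumed shape: a waitFor-style helper raced against an unsubscribe signal,
// so the pending wait resolves as soon as the store is told to unsubscribe.
function waitForMgmtOrUnsubscribe(
  mgmtReady: Promise<void>,
  unsubscribed: Promise<void>
): Promise<'ready' | 'unsubscribed'> {
  return Promise.race([
    mgmtReady.then(() => 'ready' as const),
    unsubscribed.then(() => 'unsubscribed' as const),
  ]);
}

// Caller (e.g. the original onLogout flow) can then bail out early:
// if (await waitForMgmtOrUnsubscribe(ready, unsub) === 'unsubscribed') return;
```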
- Moving the store setting that enables/disables filtering to a function in a computed property caused havoc with churn
- Make this much neater by moving the flag to the setup stage for both ns filters
- Given forced filtering is now not resource dependent, add a high level utils function to determine if it's enabled
- This should have opened the door to setting a nicer default than ALL_USER, however it's actually initially applied somewhere other than the ns filter
- Fixed a bug where the all option [] was valid
- Filtering is now no longer done via `resources.project.cattle.io.`
- No need to update the URL anywhere or massage resources fetched via endpoint
- Also no need to make the planned change to remove `resources.project.cattle.io.` from side nav
> This uses a new endpoint that has yet to merge. See https://github.com/rancher/rancher/issues/40140
WIP
- Contains console.warns (via custom logger, can be disabled)
- Waiting for final BE endpoint changes to merge
- Contains TODOs to resolve once final endpoint changes are delivered
Pertinent Points
- Incompatible with incremental loading / manual refresh
- Harder to get counts (need to sum up from different namespaces)
- Requires use of new steve pagination
- Enforced NS threshold has been removed
- The threshold only applies to the primary resource. This has issues when loading a low count primary (daemon sets) which depends on a very high count secondary (pods)
- Fixing this would involve knowing all secondary resources a list uses, which isn't currently possible (each resource is requested individually, need to know them all first)
- There is no way to subscribe to multiple namespaces (one or all)
- We mock this in subscribe by only persisting changes to resources from within target namespaces (sketched below)
- Everything should work with Advanced Worker enabled
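A sketch of that namespace filtering in subscribe (illustrative names, not the actual subscribe code): socket changes are only persisted when the resource's namespace is in the currently selected set.

```
// Illustrative: decide whether a socket change event should be persisted.
interface ResourceEvent {
  type: string;
  metadata: { namespace?: string; name: string };
}

function shouldPersist(event: ResourceEvent, targetNamespaces: Set<string>): boolean {
  // Cluster-scoped resources (no namespace) are always persisted.
  if (!event.metadata.namespace) {
    return true;
  }

  return targetNamespaces.has(event.metadata.namespace);
}
```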
* cluster and rancher stores now use advanced worker
* Addressing feedback
* addressing PR feedback and unit tests
* Fixed sticky footer in Helm Chart Install Yaml (#8497)
* Remove dependency: nuxt
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Remove dependency @nuxt/types
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* fix lint
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Remove dependency: @nuxt/typescript-build
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Remove dependency: @nuxtjs/eslint-module
Remove dependency: @nuxtjs/proxy
Remove dependency: @nuxtjs/style-resources
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* align package.log
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Storybook, add vue-loader dependency
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
* Added placeholder in url input (#8036)
* Added placeholder in url input
* Added format checker for Url input
* Update test
* Fixed lint
* Fixed breaking for other providers
* Fixed lint
* Update vue options to preserve whitespace (#8742)
* Update vue options to preserve whitespace
* Bug fix with tidy-up
* Fix response times for multiple clusters (#8720)
* Update extensions publish workflow for assets in gh-pages branch (#8618)
* Update extensions workflow to move assets into gh-pages branch
Fix gh-pages check
* Move bundle execution - add default basename and ext version
* Fix basename op
* PR changes
* [v2.7] Update Chinese translation (#8731)
* Upgrade to Vue 2.7 (#8654)
* upgrade @vue/cli-xxx
* Update Vue to latest 2.7
* Update eslint-plugin-vue
* Disable new linting rules
* Remove linting issue
* Pin DOMPurify library version
* Add resolution to avoid conflicts with packages
* Update yarn/lock after the enforced resolution
* Exclude node 16 types resolution
* Fixed extra space bug in the generic cluster-page (#8626)
* Fixed extra space bug in the generic cluster-page
* Fixed space issue in banner
* Revert changes
* Removed extra div
* Fixed selected rows counter (#8419)
* Fixed selected rows counter
* Fixed lint
* Fixed counter in selection.js
* Small fix in toAdd condition
* Lints
* Fixed condition for selected row counter
* Changes in focusAdjacent function
* Fixed lints
* Improve OPA Gatekeeper constraint detail page (#8586)
This adds the following functionality to the violations list on the OPA Gatekeeper constraint detail page:
* Add a namespace column to the violations
* Make the violations list searchable
* Allow to download the violations as a CSV, similar to CIS scanner violations
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
* Increase memory on build (#8751)
* Remove extension autoLoad functionality (#8700)
* fix(BrandImage): remove async fetch method (#8432)
* Removed hasClass method (#8752)
* Removed hasClass method
* Clean code
* Two minor tweaks
- supported store types now a const
- WORKERMODES --> WORKER_MODES
* Fix two bugs
- Ensure advanced worker runs in cluster manager with no current cluster context
- Ensure `resource.stop` from advanced worker successfully kicks off watch
Also
- make rancher, management and cluster store names a const
* Tweaks
- Fix some comments (jsdoc --> standard, todos)
- Made the web worker redispatch requirement clearer
* Fix unit tests
* Fix resource.stop / resource.error TOO_OLD bugs
- Persist watcher error state after resource.stop
- this will ensure the resource.stop given alongside the resource.error TOO_OLD is ignored
- Ensure any errors are cleared when we successfully connect (given the above)
- Should fix the next resource.stop being ignored after recovering from TOO_OLD
- Fix resync watch params
- these weren't correct, so the resync after TOO_OLD never worked
- the format could be slimmer, but I think it matches how other sockets' data works
Note - I manufactured the TOO_OLD by setting a revision of 1 in subscribe resource.stop if it came from the advanced worker
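A sketch of the error-state handling described above (illustrative names, not the actual resourceWatcher code): the watcher keeps its error flag across a resource.stop, so the stop that arrives alongside a TOO_OLD error is ignored, and the flag is only cleared on a successful connect.

```
// Illustrative watcher state: persist the error across resource.stop so the
// stop paired with a TOO_OLD error doesn't trigger a doomed re-watch.
type WatchError = 'TOO_OLD' | 'NO_SCHEMA' | null;

class WatcherState {
  private error: WatchError = null;

  onError(error: Exclude<WatchError, null>) {
    this.error = error;
  }

  onStop(restart: () => void) {
    if (this.error) {
      // In error: ignore this stop, the resync flow handles recovery.
      return;
    }
    restart();
  }

  onConnected() {
    // Successfully (re)connected: clear the error so future stops restart.
    this.error = null;
  }
}
```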
---------
Signed-off-by: Francesco Torchia <francesco.torchia@suse.com>
Signed-off-by: Bastian Hofmann <mail@bastianhofmann.de>
Co-authored-by: richa <richa.bisht@suse.com>
Co-authored-by: Francesco Torchia <francesco.torchia@suse.com>
Co-authored-by: Neil MacDougall <nwmac@users.noreply.github.com>
Co-authored-by: Jordon Leach <40806497+jordojordo@users.noreply.github.com>
Co-authored-by: vickyhella <vickyhella@hotmail.com>
Co-authored-by: Giuseppe Leo <giuseppe.leo@suse.com>
Co-authored-by: Bastian Hofmann <mail@bastianhofmann.de>
Co-authored-by: LiuYan <361112237@qq.com>
Co-authored-by: Richard Cox <richard.cox@suse.com>
- Use `SteveDescriptionModel` as base of PodSecurityAdmissionTemplate model
- Provide a generic mechanism for model save to tweak the object that's saved
- In SteveDescriptionModel ensure the object that's saved has the correct description
- Save worked for other users of the class ... as they saved via norman rather than steve
Tweaks
- Removed duplicate PSACT definition
- Fixed width of PSACT table name / description columns
- Normally we fetch resources and then watch using the revision in that response
- If we get a resource.stop event we'll then try to find the current revision from the store
- This looks at the type cache's revision and revisions in each resource
- Bug
- the type cache's revision is not kept updated, which leads to very old revisions being used
- This can cause a `too old` error... which is handled by refetching the whole list
- Fix
- Ensure we set the collection's revision when we're likely to need it again (sketched below)
- Tested
- the different find/load store actions/mutations
- this includes incremental loading
- when we receive a `too old` socket watch error we kick off a resync which will watch with a valid revision
- we'll get a resource.stop event following the previous error. socket is in error though so we correctly abort
- the error for this was misleading
Can be triggered with a fake revision on the cluster list
In subscription `watch`
```
if (!trigger && type === 'management.cattle.io.fleetworkspace') {
  trigger = true;
  revision = 1;
}
```
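A sketch of the fix's intent (hypothetical mutation shape, not the actual store code): when a list response is loaded, record the collection's revision so a later re-watch after resource.stop uses a valid value rather than a stale cached one.

```
// Illustrative type cache: keep the collection's revision updated on load.
interface TypeCache<T> {
  list: T[];
  revision?: string;
}

function loadAll<T>(cache: TypeCache<T>, data: T[], revision: string | undefined) {
  cache.list = data;

  if (revision) {
    // Previously this was left stale, so re-watches used very old revisions
    // and triggered `too old` errors followed by a full re-fetch.
    cache.revision = revision;
  }
}
```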
- changes cover create, change and remove
- resource.stop events happen
- we unsub
- after socket errors (that rancher sends, like revision `too old`)
- after resource type permissions change
- there would be a gap between resource.stop (fetch latest revision, wait 5 seconds) and resource.start
- this could lead to missed resource changes and stale info on screen
Linking a couple of pertinent changes
- forceWatch partially implemented - 14862b2924 (diff-42632b5ed3c30e60abade8a67748b16d45e0778091713dd71a46d4bbe9211d2c)
- too old originally removed https://github.com/rancher/dashboard/pull/3743/files
- this was implemented before the backend fixed their spam
Note - resource.stop can be forced with CATTLE_WATCH_TIMEOUT_SECONDS=300 (on v1 will resource.stop every 5 mins)
Note - Too old can be forced by editing resource.stop with
// const revision = type === '' ? undefined : 1;
// dispatch('watch', { ...obj, revision });
- fix issue where:
- state 1 - X machines + Y fake machines = total
- state 2 - X+1 machines + Y-1 fake machines = same total
- same total meant sortable table `arrangedRows` value wasn't updating
- fix is to ensure the sort generation changes so `arrangedRows` doesn't return the cached rows
- this is the same method used for the project/namespace list
- Alternative fix to https://github.com/rancher/dashboard/pull/8064
- Assign the steve worker creator to the store via plugin
- This avoids package build errors (in harvester) due to the package build process missing web worker specific build config
- On the downside this means rancher/steve specific stuff is applied at a more global level
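A rough sketch of the plugin approach (hypothetical names/shape): the worker factory is attached to the store by a plugin in the shell build, so package builds never need web-worker specific build config themselves.

```
// Hypothetical Vuex plugin: attach a steve worker factory to the store so
// consumers look it up on the store rather than importing the worker module
// (and its worker-specific build config) directly.
import type { Store } from 'vuex';

type WorkerFactory = () => Worker;

export function steveWorkerPlugin(createWorker: WorkerFactory) {
  return (store: Store<any>) => {
    (store as any).steveCreateWorker = createWorker;
  };
}
```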
* Moves sockets into the advanced worker
* worker can die peacefully now, making switching between clusters work.
* Make waitFor generic, wire in to waitForTestFn
* General Changes
- Fixes for switching cluster
- includes using common getPerformanceSetting
- avoid new code to unsub before socket disconnect
- handle `watch` `stop` requests
- lots of TODO's (questions, work, checks, test, etc)
- use common
* Switch socket fixes
- isAdvancedWorker should only be true for cluster store
- advancedWorker to be wired in
* Fix socket id for cluster workers
- sockets use an incremented local var for id
- when we nuke the socket file within the worker this resets, so they all have an id of 1
- work around this by applying the unix time
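A sketch of that workaround (illustrative, not the actual socket code): because the module-level counter resets when the socket file is re-imported inside a fresh worker, the id also folds in the unix time so two workers can't both end up with id 1.

```
// Illustrative id generation combining a local counter with the unix time.
let socketCounter = 0;

function nextSocketId(): string {
  socketCounter++;

  return `${ Date.now() }-${ socketCounter }`;
}
```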
* Fix handling of new partial counts response
- seen in dex cluster explorer dashboard
- count cards would be removed when partial counts response received
* Make resourceWatcher the sole location for watch state
- getters canWatch, watchStarted now are worked around (they look at state in the UI thread)
- we now don't call resource.stop or resource.start in subscription
- tidied up `forgetType`
- moved clearFromQueue from steve mutations into subscription mutations (better location)
- added and removed some TODOs
- fixed watch (stop handler should be higher up, include force watch handling)
* pushes the csrf value into worker and adds it to fetch request headers.
* refactors batchChanges to address ref concerns and be more performant
* Maintain schema reference whilst updating
- This change mutates input in a function, which is bad...
- but ensures the reference isn't broken, which is needed to maintain similar functionality as before
* Fix waitForTestFn
- Seen when creating or viewing clusters
* On unwatch ensure any pending watch requests are removed from the queue
- this probably would only have been a problem if the worker wasn't nuked
- however as the code's there, let's make it safe
Also added `trace` feature in advanced worker, will probably bring out to other places as well
* Fix navigation from cluster manager world to any cluster
- Ensure that we handle the case where the advanced worker was created but the resource watcher wasn't
- ... but fix case where this was happening (aka ensure that a blank cluster context is ignored)
* Tidy some TODOs
* Add perf settings page
- This will help test normal flow (when advanced worker is disabled)
- Note - setting is now in a bag. This may help us better support further settings (enable client side pagination, etc)
```
advancedWorker: { enabled: false },
```
* FIX - Nav from cluster dashboard --> specific event --> cluster dashboard and events not re-subbed
- Ensure we block default handling of resource.start (keep state in resource watcher)
* Tidying up some TODOs
* Adds in a cache and uses it to validate SCHEMA messages before batching.
* Forgot to actually save CSRF to the resourceWatcher when instantiated.
* Use an empty resource in a batchChange to signal remove
* Move addSchemaIndexFields to, and create removeSchemaIndexFields in, a new file
- this avoids bringing class files into the worker
* Fix disconnect/reconnect
- Remove `syncWatch` (do the watch/unwatch straight away)
- Test/Fix re-sub on reconnect
- Test/Fix growls on disconnect
* Tidying up some TODO's
- including clean of workerQueue on resource.stop (this is SUPER defensive)
* batchChanges will now handle aliases
* Fix pods list - WIP
- ensure podsByNamespace is updated on batchChange
TODO
- the final update to the pod is ignored
- removing a namespace cleans the cache correctly
- disabling advanced worker still works
* Fix pods list - fixes
- ensure podsByNamespace is updated on batchChange
Tested / Fixed
- the final update to the pod is ignored
- removing a namespace cleans the cache correctly
- disabling advanced worker still works
* Tidying TODOs
* Remove default same-origin header
- https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials
* Fixed TODO description
* Refactor subscribe, make it clear which vuex feature relates to what
* Lots of Fixes
- batchChanges fixes
- fix 'index is 0' issues (truthy `!index` / `!!index` checks; see sketch below)
- only `set` if we have to
- ensure we set the correct index after pushing to list
- ensure map is updated after reducing list size with limit
- podsByNamespace fixes
- ensure when we replace... we don't use the same referenced object
- general service resource fixes
- ensure service's pods list stays up to date with store
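A sketch of the 'index is 0' pitfall called out above (illustrative, not the real batchChanges code): a truthiness check like `!index` wrongly treats the first element as "not found", so the check has to compare against `undefined` explicitly.

```
// Illustrative upsert into a list with a key --> index map.
function upsert<T>(list: T[], indexByKey: Map<string, number>, key: string, value: T) {
  const index = indexByKey.get(key);

  if (index === undefined) {
    // Genuinely new: push and record the index of the pushed entry.
    list.push(value);
    indexByKey.set(key, list.length - 1);
  } else {
    // index may legitimately be 0 here; `!index` would have skipped this branch.
    list[index] = value;
  }
}
```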
* Multiple improvements/fixes
- resourceCache - store the hash instead of the whole object. This means a longer load time but reduces memory footprint
- resourceWatcher
- don't re-sub on socket reconnect if watcher is in error
- don't sub if watcher is in error
- don't unwatch for 'failed to find schema' and 'too old' errors
- unwatching would clear the error, and we want to keep it to ensure we don't watch
- Remove #5997 comments, follow on work #7917
* toggle debug, remap alias types, cleaned up comments and console
* Unit tests for batchChanges
Much more scope for some crazy content
* Logging tweaks
- disable logging by default
- initWorker comes in too late to affect initial trace, so just rely on the `debug` to toggle at runtime
Co-authored-by: Richard Cox <richard.cox@suse.com>
- Steve socket times out watches every 30 minutes and we get a `resource.stop` event
- Previously we attempted to re-watch with a dodgy revision causing a `too old` error and the dashboard then fetching all resources for that type
- Avoid this by tracking latest revision which we should be up to date with
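A sketch of that revision tracking (illustrative names, not the actual subscribe code): every change event updates the latest-known revision for a type, and the re-watch issued after the resource.stop uses that value rather than a stale one.

```
// Illustrative latest-revision tracking per resource type.
const latestRevision = new Map<string, string>();

function onChangeEvent(type: string, revision: string) {
  latestRevision.set(type, revision);
}

function rewatchAfterStop(type: string, watch: (type: string, revision?: string) => void) {
  // Re-watch from the revision we know we're up to date with; if we never saw
  // one, fall back to an unversioned watch rather than guessing.
  watch(type, latestRevision.get(type));
}
```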
- When there are over a configurable number of resources to display in a list, force the user to select a single namespace and use it to fetch resources related to the list
- Disabled by default, this can be enabled via the usual Global Settings --> Performance page
Functional Comments
- Gates for forcing the filter (count, resource type is namespaced, etc) apply only to the resources shown in the list.
- For example PVs aren't namespaced, so no enforced filtering. However they fetch PVCs, which are namespaced
- For example we could have 10 resources to show in the list, but the resource types list component fetches 10000 other resources. The secondary resources are not taken in to account
- If we're under the threshold and have fetched all resources, if in that session we go over the threshold we won't fetch NS specific resources (because we have them all already)
- If we're over the threshold and have fetched namespaced resources, if in that session we go under the threshold we will fetch all resources
- If we're over the threshold and have fetched namespaced resources, going to a page that needs them all will result in us fetching them all (for instance from `events` to `cluster dashboard`)
- Deselecting a namespace and selecting it again should not kick off another http request
General Commit Comments
- The threshold to enforce the filter is set at 1500 as per manual refresh and incremental loading
- Optimised some code in ResourceList, resource-fetch and $loadingResources
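A sketch of the gating described above (assumed names and signature, not the actual utils function): filtering is only forced when the feature is enabled, the list's primary resource type is namespaced, and its count exceeds the configured threshold.

```
// Illustrative gate for forcing the namespace filter on a list.
interface ForcedFilterSettings {
  enabled: boolean;
  threshold: number; // e.g. 1500
}

function forceNsFilter(
  settings: ForcedFilterSettings,
  isNamespaced: boolean,
  resourceCount: number
): boolean {
  return settings.enabled && isNamespaced && resourceCount > settings.threshold;
}
```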
* Convert Rancher settings into Typescript and add interface
* Allow Rancher settings to be validated
* Add Rancher Settings min password length validation
* Replace settings number input with labeled input of type number for validation; Added missing required, focus, locale and labels attributes
* Add min/max/between value/length cases to global validation cases
* Correct validation syntax
* Add tests for the global settings
* Correct naming and assign directly rules to the inputs
* Create initial tests for CreateEditView
* Prevent Settings view from breaking if no setting is found for the given ID
* Add max password length validation
* Add i18n to settings validation
* Add form validation to the CRUD component
* Prevent form from failing for resource types without validation
* Add test for no validation cases
* Remove form validation in favor of local view logic, due to complexity issues
* Correct validator linting issue
* Correct i18n; Add types; Correct min/max/between validations i18n and combine the last
* Add translation type
* Correct validation translation types and definitions
* Replace custom validations with predefined rules
* Reintroduce form validation in abstracted configuration to pass settings through
* Add tests for new generic form validations
* Correct between values and length validation
* Split tests to use pre-existing rulesets due to complexity and different cases
* Cleanup jsdoc in form validation
* Cleanup form validation mixin
* Add global settings test for generating rules from config
* Replace value.value with value for validating the resource
* Correct validation call and test instantiation
* Add note about value.value exception
* Disable faulty test due to lack of information
* Replace min/max value validation with between
* Add missing type for settings getter
* Move type folder within shell
* Move settings logic from config to utils
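A sketch of a `between`-style rule like the ones described (hypothetical factory, not the dashboard's actual validators): it returns an error string when the value's length falls outside the configured bounds.

```
// Illustrative validation rule factory.
type Validator = (value: string) => string | undefined;

function betweenLengths(min: number, max: number, label: string): Validator {
  return (value: string) => {
    const len = value?.length ?? 0;

    if (len < min || len > max) {
      return `${ label } must be between ${ min } and ${ max } characters`;
    }

    return undefined;
  };
}

// e.g. a minimum password length rule: betweenLengths(12, 256, 'Password')
```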
Remove resources from the store if they meet certain criteria. This will reduce the memory footprint of the dashboard and load on the backend (less watchers for large collections)
- GC is disabled by default and can be enabled via the Global Settings --> Performance tab
- User can configure
- The age in milliseconds which a resource has to exceed in order to be GC'd
- The count which a resource has to exceed in order to be GC'd
- GC occurs in stores that have it enabled
- ATM this is just the `cluster` store... but could be enabled for dashboard stores such as the harvester one (a one liner, plus an optional `gcIgnoreTypes` override for ignoring types)
- GC will be kicked off in two cases
- Route Change for a logged in state
- At a given interval
- Resource type _not_ GC'd if
- The store is ignoring the type
- For example the `cluster` store doesn't want to gc things like `schema` and `count`
- We're going to a page for that resource (list, detail, etc)
- For example don't GC pods if we're going to a pods page
- The resource was accessed recently
- We store the resource accessed time via hooking into actions and getters
- Setting the last accessed time will cause watchers of that type to trigger (only an issue for duplicate watchers)... but importantly not watchers of other types
- The resource is being used in the current page/context
- We store the route changed time and compare it to the resource accessed time
- There are too few resources
- We might as well keep them to avoid a network request to re-populate
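A sketch of the GC eligibility gate described above (illustrative names and shape, not the actual GC code): a type is only collected when it isn't ignored, isn't the current page's resource, hasn't been accessed since the last route change, and exceeds the configured age and count limits.

```
// Illustrative GC eligibility check mirroring the criteria listed above.
interface GcCandidate {
  type: string;
  count: number;
  lastAccessed: number; // ms epoch
}

interface GcContext {
  ignoreTypes: Set<string>;
  currentPageType?: string;
  routeChangedAt: number;   // ms epoch
  maxAge: number;           // ms a resource's age must exceed
  minCount: number;         // count a resource must exceed
  now: number;
}

function shouldGc(c: GcCandidate, ctx: GcContext): boolean {
  if (ctx.ignoreTypes.has(c.type)) return false;            // e.g. schema, count
  if (c.type === ctx.currentPageType) return false;         // on that type's page
  if (c.lastAccessed >= ctx.routeChangedAt) return false;   // used by current page
  if (ctx.now - c.lastAccessed <= ctx.maxAge) return false; // accessed recently
  if (c.count <= ctx.minCount) return false;                // too few to bother

  return true;
}
```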
// TODO:
- Should additional features be added to preferences
- if GC on route change is enabled
- if GC on interval is enabled, and how often it runs
- Sensible default preferences
- Remove some logging