The project does not recommend using insecure ports; even
unauthenticated TLS is an improvement, since it provides confidentiality.
If you relied on insecure serving, please migrate to the secure serving options.
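For illustration, a minimal Go sketch of such a migration, assuming the
k8s.io/apiserver/pkg/server/options package (the flag-set name and port
choice here are illustrative, not prescribed by the change):

    package main

    import (
        "github.com/spf13/pflag"

        "k8s.io/apiserver/pkg/server/options"
    )

    func main() {
        // Defaults bind to 0.0.0.0; 6443 is an illustrative port choice.
        secureServing := options.NewSecureServingOptions()
        secureServing.BindPort = 6443

        // Registers the standard secure-serving flags such as
        // --bind-address, --secure-port, --tls-cert-file, and
        // --tls-private-key-file on the given flag set.
        fs := pflag.NewFlagSet("example", pflag.ExitOnError)
        secureServing.AddFlags(fs)
    }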
Kubernetes-commit: de302c73e9558c192fde1cd7d6dcbea7eb76e950
Serve watch requests without a resourceVersion from the cache, and introduce a WatchFromStorageWithoutResourceVersion feature gate that allows serving them from storage instead
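A hedged sketch of where a caller might consult the new gate, assuming it
is registered in k8s.io/apiserver/pkg/features as usual (the helper is
hypothetical, not the commit's actual code):

    package example

    import (
        "k8s.io/apiserver/pkg/features"
        utilfeature "k8s.io/apiserver/pkg/util/feature"
    )

    // shouldWatchFromStorage is a hypothetical helper: when the gate is
    // off (the default), a watch request without a resourceVersion is
    // served from the watch cache; when on, it falls through to storage.
    func shouldWatchFromStorage() bool {
        return utilfeature.DefaultFeatureGate.Enabled(
            features.WatchFromStorageWithoutResourceVersion)
    }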
Kubernetes-commit: a1605fb3dd2d45474751ec3de30633ac1ec15c41
Before:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 3.792s
After:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 1.783s
Kubernetes-commit: d21b86d53a3c4c42e41f8374e537c721251a00d2
Before:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 6.775s
After:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 2.781s
Kubernetes-commit: f5d945eb43c7bf8036a4bad8c22448e1146a7498
apiserver/storage/cacher: decrease the running time of TestWaitUntilWatchCacheFreshAndForceAllEvents
Kubernetes-commit: 3a75a8c8d9e6a1ebd98d8572132e675d4980f184
Fix non-recursive list returning a "resource version too high" error when consistent list from cache is enabled
Kubernetes-commit: 3409f0594c94185010217f3e5156c1de9f08b405
Updates the test to wait 300ms instead of 3s.
The watch has already been established at this point; otherwise we
would still be blocking on the call to cache.Watch(...).
In addition, the tests are serial in nature, meaning that there is
no other actor that could add items to the database and cause new
items to be received. See the sketch after the timings below.
Before:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 8.450s
After:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 2.635s
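A minimal sketch of the waiting pattern being tuned here; the helper name
and structure are illustrative, not the test's actual code:

    package example

    import (
        "testing"
        "time"

        "k8s.io/apimachinery/pkg/watch"
    )

    // verifyNoEvents asserts that nothing arrives on an already-established
    // watch within a short window. Because the tests are serial, no other
    // actor can add items to the database, so 300ms is enough headroom.
    func verifyNoEvents(t *testing.T, w watch.Interface) {
        t.Helper()
        select {
        case ev := <-w.ResultChan():
            t.Errorf("unexpected event received: %v", ev)
        case <-time.After(300 * time.Millisecond): // previously a 3s wait
        }
    }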
Kubernetes-commit: 926122c035a4f47a880db24d1a0be7ec129dd44d
The individual cases can safely be run in parallel; see the sketch after the timings below.
Before:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 10.787s
After:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 4.857s
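A sketch of the parallel-subtest pattern; the scenario table is
illustrative, not the test's actual cases:

    package example

    import "testing"

    func TestWaitUntilWatchCacheFreshAndForceAllEvents(t *testing.T) {
        scenarios := []struct{ name string }{
            {name: "scenario 1"},
            {name: "scenario 2"},
        }
        for _, scenario := range scenarios {
            scenario := scenario // capture the range variable for the closure
            t.Run(scenario.name, func(t *testing.T) {
                t.Parallel() // cases are independent, so run them concurrently
                _ = scenario // ... scenario body ...
            })
        }
    }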
Kubernetes-commit: 3ecbb4dee00a5dd1e43e24a5952c2a90ef507ef1
It turns out that kube has a custom 3-minute timeout for tests.
The tests in the cacher package were using nearly the entire budget
and being terminated, resulting in failing jobs.
Before the change, TestWatchSemantics took ~43s to run; with this simple change, it now takes ~18s.
When we created the tests, we didn't measure the running time and assumed that waiting 1 second on a watch channel
to make sure no more events are received was sufficient.
This PR decreases the waiting time to 300 milliseconds (the same waiting pattern sketched above).
Modern computers can perform many tasks within that time.
In addition, the tests are serial in nature, meaning that there is no other
actor that could add items to the database and cause new items to be received.
After the change, the total running time decreased by 17%:
before, the tests needed ~176s; after, they need ~146s.
The changes also improved TestWatchSemanticInitialEventsExtended.
Kubernetes-commit: 5a74c8e2202044b664efce4be5d86d700e74506f
Since we don't provide compatibility guarantees for the storage
package, it is okay to simply remove the unused function.
Kubernetes-commit: a40f25f8e6516d1a59169cf88db8b3850a8c48c7
Stop using defer, as a parallel subtest might result in the main test
finishing before the subtest does; see the sketch below.
Fatal when the same flag is set twice.
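A sketch of why defer is unsafe here and what replaces it (the fixture
names are hypothetical): with t.Parallel() subtests, the parent function
returns before the subtests run, so its defers fire too early, while
t.Cleanup runs only after the test and all its subtests complete.

    package example

    import "testing"

    type testServer struct{}

    func startTestServer(t *testing.T) *testServer { t.Helper(); return &testServer{} }
    func (s *testServer) Stop()                    {}

    func TestSomething(t *testing.T) {
        server := startTestServer(t)
        // `defer server.Stop()` would fire as soon as TestSomething
        // returns, which with parallel subtests is before they have run.
        t.Cleanup(server.Stop) // runs after the test and all its subtests

        for _, name := range []string{"case-a", "case-b"} {
            name := name
            t.Run(name, func(t *testing.T) {
                t.Parallel()
                _ = server // ... subtest uses the shared server ...
            })
        }
    }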
Kubernetes-commit: 9fcf279e2b91e7549190a433373f256fb5aebe85
The code in ConvertToType checked for conversion into typeValue (=
"kubernetes.URL") instead of conversion into quantityTypeValue (=
"kubernetes.Quantity") and thus most likely failed with an incorrect "type
conversion error".
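A self-contained sketch of the corrected method, assuming the cel-go API
in use at the time; the names mirror the description above, and the
surrounding ref.Val plumbing is reduced to a minimum:

    package example

    import (
        "fmt"
        "reflect"

        "github.com/google/cel-go/common/types"
        "github.com/google/cel-go/common/types/ref"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    var quantityTypeValue = types.NewTypeValue("kubernetes.Quantity")

    // Quantity adapts resource.Quantity as a CEL value (sketch).
    type Quantity struct{ *resource.Quantity }

    func (d Quantity) Type() ref.Type { return quantityTypeValue }
    func (d Quantity) Value() any     { return d.Quantity }

    func (d Quantity) Equal(other ref.Val) ref.Val {
        o, ok := other.(Quantity)
        if !ok {
            return types.MaybeNoSuchOverloadErr(other)
        }
        return types.Bool(d.Cmp(*o.Quantity) == 0)
    }

    func (d Quantity) ConvertToNative(typeDesc reflect.Type) (any, error) {
        if reflect.TypeOf(d.Quantity).AssignableTo(typeDesc) {
            return d.Quantity, nil
        }
        return nil, fmt.Errorf("cannot convert to %v", typeDesc)
    }

    // ConvertToType held the bug: the first case compared against the
    // "kubernetes.URL" type value, so converting a Quantity to its own
    // type fell through to the error branch.
    func (d Quantity) ConvertToType(typeVal ref.Type) ref.Val {
        switch typeVal {
        case quantityTypeValue: // fixed: was the URL type value
            return d
        case types.TypeType:
            return quantityTypeValue
        }
        return types.NewErr("type conversion error from '%s' to '%s'", quantityTypeValue, typeVal)
    }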
Kubernetes-commit: 02b4e99c9f0afa4ef9fa0283670c1515e40a5278