Since we don't provide compatibility guarantees for the storage
package, it is okay to simply remove the unused function.
Kubernetes-commit: a40f25f8e6516d1a59169cf88db8b3850a8c48c7
Before:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 6.775s
After:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 2.781s
Kubernetes-commit: f5d945eb43c7bf8036a4bad8c22448e1146a7498
The individual cases can be safely run in parallel.
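For illustration, a minimal sketch of the pattern (hypothetical scenario
names, not the actual cacher test), using t.Parallel() in table-driven
subtests:

    func TestWaitUntilWatchCacheFresh(t *testing.T) {
        scenarios := []struct{ name string }{
            {name: "scenario 1"},
            {name: "scenario 2"},
        }
        for _, scenario := range scenarios {
            scenario := scenario // capture the range variable (needed before Go 1.22)
            t.Run(scenario.name, func(t *testing.T) {
                t.Parallel() // subtests now run concurrently instead of back to back
                // ... exercise the watch cache for this scenario ...
            })
        }
    }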
Before:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 10.787s
After:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 4.857s
Kubernetes-commit: 3ecbb4dee00a5dd1e43e24a5952c2a90ef507ef1
It turns out that kube has a custom timeout of 3 minutes for tests.
The tests in the cacher package were using nearly the entire
budget and being terminated, resulting in failing jobs.
Before the change, TestWatchSemantics took ~43s to run. With this simple change, it now takes ~18s.
When we created the tests, we didn't measure the running time and assumed that waiting 1 second on a watch channel
to make sure no more events are received was sufficient.
This PR decreases the waiting time to 300 milliseconds.
Modern computers can perform many tasks within that time.
In addition to that, the tests are serial in nature, meaning that there is no other
actor that could add items to the database, which could result in receiving new items.
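For illustration, a sketch of the pattern being tuned (the helper name is
mine, not the actual test code; w is a k8s.io/apimachinery/pkg/watch
Interface): after draining the expected events, wait a short, bounded time
and fail if anything else arrives:

    // assertNoMoreEvents is a hypothetical helper, not the actual test code.
    func assertNoMoreEvents(t *testing.T, w watch.Interface) {
        select {
        case event, ok := <-w.ResultChan():
            if ok {
                t.Errorf("unexpected event received: %#v", event)
            }
        case <-time.After(300 * time.Millisecond):
            // no further events within the window - the store is quiescent
        }
    }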
After the change, the total running time decreased by 17%:
before, the tests needed ~176s; after, they need ~146s.
The changes also improved TestWatchSemanticInitialEventsExtended.
Kubernetes-commit: 5a74c8e2202044b664efce4be5d86d700e74506f
Updates the test to wait 300 ms instead of 3 s.
The watch was established beforehand; otherwise we would be blocking
on a call to cache.Watch(...).
In addition to that, the tests are serial in nature,
meaning that there is no other actor
that could add items to the database,
which could result in receiving new items.
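For illustration, the ordering the message relies on (hypothetical key and
options, not the actual test code):

    // cacher.Watch(...) returns only once the watch is established, so the
    // shortened wait is not racing watch setup.
    w, err := cacher.Watch(ctx, "/pods/ns/foo", storage.ListOptions{
        ResourceVersion: "100",
        Predicate:       storage.Everything,
    })
    if err != nil {
        t.Fatal(err)
    }
    defer w.Stop()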
Before:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 8.450s
After:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 2.635s
Kubernetes-commit: 926122c035a4f47a880db24d1a0be7ec129dd44d
Stop using defer, as a parallel subtest might result in the main test
finishing before the subtest.
Fatal when the same flag is set twice.
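For illustration, a sketch of the pitfall and the fix (startTestServer is a
hypothetical helper returning a server with a Terminate method):

    func TestCacher(t *testing.T) {
        server := startTestServer(t)
        // defer server.Terminate() // wrong: a parent running parallel subtests
        //                          // returns before they finish, so this fires too early
        t.Cleanup(server.Terminate) // right: runs only after all subtests complete
        t.Run("case", func(t *testing.T) {
            t.Parallel()
            // ... uses server ...
        })
    }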
Kubernetes-commit: 9fcf279e2b91e7549190a433373f256fb5aebe85
Changes the test to populate the underlying data store with
more data to trigger potential ordering issues.
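For illustration, a hedged sketch of the seeding idea (example.Pod, the key
layout, and the count are assumptions, not the actual test code):

    // create enough objects that any ordering bug in the returned
    // list/watch results has a realistic chance to surface
    for i := 0; i < 100; i++ {
        pod := &example.Pod{ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("pod-%d", i)}}
        out := &example.Pod{}
        if err := store.Create(ctx, "/pods/ns/"+pod.Name, pod, out, 0); err != nil {
            t.Fatal(err)
        }
    }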
Kubernetes-commit: 20ded275705a6e11c1113cbeedad4de94e2dc666
Until #115478 (use streaming against the etcd storage)
is resolved, the cacher needs a way to disable streaming.
Kubernetes-commit: 41e706600aea7468f486150d951d3b8948ce89d5
Extract the logic to determine withRev into a separate method for better readability.
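For illustration, a simplified, hypothetical shape of such a helper (the
name and exact semantics are mine, not the real implementation): map the
request's resourceVersion and match to the etcd revision the list is read
at, where 0 means "latest":

    func listRevision(resourceVersion string, match metav1.ResourceVersionMatch) (withRev int64, err error) {
        switch match {
        case metav1.ResourceVersionMatchExact:
            // read at exactly the requested revision
            return strconv.ParseInt(resourceVersion, 10, 64)
        case metav1.ResourceVersionMatchNotOlderThan:
            // read latest; freshness is validated against the request afterwards
            return 0, nil
        default:
            return 0, nil
        }
    }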
Signed-off-by: Siyuan Zhang <sizhang@google.com>
Kubernetes-commit: 624169c5b50ee8a6e9a761e9488134985334817e
It's possible that the watcher is already not in the structure (e.g. in case of
simultaneous Stop() and terminateAllWatchers()), but it is safe to call stopLocked()
on a watcher multiple times.
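For illustration, the shape that makes the double call safe (a hedged
sketch, not the exact cacheWatcher code): guard the teardown with a
stopped flag so the channels are closed at most once:

    func (c *cacheWatcher) stopLocked() {
        // safe to call multiple times; only the first call tears down
        if !c.stopped {
            c.stopped = true
            close(c.done)
            close(c.input)
        }
    }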
Kubernetes-commit: 7e35823690df01bd019a88d3346bd3ac820afaca
Kubernetes-commit: bbca4a4b9add0f6c58e132500fd89dd39ee077f4