For the SendInitialEvents case, a buffer of objects is created. That
process consumes a significant amount of memory and CPU when the resource
has a large volume of objects, and many of those objects may not be
relevant when a key is provided. This commit applies the key when
composing the buffer for SendInitialEvents.
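A minimal sketch of the idea, using hypothetical type and function names
rather than the actual watch cache code: apply the key (as a prefix match)
while the buffer is being composed, so irrelevant objects are never copied
into it.
```go
package cacher

import "strings"

// Hypothetical stand-ins for the watch cache's stored elements and events.
type storeElem struct {
	Key    string
	Object interface{}
}

type watchEvent struct {
	Type   string
	Object interface{}
}

// makeInitialEventsBuffer applies the key (as a prefix) while composing the
// buffer, so objects the watcher will never receive are not buffered at all.
func makeInitialEventsBuffer(objs []storeElem, keyPrefix string) []watchEvent {
	buf := make([]watchEvent, 0, len(objs))
	for _, elem := range objs {
		if !strings.HasPrefix(elem.Key, keyPrefix) {
			continue // not relevant for this key; skip instead of buffering
		}
		buf = append(buf, watchEvent{Type: "ADDED", Object: elem.Object})
	}
	return buf
}
```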
Signed-off-by: Eric Lin <exlin@google.com>
Kubernetes-commit: d9c6c8aa5047d724e0ebc8907f5fee4b10012ae3
The internal informer populates the RV as soon as it completes the
first successful sync with the underlying store.
The cache must wait until this first sync has completed to be deemed ready.
Since we cannot send a bookmark when the lastProcessedResourceVersion is 0,
we poll aggressively for the first list RV before entering the dispatch loop.
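A rough sketch of the polling step described above, using hypothetical names
and plain stdlib primitives rather than the cacher's actual helpers:
```go
package cacher

import (
	"context"
	"sync/atomic"
	"time"
)

// waitForFirstListRV polls aggressively until the watch cache has processed
// its first LIST and lastProcessedResourceVersion is non-zero, since a
// bookmark cannot be sent while it is still 0. Sketch only.
func waitForFirstListRV(ctx context.Context, lastProcessedResourceVersion *atomic.Uint64) (uint64, bool) {
	for {
		if rv := lastProcessedResourceVersion.Load(); rv != 0 {
			return rv, true
		}
		select {
		case <-ctx.Done():
			return 0, false
		case <-time.After(10 * time.Millisecond): // short interval, i.e. aggressive polling
		}
	}
}
```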
Kubernetes-commit: a20abdb1f425b215ce969ef7114281741fce249d
There is no benefit to using an RWMutex, as we have one reader and multiple
writers. In such cases an RWMutex performs worse than a plain Mutex.
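A minimal illustration of the swap on a hypothetical struct (not the cacher's
actual type): with a single reader, RLock buys nothing, so a plain Mutex is
simpler and cheaper.
```go
package cacher

import "sync"

type progressTracker struct {
	mu sync.Mutex // was sync.RWMutex
	rv uint64
}

// Set is called by multiple writers.
func (t *progressTracker) Set(rv uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if rv > t.rv {
		t.rv = rv
	}
}

// Get is called by the single reader.
func (t *progressTracker) Get() uint64 {
	t.mu.Lock() // was RLock
	defer t.mu.Unlock()
	return t.rv
}
```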
Kubernetes-commit: 544ea424826ef60d703c5f4fb91b2c6a95f303aa
The Signal is not needed, as we never need to wake up when the waiting
count is lowered, only when it is increased.
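A generic sketch of the pattern being described (hypothetical names, not the
cacher's actual code): waiters only block until the value has been increased,
so decreasing it never needs to signal anyone.
```go
package cacher

import "sync"

type waitCounter struct {
	mu    sync.Mutex
	cond  *sync.Cond
	value int
}

func newWaitCounter() *waitCounter {
	c := &waitCounter{}
	c.cond = sync.NewCond(&c.mu)
	return c
}

func (c *waitCounter) Increase() {
	c.mu.Lock()
	c.value++
	c.mu.Unlock()
	c.cond.Broadcast() // waiters only care about increases
}

func (c *waitCounter) Decrease() {
	c.mu.Lock()
	c.value--
	c.mu.Unlock()
	// No Signal/Broadcast: nobody waits for the value to go down.
}

func (c *waitCounter) WaitUntilAtLeast(n int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for c.value < n {
		c.cond.Wait()
	}
}
```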
Kubernetes-commit: e6b54149bb42d58301e34872ebbcf2ea4bcfb474
Ticker behaves differently from what we want: we need a stable periodic
interval, but Ticker doesn't provide that. From the NewTicker docstring:
```
The ticker will adjust the time interval or drop ticks to make up for slow receivers.
```
Unfortunately, there is no way to test this, as the FakeClock doesn't
follow the real Ticker's behavior.
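One common way to get the stable period we want is a Timer that is reset only
after the work for the current iteration has finished; the sketch below shows
that pattern in isolation and is not necessarily the exact change made here.
```go
package cacher

import "time"

// runEvery waits a full, stable period between iterations, unlike a Ticker,
// which may adjust the interval or drop ticks for slow receivers.
func runEvery(period time.Duration, stopCh <-chan struct{}, work func()) {
	timer := time.NewTimer(period)
	defer timer.Stop()
	for {
		select {
		case <-stopCh:
			return
		case <-timer.C:
			work()
			timer.Reset(period) // the next period starts after work completes
		}
	}
}
```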
Kubernetes-commit: 7c0e9cda461e176959866b9c2d03b00e817e9b76
before:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 3.792s
after:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 1.783s
Kubernetes-commit: d21b86d53a3c4c42e41f8374e537c721251a00d2
before:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 6.775s
after:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 2.781s
Kubernetes-commit: f5d945eb43c7bf8036a4bad8c22448e1146a7498
The individual cases can be safely run in parallel.
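A sketch of the pattern (the real test table is omitted): each subtest is
marked with t.Parallel() so independent cases overlap.
```go
package cacher

import "testing"

func TestWaitUntilWatchCacheFreshAndForceAllEvents_Sketch(t *testing.T) {
	scenarios := []struct{ name string }{
		{name: "case A"},
		{name: "case B"},
	}
	for _, s := range scenarios {
		s := s // capture the range variable (needed before Go 1.22)
		t.Run(s.name, func(t *testing.T) {
			t.Parallel() // cases are independent, so they can overlap
			_ = s        // ... run the scenario ...
		})
	}
}
```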
Before:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 10.787s
After:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 4.857s
Kubernetes-commit: 3ecbb4dee00a5dd1e43e24a5952c2a90ef507ef1
Updates the test to wait 300 ms instead of 3 s.
The watch has already been established; otherwise
we would be blocking on the call to cache.Watch(...).
In addition, the tests are serial in nature, meaning
there is no other actor that could add items to the
database, which could result in receiving new items.
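A sketch of the kind of assertion this timing affects (hypothetical helper,
not the test's actual code): because the watch is already established and
nothing else writes to the store, a 300 ms window is enough to claim that no
event was delivered.
```go
package cacher

import (
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/watch"
)

func expectNoEvents(t *testing.T, ch <-chan watch.Event) {
	t.Helper()
	select {
	case ev := <-ch:
		t.Fatalf("unexpected event: %v", ev)
	case <-time.After(300 * time.Millisecond):
		// no events observed within the window
	}
}
```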
Before:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 8.450s
After:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 2.635s
Kubernetes-commit: 926122c035a4f47a880db24d1a0be7ec129dd44d
Stop using defer, as parallel subtests might result in the main test
finishing before the subtests.
Fatal when the same flag is set twice.
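A sketch of why defer is a problem with parallel subtests, and one common
alternative (t.Cleanup); the names below are hypothetical, and the actual
change may be structured differently.
```go
package cacher

import "testing"

type sketchResource struct{}

func (r *sketchResource) Close() {}

func setupSketch() *sketchResource { return &sketchResource{} }

func TestWithParallelSubtests_Sketch(t *testing.T) {
	resource := setupSketch()
	// defer resource.Close() // would run as soon as this function returns,
	// which happens before the parallel subtests finish
	t.Cleanup(resource.Close) // runs only after every subtest has completed

	for _, name := range []string{"a", "b"} {
		name := name
		t.Run(name, func(t *testing.T) {
			t.Parallel()
			_ = name // ... use resource ...
		})
	}
}
```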
Kubernetes-commit: 9fcf279e2b91e7549190a433373f256fb5aebe85
Until #115478 (use streaming against the etcd storage) is resolved,
the cacher needs a way to disable streaming.
Kubernetes-commit: 41e706600aea7468f486150d951d3b8948ce89d5
It's possible that the watcher is no longer in the structure (e.g. in the case of
simultaneous Stop() and terminateAllWatchers()), but it is safe to call stopLocked()
on a watcher multiple times.
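A sketch of what makes the repeated calls safe, using hypothetical fields
rather than the actual cacheWatcher implementation: stopLocked is guarded by
a flag, so a second call is a no-op.
```go
package cacher

type cacheWatcher struct {
	stopped bool
	done    chan struct{}
}

// stopLocked must be called with the owning lock held; calling it on an
// already-stopped watcher is a no-op, so racing callers such as Stop() and
// terminateAllWatchers() are both safe.
func (w *cacheWatcher) stopLocked() {
	if w.stopped {
		return
	}
	w.stopped = true
	close(w.done)
}
```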
Kubernetes-commit: 7e35823690df01bd019a88d3346bd3ac820afaca
It's possible that the watcher is no longer in the structure (e.g. in the case of
simultaneous Stop() and terminateAllWatchers()), but it is safe to call stopLocked()
on a watcher multiple times.
Kubernetes-commit: bbca4a4b9add0f6c58e132500fd89dd39ee077f4