For requests like '/api/v1/watch/namespaces/*', don't set scope.namespace.
The func `addWatcher` adds a watcher to allWatchers under a non-empty `scope.namespace` key,
but the function `dispatchEvent` dispatches events with an empty namespace, so such a watcher never receives them.
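A minimal Go sketch of the mismatch, with made-up types and names rather than the real cacher code: a watcher registered under a non-empty namespace key is never found when events are dispatched under the empty key.
```go
package main

import "fmt"

// watcher and registry are illustrative stand-ins, not the real cacher types.
type watcher struct{ id int }

type registry struct {
	// watchers keyed by namespace; "" means cluster-wide.
	byNamespace map[string][]*watcher
}

// addWatcher registers w under the namespace recorded in the request scope.
func (r *registry) addWatcher(namespace string, w *watcher) {
	r.byNamespace[namespace] = append(r.byNamespace[namespace], w)
}

// dispatchEvent returns the watchers registered under the event's namespace key.
func (r *registry) dispatchEvent(namespace string) []*watcher {
	return r.byNamespace[namespace]
}

func main() {
	r := &registry{byNamespace: map[string][]*watcher{}}
	// Bug: a request for /api/v1/watch/namespaces/* registered the watcher
	// under a non-empty namespace key ...
	r.addWatcher("*", &watcher{id: 1})
	// ... but events are dispatched under the empty namespace key,
	// so the watcher above never receives them.
	fmt.Println(len(r.dispatchEvent(""))) // 0 -- the watcher is missed
}
```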
Signed-off-by: xyz-li <hui0787411@163.com>
Kubernetes-commit: 818fabe37b3fd7cebe36a43244120388977373cd
For the SendInitialEvents case, a buffer of objects is created. That
process takes a significant amount of memory and CPU when the resource
has a large volume of objects. Many of those objects may not be relevant when a key is provided.
This commit applies the key when composing the buffer for SendInitialEvents.
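A rough sketch of the idea, with illustrative names rather than the real cacher API: the initial-events buffer is built only from objects whose keys match the request's key prefix, instead of buffering the whole store.
```go
package main

import (
	"fmt"
	"strings"
)

// object is an illustrative stand-in for a stored item keyed by its etcd-style key.
type object struct{ key string }

// bufferInitialEvents copies into the buffer only the objects relevant to the
// requested key prefix, avoiding the memory and CPU cost of buffering every
// object when the resource holds a large volume of data.
func bufferInitialEvents(store []object, keyPrefix string) []object {
	buf := make([]object, 0, len(store))
	for _, obj := range store {
		if strings.HasPrefix(obj.key, keyPrefix) {
			buf = append(buf, obj)
		}
	}
	return buf
}

func main() {
	store := []object{
		{key: "/pods/ns-a/p1"},
		{key: "/pods/ns-a/p2"},
		{key: "/pods/ns-b/p3"},
	}
	// Only the ns-a objects end up in the buffer.
	fmt.Println(len(bufferInitialEvents(store, "/pods/ns-a/"))) // 2
}
```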
Signed-off-by: Eric Lin <exlin@google.com>
Kubernetes-commit: d9c6c8aa5047d724e0ebc8907f5fee4b10012ae3
The internal informer populates the RV as soon as it conducts
the first successful sync with the underlying store.
The cache must wait until this first sync has completed to be deemed ready.
Since we cannot send a bookmark when the lastProcessedResourceVersion is 0,
we poll aggressively for the first list RV before entering the dispatch loop.
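An illustrative sketch of that aggressive poll, assuming a hypothetical currentRV accessor (not the actual cacher code): keep checking on a short interval until the informer's first successful sync has populated a non-zero RV, or bail out when the context is cancelled.
```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForFirstListRV polls until currentRV reports a non-zero resource
// version, i.e. until the informer's first successful sync has completed,
// so the dispatch loop never starts with a zero RV.
func waitForFirstListRV(ctx context.Context, currentRV func() uint64) (uint64, error) {
	for {
		if rv := currentRV(); rv != 0 {
			return rv, nil
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(10 * time.Millisecond): // short, aggressive poll interval
		}
	}
}

func main() {
	start := time.Now()
	rv, err := waitForFirstListRV(context.Background(), func() uint64 {
		if time.Since(start) > 30*time.Millisecond {
			return 100 // simulate the first successful list/sync
		}
		return 0
	})
	fmt.Println(rv, err)
}
```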
Kubernetes-commit: a20abdb1f425b215ce969ef7114281741fce249d
There is no benefit to having an RWMutex, as we have one reader and multiple
writers. In such cases RWMutex has worse performance than Mutex.
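A minimal sketch of the shape of the change, with illustrative names: the single reader simply takes the same Mutex the writers do, so there is no reader bookkeeping to pay for.
```go
package main

import "sync"

// counter illustrates the pattern: many writers, one reader, plain sync.Mutex.
type counter struct {
	mu sync.Mutex // previously an RWMutex with RLock in the single reader
	n  int
}

func (c *counter) inc() { // called by many writers
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

func (c *counter) get() int { // called by the single reader
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func main() {
	var wg sync.WaitGroup
	c := &counter{}
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.inc() }()
	}
	wg.Wait()
	_ = c.get()
}
```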
Kubernetes-commit: 544ea424826ef60d703c5f4fb91b2c6a95f303aa
Signal is not needed, as we never need to wake up when waiting
is lowered, only when it is increased.
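An illustrative sketch of the reasoning, with made-up names (not the real cacher code): waiters block only until the value becomes positive, so only the increment path needs to wake them; signalling on decrement is wasted work.
```go
package main

import "sync"

// gate blocks callers of wait() until its count is positive.
type gate struct {
	mu    sync.Mutex
	cond  *sync.Cond
	count int
}

func newGate() *gate {
	g := &gate{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

func (g *gate) increment() {
	g.mu.Lock()
	g.count++
	g.cond.Broadcast() // waiters only care about the count going up
	g.mu.Unlock()
}

func (g *gate) decrement() {
	g.mu.Lock()
	g.count--
	// No Signal/Broadcast here: lowering the count cannot satisfy any waiter.
	g.mu.Unlock()
}

func (g *gate) wait() {
	g.mu.Lock()
	for g.count <= 0 {
		g.cond.Wait()
	}
	g.mu.Unlock()
}

func main() {
	g := newGate()
	done := make(chan struct{})
	go func() { g.wait(); close(done) }()
	g.increment()
	<-done
	g.decrement()
}
```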
Kubernetes-commit: e6b54149bb42d58301e34872ebbcf2ea4bcfb474
Ticker behaves differently from what we want: we need a stable period
interval, but the ticker doesn't provide that. From the NewTicker docstring:
```
The ticker will adjust the time interval or drop ticks to make up for slow receivers.
```
Unfortunately, there is no way to test this, as the FakeClock doesn't
follow the real ticker behavior.
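A sketch of one way to get a stable interval without a ticker (illustrative, not necessarily the actual change): reset a timer only after the work for the current round completes, so every pause is a full period regardless of how slowly earlier rounds were handled.
```go
package main

import (
	"fmt"
	"time"
)

// runEvery waits a full period between rounds of work. Unlike a ticker, the
// timer is reset only after work() returns, so the interval stays stable and
// no ticks are adjusted or dropped behind a slow receiver.
func runEvery(period time.Duration, rounds int, work func()) {
	timer := time.NewTimer(period)
	defer timer.Stop()
	for i := 0; i < rounds; i++ {
		<-timer.C
		work()
		timer.Reset(period) // safe: the channel was just drained above
	}
}

func main() {
	runEvery(10*time.Millisecond, 3, func() { fmt.Println("round") })
}
```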
Kubernetes-commit: 7c0e9cda461e176959866b9c2d03b00e817e9b76
before:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 3.792s
after:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 1.783s
Kubernetes-commit: d21b86d53a3c4c42e41f8374e537c721251a00d2
before:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 6.775s
after:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 2.781s
Kubernetes-commit: f5d945eb43c7bf8036a4bad8c22448e1146a7498
The individual cases can be safely run in parallel.
Before:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 10.787s
After:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 4.857s
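A sketch of the usual pattern for running independent cases in parallel; the test body is a placeholder, not the real TestWaitUntilWatchCacheFreshAndForceAllEvents code.
```go
package cacher_test // hypothetical placement, sketch only

import "testing"

// TestCasesInParallel shows the shape: each independent case calls
// t.Parallel(), so total wall time approaches the slowest case rather
// than the sum of all cases.
func TestCasesInParallel(t *testing.T) {
	cases := []struct{ name string }{{"case-a"}, {"case-b"}, {"case-c"}}
	for _, tc := range cases {
		tc := tc // capture the range variable for the parallel closure (pre-Go 1.22)
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // the cases are independent, so they may overlap
			_ = tc       // ... exercise the scenario described by tc ...
		})
	}
}
```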
Kubernetes-commit: 3ecbb4dee00a5dd1e43e24a5952c2a90ef507ef1
Updates the test to wait 300ms instead of 3s.
The watch was already established; otherwise
we would be blocking on a call to cache.Watch(...).
In addition, the tests are serial in nature,
meaning that there is no other actor
that could add items to the database,
which could result in receiving new items.
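A sketch of the pattern the shortened wait relies on (illustrative helper, not the actual test code): with the watch already established and no other actor writing to the store, a short window is enough to assert that nothing arrives.
```go
package cacher_test // hypothetical placement, sketch only

import (
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/watch"
)

// expectNoEvent fails the test if any event arrives within the window.
// Because the tests are serial and the watch is already set up, 300ms is as
// conclusive as 3s while keeping the test an order of magnitude faster.
func expectNoEvent(t *testing.T, w watch.Interface, window time.Duration) {
	t.Helper()
	select {
	case ev, ok := <-w.ResultChan():
		if ok {
			t.Fatalf("unexpected event: %v", ev)
		}
	case <-time.After(window): // e.g. 300 * time.Millisecond
	}
}
```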
Before:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 8.450s
After:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 2.635s
Kubernetes-commit: 926122c035a4f47a880db24d1a0be7ec129dd44d
Stop using defer, as a parallel subtest might result in the main test
finishing before the subtest.
Fatal when the same flag is set twice.
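A sketch of the defer pitfall and one common alternative, t.Cleanup (the helper names here are made up, and t.Cleanup is not necessarily what the commit switched to): a deferred teardown in the parent runs as soon as the parent function returns, which happens before a parallel subtest has finished.
```go
package cacher_test // hypothetical placement, sketch only

import "testing"

type fakeResource struct{}

func (fakeResource) teardown() {}

func setup(t *testing.T) fakeResource { t.Helper(); return fakeResource{} }

func TestWithParallelSubtest(t *testing.T) {
	resource := setup(t)
	// defer resource.teardown() // would run before the parallel subtest ends
	t.Cleanup(resource.teardown) // runs only after the test and its subtests complete

	t.Run("case", func(t *testing.T) {
		t.Parallel()
		_ = resource // the shared resource is still alive here
	})
}
```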
Kubernetes-commit: 9fcf279e2b91e7549190a433373f256fb5aebe85