Until #115478 (use streaming against the etcd storage) is resolved,
the cacher needs a way to disable streaming.
Kubernetes-commit: 41e706600aea7468f486150d951d3b8948ce89d5
It's possible that the watcher is no longer in the structure (e.g. in the
case of simultaneous Stop() and terminateAllWatchers()), but it is safe to
call stopLocked() on a watcher multiple times.
Kubernetes-commit: 7e35823690df01bd019a88d3346bd3ac820afaca
It's possible that the watcher is no longer in the structure (e.g. in the
case of simultaneous Stop() and terminateAllWatchers()), but it is safe to
call stopLocked() on a watcher multiple times.
Kubernetes-commit: bbca4a4b9add0f6c58e132500fd89dd39ee077f4
// AnnotateInitialEventsEndBookmark adds a special annotation to the given object
// which indicates that the initial events have been sent.
//
// Note that this function assumes that the obj's annotation
// field is a reference type (i.e. a map).
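For context, a minimal sketch of what such a helper can look like is shown
below. It relies on meta.Accessor from apimachinery; the function name and
the annotation key "k8s.io/initial-events-end" are assumed here purely for
illustration and are not taken from this commit.

package cacherutil

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
)

// annotateInitialEventsEndBookmark marks obj as the bookmark that ends the
// stream of initial events. Sketch only, not the upstream implementation.
func annotateInitialEventsEndBookmark(obj runtime.Object) error {
	accessor, err := meta.Accessor(obj)
	if err != nil {
		return err
	}
	annotations := accessor.GetAnnotations()
	if annotations == nil {
		// The annotations field is a map (a reference type), so it must be
		// initialized before the first write.
		annotations = map[string]string{}
	}
	annotations["k8s.io/initial-events-end"] = "true" // assumed key
	accessor.SetAnnotations(annotations)
	return nil
}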
Kubernetes-commit: 47d9a47a08856613e2e6ae6aa8a1bdeb1e281f97
Request a bookmark every 100ms when there is at least one request blocked
on a revision not yet present in the watch cache.
Kubernetes-commit: 39bb8f4bb1d013937aceac6c387563ffe13545c5
Doing this allows us to implement more nuanced cacher manipulations for
use in testing, for example a test-only compaction method for the watch
cache.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 6d66fbc6b670f1120a9041873bb8d1a0655bbefc
This commit prepares for when cacher tests are moved here
from the `tests` package. Tests in that package redeclare
some of the testing utils that exist here, so this deduplicates them.
This commit also adapts to any changes in test util signatures.
There are still some utils that could be reused but are currently
highly specific to some tests (e.g. watch_cache_test.go).
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 70978e4af619819787a4eb544ffd732aa7954d76
Since cachingObject has the encoded data cached and it is not supposed
to change, it's more memory efficient to just copy the slice references.
Signed-off-by: Eric Lin <exlin@google.com>
Kubernetes-commit: 3085b57869a2a7bf5290ab97facaf17fedfa88a0
If the cacher hasn't seen any event (i.e. when lastProcessedResourceVersion
is zero) and the bookmarkTimer has ticked, then we shouldn't
popExpiredWatchers. This is because the watchers won't be re-added and will
miss future bookmark events when the cacher finally receives an event via
the c.incoming chan.
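A minimal, self-contained illustration of that guard (the field and method
names mirror the upstream ones, but the types below are simplified
stand-ins, not the cacher's):

package main

import "fmt"

type watcherBuckets struct{ expired []string }

// popExpiredWatchers removes and returns the watchers whose bookmark
// deadline has passed; in the real cacher they are re-added after a
// bookmark has been sent to them.
func (b *watcherBuckets) popExpiredWatchers() []string {
	out := b.expired
	b.expired = nil
	return out
}

type cacher struct {
	lastProcessedResourceVersion uint64
	bookmarkWatchers             *watcherBuckets
}

// onBookmarkTimerTick runs when the bookmark timer fires. If no event has
// been processed yet, expired watchers are left in place: popping them here
// would mean they are never re-added and would miss future bookmarks once
// an event finally arrives on the incoming channel.
func (c *cacher) onBookmarkTimerTick() {
	if c.lastProcessedResourceVersion == 0 {
		return
	}
	for _, w := range c.bookmarkWatchers.popExpiredWatchers() {
		fmt.Println("sending bookmark to", w)
	}
}

func main() {
	c := &cacher{bookmarkWatchers: &watcherBuckets{expired: []string{"w1"}}}
	c.onBookmarkTimerTick() // no-op: no event processed yet
	c.lastProcessedResourceVersion = 42
	c.onBookmarkTimerTick() // now bookmarks are delivered
}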
Kubernetes-commit: 6db4cbfde7babfb34f5cd1059c769ec2d870f12a
* cacher: remove locking from watcherBookmarkTimeBuckets
It turns out that watcherBookmarkTimeBuckets is accessed from only three
places/methods: startDispatching, finishDispatching and Watch.
All of these acquire c.Lock() before touching watcherBookmarkTimeBuckets,
so the explicit locking inside watcherBookmarkTimeBuckets can be removed
since access is already synchronised.
* cacher: rename watcherBookmarkTimeBuckets methods to indicate that proper synchronisation must be used
Kubernetes-commit: eab66a687b282266f0520b79166f7f55828ffd28
waitUntilWatchCacheFreshAndForceAllEvents must be called without a read
lock held, otherwise the watch cache won't be able to make progress (the
watchCache.processEvent method requires acquiring an exclusive lock).
The deadlock can happen only when the alpha watchlist feature flag is on
and the client specifically requests streaming.
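A generic, self-contained illustration of the deadlock pattern being
avoided (simplified stand-ins, not the cacher code): if the waiter held the
read lock for the entire wait, the writer could never take the exclusive
lock and the cache would never become fresh.

package main

import (
	"fmt"
	"sync"
	"time"
)

type cache struct {
	mu    sync.RWMutex
	fresh bool
}

// processEvent needs the exclusive lock to make the cache fresh.
func (c *cache) processEvent() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.fresh = true
}

// waitUntilFresh checks freshness WITHOUT holding the read lock across the
// whole wait, so processEvent can make progress between checks.
func (c *cache) waitUntilFresh(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		c.mu.RLock()
		fresh := c.fresh
		c.mu.RUnlock()
		if fresh {
			return true
		}
		time.Sleep(time.Millisecond)
	}
	return false
}

func main() {
	c := &cache{}
	go func() {
		time.Sleep(10 * time.Millisecond)
		c.processEvent()
	}()
	// Holding c.mu.RLock() around this whole call would block processEvent
	// forever and reproduce the deadlock described above.
	fmt.Println(c.waitUntilFresh(time.Second)) // true
}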
Kubernetes-commit: 476e407ffd2ab393840d3f7a9fd01b71698738a3
* ftr(watch-cache): add benchmarks
* ftr(kube-apiserver): faster watch-cache getlist
* refine: testcase name
* refine var name to make it easier to convey its meaning
  - add a comment to explain why we need to allocate a slice of runtime.Object instead of making a slice of ListObject.Items directly.
Kubernetes-commit: 75f17eb38fc8bbcb360d43dffce6e27a7159d43f
This method waits until the cache is at least as fresh as the given
requestedWatchRV if sendInitialEvents was requested.
Additionally, it tells the caller whether it should ask for
all events from the cache (full state) or not.
Kubernetes-commit: 21fb98105043d1a15ef48089ef231931851d2d15
If old is less than new, the Inc function should be called on `watchCacheCapacityIncreaseTotal` instead of `watchCacheCapacity`.
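A hedged sketch of the corrected update logic; the metric declarations
below are illustrative (raw prometheus vectors with an assumed "resource"
label) and do not reproduce the actual definitions in the watch-cache
metrics package.

package metrics

import "github.com/prometheus/client_golang/prometheus"

// Illustrative declarations only; the real metrics are defined elsewhere
// and may use different names, labels and registration.
var (
	watchCacheCapacity = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "watch_cache_capacity"}, []string{"resource"})
	watchCacheCapacityIncreaseTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "watch_cache_capacity_increase_total"}, []string{"resource"})
	watchCacheCapacityDecreaseTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "watch_cache_capacity_decrease_total"}, []string{"resource"})
)

// recordWatchCacheCapacityChange applies the fix described above: always
// set the capacity gauge to the new value, and when the capacity grew,
// increment the increase counter (not the gauge); otherwise increment the
// decrease counter.
func recordWatchCacheCapacityChange(resource string, old, new int) {
	watchCacheCapacity.WithLabelValues(resource).Set(float64(new))
	if old < new {
		watchCacheCapacityIncreaseTotal.WithLabelValues(resource).Inc()
		return
	}
	watchCacheCapacityDecreaseTotal.WithLabelValues(resource).Inc()
}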
Signed-off-by: joey <zchengjoey@gmail.com>
Kubernetes-commit: 96b9531f3e3f489e47493297987eee14d2a08855
* cacher allow context cancellation if not ready
Replace the sync.Cond variable with a channel so we can use the
context cancellation signal.
Co-authored-by: Wojciech Tyczyński <wojtekt@google.com>
Change-Id: I2f75313a6337feee440ece4c1e873c32a12560dd
* wait again on pending state
Change-Id: I1ad79253a5a5d56a4d9611125825b1f7ad552be8
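A minimal, generic sketch of the pattern this change moves towards:
replacing a sync.Cond-based "wait until ready" with a channel, so callers
can also honour context cancellation. The names below are illustrative,
not the cacher's, and the real readiness type additionally has to handle
the stopped state and the repeated pending/ready transitions covered by
the "wait again on pending state" fix.

package main

import (
	"context"
	"fmt"
	"time"
)

type ready struct {
	readyCh chan struct{}
}

func newReady() *ready { return &ready{readyCh: make(chan struct{})} }

// set marks the component as ready, releasing current and future waiters.
func (r *ready) set() { close(r.readyCh) }

// wait blocks until the component is ready or the caller's context is
// cancelled - something a bare sync.Cond cannot express.
func (r *ready) wait(ctx context.Context) error {
	select {
	case <-r.readyCh:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	r := newReady()
	go func() {
		time.Sleep(50 * time.Millisecond)
		r.set()
	}()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(r.wait(ctx)) // <nil> once set() runs
}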
---------
Co-authored-by: Wojciech Tyczyński <wojtekt@google.com>
Kubernetes-commit: 3b17aece1fa492e98aa82b948597b3641961195f
The original design was to honour strong consistency
semantics for when the RV is unset, i.e. serve the
watch by doing a quorum read.
However, the implementation did not match the intent,
in that the Cacher did not distinguish between set
and unset RV. This commit rectifies that behaviour by
serving the watch from the underlying storage if the
RV is unset.
This commit subsequently also adds a test for the same.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 610b67031c79c6c38964631d27dd59df357c6d2e
This commit allows injecting errors for the
Watch() method of the dummy storage impl.
As a consequence of this, a race is introduced
between when the injected error is written and
read whenever a Watch() is invoked using the
dummy storage. This commit adds locking in order
to mitigate this.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 2593671337ad449f51b9dcc0b63aa190dd07ab68
Use the group resource instead of objectType in watch cache metrics,
because all CustomResources are grouped together as
*unstructured.Unstructured rather than having one entry per type.
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Kubernetes-commit: d08b69e8d35a5aa73a178c508f9b0e1ad74b882d
All CustomResources are treated as *unstructured.Unstructured, leading
the watch cache to log anything related to CRs as Unstructured. This
change uses the schema.GroupResource instead of object type for all type
related log messages in the watch cache, resulting in distinct output
for each CR type.
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Kubernetes-commit: 397533a4c2df9639ff4422c907d06fae195a1835
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Kubernetes-commit: a9593d634c6a053848413e600dadbf974627515f
expiredBookmarkWatchers allows us to schedule the next bookmark event after
dispatching, not before as was previously the case.
This opens up new functionality in which a watcher might decide to change
when the next bookmark should be delivered based on some internal state.
Kubernetes-commit: 0576f6a011cba8f0c8550fd3dd31111376c9dcd0
Using a Pod type in a GetList() call in a test
can panic at worst and error out at best. Here,
neither happened because the error condition
being tested for (cacher being stopped or not)
gets returned before the list pointer can be
enforced.
This commit changes the above to use PodList.
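The distinction can be made concrete with apimachinery's meta.GetItemsPtr,
which list implementations typically use to locate the Items slice; the toy
below (using the core v1 types purely for illustration) shows why a
non-list object cannot satisfy a list call.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
)

func main() {
	// A list type exposes an Items slice that a GetList call can fill in.
	if _, err := meta.GetItemsPtr(&v1.PodList{}); err == nil {
		fmt.Println("PodList: usable as a list object")
	}
	// A single Pod does not, so passing it where a list is expected errors
	// out at best (or panics, depending on the code path).
	if _, err := meta.GetItemsPtr(&v1.Pod{}); err != nil {
		fmt.Println("Pod:", err)
	}
}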
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 487761f4e2543114db158f0d59e598dedc481882
The means by which we extract and parse the version of an API object is
not specific to etcd3. In order to allow for a generic suite of tests
against any storage.Interface implementation, we need this logic to live
outside of the etcd3 package, or import cycles will exist.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Kubernetes-commit: 3939f3003e9605c06f65e64d1fc6f94b294f9d97
The cacher blocks requests until it is ready; however, the
ready variable doesn't differentiate whether the cacher was stopped.
The cacher uses a condition variable based on sync.Cond to
handle readiness, but this did not take into account whether it
was not yet ready because it was still waiting to become ready or
because it had been stopped.
Add a new condition to the condition variable to handle the
stop case, returning an error to signal the goroutines
that they should stop waiting and bail out.
Kubernetes-commit: 2cb3a56e83ae33464edb174b1b6373ba50600759
* Remove linter warnings.
* Cancel contexts to avoid leaks.
* Rename a few XXXThreadUnsafe to XXXLocked to
maintain consistency.
* A few are still called XXXThreadUnsafe, mainly
because they are safe in the sense that only one
goroutine will ever access them - they are not
actually called under a lock.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: c3081b48759db1f05a446f2acca7e05c4511ce2e
- Modify GetAllEventsSinceThreadUnsafe to return a watchCacheInterval
- Modify Watch() to compute a watchCacheInterval rather than a slice
of all "initEvents" and pass this interval to process()
- Use interval::Next() to obtain events to process rather than obtaining
them all at once (see the sketch below)
- Modify tests accordingly to use interval
- On invalidation, stop processing and stop the watch.
- Make indexValidator injectable for testing
- Add unit test for verifying the behaviour of stopping the watch.
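A self-contained toy of the consumption pattern introduced here, draining
events via Next() instead of copying all initial events up front; the types
are simplified stand-ins, not the real watchCacheInterval.

package main

import (
	"errors"
	"fmt"
)

type interval struct {
	events []string
	pos    int
	valid  bool
}

// Next returns the next buffered event, nil when the interval is exhausted,
// or an error if the interval has been invalidated.
func (i *interval) Next() (*string, error) {
	if !i.valid {
		return nil, errors.New("interval invalidated")
	}
	if i.pos >= len(i.events) {
		return nil, nil
	}
	e := &i.events[i.pos]
	i.pos++
	return e, nil
}

func main() {
	in := &interval{events: []string{"ADDED pod/a", "ADDED pod/b"}, valid: true}
	for {
		ev, err := in.Next()
		if err != nil {
			fmt.Println("stopping watch:", err) // on invalidation, stop processing
			return
		}
		if ev == nil {
			break // no more initial events to process
		}
		fmt.Println("dispatch:", *ev)
	}
}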
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 7f2aa7ad3a61a52d0a780f904b291d063399c28a
watchCacheInterval serves as an abstraction over a source
of watchCacheEvents. It maintains a window of events over
an underlying source and these events can be served using
the exposed Next() API. The main intent for doing things
this way is to introduce an upper bound of memory usage
for starting a watch and reduce the maximum possible time
interval for which the lock would be held while events are
copied over.
The source of events for the interval is typically either
the watchCache circular buffer, if events being retrieved
need to be for resource versions > 0 or the underlying
implementation of Store, if resource version = 0.
Furthermore, an interval can be either valid or invalid at
any given point of time. The notion of validity makes sense
only in cases where the window of events in the underlying
source can change over time - i.e. for watchCache circular
buffer. When the circular buffer is full and an event needs
to be popped off, watchCache::startIndex is incremented. In
this case, an interval tracking that popped event is valid
only if it has already been copied to its internal buffer.
However, for efficiency we perform that lazily and we mark
an interval as invalid iff we need to copy events from the
watchCache and we end up needing events that have already
been popped off. This translates to the following condition:
watchCacheInterval::startIndex >= watchCache::startIndex.
When this condition becomes false, the interval is no longer
valid and should not be used to retrieve and serve elements
from the underlying source.
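A minimal sketch of the validity condition spelled out above, with
simplified types (the real watchCacheInterval has more fields and its own
locking):

package main

import "fmt"

// watchCacheInterval tracks a window of events; startIndex is the index of
// the first event of the window still living in the cache's circular buffer.
type watchCacheInterval struct {
	startIndex int
}

// watchCache models the circular buffer; startIndex is the index of the
// oldest event it still holds and is incremented when an event is popped off.
type watchCache struct {
	startIndex int
}

// isValid reports whether the interval can still serve events from the
// cache. It becomes invalid once the cache has popped events the interval
// has not yet copied, i.e. once
// watchCacheInterval.startIndex < watchCache.startIndex.
func (i *watchCacheInterval) isValid(c *watchCache) bool {
	return i.startIndex >= c.startIndex
}

func main() {
	c := &watchCache{startIndex: 10}
	fmt.Println((&watchCacheInterval{startIndex: 12}).isValid(c)) // true
	fmt.Println((&watchCacheInterval{startIndex: 7}).isValid(c))  // false: needed events were popped
}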
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 347607e97139959f33024a691d0561b1479aeeef
Split the process() function into processEvents() and process().
This is done in anticipation of GetAllEventsSinceThreadUnsafe()
returning an entity from which events can be constructed, rather
than the events themselves.
Subsequently, this commit also moves updating the resource version
for initEvents from Watch() to the processEvents() func.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: aab7cd3d8a66f425022ca5b2a2bd0d3019efe526