This method waits until the cache is at least
as fresh as the given requestedWatchRV if sendInitialEvents was requested.
Additionally, it instructs the caller whether it should ask the cache for
all events (the full state) or not.
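A minimal sketch of that contract, using a hypothetical waitAndDecide helper
and a simplified channel-based freshness signal (none of these names are the
real cacher API):

    package cacher

    import (
        "context"
        "sync"
    )

    type watchCache struct {
        mu      sync.RWMutex
        rv      uint64        // highest resource version observed so far
        updated chan struct{} // closed and replaced on every progress update
    }

    func newWatchCache() *watchCache {
        return &watchCache{updated: make(chan struct{})}
    }

    // update is called whenever the observed RV advances.
    func (w *watchCache) update(rv uint64) {
        w.mu.Lock()
        defer w.mu.Unlock()
        w.rv = rv
        close(w.updated) // wake all waiters
        w.updated = make(chan struct{})
    }

    // waitUntilFresh blocks until the cache has observed at least
    // requestedRV or the context is cancelled.
    func (w *watchCache) waitUntilFresh(ctx context.Context, requestedRV uint64) error {
        for {
            w.mu.RLock()
            rv, updated := w.rv, w.updated
            w.mu.RUnlock()
            if rv >= requestedRV {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-updated: // re-check after the next update
            }
        }
    }

    // waitAndDecide waits for freshness only when initial events were
    // requested, and reports whether the caller should ask the cache for
    // the full state.
    func waitAndDecide(ctx context.Context, wc *watchCache, requestedWatchRV uint64, sendInitialEvents bool) (bool, error) {
        if !sendInitialEvents {
            return false, nil
        }
        if err := wc.waitUntilFresh(ctx, requestedWatchRV); err != nil {
            return false, err
        }
        return true, nil
    }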
Kubernetes-commit: 21fb98105043d1a15ef48089ef231931851d2d15
If the old capacity is less than the new one, the Inc function should be called on `watchCacheCapacityIncreaseTotal` instead of `watchCacheCapacity`.
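A hedged sketch of the fix, with plain Prometheus stand-ins for the real
component-base metrics (the wiring around the counters is illustrative):

    package cacher

    import "github.com/prometheus/client_golang/prometheus"

    var (
        watchCacheCapacity = prometheus.NewGauge(prometheus.GaugeOpts{
            Name: "watch_cache_capacity",
            Help: "Total capacity of the watch cache.",
        })
        watchCacheCapacityIncreaseTotal = prometheus.NewCounter(prometheus.CounterOpts{
            Name: "watch_cache_capacity_increase_total",
            Help: "Total number of watch cache capacity increase events.",
        })
        watchCacheCapacityDecreaseTotal = prometheus.NewCounter(prometheus.CounterOpts{
            Name: "watch_cache_capacity_decrease_total",
            Help: "Total number of watch cache capacity decrease events.",
        })
    )

    // recordCapacityChange counts the direction of the change on the
    // matching counter and records the new capacity on the gauge; the bug
    // was calling Inc on the wrong metric for the old < new case.
    func recordCapacityChange(oldCap, newCap int) {
        if oldCap < newCap {
            watchCacheCapacityIncreaseTotal.Inc()
        } else {
            watchCacheCapacityDecreaseTotal.Inc()
        }
        watchCacheCapacity.Set(float64(newCap))
    }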
Signed-off-by: joey <zchengjoey@gmail.com>
Kubernetes-commit: 96b9531f3e3f489e47493297987eee14d2a08855
* cacher: allow context cancellation if not ready
Replace the sync.Cond variable with a channel so we can use the
context cancellation signal.
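A minimal sketch of the channel-based replacement, assuming an illustrative
ready type (the real implementation carries more state):

    package main

    import (
        "context"
        "fmt"
    )

    type ready struct {
        readyCh chan struct{} // closed once the cacher becomes ready
    }

    func newReady() *ready { return &ready{readyCh: make(chan struct{})} }

    // set marks the cacher ready; in this sketch it may be called only once.
    func (r *ready) set() { close(r.readyCh) }

    // wait blocks until the cacher is ready or the caller's context is
    // cancelled; a sync.Cond cannot express the second case.
    func (r *ready) wait(ctx context.Context) error {
        select {
        case <-r.readyCh:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        r := newReady()
        ctx, cancel := context.WithCancel(context.Background())
        cancel()
        fmt.Println(r.wait(ctx)) // context canceled
    }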
Co-authored-by: Wojciech Tyczyński <wojtekt@google.com>
Change-Id: I2f75313a6337feee440ece4c1e873c32a12560dd
* wait again on pending state
Change-Id: I1ad79253a5a5d56a4d9611125825b1f7ad552be8
---------
Co-authored-by: Wojciech Tyczyński <wojtekt@google.com>
Kubernetes-commit: 3b17aece1fa492e98aa82b948597b3641961195f
The original design was to honour strong consistency
semantics when the RV is unset, i.e. to serve the
watch by doing a quorum read.
However, the implementation did not match the intent:
the Cacher did not distinguish between a set and an
unset RV. This commit rectifies that behaviour by
serving the watch from the underlying storage if the
RV is unset.
This commit also adds a test for this behaviour.
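A condensed sketch of the rectified dispatch, with local stand-ins for the
real storage.Interface and watch.Interface (the real method signature
differs):

    package cacher

    import (
        "context"
        "errors"
    )

    // Minimal stand-ins for the real storage and watch interfaces.
    type watchInterface interface{ Stop() }

    type backend interface {
        Watch(ctx context.Context, key, resourceVersion string) (watchInterface, error)
    }

    type Cacher struct {
        storage backend // the underlying (e.g. etcd) storage
    }

    func (c *Cacher) watchFromCache(ctx context.Context, key, rv string) (watchInterface, error) {
        return nil, errors.New("elided: serve the watch from the cache")
    }

    // Watch honours strong consistency for an unset RV by delegating to
    // the underlying storage (a quorum read) instead of the cache.
    func (c *Cacher) Watch(ctx context.Context, key, resourceVersion string) (watchInterface, error) {
        if resourceVersion == "" {
            return c.storage.Watch(ctx, key, resourceVersion)
        }
        return c.watchFromCache(ctx, key, resourceVersion)
    }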
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 610b67031c79c6c38964631d27dd59df357c6d2e
This commit allows injecting errors for the
Watch() method of the dummy storage implementation.
As a consequence, a race is introduced
between when the injected error is written and
when it is read, whenever Watch() is invoked on the
dummy storage. This commit adds locking
to mitigate this.
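A sketch of the guarded test double; the single-error field and the
simplified Watch signature are assumptions for illustration:

    package cachertest

    import (
        "context"
        "sync"
    )

    type dummyStorage struct {
        mu  sync.RWMutex
        err error // error injected by the test, returned by Watch
    }

    // injectError is called from the test goroutine.
    func (d *dummyStorage) injectError(err error) {
        d.mu.Lock()
        defer d.mu.Unlock()
        d.err = err
    }

    // Watch is called concurrently by the code under test; the RLock
    // synchronises the read with injectError's write, removing the race.
    func (d *dummyStorage) Watch(ctx context.Context, key string) error {
        d.mu.RLock()
        defer d.mu.RUnlock()
        return d.err
    }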
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 2593671337ad449f51b9dcc0b63aa190dd07ab68
Use the group resource instead of objectType in watch cache metrics,
because all CustomResources are grouped together as
*unstructured.Unstructured instead of having one entry per type.
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Kubernetes-commit: d08b69e8d35a5aa73a178c508f9b0e1ad74b882d
All CustomResources are treated as *unstructured.Unstructured, leading
the watch cache to log anything related to CRs as Unstructured. This
change uses the schema.GroupResource instead of the object type for all
type-related log messages in the watch cache, resulting in distinct output
for each CR type.
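For example (the message text is illustrative; schema.GroupResource is the
real apimachinery type):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/runtime/schema"
    )

    func main() {
        objectType := "*unstructured.Unstructured" // identical for every CR
        groupResource := schema.GroupResource{Group: "widgets.example.com", Resource: "widgets"}

        // Before: every CRD logs the same type name.
        fmt.Printf("Terminating all watchers from cacher %v\n", objectType)
        // After: each CR type produces distinct output.
        fmt.Printf("Terminating all watchers from cacher %v\n", groupResource.String())
    }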
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Kubernetes-commit: 397533a4c2df9639ff4422c907d06fae195a1835
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Kubernetes-commit: a9593d634c6a053848413e600dadbf974627515f
expiredBookmarkWatchers allows us to schedule the next bookmark event after dispatching, not before, as was previously the case.
This opens up new functionality in which a watcher might decide, based on some internal state, to change when the next bookmark should be delivered.
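A self-contained sketch of that ordering, with illustrative types (the real
code uses timer-bucketed watcher maps; this only shows the
dispatch-then-schedule flow):

    package cacher

    import "time"

    type watcher struct {
        interval time.Duration // may change while a bookmark is delivered
    }

    func (w *watcher) deliverBookmark(now time.Time) {
        // elided: send the bookmark event; internal state may change here
    }

    func (w *watcher) nextBookmarkTime(now time.Time) time.Time {
        return now.Add(w.interval)
    }

    type bookmarkQueue struct {
        byTime map[time.Time][]*watcher
    }

    func newBookmarkQueue() *bookmarkQueue {
        return &bookmarkQueue{byTime: make(map[time.Time][]*watcher)}
    }

    // dispatchExpired delivers bookmarks to watchers whose timers fired
    // and only then re-queues them, so the next deadline reflects any
    // state updated during dispatch.
    func (q *bookmarkQueue) dispatchExpired(expired []*watcher, now time.Time) {
        for _, w := range expired {
            w.deliverBookmark(now)
            t := w.nextBookmarkTime(now) // computed after dispatch, not before
            q.byTime[t] = append(q.byTime[t], w)
        }
    }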
Kubernetes-commit: 0576f6a011cba8f0c8550fd3dd31111376c9dcd0
Using a Pod type in a GetList() call in a test
can panic at worst and error out at best. Here,
neither happened, because the error condition
being tested for (whether the cacher was stopped)
is returned before the list pointer is enforced.
This commit changes the above to use PodList.
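A sketch of the corrected call, using the example API group that the cacher
tests rely on (signatures match recent k8s.io/apiserver releases, but treat
this as illustrative):

    package cachertest

    import (
        "context"

        "k8s.io/apiserver/pkg/apis/example"
        "k8s.io/apiserver/pkg/storage"
    )

    func listPods(ctx context.Context, s storage.Interface) (*example.PodList, error) {
        // GetList requires a pointer to a list type; passing &example.Pod{}
        // would fail the list-pointer check (or panic) once it is enforced.
        result := &example.PodList{}
        err := s.GetList(ctx, "/pods", storage.ListOptions{
            ResourceVersion: "0",
            Predicate:       storage.Everything,
        }, result)
        return result, err
    }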
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 487761f4e2543114db158f0d59e598dedc481882
The means by which we extract and parse the version of an API object is
not specific to etcd3. In order to allow for a generic suite of tests
against any storage.Interface implementation, we need this logic to live
outside of the etcd3 package, or import cycles will exist.
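A sketch of the extracted, etcd3-agnostic logic, mirroring the versioner
idea via the generic meta accessor (placement and names are assumptions):

    package storage

    import (
        "strconv"

        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/apimachinery/pkg/runtime"
    )

    // objectResourceVersion extracts and parses the resource version of an
    // API object without depending on etcd3.
    func objectResourceVersion(obj runtime.Object) (uint64, error) {
        accessor, err := meta.Accessor(obj)
        if err != nil {
            return 0, err
        }
        version := accessor.GetResourceVersion()
        if len(version) == 0 {
            return 0, nil
        }
        return strconv.ParseUint(version, 10, 64)
    }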
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Kubernetes-commit: 3939f3003e9605c06f65e64d1fc6f94b294f9d97
The cacher blocks requests until it is ready; however, the
ready variable doesn't record whether the cacher was stopped.
The cacher uses a condition variable based on sync.Cond to
handle readiness, but this did not distinguish between a
cacher that was not yet ready (still waiting) and one that
had been stopped.
Add a new condition to the condition variable to handle the
stop case, returning an error to signal the waiting goroutines
that they should stop waiting and bail out.
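A minimal sketch of the two-condition readiness primitive described above
(state names and the error text are illustrative):

    package cacher

    import (
        "fmt"
        "sync"
    )

    type readyState int

    const (
        pending readyState = iota // not ready yet, keep waiting
        ready
        stopped
    )

    type readiness struct {
        c     *sync.Cond
        state readyState
    }

    func newReadiness() *readiness {
        return &readiness{c: sync.NewCond(&sync.Mutex{})}
    }

    func (r *readiness) set(s readyState) {
        r.c.L.Lock()
        defer r.c.L.Unlock()
        r.state = s
        r.c.Broadcast()
    }

    // wait blocks while the state is pending, and returns an error if the
    // cacher was stopped so callers bail out instead of waiting forever.
    func (r *readiness) wait() error {
        r.c.L.Lock()
        defer r.c.L.Unlock()
        for r.state == pending {
            r.c.Wait()
        }
        if r.state == stopped {
            return fmt.Errorf("apiserver cacher is stopped")
        }
        return nil
    }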
Kubernetes-commit: 2cb3a56e83ae33464edb174b1b6373ba50600759
* Remove linter warnings.
* Cancel contexts to avoid leaks.
* Rename a few XXXThreadUnsafe to XXXLocked to
maintain consistency.
* A few are still called XXXThreadUnsafe, mainly
  because they are safe to call in the sense that
  only one goroutine will ever access them - they
  are not actually called under a lock.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: c3081b48759db1f05a446f2acca7e05c4511ce2e
- Modify GetAllEventsSinceThreadUnsafe to return a watchCacheInterval
- Modify Watch() to compute a watchCacheInterval rather than a slice
of all "initEvents" and pass this interval to process()
- Use interval::Next() to obtain events to process rather than obtaining
  them all at once (see the sketch after this list)
- Modify tests accordingly to use interval
- On invalidation, stop processing and stop the watch.
- Make indexValidator injectable for testing
- Add unit test for verifying the behaviour of stopping the watch.
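A sketch of the resulting consumption loop, with a local interface standing
in for the real watchCacheInterval:

    package cacher

    import "fmt"

    type watchCacheEvent struct{}

    // eventInterval is a stand-in for the real watchCacheInterval API.
    type eventInterval interface {
        // Next returns the next event, nil when the interval is exhausted,
        // or an error when the interval has been invalidated.
        Next() (*watchCacheEvent, error)
    }

    // process pulls events one at a time instead of materialising all
    // initEvents up front; an invalidated interval terminates the watch.
    func process(interval eventInterval, send func(*watchCacheEvent)) error {
        for {
            ev, err := interval.Next()
            if err != nil {
                return fmt.Errorf("stopping watch: %w", err)
            }
            if ev == nil {
                return nil // exhausted
            }
            send(ev)
        }
    }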
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 7f2aa7ad3a61a52d0a780f904b291d063399c28a
watchCacheInterval serves as an abstraction over a source
of watchCacheEvents. It maintains a window of events over
an underlying source, and these events can be served using
the exposed Next() API. The main intent of doing things
this way is to introduce an upper bound on memory usage
for starting a watch and to reduce the maximum possible
time for which the lock is held while events are
copied over.
The source of events for the interval is typically either
the watchCache circular buffer, if the events being retrieved
are for resource versions > 0, or the underlying
implementation of Store, if the resource version = 0.
Furthermore, an interval can be either valid or invalid at
any given point in time. The notion of validity makes sense
only in cases where the window of events in the underlying
source can change over time - i.e. for the watchCache circular
buffer. When the circular buffer is full and an event needs
to be popped off, watchCache::startIndex is incremented. In
this case, an interval tracking that popped event is valid
only if the event has already been copied to the interval's
internal buffer.
However, for efficiency we perform that copy lazily, and we
mark an interval as invalid iff we need to copy events from
the watchCache and end up needing events that have already
been popped off. This translates to the validity condition:
watchCacheInterval::startIndex >= watchCache::startIndex.
When this condition no longer holds, the interval is invalid
and should not be used to retrieve and serve elements
from the underlying source.
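A condensed sketch of that structure and its validity check; field names
follow the prose above, while locking and the store-backed (RV = 0) variant
are elided:

    package cacher

    import "fmt"

    type watchCacheEvent struct{}

    type watchCacheInterval struct {
        startIndex int // next event to serve, in watchCache coordinates
        endIndex   int // end of the window (exclusive)
        // indexer reads event i from the underlying circular buffer.
        indexer func(i int) *watchCacheEvent
        // cacheStartIndex reports the current watchCache::startIndex.
        cacheStartIndex func() int
    }

    // Next returns the next event in the window, or an error if events
    // this interval still needs have already been popped off the buffer.
    func (wci *watchCacheInterval) Next() (*watchCacheEvent, error) {
        if wci.startIndex >= wci.endIndex {
            return nil, nil // exhausted
        }
        // Validity condition from the text:
        // watchCacheInterval::startIndex >= watchCache::startIndex.
        if wci.startIndex < wci.cacheStartIndex() {
            return nil, fmt.Errorf("cache incident: interval invalidated")
        }
        ev := wci.indexer(wci.startIndex)
        wci.startIndex++
        return ev, nil
    }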
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 347607e97139959f33024a691d0561b1479aeeef