Use the group resource instead of objectType in watch cache metrics:
all CustomResources share the *unstructured.Unstructured object type,
so keying by objectType collapses them into a single entry instead of
one entry per type.
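For illustration only, a minimal sketch of the idea, with a hypothetical
metric and helper (the real cacher defines and registers its metrics
differently; plain prometheus is used here just to keep the sketch
self-contained):

package cacherexample

import (
	"reflect"

	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// initCounter is a hypothetical stand-in for a watch cache metric.
var initCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{Name: "watch_cache_init_total"},
	[]string{"resource"},
)

// recordInit labels the metric with the group resource (e.g.
// "widgets.example.com") instead of the reflected Go type, so each CR
// gets its own series rather than all of them collapsing into
// "*unstructured.Unstructured".
func recordInit(groupResource schema.GroupResource, objectType reflect.Type) {
	_ = objectType // previously the label would have been objectType.String()
	initCounter.WithLabelValues(groupResource.String()).Inc()
}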
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Kubernetes-commit: d08b69e8d35a5aa73a178c508f9b0e1ad74b882d
All CustomResources are treated as *unstructured.Unstructured, leading
the watch cache to log anything related to CRs as Unstructured. This
change uses the schema.GroupResource instead of the object type for all
type-related log messages in the watch cache, resulting in distinct output
for each CR type.
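In sketch form, log statements switch their key from the reflected type
to the group resource; the helper and message below are illustrative
only, not the actual log lines:

package cacherexample

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/klog/v2"
)

// logTerminating is a hypothetical helper; the point is only which value
// is used as the key in the message.
func logTerminating(groupResource schema.GroupResource) {
	// Before: the format argument was the objectType, which prints as
	// *unstructured.Unstructured for every CustomResource.
	klog.V(2).Infof("Terminating all watchers from cacher %v", groupResource)
}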
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Kubernetes-commit: 397533a4c2df9639ff4422c907d06fae195a1835
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Kubernetes-commit: a9593d634c6a053848413e600dadbf974627515f
expiredBookmarkWatchers allows us to schedule the next bookmark event after dispatching, not before as was previously the case.
It also opens up new functionality: a watcher can decide, based on some internal state, when its next bookmark should be delivered.
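The mechanics are roughly as follows; the types and names here are
simplified stand-ins for the cacher's internals, not the actual code:

package cacherexample

import "time"

// watcher and timeBuckets are simplified stand-ins for the cacher's
// internal types.
type watcher struct{}

// nextBookmarkTime is where a watcher could consult internal state to
// decide when its next bookmark should be delivered.
func (w *watcher) nextBookmarkTime(now time.Time) time.Time {
	return now.Add(time.Second)
}

type timeBuckets struct{ buckets map[int64][]*watcher }

func newTimeBuckets() *timeBuckets {
	return &timeBuckets{buckets: map[int64][]*watcher{}}
}

func (b *timeBuckets) add(w *watcher) {
	t := w.nextBookmarkTime(time.Now()).Unix()
	b.buckets[t] = append(b.buckets[t], w)
}

// dispatchBookmarks shows the ordering change: expired watchers are
// re-added to the buckets only after their bookmark event has been
// dispatched, so the next deadline can take the dispatch into account.
func dispatchBookmarks(b *timeBuckets, expiredBookmarkWatchers []*watcher, deliver func(*watcher)) {
	for _, w := range expiredBookmarkWatchers {
		deliver(w) // dispatch the bookmark first...
	}
	for _, w := range expiredBookmarkWatchers {
		b.add(w) // ...then schedule the next one.
	}
}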
Kubernetes-commit: 0576f6a011cba8f0c8550fd3dd31111376c9dcd0
Using a Pod type in a GetList() call in a test
can panic at worst and error out at best. Here,
neither happened, because the error being tested
for (the cacher being stopped) is returned before
the list pointer is enforced.
This commit changes the above to use PodList.
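Not the test itself, just a sketch of why the argument type matters,
using the example API group that the storage tests rely on:

package cacherexample

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	example "k8s.io/apiserver/pkg/apis/example"
)

func listPointerDemo() {
	// A list type exposes an Items slice that the storage layer can fill.
	if _, err := meta.GetItemsPtr(&example.PodList{}); err != nil {
		fmt.Println("unexpected:", err)
	}
	// A single Pod does not, so the list-pointer check fails as soon as
	// it actually runs - it only looked harmless before because the
	// "cacher stopped" error was returned first.
	if _, err := meta.GetItemsPtr(&example.Pod{}); err != nil {
		fmt.Println("expected failure:", err)
	}
}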
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 487761f4e2543114db158f0d59e598dedc481882
The means by which we extract and parse the version of an API object is
not specific to etcd3. In order to allow for a generic suite of tests
against any storage.Interface implementation, we need this logic to live
outside of the etcd3 package, or import cycles will exist.
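The logic in question is essentially "read the resourceVersion from the
object's metadata and parse it as a uint64", which has no etcd3
dependency at all; a minimal sketch (helper name illustrative):

package storageversion

import (
	"strconv"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime"
)

// objectResourceVersion extracts and parses the resource version of any
// API object. Nothing here depends on etcd3, so it can live in a shared
// package and be reused by a generic storage.Interface test suite.
func objectResourceVersion(obj runtime.Object) (uint64, error) {
	accessor, err := meta.Accessor(obj)
	if err != nil {
		return 0, err
	}
	version := accessor.GetResourceVersion()
	if len(version) == 0 {
		return 0, nil
	}
	return strconv.ParseUint(version, 10, 64)
}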
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
Kubernetes-commit: 3939f3003e9605c06f65e64d1fc6f94b294f9d97
The cacher blocks requests until it is ready; however, the
ready variable doesn't differentiate whether the cacher was stopped.
The cacher uses a condition variable based on sync.Cond to
handle readiness, but this did not distinguish between
not yet being ready (still waiting to become ready) and
having been stopped.
Add a new condition to the condition variable to handle the
stop case, and return an error to signal to the waiting goroutines
that they should stop waiting and bail out.
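A condensed sketch of the pattern; the real ready type in the cacher
differs in detail:

package cacherexample

import (
	"fmt"
	"sync"
)

type readyState int

const (
	pending readyState = iota
	ready
	stopped
)

// readiness blocks callers until the cacher becomes ready, and unblocks
// them with an error once it has been stopped instead of leaving them
// waiting forever.
type readiness struct {
	c     *sync.Cond
	state readyState
}

func newReadiness() *readiness {
	return &readiness{c: sync.NewCond(&sync.Mutex{}), state: pending}
}

func (r *readiness) wait() error {
	r.c.L.Lock()
	defer r.c.L.Unlock()
	for r.state == pending {
		r.c.Wait()
	}
	if r.state == stopped {
		return fmt.Errorf("cacher is stopped")
	}
	return nil
}

func (r *readiness) set(state readyState) {
	r.c.L.Lock()
	defer r.c.L.Unlock()
	r.state = state
	r.c.Broadcast()
}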
Kubernetes-commit: 2cb3a56e83ae33464edb174b1b6373ba50600759
* Remove linter warnings.
* Cancel contexts to avoid leaks.
* Rename a few XXXThreadUnsafe to XXXLocked to
maintain consistency.
* A few are still called XXXThreadUnsafe, mainly
because they are safe in the sense that only one
goroutine will ever access them - they are not
actually called under a lock.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: c3081b48759db1f05a446f2acca7e05c4511ce2e
- Modify GetAllEventsSinceThreadUnsafe to return a watchCacheInterval
- Modify Watch() to compute a watchCacheInterval rather than a slice
of all "initEvents" and pass this interval to process()
- Use interval::Next() to obtain events to process rather than obtaining
them all at once (see the sketch after this list)
- Modify tests accordingly to use interval
- On invalidation, stop processing and stop the watch.
- Make indexValidator injectable for testing
- Add unit test for verifying the behaviour of stopping the watch.
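A rough sketch of the interval-driven processing loop described above;
the types are simplified stand-ins for the cacher's internals:

package cacherexample

type watchCacheEvent struct{ resourceVersion uint64 }

// watchCacheInterval here is reduced to the one method the loop needs;
// the real type carries buffers, locks and an indexValidator.
type watchCacheInterval interface {
	// Next returns the next event, nil when the interval is exhausted,
	// and an error when the interval has been invalidated.
	Next() (*watchCacheEvent, error)
}

// process drains the interval one event at a time instead of receiving a
// pre-copied slice of all initEvents.
func process(interval watchCacheInterval, send func(*watchCacheEvent)) error {
	for {
		event, err := interval.Next()
		if err != nil {
			// Invalidation: stop processing and stop the watch.
			return err
		}
		if event == nil {
			// Interval exhausted; switch to processing incoming events.
			return nil
		}
		send(event)
	}
}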
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 7f2aa7ad3a61a52d0a780f904b291d063399c28a
watchCacheInterval serves as an abstraction over a source
of watchCacheEvents. It maintains a window of events over
an underlying source and these events can be served using
the exposed Next() API. The main intent for doing things
this way is to introduce an upper bound of memory usage
for starting a watch and reduce the maximum possible time
interval for which the lock would be held while events are
copied over.
The source of events for the interval is typically either
the watchCache circular buffer (if the events being retrieved
are for resource versions > 0) or the underlying
implementation of Store (if the resource version is 0).
Furthermore, an interval can be either valid or invalid at
any given point of time. The notion of validity makes sense
only in cases where the window of events in the underlying
source can change over time - i.e. for watchCache circular
buffer. When the circular buffer is full and an event needs
to be popped off, watchCache::startIndex is incremented. In
this case, an interval tracking that popped event is valid
only if it has already been copied to its internal buffer.
However, for efficiency we perform that copy lazily, and we
mark an interval as invalid iff we need to copy events from
the watchCache but some of the events we need have already
been popped off. This translates to the following validity
condition: watchCacheInterval::startIndex >= watchCache::startIndex.
When this condition becomes false, the interval is no longer
valid and should not be used to retrieve and serve elements
from the underlying source.
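A condensed sketch of the circular-buffer-backed case; the chunked
copying and the locking discipline of the real implementation are
elided, and the names are illustrative:

package cacherexample

import (
	"fmt"
	"sync"
)

type watchCacheEvent struct{ resourceVersion uint64 }

// watchCacheInterval is a window [startIndex, endIndex) over a source of
// events. For the buffer-backed case, indexValidator reports whether an
// index is still covered by the buffer, i.e. whether
// watchCache::startIndex has not yet advanced past it.
type watchCacheInterval struct {
	sync.Mutex
	startIndex     int
	endIndex       int
	indexValidator func(index int) bool
	eventAt        func(index int) *watchCacheEvent
}

// Next serves one event at a time. The interval is invalid iff the event
// it is about to read has already been popped off the buffer, which is
// exactly when indexValidator(startIndex) turns false.
func (wci *watchCacheInterval) Next() (*watchCacheEvent, error) {
	wci.Lock()
	defer wci.Unlock()
	if wci.startIndex >= wci.endIndex {
		return nil, nil // window exhausted
	}
	if !wci.indexValidator(wci.startIndex) {
		return nil, fmt.Errorf("cache incomplete: interval invalidated")
	}
	event := wci.eventAt(wci.startIndex)
	wci.startIndex++
	return event, nil
}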
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: 347607e97139959f33024a691d0561b1479aeeef
Split process() function into processEvents() and process().
This is done in anticipation of GetAllEventsSinceThreadUnsafe()
returning an entity from which events can be constructed, rather
than the events themselves.
This commit also moves updating the resource version for
initEvents from Watch() to the processEvents() func.
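Roughly, with the bodies simplified and delivery elided (this is a
sketch of the split, not the actual cacheWatcher code):

package cacherexample

type watchCacheEvent struct{ resourceVersion uint64 }

type cacheWatcher struct{ resourceVersion uint64 }

// processEvents iterates over the initial events and tracks the highest
// resource version seen, which previously happened in Watch().
func (c *cacheWatcher) processEvents(initEvents []*watchCacheEvent, resourceVersion uint64) {
	for _, event := range initEvents {
		c.process(event)
		if event.resourceVersion > resourceVersion {
			resourceVersion = event.resourceVersion
		}
	}
	c.resourceVersion = resourceVersion
}

// process handles a single event (conversion and delivery elided).
func (c *cacheWatcher) process(event *watchCacheEvent) {
	_ = event
}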
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes-commit: aab7cd3d8a66f425022ca5b2a2bd0d3019efe526
Instead of using a goroutine to periodically refresh the budget,
compute the available budget from the time elapsed since it was
last accessed.
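A minimal sketch of the lazy-refresh idea; the constants and field
names are illustrative, not the cacher's actual timeBudget:

package cacherexample

import (
	"sync"
	"time"
)

const (
	refreshPerSecond = 50 * time.Millisecond
	maxBudget        = 100 * time.Millisecond
)

// timeBudget accrues budget lazily: instead of a background goroutine
// topping it up on a timer, the accrued amount is computed from how much
// time has passed since the budget was last taken.
type timeBudget struct {
	sync.Mutex
	budget      time.Duration
	lastRefresh time.Time
	now         func() time.Time
}

func newTimeBudget() *timeBudget {
	return &timeBudget{budget: maxBudget, lastRefresh: time.Now(), now: time.Now}
}

func (t *timeBudget) takeAvailable() time.Duration {
	t.Lock()
	defer t.Unlock()
	now := t.now()
	// Refresh based on the time elapsed since the last access.
	t.budget += time.Duration(now.Sub(t.lastRefresh).Seconds() * float64(refreshPerSecond))
	if t.budget > maxBudget {
		t.budget = maxBudget
	}
	t.lastRefresh = now
	available := t.budget
	t.budget = 0
	return available
}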
Kubernetes-commit: dd2c38306000eeb1720afc8346165a6caab09259