This is modelled after some of the semantics available in controller-runtime's `Result` type, which allows reconcilers to indicate a desire to reprocess a key either immediately or after a delay.
I worked around our lack of this when I reworked Tekton's timeout-handling logic by exposing a "snooze" function to the Reconciler that wrapped `EnqueueAfter`. This feels like a cleaner API for folks to use in general, and it is consistent with how our `NewPermanentError` and `NewSkipKey` functions influence the queuing behavior of our reconcilers via wrapped errors.
You can see some discussion of this here: https://knative.slack.com/archives/CA4DNJ9A4/p1627524921161100
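As a sketch of what this enables, a reconciler might return a wrapped error to request a delayed requeue. The `NewRequeueAfter` helper name below is an assumption by analogy with `NewPermanentError`, and `IsDone` is a hypothetical status check:
```golang
// Sketch only: controller.NewRequeueAfter is assumed by analogy with
// controller.NewPermanentError; o.IsDone() is a hypothetical status check.
func (r *Reconciler) ReconcileKind(ctx context.Context, o *v1alpha1.MyType) reconciler.Event {
	if !o.IsDone() {
		// Ask the workqueue to reprocess this key after a delay, instead of
		// wiring a custom "snooze" function through the Reconciler.
		return controller.NewRequeueAfter(10 * time.Second)
	}
	return nil
}
```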
Most resources stamped out by knative controllers have OwnerReferences synthesized via `kmeta.NewOwnerRef`, which requires the parent resource to implement `kmeta.OwnerRefable` for accessing the `GroupVersionKind`.
However, where we set up informer watches on those same child resources, we have essentially relied on direct synthesis of the `Group[Version]Kind`, when we could instead provide an empty instance of the controller resource and leverage `GetGroupVersionKind` to provide the GVK used for filtering.
So where before folks would write:
```golang
FilterFunc: controller.FilterControllerGK(v1alpha1.WithKind("MyType"))
```
They may now write:
```golang
FilterFunc: controller.FilterController(&v1alpha1.MyType{})
```
The latter is preferable partly because it is more strongly typed, and partly because it is shorter.
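This relies on the resource already implementing `kmeta.OwnerRefable`, which most controller resources do for owner-reference synthesis; a typical implementation is simply:
```golang
// GetGroupVersionKind satisfies kmeta.OwnerRefable; FilterController can call
// this on an empty instance to derive the GroupKind used for filtering.
func (t *MyType) GetGroupVersionKind() schema.GroupVersionKind {
	return SchemeGroupVersion.WithKind("MyType")
}
```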
This change introduces a new `controller.NewSkipKey` method to designate certain reconciliations as "skipped".
The primary motivation for this is to squelch useless logging on non-leader replicas, which currently report success with trivial latency.
I have plumbed this through the existing reconcilers and the code-gen, so most things downstream should get this for free. In places where a key is observed, I do not mark the reconcile as skipped, since the reconciler did some processing for which the awareness of side effects and the reported latency may be interesting.
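In a hand-written reconciler, usage might look like the following sketch; the `IsLeaderFor` check mirrors what the generated code does, but treat the exact names as assumptions:
```golang
// Sketch: mark reconciles on non-leader replicas as skipped so that they are
// not logged as (trivially fast) successes.
func (r *Reconciler) Reconcile(ctx context.Context, key string) error {
	ns, name, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		return err
	}
	if !r.IsLeaderFor(types.NamespacedName{Namespace: ns, Name: name}) {
		return controller.NewSkipKey(key)
	}
	// ... perform the actual reconciliation ...
	return nil
}
```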
* Enable golint and exclude some other generated or additional dirs
Also remove the `test` ignore, since it's covered by the path ignore rule.
* TestEnqueue: remove unnecessary calls to Sleep
The rate limiter applies only when multiple items are put onto the
workqueue, which is not the case in those tests.
Execution: ~7.6s -> ~2.1s
* TestEnqueueAfter: remove assumptions on execution times
Instead of sleeping for a conservative amount of time, keep watching the
state of the workqueue in a goroutine, and notify the test logic as soon
as the item is observed.
Execution: ~1s -> ~0.05s
* TestEnqueueKeyAfter: remove assumptions on execution times
Instead of sleeping for a conservative amount of time, keep watching the
state of the workqueue in a goroutine, and notify the test logic as soon
as the item is observed.
Execution: ~1s -> ~0.05s
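The pattern used in the two commits above looks roughly like this sketch; `impl.WorkQueue()` and the surrounding test wiring are illustrative assumptions:
```golang
// Sketch: watch the queue from a goroutine instead of sleeping.
done := make(chan struct{})
go func() {
	// Returns once the queue holds the expected item (or stopCh closes).
	wait.PollUntil(5*time.Millisecond, func() (bool, error) {
		return impl.WorkQueue().Len() == 1, nil
	}, stopCh)
	close(done)
}()

select {
case <-done:
	// Item observed; continue with the assertions.
case <-time.After(2 * time.Second):
	t.Fatal("timed out waiting for the item to be enqueued")
}
```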
* TestStartAndShutdownWithErroringWork: remove sleep
Instead of sleeping for a conservative amount of time, keep watching the
number of requeues in a goroutine, and notify the test logic as soon as
the expected threshold is reached.
Logs, for an idea of timings
----------------------------
Started workers
Processing from queue bar (depth: 0)
Reconcile error {"error": "I always error"}
Requeuing key bar due to non-permanent error (depth: 0)
Reconcile failed. Time taken: 104µs {"knative.dev/key": "bar"}
Processing from queue bar (depth: 0)
Reconcile error {"error": "I always error"}
Requeuing key bar due to non-permanent error (depth: 0)
Reconcile failed. Time taken: 48.2µs {"knative.dev/key": "bar"}
Execution: ~1s -> ~0.01s
* TestStart*/TestRun*: reduce sleep time
There is no need to sleep for that long. If an error was returned, it
would activate the second select case immediately.
Execution: ~1s -> ~0.05s
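The shape of the select in question is roughly the following (a sketch; the channel names are illustrative):
```golang
// Sketch: the error case fires immediately on failure, so the timer only
// bounds the happy path and can therefore be short.
select {
case <-time.After(50 * time.Millisecond):
	// Still running after a short grace period: success.
case err := <-errCh:
	t.Fatal("unexpected error:", err)
}
```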
* TestImplGlobalResync: reduce sleep time
We know the fast lane is empty in this test, so we can safely assume
immediate enqueuing of all items on the slow lane.
Logs, for an idea of timings
----------------------------
Started workers
Processing from queue foo/bar (depth: 0)
Reconcile succeeded. Time taken: 11.5µs {"knative.dev/key": "foo/bar"}
Processing from queue bar/foo (depth: 1)
Processing from queue fizz/buzz (depth: 0)
Reconcile succeeded. Time taken: 9.7µs {"knative.dev/key": "fizz/buzz"}
Reconcile succeeded. Time taken: 115µs {"knative.dev/key": "bar/foo"}
Shutting down workers
Execution: ~4s -> ~0.05s
* review: Replace for/select with PollUntil
* review: Remove redundant duration multiplier
* review: Replace defer with t.Cleanup
The test assumes the goroutines would be scheduled in a particular way, but they aren't.
What we really care to check is that we thread through the proper RateLimiter and that it works.
We don't need to check that the underlying queue implementation works; that's done in its own tests.
So just verify these two things.
* Allow creating a controller with a custom RateLimiter, which was possible before only via direct field modification.
Not switching to a builder pattern, mostly for speed of resolution.
Happy to consider alternatives.
* Add tests for new functionality.
Specifically, these test that the Wait() function is notified about
the item, and that the RateLimiter is passed through to the queue.
* Add Options. Gophers love Options.
* Even moar controller GenericOptions.
* Attempt to appease lint, don't create struct for typecheck.
* GenericOptions -> ControllerOptions
* Public struct fields.
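Taken together, the commits above land on a public options struct; usage might look like this sketch, where the constructor name and field set are assumptions inferred from the commit messages:
```golang
// Sketch: construct an Impl with a custom RateLimiter via the options struct.
impl := controller.NewImplFull(r, controller.ControllerOptions{
	WorkQueueName: "mytypes",
	Logger:        logger,
	RateLimiter:   workqueue.NewItemExponentialFailureRateLimiter(time.Millisecond, 30*time.Second),
})
```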
* Use a two-lane queue instead of the regular workqueue
- We need to poll for queue length in the webhook tests, because propagation is now asynchronous and a check at the wrong time would be incorrect.
- Otherwise it is just a drop-in replacement.
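To illustrate the idea (a sketch, not the actual implementation): each lane is an ordinary rate-limited workqueue, and both are pumped into a shared channel so that bulk work such as global resyncs on the slow lane cannot starve regular items on the fast lane.
```golang
// Sketch of a two-lane queue. A real implementation must also route
// Done/Forget calls back to the lane an item came from.
type twoLaneQueue struct {
	fast workqueue.RateLimitingInterface
	slow workqueue.RateLimitingInterface
	out  chan interface{}
}

func newTwoLaneQueue(rl workqueue.RateLimiter) *twoLaneQueue {
	q := &twoLaneQueue{
		fast: workqueue.NewRateLimitingQueue(rl),
		slow: workqueue.NewRateLimitingQueue(rl),
		out:  make(chan interface{}),
	}
	pump := func(wq workqueue.RateLimitingInterface) {
		for {
			item, shutdown := wq.Get()
			if shutdown {
				return
			}
			q.out <- item
			wq.Done(item)
		}
	}
	go pump(q.fast)
	go pump(q.slow)
	return q
}
```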
* update test
* tests hardened
* Remove key in tags to reduce metrics count
Issue: https://github.com/knative/serving/issues/8609
Signed-off-by: Lance Liu <xuliuxl@cn.ibm.com>
* remove tag key for OpenCensus
Signed-off-by: Lance Liu <xuliuxl@cn.ibm.com>
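For illustration, recording a measurement without the per-key tag keeps the metric cardinality bounded; the tag and measure names below are illustrative, not the actual ones:
```golang
// Sketch: keep only low-cardinality tags when recording.
ctx, err := tag.New(context.Background(),
	tag.Insert(reconcilerTagKey, "my-reconciler"), // bounded set of values
	// tag.Insert(keyTagKey, key), // dropped: one tag value per object key
)
if err != nil {
	return err
}
stats.Record(ctx, reconcileLatency.M(latency.Milliseconds()))
```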
* Enable HA by default.
This consolidates the core of sharedmain around the new leaderelection logic, which will now be **enabled by default**.
This can now be disabled with `--disable-ha` or by passing `sharedmain.WithHADisabled(ctx)` to `sharedmain.MainWithConfig`.
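For example, a process that needs to opt out programmatically might do the following (a sketch; `cfg` and `ctors` stand in for the usual `MainWithConfig` arguments):
```golang
// Sketch: disable the new HA default for this process.
ctx := sharedmain.WithHADisabled(signals.NewContext())
sharedmain.MainWithConfig(ctx, "my-controller", cfg, ctors...)
```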
* Address vagababov's review comments, fix build failure
* Open an issue for enabledComponents removal.
* Move the configmap watcher startup.
This race was uncovered by the chaos duck on knative/serving! When we have enabled a feature flag, e.g. multi-container, and the webhook pods are restarted, there is a brief window where the webhook is up and healthy before the configmaps have synchronized and the new webhook pod realizes the feature is enabled.
* Drop the import alias
* Add an option to skip automated status updates in a reconciler.
This option is necessary to be able to create reconcilers like Serving's labeler, which purely adds labels to resources. If that labeling fails, the new automated observed-generation handling changes the status, and that change currently gets written to the API, which is not desired.
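Assuming the option surfaces as a context decorator (the `WithSkipStatusUpdates` name and the generated `mytypereconciler.NewImpl` constructor are assumptions), a label-only reconciler could opt out in its constructor:
```golang
// Sketch: opt a label-only reconciler out of automated status updates.
func NewController(ctx context.Context, cmw configmap.Watcher) *controller.Impl {
	ctx = reconciler.WithSkipStatusUpdates(ctx)
	return mytypereconciler.NewImpl(ctx, &Reconciler{})
}
```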
* Flip the bool.