This is mostly for symmetry: many folks who validate PodSpecable types also want to validate `Pod`. While `Pod` isn't as often a duck type, the main value here is exposing mechanisms similar to #2279 for `corev1.Pod` without folks needing to define their own `corev1.Pod` clone.
Today you can't use `duckv1.WithPod` to author webhooks because it doesn't implement the `Validate` or `SetDefaults` methods, and that makes sense, since duck types are (by definition) a generic encapsulation of a number of types.
However, if we infuse `ctx` with appropriate callbacks for `Validate` and `SetDefaults`, then folks can use our duck types directly while still performing the appropriate defaulting and validation.
What I have in mind is something along these lines:
```go
return defaulting.NewAdmissionController(ctx,
	// Name of the resource webhook.
	"foo.bar.dev",

	// The path on which to serve the webhook.
	"/defaulting",

	// The resources to default.
	map[schema.GroupVersionKind]resourcesemantics.GenericCRD{
		appsv1.SchemeGroupVersion.WithKind("Deployment"):  &duckv1.WithPod{},
		appsv1.SchemeGroupVersion.WithKind("ReplicaSet"):  &duckv1.WithPod{},
		appsv1.SchemeGroupVersion.WithKind("StatefulSet"): &duckv1.WithPod{},
		appsv1.SchemeGroupVersion.WithKind("DaemonSet"):   &duckv1.WithPod{},
		batchv1.SchemeGroupVersion.WithKind("Job"):        &duckv1.WithPod{},
	},

	// A function that infuses the context passed to Validate/SetDefaults
	// with custom metadata.
	func(ctx context.Context) context.Context {
		return duckv1.WithPodSpecDefaulter(ctx, myFancyLogic)
	},

	// Whether to disallow unknown fields.
	false,
)
```
It is roughly equivalent for validation.
This patch converts `DefaultErrorRetryChecker` to the `ErrorRetryChecker`
type before adding it to the interface slice.
Without the conversion, the dynamic type of `DefaultErrorRetryChecker` is still the raw `func(error) (bool, error)`.
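A minimal, self-contained sketch of why the conversion matters; the `ErrorRetryChecker` definition here is illustrative, standing in for the real one:

```go
package main

import "fmt"

// ErrorRetryChecker stands in for the named function type from the source.
type ErrorRetryChecker func(err error) (bool, error)

// DefaultErrorRetryChecker stands in for the default implementation.
func DefaultErrorRetryChecker(err error) (bool, error) { return false, err }

func main() {
	// Stored without conversion, the element's dynamic type is the raw
	// func(error) (bool, error), so an assertion to the named type fails.
	raw := []interface{}{DefaultErrorRetryChecker}
	_, ok := raw[0].(ErrorRetryChecker)
	fmt.Println(ok) // false

	// Converting first gives the element the named type, so type
	// switches and assertions on ErrorRetryChecker now match.
	converted := []interface{}{ErrorRetryChecker(DefaultErrorRetryChecker)}
	_, ok = converted[0].(ErrorRetryChecker)
	fmt.Println(ok) // true
}
```

Go type assertions require the dynamic type to be identical to the asserted type, so the named type and the equivalent unnamed function type do not match.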
* Avoid double-resyncs without leader election.
tl;dr: Without leader election enabled, we have been suffering from double resyncs; this fixes that.
Essentially, when the informers are started prior to starting the controllers, they fill the controller's workqueue with ~the world. Once we start the controllers, the leader election code calls `Promote` on a `Bucket` so that the controller starts processing that bucket's keys as their leader, and then requeues every key that falls into that `Bucket`.
When leader election is disabled, we use the `UniversalBucket`, which always owns everything. This means that the first pass through the workqueue was already hitting every key, so re-enqueuing every key results in superfluous processing of those keys.
This change makes the `LeaderAwareFuncs` elide the call to `PromoteFunc` when the `enq` function passed is `nil`, and makes the `unopposedElector` have a `nil` `enq` field when we explicitly pass it the `UniversalBucket` because leader election is disabled.
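A rough sketch of the elision, with heavily simplified stand-ins for `Bucket` and `LeaderAwareFuncs` (the real types in `knative.dev/pkg` carry much more state):

```go
package main

import "fmt"

// Bucket is a simplified stand-in for the leader-election bucket type.
type Bucket struct{ name string }

// LeaderAwareFuncs is a simplified stand-in holding the promotion callback.
type LeaderAwareFuncs struct {
	PromoteFunc func(b Bucket, enq func(Bucket, string)) error
}

// Promote elides the callback when enq is nil, which is how a
// universally-owned bucket avoids the superfluous second enqueue pass.
func (laf LeaderAwareFuncs) Promote(b Bucket, enq func(Bucket, string)) error {
	if promote := laf.PromoteFunc; promote != nil && enq != nil {
		return promote(b, enq)
	}
	return nil
}

func main() {
	calls := 0
	laf := LeaderAwareFuncs{
		PromoteFunc: func(b Bucket, enq func(Bucket, string)) error {
			calls++
			return nil
		},
	}
	universal := Bucket{name: "universal"}

	// Leader election disabled: the elector passes a nil enq, so
	// Promote is a no-op and keys are not re-enqueued.
	laf.Promote(universal, nil)
	fmt.Println(calls) // 0

	// With a real enqueue function, promotion proceeds as before.
	laf.Promote(universal, func(Bucket, string) {})
	fmt.Println(calls) // 1
}
```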
* Add more unit test coverage
* Bump apimachinery
* Update code-generator
* Update API and client, some progress
* Hack the generator to work at all
* Hack the PodDisruptionBudget extension to fulfill the interfaces
* Bump apiextensions as well
* Fix conflict
* Better condition
* Roll back unnecessary codegen change
* Fix PodDisruptionBudget extensions
* Panic on not-yet-implemented like others
With this change, I was able to pass some downstream e2e tests that check for event emission with this path enabled, which previously failed.
The relevant hand-rolled bits in Kubernetes client-go are here: 35bf219cc6/kubernetes/typed/core/v1/event_expansion.go (L49)
I opted to avoid copy/pasting the generated code, and instead used a trick that let me call into the generated code.
Going through this exercise also (likely) uncovered a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/104495
* Introduce `NewContext`, deprecate `NewImplFull`.
Our generated `NewImpl` methods have long taken `context.Context`, but despite many iterations the forms we expose from our `controller` package never have. This change contains several elements:
1. Expose a new `NewContext` method that takes `context.Context` in addition to the current `NewImplFull` signature.
2. Call `NewContext` instead of the deprecated `NewImpl` from our generated controller code.
3. Call `NewContext` from all our webhook reconcilers.
* Add a Tracker to controller.Impl to cut down on downstream boilerplate.
I noticed in a few places downstream that reconcilers were creating both trackers and Addressable resolvers, which seems superfluous. As part of examining the way we use the tracker, I'm experimenting with changing this to just take a tracker.
* This commit contains the actual changes to support dynamic client injection.
* Incorporate n3wscott review nits
* This includes the code-generation for the dynamic client-based injection.
This also includes a number of manually created files of the form `*_expansion.go`, which are needed to satisfy some of the Kubernetes manual type "expansions". At present, these are all simply `panic("NYI")`, but may become more than that in the future.
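A sketch of what one of these hand-written expansion stubs looks like, with the Kubernetes API types simplified to stand-ins:

```go
package main

import "fmt"

// Event is a stand-in for corev1.Event; the real expansion methods
// operate on the Kubernetes API types.
type Event struct{ Reason string }

// fakeEvents sketches one hand-written expansion: the generator cannot
// produce these methods, so they are stubbed to panic until implemented.
type fakeEvents struct{}

func (f *fakeEvents) CreateWithEventNamespace(event *Event) (*Event, error) {
	panic("NYI: CreateWithEventNamespace")
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	(&fakeEvents{}).CreateWithEventNamespace(&Event{Reason: "Scheduled"})
}
```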
* Use consistent case for "Deprecated" comments
Not the most important thing ever, but the canonical string to use for
Deprecated warnings is case sensitive, and also it's nice to be
consistent.
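For reference, the canonical form that godoc and linters such as staticcheck recognize is a capitalized `Deprecated:` prefix on the doc comment:

```go
package main

import "fmt"

// Deprecated: Use Greet instead. Tooling (godoc, staticcheck's SA1019)
// only treats this as a deprecation marker when the word is capitalized
// and followed by a colon, exactly as written here.
func OldGreet() string { return Greet() }

// Greet is the non-deprecated replacement.
func Greet() string { return "hello" }

func main() {
	fmt.Println(Greet()) // hello
}
```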
* Add nolint comment
I noticed doing some tinkering that these were running on every iteration, but the inputs don't change. This sinks the calls to below the loop, and reduces things to a single call for each.
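The pattern, sketched generically rather than at the actual call sites:

```go
package main

import "fmt"

var calls int

// recompute stands in for the invariant work; its inputs do not change
// while the loop runs, so a single call after the loop suffices.
func recompute() { calls++ }

// process stands in for the per-item work that stays inside the loop.
func process(s string) {}

func main() {
	items := []string{"a", "b", "c"}

	// Before (sketch): recompute() ran on every iteration.
	// for _, it := range items { process(it); recompute() }

	// After: the call is sunk to below the loop and runs once.
	for _, it := range items {
		process(it)
	}
	recompute()

	fmt.Println(calls) // 1
}
```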
This is modelled after some of the semantics available in controller-runtime's `Result` type, which allows reconcilers to indicate a desire to reprocess a key either immediately or after a delay.
I worked around our lack of this when I reworked Tekton's timeout handling logic by exposing a "snooze" function to the Reconciler that wrapped `EnqueueAfter`, but this feels like a cleaner API for folks to use in general, and is consistent with our `NewPermanentError` and `NewSkipKey` functions in terms of influencing the queuing behaviors in our reconcilers via wrapped errors.
You can see some discussion of this here: https://knative.slack.com/archives/CA4DNJ9A4/p1627524921161100