* fix(service-mirror): don't restart cluster watch upon Link status updates
Every time there's an update to a Link resource, the service mirror cleans up any existing worker and restarts the cluster watch. We recently introduced a status stanza in Link that gets updated upon every mirroring of a service, and those status writes were unnecessarily triggering cluster watcher restarts. With a sufficiently high number of services being mirrored at once, this caused severe contention in the controller, slowing mirroring to a near halt.
This change fixes the issue by restarting the cluster watch only when the Link Spec changes (a sketch of this check follows below).
* Lower log level
* Extract the resource event handler functions into a separate file, and add a unit test making sure the add/update/delete functions are called, and in particular that the update function is _not_ called when only a Link's status is updated.
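A minimal Go sketch of the spec-only comparison, using stand-in types since the real Link CRD types and handler names differ:

```go
package main

import (
	"fmt"
	"reflect"
)

// Stand-ins for the real Link CRD types; all field names here are
// illustrative assumptions.
type LinkSpec struct{ TargetClusterName string }
type LinkStatus struct{ MirroredServices []string }
type Link struct {
	Spec   LinkSpec
	Status LinkStatus
}

// onLinkUpdate restarts the cluster watch only when the Spec changed;
// the status stanza rewritten after every mirrored service is ignored.
func onLinkUpdate(oldObj, newObj interface{}) {
	oldLink, ok1 := oldObj.(*Link)
	newLink, ok2 := newObj.(*Link)
	if !ok1 || !ok2 {
		return
	}
	if reflect.DeepEqual(oldLink.Spec, newLink.Spec) {
		return // status-only update: no restart needed
	}
	restartClusterWatch(newLink)
}

func restartClusterWatch(l *Link) {
	fmt.Printf("restarting cluster watch for %s\n", l.Spec.TargetClusterName)
}

func main() {
	old := &Link{Spec: LinkSpec{TargetClusterName: "east"}}
	statusOnly := &Link{Spec: LinkSpec{TargetClusterName: "east"}, Status: LinkStatus{MirroredServices: []string{"svc-a"}}}
	onLinkUpdate(old, statusOnly) // status-only change: no restart
	specChange := &Link{Spec: LinkSpec{TargetClusterName: "west"}}
	onLinkUpdate(old, specChange) // spec change: restart
}
```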
Follow-up to #12844
This new field defines the default policy for Servers, i.e. if a request doesn't match the policy associated with a Server, then this policy applies. The values are the same as for `proxy.defaultInboundPolicy` and the `config.linkerd.io/default-inbound-policy` annotation (all-unauthenticated, all-authenticated, cluster-authenticated, cluster-unauthenticated, deny), plus a new value "audit". The default is "deny", so behavior remains backwards-compatible.
This field is also exposed as an additional printer column.
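For illustration, a hedged Go sketch of the value set; the type name, constant names, and helper are assumptions, not the actual linkerd2 identifiers:

```go
package policy

// AccessPolicy is an assumed name for the new field's value type.
type AccessPolicy string

const (
	AllUnauthenticated     AccessPolicy = "all-unauthenticated"
	AllAuthenticated       AccessPolicy = "all-authenticated"
	ClusterAuthenticated   AccessPolicy = "cluster-authenticated"
	ClusterUnauthenticated AccessPolicy = "cluster-unauthenticated"
	Deny                   AccessPolicy = "deny"
	Audit                  AccessPolicy = "audit" // newly added value
)

// defaultPolicy applies when a Server doesn't set the field explicitly.
func defaultPolicy(p AccessPolicy) AccessPolicy {
	if p == "" {
		return Deny // preserves pre-existing behavior
	}
	return p
}
```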
We introduced an ExternalWorkload CRD for mesh expansion. This change
follows up by adding bindings for Rust and Go code.
For Go code:
* We add a new schema and ExternalWorkload types
* We also update the code-gen script to generate informers
* We add a new informer type to our abstractions built on top of
client-go, including a function to check if a client has access to the
resource (a sketch of such a check follows this list).
For Rust code:
* We add ExternalWorkload bindings to the policy controller.
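A hedged sketch of the access check mentioned above, using a Kubernetes SelfSubjectAccessReview; the group and resource names are assumptions rather than the exact released identifiers:

```go
package k8s

import (
	"context"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canWatchExternalWorkloads asks the API server whether this client is
// allowed to watch ExternalWorkload resources before starting an informer.
func canWatchExternalWorkloads(ctx context.Context, client kubernetes.Interface) (bool, error) {
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "workload.linkerd.io", // assumed CRD group
				Resource: "externalworkloads",
				Verb:     "watch",
			},
		},
	}
	result, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return result.Status.Allowed, nil
}
```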
---------
Signed-off-by: Matei David <matei@buoyant.io>
* Use metadata API in the proxy and tap injectors
Part of #9485
This adds a new `MetadataAPI`, similar to the current `k8s.API` hosting informers, but backed by k8s' `metadatainformer` shared informers, which retrieve only the objects' metadata, resulting in lower memory consumption by its clients. Currently this is only implemented for the proxy and tap injectors. Usage by the destination controller will be implemented as a follow-up.
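As a rough illustration (not the actual linkerd2 wiring), this is how a metadata-only shared informer is built with client-go's `metadatainformer` package; the helper name and resync interval here are arbitrary:

```go
package k8s

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// newDeploymentMetadataInformer builds an informer that caches only
// TypeMeta and ObjectMeta for Deployments, not their full specs.
func newDeploymentMetadataInformer(cfg *rest.Config) (cache.SharedIndexInformer, error) {
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	inf := factory.ForResource(gvr).Informer()
	// Objects delivered by this informer are *metav1.PartialObjectMetadata.
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if m, ok := obj.(*metav1.PartialObjectMetadata); ok {
				_ = m.GetLabels() // only metadata (name, labels, ownerRefs) is available
			}
		},
	})
	return inf, nil
}
```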
## Existing API enhancements
Shared objects and logic required by both API and MetadataAPI have been moved to the new `k8s.go`, `api_resource.go` and `prometheus.go` files. That includes the `isValidRSParent()` function, whose argument is now more generic.
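A hedged sketch of what "more generic" means here: accepting `metav1.Object` lets the same check serve both full objects and the `PartialObjectMetadata` values produced by the metadata informers. The body below is illustrative, not the actual check:

```go
package k8s

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// isValidRSParent reports whether a ReplicaSet-like object has an owner
// kind we recognize. Taking metav1.Object instead of a concrete
// *appsv1.ReplicaSet means PartialObjectMetadata works here too.
func isValidRSParent(obj metav1.Object) bool {
	for _, ref := range obj.GetOwnerReferences() {
		if ref.Kind == "Deployment" { // illustrative: the real list of kinds may differ
			return true
		}
	}
	return false
}
```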
## Unit tests
`/controller/k8s/api_test.go` now also instantiates a MetadataAPI, used in the augmented `TestGetObjects()` and `TestGetOwnerKindAndName()` tests. The `resources` struct was introduced to capture the common fields among tests and simplify `newMockAPI()`'s signature.
## Other changes
The injector no longer watches Pods. It only needs to watch the workload resources that own Pods (plus Namespaces), so watching Pods directly is not required.
## Testing memory consumption
Install linkerd, inject emojivoto, and check the injector's memory consumption with `kubectl -n linkerd top pod linkerd-proxy-injector-xxx`. It starts at about 16Mi. Then ramp up emojivoto's `voting` deployment to 2000 replicas. After 5 minutes, memory stabilizes around 32Mi on the current branch; on the latest edge, it stabilizes around 110Mi.