This matches the logic we have for the Authorization header as well
as the impersonation headers.
Signed-off-by: Monis Khan <mok@microsoft.com>
Kubernetes-commit: e9866d2794675aa8dc82ba2637ae45f9f3a27dff
Note that this fixes a bug in the existing `toBytes` implementation
which does not correctly set the capacity on the returned slice.
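As a minimal sketch of what a correct version looks like (the real helper and its callers may differ), the returned slice should have its capacity pinned to its length so a later append by the caller cannot write into shared backing memory:

    // Illustrative toBytes: copy into a fresh slice whose capacity equals
    // its length, so append on the result always allocates rather than
    // overwriting memory shared with another slice.
    func toBytes(s string) []byte {
        b := make([]byte, len(s)) // len == cap == len(s)
        copy(b, s)
        return b
    }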
Signed-off-by: Monis Khan <mok@microsoft.com>
Kubernetes-commit: aa80f8fb856bb2b645c90457f9b1dd75e4e57c73
This commit includes all the changes needed for APIServer. Instead of modifying the existing signatures of the methods that generate or return a stopChannel, we derive a context from the channel and pass that context to the controllers started in APIServer. This ensures we don't have to touch APIServer dependencies.
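For illustration, the adapter has roughly the following shape (the name is hypothetical, not the exact helper added here):

    import "context"

    // contextFromStopCh returns a context that is cancelled when stopCh is
    // closed, so context-aware controllers can be driven from the existing
    // stop-channel plumbing without changing any signatures.
    func contextFromStopCh(stopCh <-chan struct{}) context.Context {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            <-stopCh
            cancel()
        }()
        return ctx
    }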
Kubernetes-commit: 8b84a793b39fed2a62af0876b2eda461a68008c9
Right now, `_, ok := provider.(Notifier); !ok` can mean one of two
things:
1. The provider does not support notification because the provided
content is static.
2. The implementor of the provider hasn't gotten around to implementing
Notifier yet.
These have very different implications. We should not force consumers of
these interfaces to figure out the status of Notifier support across
sometimes numerous different implementations. Instead, we should force
implementors to implement Notifier, even if it's a no-op.
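For example, a provider backed by static content could satisfy Notifier with an explicit no-op (the interface and method names below are assumptions for illustration):

    // Listener is assumed to be the callback interface Notifier accepts.
    type Listener interface {
        Enqueue()
    }

    // Notifier is implemented by providers whose content can change.
    type Notifier interface {
        AddListener(listener Listener)
    }

    type staticProvider struct{ /* static content */ }

    // AddListener is a deliberate no-op: static content never changes, so
    // there is never anything to notify about.
    func (p *staticProvider) AddListener(listener Listener) {}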
Change-Id: Ie7a26697a9a17790bfaa58d67045663bcc71e3cb
Kubernetes-commit: 9b7d654a08d694d20226609f7075b112fb18639b
It turns out that setting a timeout on the HTTP client affects watch requests made by the delegated authentication component.
With a 10 second timeout, watch requests are re-established after exactly 10 seconds even though the default request timeout for them is ~5 minutes.
This is because when multiple timeouts are set, the stdlib applies the smallest one and the others are effectively ignored.
For more details see a937729c2c/src/net/http/client.go (L364)
Instead of setting a timeout on the HTTP client, we should use a context for cancellation.
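A rough sketch of the difference (function, URL, and durations are illustrative):

    import (
        "context"
        "io"
        "net/http"
        "time"
    )

    func checkHealth(client *http.Client, url string) error {
        // Not this: client := &http.Client{Timeout: 10 * time.Second}
        // A client-wide timeout also caps long-lived watch requests made
        // with the same client. Put the deadline on the individual
        // request's context instead.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return err
        }
        resp, err := client.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        _, err = io.Copy(io.Discard, resp.Body)
        return err
    }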
Kubernetes-commit: d690d71d27c78f2f7981b286f5b584455ff30246
Currently the webhook retry backoff parameters are hard-coded; we want
the ability to configure the backoff parameters for the webhook
retry logic.
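For context, a sketch of what the configurable parameters could look like, using k8s.io/apimachinery's wait.Backoff (the default values below are illustrative, not necessarily the ones chosen):

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // defaultWebhookRetryBackoff returns illustrative retry parameters; with
    // this change they can be overridden instead of being baked in.
    func defaultWebhookRetryBackoff() wait.Backoff {
        return wait.Backoff{
            Duration: 500 * time.Millisecond, // initial delay
            Factor:   1.5,                    // multiplier applied per retry
            Jitter:   0.2,                    // randomization factor
            Steps:    5,                      // maximum number of retries
        }
    }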
Kubernetes-commit: 53a1307f68ccf6c9ffd252eeea2b333e818c1103
(we enable metrics and pprof by default, but that doesn't mean
we should have full cluster-admin access to use those endpoints)
Change-Id: I20cf1a0c817ffe3b7fb8e5d3967f804dc063ab03
remove pprof but add read access to detailed health checks
Change-Id: I96c0997be2a538aa8c689dea25026bba638d6e7d
add base health check endpoints and remove the TODO for flowcontrol, as there is an existing ticket
Change-Id: I8a7d6debeaf91e06d8ace3cb2bd04d71ef3e68a9
drop blank line
Change-Id: I691e72e9dee3cf7276c725a12207d64db88f4651
Kubernetes-commit: f57611970c0790b325719e0279bfae334537e2de
This change removes support for basic authn in v1.19 via the
--basic-auth-file flag. This functionality was deprecated in v1.16
in response to ATR-K8S-002: Non-constant time password comparison.
Similar functionality is available via the --token-auth-file flag
for development purposes.
Signed-off-by: Monis Khan <mok@vmware.com>
Kubernetes-commit: df292749c9d063b06861d0f4f1741c37b815a2fa
Downstreams assume process restarts when counters decrement. Currently,
the "active" label is expected to decrement but the "ok" and "error"
labels are intended to be handled as counters. This is unnecessary and
hard to deal with. This change consolidates "blocking" and "in_flight"
tracking into a single gauge, which allows fetch completion to be a pure
counter.
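A rough sketch of the resulting metric shapes (metric and label names here are placeholders, shown with the upstream Prometheus client for brevity):

    import "github.com/prometheus/client_golang/prometheus"

    var (
        // A single gauge for work currently in flight: it may go up and down.
        fetchInFlight = prometheus.NewGauge(prometheus.GaugeOpts{
            Name: "token_fetch_in_flight",
            Help: "Number of token fetches currently in flight.",
        })
        // A pure counter for completions: downstreams never see it decrement,
        // so they never misread a decrement as a process restart.
        fetchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
            Name: "token_fetch_total",
            Help: "Completed token fetches by status.",
        }, []string{"status"}) // e.g. "ok" or "error"
    )

    func observeFetch(fetch func() (status string)) {
        fetchInFlight.Inc()
        defer fetchInFlight.Dec()
        status := fetch()
        fetchTotal.WithLabelValues(status).Inc()
    }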
Kubernetes-commit: dc5934f58456d95b0264665871c0c48e16ee6469
request_total is fully accumulating; fetch_total is mostly accumulating,
except for the active label.
Kubernetes-commit: a84e883e4b39f6a040d479b5be89b0750f4e7bf1
b.N is adjusted by pkg/testing using an internal heuristic:
> The benchmark function must run the target code b.N times. During
> benchmark execution, b.N is adjusted until the benchmark function
> lasts long enough to be timed reliably.
Using b.N to seed other parameters makes the benchmark behavior
difficult to reason about. Before this change, thread count in the
CachedTokenAuthenticator benchmark is always 5000, and batch size is
almost always 1 when I run this locally. SimpleCache and StripedCache
benchmarks had similarly strange scaling.
After modifying CachedTokenAuthenticator to only adjust iterations based
on b.N, the batch chan was a point of contention and I wasn't able to
see any significant CPU consumption. This was fixed by using
ParallelBench to do the batching, rather than using a chan.
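For reference, the general shape of driving the batching through the testing package's parallel helpers rather than a hand-rolled chan (the details here are illustrative, not the benchmark's actual parameters):

    import "testing"

    func BenchmarkAuthenticateToken(b *testing.B) {
        // Fixed, explicit parallelism; only the iteration count comes from
        // b.N, which the testing package tunes until timing is reliable.
        b.SetParallelism(32)
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                // authenticate one token per iteration
            }
        })
    }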
Kubernetes-commit: 43d34882c9b3612d933b97b6e470fd8d36fe492b
It is possible to configure the token cache to cache failures. We
allow 1 MB of headers per request, meaning a malicious actor could
cause the cache to use a large amount of memory by filling it with
large invalid tokens. This change hashes the token before using it
as a key. Measures have been taken to prevent precomputation
attacks. SHA-256 is used as the hash to prevent collisions.
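A minimal sketch of the idea (not the exact implementation): hash the token through HMAC-SHA-256 with a random per-process key, so cache keys are fixed-size and cannot be precomputed ahead of time:

    import (
        "crypto/hmac"
        "crypto/rand"
        "crypto/sha256"
    )

    // hashKey is generated once per process; without knowing it, an attacker
    // cannot precompute the cache key for a given token.
    var hashKey = func() []byte {
        k := make([]byte, 32)
        if _, err := rand.Read(k); err != nil {
            panic(err) // cannot safely cache without a random key
        }
        return k
    }()

    // cacheKey returns a fixed-size key, so large invalid tokens no longer
    // inflate the cache's memory use.
    func cacheKey(token string) string {
        h := hmac.New(sha256.New, hashKey)
        h.Write([]byte(token))
        return string(h.Sum(nil))
    }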
Signed-off-by: Monis Khan <mkhan@redhat.com>
Kubernetes-commit: 9a547bca8e6e15273bfafd3496aa6524fd7d35bd