Add a new mode for graceful termination with the new server run option
'shutdown-send-retry-after':
- shutdown-send-retry-after=true: we initiate shutdown of the
  HTTP Server only after all in-flight request(s) have been drained.
  During this window all incoming requests are rejected with status code
  429 and the following response headers (sketched below):
  - 'Retry-After: N' - the client should retry after N seconds
  - 'Connection: close' - tear down the TCP connection
- shutdown-send-retry-after=false: we initiate shutdown of the
  HTTP Server as soon as shutdown-delay-duration has elapsed. This
  is in keeping with the current behavior.
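A minimal sketch of what the rejection path during the drain window could look like; the middleware, the shutdown channel, and the package name are illustrative, not the actual apiserver filter:

    package filters

    import (
        "net/http"
        "strconv"
    )

    // withRetryAfter is a sketch of the rejection path: once shutdownCh is
    // closed the server is draining, so new requests are rejected with 429,
    // told when to retry, and the TCP connection is closed.
    func withRetryAfter(handler http.Handler, shutdownCh <-chan struct{}, retryAfterSeconds int) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
            select {
            case <-shutdownCh:
                w.Header().Set("Retry-After", strconv.Itoa(retryAfterSeconds))
                // Ask net/http to tear down the TCP connection after this response.
                w.Header().Set("Connection", "close")
                http.Error(w, "the server is shutting down", http.StatusTooManyRequests)
                return
            default:
            }
            handler.ServeHTTP(w, req)
        })
    }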
Kubernetes-commit: 3182b69e970bd1fd036ff839fdf811f14e790244
When API Priority and Fairness is enabled, the inflight limits must
sum to a positive value.
This rejects the configuration that prompted
https://github.com/kubernetes/kubernetes/issues/102885
Also update the help text for the max inflight flags.
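For illustration, a rough sketch of the validation this implies; the struct and field names are hypothetical stand-ins for the real server run options:

    package options

    import "fmt"

    // ServerRunOptions is a pared-down, hypothetical stand-in for the real
    // server run options; only the fields relevant to this check are shown.
    type ServerRunOptions struct {
        EnablePriorityAndFairness   bool
        MaxRequestsInFlight         int
        MaxMutatingRequestsInFlight int
    }

    // validateInflightLimits rejects a configuration where API Priority and
    // Fairness is enabled but the two inflight limits do not sum to a
    // positive value, since APF derives its total concurrency from that sum.
    func validateInflightLimits(o *ServerRunOptions) error {
        if o.EnablePriorityAndFairness && o.MaxRequestsInFlight+o.MaxMutatingRequestsInFlight <= 0 {
            return fmt.Errorf("--max-requests-inflight and --max-mutating-requests-inflight must sum to a positive value when API Priority and Fairness is enabled")
        }
        return nil
    }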
Kubernetes-commit: 0762f492c5b850471723a305cfa7390e44851145
The apiserver audit options merely use the lumberjack logger to write
the appropriate log files, but that library applies very loose
permissions to these files by default [1]. However, it will respect the
permissions of the file if it already exists. This is also the most
tested scenario in the library [2].
So, let's follow the pattern shown in the library's tests and
pre-create the audit log file with an appropriate mode.
[1] https://github.com/natefinch/lumberjack/blob/v2.0/lumberjack.go#L280
[2] https://github.com/natefinch/lumberjack/blob/v2.0/linux_test.go
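A sketch of the pre-creation step, assuming lumberjack v2's Logger and an illustrative 0600 mode; the helper itself is hypothetical:

    package audit

    import (
        "os"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    // newAuditWriter pre-creates the audit log file with a restrictive mode so
    // that lumberjack, which keeps an existing file's permissions, does not
    // fall back to its loose defaults.
    func newAuditWriter(path string) (*lumberjack.Logger, error) {
        // O_APPEND|O_CREATE leaves an existing file untouched; 0600 only
        // applies when the file is created here.
        f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600)
        if err != nil {
            return nil, err
        }
        if err := f.Close(); err != nil {
            return nil, err
        }
        return &lumberjack.Logger{Filename: path}, nil
    }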
Signed-off-by: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Kubernetes-commit: 42df7bc5b3aa26bf545b6392b557833c7162c472
Right now, `_, ok := provider.(Notifier); !ok` can mean one of two
things:
1. The provider does not support notification because the provided
content is static.
2. The implementor of the provider hasn't gotten around to implementing
Notifier yet.
These have very different implications. We should not force consumers of
these interfaces to figure out the status of Notifier support across
sometimes numerous different implementations. Instead, we should force
implementors to implement Notifier, even if it's a noop.
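A self-contained sketch of the intended contract; the interface shape mirrors the idea, but the names here are illustrative rather than the actual provider types:

    package provider

    // Listener is notified when the provider's content changes.
    type Listener interface {
        Enqueue()
    }

    // Notifier is implemented by every content provider, even static ones, so
    // that consumers never have to type-assert to discover whether change
    // notification is supported.
    type Notifier interface {
        AddListener(Listener)
    }

    // staticContentProvider serves content that never changes; it still
    // implements Notifier, but as a no-op, making the contract explicit.
    type staticContentProvider struct {
        content []byte
    }

    // AddListener is a no-op: static content never changes, so registered
    // listeners would never be enqueued anyway.
    func (s *staticContentProvider) AddListener(Listener) {}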
Change-Id: Ie7a26697a9a17790bfaa58d67045663bcc71e3cb
Kubernetes-commit: 9b7d654a08d694d20226609f7075b112fb18639b
It turns out that setting a timeout on the HTTP client affects watch requests made by the delegated authentication component.
With a 10 second timeout, watch requests are re-established after exactly 10 seconds even though the default request timeout for them is ~5 minutes.
This is because when multiple timeouts are set, the stdlib applies the smallest one, rendering the others useless.
For more details see a937729c2c/src/net/http/client.go (L364)
Instead of setting a timeout on the HTTP client we should use a context for cancellation.
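A sketch of the replacement pattern using only the stdlib; the URL and the 10 second value are placeholders:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Do NOT set client.Timeout here: it would also bound long-running
        // watch requests, forcing them to be re-established every 10 seconds.
        client := &http.Client{}

        // Instead, bound each individual request with its own context, so
        // short calls get a deadline while watches keep their own, much
        // longer timeout.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Placeholder URL for illustration only.
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.invalid/api", nil)
        if err != nil {
            fmt.Println("build request:", err)
            return
        }
        resp, err := client.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }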
Kubernetes-commit: d690d71d27c78f2f7981b286f5b584455ff30246
SARs (SubjectAccessReviews)
healthz, readyz, and livez are canonical names for the checks that the kubelet performs. By default, allow access to them in the options. Callers can adjust the defaults if they have a reason to require authorization checks.
system:masters has full power, so the authorization check is unnecessary and just costs an extra call for in-cluster access. Callers can adjust the defaults if they have a reason to require checks.
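A hypothetical, self-contained sketch of the skip logic; the real change wires this through the delegated authorization options rather than a helper like this:

    package authz

    // attributes is a hypothetical, pared-down view of an authorization
    // request; only the fields relevant to the skip decision are shown.
    type attributes struct {
        Path   string
        Groups []string
    }

    var alwaysAllowPaths = map[string]bool{"/healthz": true, "/readyz": true, "/livez": true}

    // skipSubjectAccessReview returns true when a request can be allowed
    // without sending a SubjectAccessReview to the kube-apiserver: the
    // health-check paths are allowed by default, and system:masters already
    // has full power, so the extra in-cluster call would be pure overhead.
    func skipSubjectAccessReview(a attributes) bool {
        if alwaysAllowPaths[a.Path] {
            return true
        }
        for _, g := range a.Groups {
            if g == "system:masters" {
                return true
            }
        }
        return false
    }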
Kubernetes-commit: cebce291ddcb8490a705c79623c0b4f13faef6e7
Previously no timeout was set. Requests without an explicit timeout might hang forever and lead to starvation of the application.
When no timeout is specified, a default one is now applied.
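A hedged sketch of the defaulting, with a hypothetical options struct and an illustrative default value:

    package options

    import "time"

    // defaultClientTimeout is illustrative; the value chosen by the change
    // may differ.
    const defaultClientTimeout = 10 * time.Second

    // ClientOptions is a hypothetical options struct; only the timeout
    // field is shown.
    type ClientOptions struct {
        // Timeout bounds outgoing requests; zero means "not specified".
        Timeout time.Duration
    }

    // ApplyDefaults fills in a timeout when the caller did not specify one,
    // so requests cannot hang forever and starve the application.
    func (o *ClientOptions) ApplyDefaults() {
        if o.Timeout == 0 {
            o.Timeout = defaultClientTimeout
        }
    }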
Kubernetes-commit: 7340c3498ac23f46fc8b6bff4d5ac664a9c64a3b
Currently webhook retry backoff parameters are hard-coded; we want
the ability to configure the backoff parameters for the webhook
retry logic.
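A sketch of how the configurable backoff could be assembled, assuming k8s.io/apimachinery's wait.Backoff; the parameter list is illustrative, not the actual flag set:

    package options

    import (
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // webhookBackoff builds the retry backoff for webhook calls from
    // caller-supplied parameters instead of hard-coded constants.
    func webhookBackoff(initial time.Duration, factor, jitter float64, steps int) wait.Backoff {
        return wait.Backoff{
            Duration: initial, // first retry delay
            Factor:   factor,  // multiplier applied between retries
            Jitter:   jitter,  // random fraction added to each delay
            Steps:    steps,   // maximum number of retries
        }
    }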
Kubernetes-commit: 53a1307f68ccf6c9ffd252eeea2b333e818c1103
Previously no timeout was set. Requests without an explicit timeout might hang forever and lead to starvation of the application.
Kubernetes-commit: 2160cbc53fdd27a3cbc1b361e523abda4c39ac42