Without secure node authentication enabled, commands like `kubectl logs` can fail
under certain configurations.
Previously, we checked whether anonymousAuth was enabled on the kubelet
before securing node communication, but that check isn't actually relevant:
we can still authenticate even when anonymous access is allowed.
Ensure apiserver role can only be used on AWS (because of firewalling)
Apply api-server label to control plane as well
Consolidate node not ready validation message
Guard apiserver nodes with a feature flag
Rename Apiserver role to APIServer
Add an integration test for apiserver nodes
Enumerate all roles in rolling update docs
Apply suggestions from code review
Co-authored-by: Steven E. Harris <seh@panix.com>
When encryptionConfig is enabled but the secret is missing, there are no
visible errors anywhere: kube-apiserver just goes into a crashloop
without any complaint. This PR adds warnings both on the client side and
through nodeup.
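For context, a rough sketch of the workflow where this bites (cluster name and state store are illustrative): with `encryptionConfig: true` set in the cluster spec, the configuration itself has to be supplied as a kops secret, and it is the absence of that secret that used to fail silently.

```sh
# Illustrative commands; substitute your own cluster name and state store.
# Without this secret, kube-apiserver previously just crashlooped silently.
kops create secret encryptionconfig -f encryptionconfig.yaml \
  --name cluster.example.com --state s3://example-state-store
```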
When the PublicJWKS feature-flag is set, we expose the apiserver JWKS
document publicly (including enabling anonymous access). This is a
stepping stone to a more hardened configuration where we copy the JWKS
document to S3/GCS/etc.
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
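As a reminder, kops feature flags such as this one are toggled through an environment variable; a minimal sketch (cluster name illustrative):

```sh
# Enable the PublicJWKS feature flag for subsequent kops invocations.
export KOPS_FEATURE_FLAGS=PublicJWKS
kops update cluster --name cluster.example.com
```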
kube-apiserver doesn't expose the healthcheck via a dedicated
endpoint, instead relying on anonymous access being enabled. That
has previously forced us to enable the unauthenticated endpoint on
127.0.0.1:8080.
Instead, we now run a small sidecar container that proxies only
/healthz and /readyz requests, adding appropriate authentication
using a client certificate.
This will also enable better load balancer checks in the future, as these
have previously been hampered by the custom CA certificate.
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
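A minimal sketch of the sidecar's idea, not the actual kops implementation (file paths, the listen port, and the apiserver address are illustrative):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
)

func main() {
	// Client certificate used to authenticate the proxied health checks.
	cert, err := tls.LoadX509KeyPair("/secrets/healthcheck.crt", "/secrets/healthcheck.key")
	if err != nil {
		log.Fatal(err)
	}

	// Trust the cluster's custom CA when talking to the local apiserver.
	caBytes, err := os.ReadFile("/secrets/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caBytes)

	// Reverse proxy to the apiserver, presenting the client certificate.
	target := &url.URL{Scheme: "https", Host: "127.0.0.1:443"}
	proxy := httputil.NewSingleHostReverseProxy(target)
	proxy.Transport = &http.Transport{
		TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      caPool,
		},
	}

	// Forward only the two health endpoints; everything else gets a 404.
	mux := http.NewServeMux()
	mux.Handle("/healthz", proxy)
	mux.Handle("/readyz", proxy)
	log.Fatal(http.ListenAndServe("127.0.0.1:3990", mux))
}
```

Health checks can then hit the sidecar over plain HTTP, with the sidecar handling both TLS and client authentication.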
Writing to a hostPath from a non-root container requires file
ownership changes, which are difficult to roll out today. See
discussion in #8454.
We were primarily using the logfile for e2e diagnostics, so we're
going to look into collecting that information via other means instead.
We also haven't yet shipped this logfile in a released version (though
we have shipped it in beta releases).
The alpha version of encryption at rest used the following flag:
`--experimental-encryption-provider-config`. As of kubernetes 1.13,
`--encryption-provider-config` should be used instead.
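For reference, the file that flag points at looks roughly like this under the GA flag; the kind and apiVersion changed from the alpha-era `EncryptionConfig`, and the key below is a placeholder:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              # Placeholder; must be a base64-encoded 32-byte key.
              secret: c2VjcmV0LWtleS1wbGFjZWhvbGRlcg==
      - identity: {}
```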
klog now supports logging both to a file and to streams, so we get the output both in docker and in logfiles.
A few gotchas:
* The output was previously all on stdout; now it is on stderr, which is more correct.
* If something writes to stdout or stderr outside of klog, it will no longer end up in the logfile.
* There are still some oddities to be ironed out in the flag syntax: https://github.com/kubernetes/klog/issues/60
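For illustration, one flag combination that yields both outputs, assuming klog's flag semantics at the time (and subject to the oddities tracked in the issue above):

```sh
# --logtostderr defaults to true, which suppresses file output, so it has to
# be disabled explicitly; --alsologtostderr keeps the stream output too.
# (Remaining apiserver flags omitted.)
kube-apiserver --log-file=/var/log/kube-apiserver.log \
  --logtostderr=false --alsologtostderr
```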
For the master pods (apiserver, controller manager, scheduler) this is
unlikely to ever matter (the masters aren't expected to run out of
resources and need to evict things), but evictions of kube-proxy from worker
nodes are easy to trigger in clusters with PodPriority enabled. Since these
are static pods, the configuration is also somewhat difficult to change.
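One plausible mitigation, sketched here rather than taken from this change, is to give the static-pod manifests an explicit built-in priority class:

```yaml
# Hypothetical fragment of a static kube-proxy manifest; image tag is
# illustrative. system-node-critical is a built-in PriorityClass, so the
# eviction manager prefers evicting lower-priority pods first.
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  priorityClassName: system-node-critical
  containers:
    - name: kube-proxy
      image: k8s.gcr.io/kube-proxy:v1.12.0
```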
In 1.12 (kops & kubernetes):
* We default etcd-manager on
* We default to etcd3
* We default to full TLS for etcd (client and peer)
* We stop allowing external access to etcd
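In cluster-spec terms, these defaults correspond roughly to the following fragment (field names from the kops API; version and instance group names are illustrative):

```yaml
etcdClusters:
  - name: main
    provider: Manager   # etcd-manager rather than the legacy provider
    version: 3.2.24     # an etcd3 version; TLS is on by default
    etcdMembers:
      - name: a
        instanceGroup: master-us-east-1a
  - name: events
    provider: Manager
    version: 3.2.24
    etcdMembers:
      - name: a
        instanceGroup: master-us-east-1a
```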