Since AWS does not resolve instance hostnames to IPv6 addresses, IPv6-only pods that talk to the kubelet API have to use the node IP, not the hostname. Thus we need to add the node IPs to the kubelet server certificate.
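For illustration only, here is a minimal Go sketch of what including node IPs as subject alternative names in a server certificate template can look like; the helper name and inputs are hypothetical and do not reflect kops' actual certificate code:

```go
package pki

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"net"
	"time"
)

// serverCertTemplate is a hypothetical helper: it builds a kubelet server
// certificate template whose SANs include the node IPs as well as the
// hostname, so IPv6-only clients that dial by IP can still verify the cert.
func serverCertTemplate(hostname string, nodeIPs []net.IP) *x509.Certificate {
	return &x509.Certificate{
		Subject:     pkix.Name{CommonName: hostname},
		DNSNames:    []string{hostname},
		IPAddresses: nodeIPs, // e.g. the node's IPv4 and IPv6 addresses
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
}
```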
API servers also have access to the secret store, so there is no need to go through kops-controller.
This lets the API server depend only on etcd from the control-plane nodes, which should make it easier to scale out API servers under pressure.
Without secure node auth enabled, commands like `kubectl logs` may fail
with certain configurations.
Previously, we checked if anonymousAuth was enabled on the kubelet
before securing node communication, but this isn't really relevant. We
can still authenticate even if anonymous access is allowed.
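As a rough illustration of that point (not the actual kops or apiserver code path), a client that presents a certificate to the kubelet is authenticated on that identity whether or not anonymous requests are also allowed; the file paths below are placeholders:

```go
package kubeletclient

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// newKubeletClient returns an HTTP client that presents a client certificate
// to the kubelet API. The certificate is authenticated by the kubelet even
// when anonymous access happens to be enabled, so requests such as the ones
// behind `kubectl logs` are authorized on their own identity rather than as
// system:anonymous. Paths here are illustrative, not kops defaults.
func newKubeletClient(certFile, keyFile, caFile string) (*http.Client, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("loading client cert: %w", err)
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, fmt.Errorf("reading CA bundle: %w", err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no certificates found in %s", caFile)
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      caPool,
			},
		},
	}, nil
}
```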
Ensure apiserver role can only be used on AWS (because of firewalling)
Apply api-server label to CP as well
Consolidate node not ready validation message
Guard apiserver nodes with a feature flag
Rename Apiserver role to APIServer
Add an integration test for apiserver nodes
Enumerate all roles in rolling update docs
Apply suggestions from code review
Co-authored-by: Steven E. Harris <seh@panix.com>
Upstream bind mounts /var/lib/kubelet with exec, dev, and suid
permissions, because emptyDirs end up inheriting these permissions.
Similarly, /home/kubernetes/flexvolume needs exec permission to
support FlexVolume drivers.
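A hedged sketch of what such a bind mount can look like in Go (using golang.org/x/sys/unix; the paths match the ones mentioned above, but the function itself is illustrative, not kops' nodeup code):

```go
//go:build linux

package mounts

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// bindMountWithExec bind-mounts a directory onto itself and then remounts it
// without the noexec/nodev/nosuid flags, so that emptyDir volumes (and
// FlexVolume driver binaries) created underneath keep exec, dev, and suid
// permissions. Illustrative only.
func bindMountWithExec(path string) error {
	// First establish the bind mount.
	if err := unix.Mount(path, path, "", unix.MS_BIND, ""); err != nil {
		return fmt.Errorf("bind mounting %s: %w", path, err)
	}
	// Then remount it, deliberately leaving MS_NOEXEC, MS_NODEV, and
	// MS_NOSUID unset so the subtree allows exec, dev, and suid.
	flags := uintptr(unix.MS_BIND | unix.MS_REMOUNT)
	if err := unix.Mount("", path, "", flags, ""); err != nil {
		return fmt.Errorf("remounting %s: %w", path, err)
	}
	return nil
}
```

For example, this would be called as `bindMountWithExec("/var/lib/kubelet")` and `bindMountWithExec("/home/kubernetes/flexvolume")`.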
Previously as an optimization we would start the kubelet from
protokube, after we had mounted the disks. This helped avoid e.g. the
apiserver going into backoff waiting for etcd.
However, this no longer achieves anything with etcd-manager - nothing
happens on this front until after we start the kubelet anyway.
Doing this both takes protokube out of the dependency sequence here
(slightly faster boot time) and removes the systemd dependency from
the protokube image. (So we can get a smaller image, perhaps even
distroless.)
This requires passing a cloud object in additional places throughout the validation package, originating mostly from cmd/kops.
This means that some kops commands now require valid cloud provider credentials, but I don't think this is an issue: the vast majority of use cases already require the same cloud provider credentials in order to interact with the state store.
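A hypothetical sketch of what threading a cloud object through validation looks like (the interface and function names here are invented for illustration and do not reflect kops' actual APIs):

```go
package validation

import (
	"context"
	"fmt"
)

// Cloud is a stand-in for a cloud-provider client. kops' real interface is
// richer, but the shape of the change is the same: validation now needs a
// live cloud object in order to look up instance state, so callers must
// construct one with valid credentials and pass it down.
type Cloud interface {
	// GetCloudGroups returns, per instance group, the number of running
	// instances observed in the cloud provider.
	GetCloudGroups(ctx context.Context, clusterName string) (map[string]int, error)
}

// ValidateClusterNodes is an illustrative validation entry point that takes
// the cloud object as a parameter instead of working purely from the state
// store, mirroring how the cloud is now plumbed in from cmd/kops.
func ValidateClusterNodes(ctx context.Context, cloud Cloud, clusterName string, wantNodes int) error {
	groups, err := cloud.GetCloudGroups(ctx, clusterName)
	if err != nil {
		return fmt.Errorf("querying cloud groups for %s: %w", clusterName, err)
	}
	var total int
	for _, size := range groups {
		total += size
	}
	if total < wantNodes {
		return fmt.Errorf("cluster %s has %d nodes, want at least %d", clusterName, total, wantNodes)
	}
	return nil
}
```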