API servers also have access to the secret store, so there is no need to go through kops-controller.
This lets the API server depend only on etcd from the CP nodes, which should make it easier to scale out API servers under pressure.
Without secure node auth enabled, commands like `kubectl logs` may fail
with certain configurations.
Previously, we checked if anonymousAuth was enabled on the kubelet
before securing node communication, but this isn't really relevant. We
can still authenticate even if anonymous access is allowed.
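To illustrate (a sketch only; the certificate paths, node name, and port below are assumptions, not what kops actually writes): a client that presents a certificate to the kubelet's secure port is authenticated by that certificate regardless of whether the kubelet also allows anonymous requests.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical paths; kops manages its PKI elsewhere.
	cert, err := tls.LoadX509KeyPair("/etc/kubernetes/pki/kubelet-api.crt", "/etc/kubernetes/pki/kubelet-api.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				// The client cert is presented even if the kubelet also allows anonymous access.
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		},
	}

	// 10250 is the kubelet's secure port; /pods is one of its read endpoints.
	resp, err := client.Get("https://node-1.example.com:10250/pods")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```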
Ensure apiserver role can only be used on AWS (because of firewalling)
Apply api-server label to CP as well
Consolidate node not ready validation message
Guard apiserver nodes with a feature flag
Rename Apiserver role to APIServer
Add an integration test for apiserver nodes
Enumerate all roles in rolling update docs
Apply suggestions from code review
Co-authored-by: Steven E. Harris <seh@panix.com>
Upstream bind mounts /var/lib/kubelet with exec, dev and suid
permissions, because emptyDirs end up inheriting these permissions.
Similarly, /home/kubernetes/flexvolume needs exec permission to
support flexdrivers.
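Roughly, the mount flag handling looks like this (a sketch, not the actual nodeup task; the source path is a placeholder): bind-mount the directory, then remount it without noexec/nodev/nosuid so the exec, dev and suid permissions are preserved.

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	src := "/mnt/stateful_partition/var/lib/kubelet" // placeholder source path
	dst := "/var/lib/kubelet"

	// Create the bind mount.
	if err := unix.Mount(src, dst, "", unix.MS_BIND, ""); err != nil {
		log.Fatalf("bind mount failed: %v", err)
	}

	// Remount without MS_NOEXEC/MS_NODEV/MS_NOSUID so that emptyDirs
	// (and flex drivers) inherit exec, dev and suid permissions.
	if err := unix.Mount("", dst, "", unix.MS_BIND|unix.MS_REMOUNT, ""); err != nil {
		log.Fatalf("remount failed: %v", err)
	}
}
```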
Previously as an optimization we would start the kubelet from
protokube, after we had mounted the disks. This helped avoid e.g. the
apiserver going into backoff waiting for etcd.
However, this no longer achieves anything with etcd-manager - nothing
happens on this front until after we start the kubelet anyway.
Doing this both takes protokube out of the dependency sequence here
(slightly faster boot time) and removes the systemd dependency from
the protokube image (so we can get a smaller image, perhaps even a
distroless one).
This requires passing a cloud object to additional places throughout the validation package, originating mostly from cmd/kops.
This means that some kops commands now require valid cloud provider credentials, but I don't think this is an issue because the vast majority of use cases already require the same cloud provider credentials in order to interact with the state store.
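The shape of the change, very roughly (these names are illustrative stand-ins, not the actual kops signatures): validation functions gain a cloud parameter, and callers such as cmd/kops construct the cloud before calling them.

```go
// Illustrative only; the real types live in the kops validation and fi packages.
package validation

import "fmt"

// Cloud stands in for the cloud-provider interface the validators now need.
type Cloud interface {
	Region() string
}

// Cluster stands in for the kops cluster spec.
type Cluster struct {
	Name string
}

// ValidateCluster previously took only the cluster; it now also needs a cloud
// object so it can query the provider during validation.
func ValidateCluster(cluster *Cluster, cloud Cloud) error {
	if cloud == nil {
		return fmt.Errorf("cannot validate cluster %q: no cloud provider credentials available", cluster.Name)
	}
	// ... provider-backed checks would go here ...
	return nil
}
```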
As of k8s 1.16, the node-role label is protected for security reasons.
We will introduce a controller to set those labels generically.
However, we need these labels to run the controller (only) on master
nodes.
To solve this bootstrapping problem, we use protokube to apply the
master role node labels to the master node only. This isn't a
security problem because we assume that protokube on the master is
highly trusted - we are still administering labels centrally.
Then kops-controller can use this label to target the master nodes,
and run a central label controller.
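A rough sketch of the two halves with client-go (the label key is the real node-role label; the node name and the rest of the wiring are illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const masterLabel = "node-role.kubernetes.io/master"

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Protokube side: apply the master role label to the node it runs on
	// (node name here is illustrative).
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:""}}}`, masterLabel))
	if _, err := clientset.CoreV1().Nodes().Patch(ctx, "master-0", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// kops-controller side: target only the nodes carrying the master label.
	masters, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{LabelSelector: masterLabel})
	if err != nil {
		panic(err)
	}
	for _, n := range masters.Items {
		fmt.Println("would run the label controller against", n.Name)
	}
}
```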
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
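For reference, this is the call we will eventually make once glog is gone (a sketch; while glog is still linked in, both libraries try to register flags like -v and -logtostderr on the default FlagSet, which triggers the flag redefinition error):

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Registers klog's flags (-v, -logtostderr, ...) on flag.CommandLine.
	// Safe only once no imported package still registers glog's identically
	// named flags in its init().
	klog.InitFlags(nil)
	flag.Parse()

	klog.Info("logging initialized")
	klog.Flush()
}
```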
Otherwise we end up with a circular dependency where we don't run the
node-authorizer until /var/lib/kubelet has been bind-mounted, but it
can't be bind-mounted until it exists.
This bind-mounting happens on Google's Container-Optimized OS, which is
why it isn't always seen.
We still need the reflect helpers, but we allow for clients to
register their own pretty-printers, which avoids the package
dependency for our pretty-printer. We register our pretty-printers in
an init function in the relevant package (in this case,
upup/pkg/fi/printers.go).
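A sketch of the pattern (names are illustrative, not the exact kops API): the helpers hold a registry, and packages that own a type register a pretty-printer for it from an init function.

```go
package printers

import "fmt"

// PrettyPrinter renders a value it recognizes, returning ok=false otherwise.
type PrettyPrinter func(v interface{}) (s string, ok bool)

var printers []PrettyPrinter

// Register adds a pretty-printer; typically called from an init function
// in the package that owns the type being printed.
func Register(p PrettyPrinter) {
	printers = append(printers, p)
}

// Print tries the registered printers before falling back to a generic
// rendering (standing in here for the reflect-based helpers).
func Print(v interface{}) string {
	for _, p := range printers {
		if s, ok := p(v); ok {
			return s
		}
	}
	return fmt.Sprintf("%+v", v)
}
```

upup/pkg/fi/printers.go would then call the registration hook from its init function for the fi types it knows how to render, so the printing package no longer needs to import them.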
Fix #5551
a) The current implementation uses a static kubelet identity which doesn't conform to the Node authorization mode (i.e. system:node:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
This PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's left to cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and provide the DNS.
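For the master kubelets, "unique usernames" means client certificates whose subject follows the Node authorizer's convention (username system:node:<nodename>, group system:nodes). A minimal sketch of producing such a certificate request (the node name is an assumption; kops's actual PKI code differs):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
)

func main() {
	nodeName := "master-0.example.com" // assumed node name

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// The Node authorizer expects username system:node:<nodename> in group
	// system:nodes, which map to the certificate's CN and O respectively.
	tmpl := &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:" + nodeName,
			Organization: []string{"system:nodes"},
		},
	}
	csrDER, err := x509.CreateCertificateRequest(rand.Reader, tmpl, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER}))
}
```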
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration is brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (provided by a third party).
- given the nodeup docker and manifest builders are executed before the kubelet builder, the assumption here is that a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open ports between the two
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add rather presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
I do have an additional PR which performs the entire flow: a node_authorizer runs on the master nodes via a daemonset, and the service implements a series of authorizers (i.e. alwaysallow, aws, gce, etc). For AWS, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
A while back, options to permit secure kube-apiserver to kubelet API communication were added in https://github.com/kubernetes/kops/pull/2831, using server.cert and server.key as a testing ground. This PR formalizes those options and generates a client certificate on the user's behalf (note, server{.cert,.key} can no longer be used post 1.7 as certificate usage is checked, i.e. it's not a client cert). Users now only need to add anonymousAuth: false to enable secure apiserver-to-kubelet communication. I'd like to make this the default for all new builds, but I'm not sure where to place it.
- updated the security.md to reflect the changes
- issued a new kubelet-api client certificate used to securely authorize communication between the apiserver and the kubelet
- fixed any formatting issues I came across along the way