Only clear the flag if there is a docker config file, so that we can
continue to set the storage flag on older COS images. We could be
smarter and check whether the storage driver is actually set in the
docker config, but for now we start by just logging it.
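As a rough sketch of that smarter check (assuming the standard /etc/docker/daemon.json location and key name; not the current implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// storageDriverConfigured reports whether the docker config file exists and
// already pins a storage driver, in which case the flag must not be passed.
func storageDriverConfigured(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return false, nil // no config file: keep setting the flag (older COS)
	}
	if err != nil {
		return false, err
	}
	var cfg map[string]interface{}
	if err := json.Unmarshal(data, &cfg); err != nil {
		return false, err
	}
	_, ok := cfg["storage-driver"]
	return ok, nil
}

func main() {
	ok, err := storageDriverConfigured("/etc/docker/daemon.json")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("storage driver set in config:", ok)
}
```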
Cilium was using the same code as Calico to retrieve etcd certs; the new
builder is not Calico-specific.
The Calico naming of the certs is retained to ensure backward compatibility.
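For illustration, a minimal sketch of how a generalized builder might keep the legacy names; the function and file names here are hypothetical, not the actual kops code:

```go
package cni

import "path/filepath"

// EtcdClientCertNames returns the on-disk names for the etcd client
// certificate. The legacy "calico" prefix is kept so that existing Calico
// and Cilium deployments keep working after the builder was generalized.
func EtcdClientCertNames(baseDir string) (cert, key, ca string) {
	cert = filepath.Join(baseDir, "calico-client.pem")
	key = filepath.Join(baseDir, "calico-client-key.pem")
	ca = filepath.Join(baseDir, "ca.pem")
	return cert, key, ca
}
```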
Signed-off-by: Maciej Kwiek <maciej@covalent.io>
a) The current implementation uses a static kubelet credential which does not conform to the Node authorization mode (i.e. system:node:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
The PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and provide the DNS name.
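For reference, a minimal sketch of generating such a bootstrap kubeconfig with client-go; the cluster/user/context names and the placeholder token are illustrative:

```go
package bootstrap

import (
	clientcmd "k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// writeBootstrapKubeconfig writes a kubeconfig that authenticates with a
// bootstrap token; the kubelet then uses it to request its own client
// certificate (CN system:node:<nodename>) via the CSR API.
func writeBootstrapKubeconfig(path, server, caPath, token string) error {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["local"] = &clientcmdapi.Cluster{
		Server:               server,
		CertificateAuthority: caPath,
	}
	cfg.AuthInfos["kubelet-bootstrap"] = &clientcmdapi.AuthInfo{
		Token: token, // bootstrap token of the form <id>.<secret>
	}
	cfg.Contexts["bootstrap"] = &clientcmdapi.Context{
		Cluster:  "local",
		AuthInfo: "kubelet-bootstrap",
	}
	cfg.CurrentContext = "bootstrap"
	return clientcmd.WriteToFile(*cfg, path)
}
```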
Notes:
- the master nodes do not use bootstrap tokens; given they have access to the CA anyhow, we generate certificates for each of them.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration is brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (delivered by a third party); a sketch of that loop follows this list.
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is that a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open additional ports between the two.
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add very presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
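A rough sketch of the timeout loop mentioned above, assuming the configuration is simply dropped onto disk by the third party; the path and durations are placeholders:

```go
package bootstrap

import (
	"fmt"
	"os"
	"time"
)

// waitForKubeconfig polls until the bootstrap kubeconfig shows up on disk,
// giving up after the timeout so a failed delivery surfaces as an error.
func waitForKubeconfig(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}
```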
I do have an additional PR which performs the entire flow. The process is a node_authorizer which runs on the master nodes via a daemonset; the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For aws, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
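As a rough illustration of that flow (the type and method names here are hypothetical, not that PR's actual API):

```go
package authorizer

import "context"

// NodeAuthorizationRequest is a hypothetical payload a booting node sends to
// the node_authorizer, e.g. a signed AWS instance identity document.
type NodeAuthorizationRequest struct {
	NodeName string
	Document []byte // cloud-specific proof of identity
}

// Authorizer sketches the pluggable interface implied above; the real PR may
// shape this differently.
type Authorizer interface {
	// Authorize verifies the request (for aws, much like Vault's aws auth
	// method verifies the instance identity document) and, on success,
	// returns a bootstrap token for the node.
	Authorize(ctx context.Context, req *NodeAuthorizationRequest) (token string, err error)
}
```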
We also have to stop passing the flag on ContainerOS, because it's set
in /etc/docker/daemon.json and it's now an error to pass the flag.
That in turn means we move those options into code; they were the last of
the legacy config options. (We still have a few tasks declaratively
defined, though.)
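A minimal sketch of what computing those options in code might look like; the default flags and the distro check are illustrative assumptions, not the actual nodeup logic:

```go
package nodeup

// dockerFlags sketches moving the legacy config options into code: flags are
// computed per distro instead of being templated, and the storage-driver flag
// is skipped on ContainerOS, where /etc/docker/daemon.json already sets it.
func dockerFlags(isContainerOS bool, storageDriver string) []string {
	flags := []string{"--iptables=false", "--ip-masq=false"} // illustrative defaults
	if !isContainerOS && storageDriver != "" {
		flags = append(flags, "--storage-driver="+storageDriver)
	}
	return flags
}
```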
A previous PR, https://github.com/kubernetes/kops/pull/5221/, introduced the recommended --enable-admission-plugins flag for >= 1.10.0. It does, however, cause an issue if you already have AdmissionControl specified in the Spec, as both flags get rendered.
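A sketch of the fix, rendering exactly one of the two flags; the field names mirror the kops KubeAPIServerConfig spec but are assumed here:

```go
package main

import (
	"fmt"
	"strings"
)

// admissionFlag renders exactly one admission flag so the API server never
// receives both --admission-control and --enable-admission-plugins.
func admissionFlag(k8sAtLeast110 bool, admissionControl, enableAdmissionPlugins []string) string {
	if len(admissionControl) > 0 {
		// The user explicitly set AdmissionControl in the Spec: honour it
		// and skip the newer flag entirely.
		return "--admission-control=" + strings.Join(admissionControl, ",")
	}
	if k8sAtLeast110 {
		return "--enable-admission-plugins=" + strings.Join(enableAdmissionPlugins, ",")
	}
	return "--admission-control=" + strings.Join(enableAdmissionPlugins, ",")
}

func main() {
	fmt.Println(admissionFlag(true, nil, []string{"NodeRestriction", "ResourceQuota"}))
}
```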
- fixing the 'Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed.' error
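That error typically indicates the ip_vs kernel modules are not loaded; a sketch of one common remedy, loading them via modprobe (the module list is the usual IPVS set and is an assumption here, not necessarily what this fix does):

```go
package nodeup

import (
	"log"
	"os/exec"
)

// loadIPVSModules loads the kernel modules IPVS needs; without them the
// kernel cannot answer ipvs family queries and the error above is logged.
func loadIPVSModules() error {
	modules := []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"}
	for _, m := range modules {
		if out, err := exec.Command("modprobe", m).CombinedOutput(); err != nil {
			log.Printf("modprobe %s failed: %v: %s", m, err, out)
			return err
		}
	}
	return nil
}
```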