klog now supports logging both to a file and to streams, so we get the output both in docker and in logfiles.
A few gotchas:
* The output was previously all on stdout; now it is on stderr, which is more correct
* If something writes to stdout or stderr outside of klog, it will no longer end up in the logfile.
* There are still some oddities about the flag syntax to be ironed out: https://github.com/kubernetes/klog/issues/60
For the master pods (apiserver, controller manager, scheduler) this is
unlikely to ever matter (the masters aren't expected to run out of
resources and need to evict things), but evictions of kube-proxy from worker
nodes are easy to trigger in clusters with PodPriority enabled. Since these
are static pods, the configuration is also somewhat difficult to change.
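Presumably this is addressed by giving the static pods an explicit priority class. A minimal Go sketch under that assumption; `PriorityClassName` is standard Kubernetes API, but the exact wiring in kops is not confirmed here:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative static-pod spec: system-node-critical outranks any
	// workload priority, so the eviction manager leaves the pod alone.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kube-proxy", Namespace: "kube-system"},
		Spec: v1.PodSpec{
			PriorityClassName: "system-node-critical",
		},
	}
	fmt.Printf("%s runs with priority class %q\n", pod.Name, pod.Spec.PriorityClassName)
}
```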
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
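For reference, a hedged sketch of how the flags would be wired once klog.InitFlags becomes safe to call; the logfile path is illustrative:

```go
package main

import (
	"flag"
	"fmt"
	"os"

	"k8s.io/klog"
)

func main() {
	// Once glog is out of the tree, klog can register its flags directly.
	klog.InitFlags(nil)
	flag.Set("log_file", "/var/log/kops.log") // assumed path: log to a file...
	flag.Set("alsologtostderr", "true")       // ...and keep the stream output
	flag.Parse()

	klog.Info("this line reaches both the logfile and stderr")
	// The gotcha noted above: writes outside klog hit the stream only.
	fmt.Fprintln(os.Stderr, "this line reaches stderr only")
	klog.Flush()
}
```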
a) The current implementation uses a static kubelet certificate, which does not conform to the Node authorization mode (i.e. system:node:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
This PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and give us the DNS.
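A minimal sketch, using client-go, of the sort of bootstrap kubeconfig a node would end up with; the server address, paths and token are all illustrative, not taken from the PR:

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// All names, paths and the token below are illustrative.
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["kops"] = &clientcmdapi.Cluster{
		Server:               "https://api.internal.cluster.example.com",
		CertificateAuthority: "/srv/kubernetes/ca.crt",
	}
	cfg.AuthInfos["kubelet-bootstrap"] = &clientcmdapi.AuthInfo{
		// bootstrap tokens take the form [a-z0-9]{6}.[a-z0-9]{16}
		Token: "abcdef.0123456789abcdef",
	}
	cfg.Contexts["bootstrap"] = &clientcmdapi.Context{Cluster: "kops", AuthInfo: "kubelet-bootstrap"}
	cfg.CurrentContext = "bootstrap"

	if err := clientcmd.WriteToFile(*cfg, "/var/lib/kubelet/bootstrap-kubeconfig"); err != nil {
		panic(err)
	}
}
```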
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (delivered by a third party); see the sketch after this list.
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is that a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open ports between the two
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add rather presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
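As mentioned in the notes, a minimal sketch of that timeout loop; the drop location and the intervals are assumptions, not from the PR:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	path := "/var/lib/kubelet/bootstrap-kubeconfig" // assumed drop location
	// Poll until a third party delivers the kubelet configuration,
	// giving up after a fixed timeout.
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		if _, serr := os.Stat(path); os.IsNotExist(serr) {
			return false, nil // not there yet, keep waiting
		} else if serr != nil {
			return false, serr
		}
		return true, nil
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "gave up waiting for %s: %v\n", path, err)
		os.Exit(1)
	}
}
```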
I do have an additional PR which performs the entire thing. The process: a node_authorizer runs on the master nodes via a daemonset, and the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For aws, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
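A hedged sketch of what such a pluggable authorizer interface might look like; the type and field names are illustrative, not from that PR:

```go
package authorizer

import "context"

// Request is what a bootstrapping node presents; for the aws authorizer
// this would carry the signed instance identity document, much as Vault's
// aws auth method does.
type Request struct {
	NodeName  string
	Document  []byte
	Signature []byte
}

// Token is the bootstrap credential handed back on success.
type Token struct {
	ID     string
	Secret string
}

// Authorizer is the pluggable interface behind alwaysallow, aws, gce, etc.
type Authorizer interface {
	Authorize(ctx context.Context, req *Request) (*Token, error)
}
```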
- fixing the 'Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed.' error
The current kube manifests redirect all the logs into host-located log files; this PR uses the tee command to pipe into both the local logs (retaining the current behaviour) and docker stdout (which will be picked up by journald or whichever logging driver you're using). Note this also permits us to view the logs via the kubectl command.
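A hedged sketch of the resulting manifest command, as the Go manifest builders might assemble it; the fifo pattern, component flags and logfile path are assumptions rather than the PR's exact text:

```go
package main

import "fmt"

func main() {
	// A fifo lets the component keep PID 1 via exec while tee copies its
	// output to both the host logfile and the container's stdout.
	logfile := "/var/log/kube-apiserver.log"
	command := []string{
		"/bin/sh", "-c",
		fmt.Sprintf("mkfifo /tmp/pipe; (tee -a %s < /tmp/pipe &); "+
			"exec /usr/local/bin/kube-apiserver --v=2 > /tmp/pipe 2>&1", logfile),
	}
	fmt.Println(command)
}
```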
- renamed some of the files to make things cleaner
- redirecting the logs from the kubernetes components into local files and stdout
- cleaned up any vetting or linting errors I came across