a) The current implementation uses a static kubelet credential which does not conform to the Node authorization mode (i.e. usernames of the form system:node:<nodename>)
b) At present the kubeconfig is static and reused across all of the masters and nodes
This PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets on the masters have unique usernames. Note that this PR does not attempt to address the distribution of the bootstrap tokens themselves; that is left to cluster admins. One solution would be a daemonset running on the masters with hostNetwork, reusing dns-controller to annotate the pods and provide DNS for them.
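For illustration only, the sort of bootstrap token a cluster admin would distribute might look like the sketch below, assuming the standard Kubernetes bootstrap token Secret format; the token-id/token-secret values and the extra group name are placeholders, not anything this PR creates or requires.

```yaml
# Hypothetical example of a bootstrap token delivered by the cluster admin,
# using the standard bootstrap.kubernetes.io/token Secret format.
apiVersion: v1
kind: Secret
metadata:
  # bootstrap token Secrets must be named bootstrap-token-<token-id>
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef                          # placeholder 6-character id
  token-secret: 0123456789abcdef            # placeholder 16-character secret
  usage-bootstrap-authentication: "true"    # allow the token to authenticate kubelets
  usage-bootstrap-signing: "true"           # allow it to sign the cluster-info ConfigMap
  # nodes authenticating with this token land in this group, which an
  # authorizer or RBAC binding can then target (placeholder group name)
  auth-extra-groups: system:bootstrappers:kops:default-node-token
```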
Notes:
- the master nodes do not use bootstrap tokens; given they have access to the CA anyhow, we instead generate certificates for each of them.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration is brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (delivered by a third party); a sketch of such a bootstrap kubeconfig follows this list.
- given the nodeup docker and manifest builders are executed before the kubelet builder, the assumption here is that a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open additional ports between the two.
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/).
- we add somewhat presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone).
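To make the waiting-for-configuration note above concrete, a minimal sketch of the bootstrap kubeconfig a third party might drop onto a node is shown below, assuming the standard kubeconfig token format; the server address, file paths and token value are illustrative assumptions, not what nodeup actually writes.

```yaml
# Hypothetical bootstrap kubeconfig delivered to a node by a third party.
# The kubelet uses the token once to request its own credentials; under the
# Node authorization mode the resulting identity should be
# system:node:<nodename> in the system:nodes group (the masters are issued
# such certificates directly from the CA, as noted above).
apiVersion: v1
kind: Config
clusters:
- name: kops
  cluster:
    certificate-authority: /srv/kubernetes/ca.crt    # illustrative path
    server: https://api.internal.mycluster.example.com
users:
- name: kubelet-bootstrap
  user:
    token: abcdef.0123456789abcdef    # bootstrap token, <token-id>.<token-secret>
contexts:
- name: bootstrap
  context:
    cluster: kops
    user: kubelet-bootstrap
current-context: bootstrap
```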
I do have an additional PR which performs the entire flow. The process is a node_authorizer which runs on the master nodes via a daemonset; the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For aws, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
* Stop setting the Name tag on a shared subnet/VPC
* Stop setting the legacy KubernetesCluster tag on a shared subnet/VPC
when the cluster is new enough (>= 1.6); we rely on the shared tags instead
* Set tags on shared subnets; i.e. we _do_ set the shared tag on a
shared subnet; that is important for ELBs
* Set tags on shared VPCs; i.e. we _do_ set the shared tag on a shared
VPC; that tag is not currently used, but it is consistent with subnets.
* Add tests for shared subnets
This lets us use the new shared cluster tags for shared networking
objects, in particular subnets.
We continue to add the existing tags as well, for compatibility. When we
add direct management of shared networks, we will likely address that.
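Purely as an illustration of the tagging described above (the cluster name is a placeholder), the shared cluster tag follows the upstream kubernetes.io/cluster/<cluster-name> convention with the value shared, while KubernetesCluster is the legacy tag:

```yaml
# Illustrative tags on a shared subnet for a cluster named "mycluster.example.com".
# The shared cluster tag is the one ELB creation relies on:
kubernetes.io/cluster/mycluster.example.com: shared
# Legacy tag, still added for compatibility in most places, but per the notes
# above no longer set on shared subnets/VPCs for clusters >= 1.6:
# KubernetesCluster: mycluster.example.com
```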
- The previous method would have caused issues with the way tags are used
for filtering resources.
- Updated docs and comments to refer only to instance groups, rather
than all AWS resources
Users reported kubelet flags not being passed; moved more of this into code.
Also found and fixed the likely root-cause issue: we have two copies of
the cluster spec and were not being precise about which one we wanted to
use at all times.
* Zones are now subnets (see the spec sketch after this list)
* Utility subnet is no longer part of Zone
* Bastion InstanceGroup type added instead
* Etcd clusters defined in terms of InstanceGroups, not zones
* AdminAccess split into SSHAccess & APIAccess
* Dropped unused Multizone flag
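A hedged sketch of how these changes might surface in a cluster spec and an instance group is shown below; the field names follow the kops API as I understand it, and the cluster name, zones and CIDRs are illustrative only.

```yaml
# Hypothetical fragments illustrating the reworked API described above.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: mycluster.example.com
spec:
  subnets:                            # zones are now subnets
  - name: us-east-1a
    zone: us-east-1a
    type: Private
  - name: utility-us-east-1a          # utility subnet is its own entry, not part of a zone
    zone: us-east-1a
    type: Utility
  etcdClusters:
  - name: main
    etcdMembers:
    - name: a
      instanceGroup: master-us-east-1a    # etcd members reference instance groups
  sshAccess:                          # AdminAccess split into SSHAccess...
  - 203.0.113.0/24
  kubernetesApiAccess:                # ...and APIAccess
  - 203.0.113.0/24
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: bastions
  labels:
    kops.k8s.io/cluster: mycluster.example.com
spec:
  role: Bastion                       # the new Bastion instance group type
  subnets:
  - utility-us-east-1a
```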