Application credentials allow you to export a purpose-specific set of
credentials for a user instead of exposing the user's login credentials.
This is especially useful when using LDAP or similar for OpenStack users.
They also let you rotate credentials more easily, since multiple application
credentials can be provisioned per user.
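For illustration, a minimal sketch of authenticating against Keystone with an application credential via gophercloud; the endpoint, credential ID and secret are placeholders, not values from this change:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

func main() {
	// Authenticate with a purpose-specific application credential instead of
	// the user's login password; only the scoped credential is exposed.
	opts := gophercloud.AuthOptions{
		IdentityEndpoint:            "https://keystone.example.com/v3",  // placeholder
		ApplicationCredentialID:     "<application-credential-id>",      // placeholder
		ApplicationCredentialSecret: "<application-credential-secret>",  // placeholder
	}

	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		panic(err)
	}
	fmt.Println("authenticated against", provider.IdentityEndpoint)
}
```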
Update pkg/model/bootstrapscript.go
Co-authored-by: Ciprian Hacman <ciprianhacman@gmail.com>
As of k8s 1.16, the node-role label is protected for security reasons.
We will introduce a controller to set those labels generically.
However, we need these labels to run the controller (only) on master
nodes.
To solve this bootstrapping problem, we use protokube to apply the
master role node label to master nodes only. This isn't a
security problem because we assume that protokube on the master is
highly trusted - we are still administering labels centrally.
Then kops-controller can use this label to target the master nodes,
and run a central label controller.
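As a rough sketch (using the modern client-go signatures, not the actual protokube code), applying the protected label from the trusted process on the master might look like:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// labelMasterNode adds the protected master role label to a single node.
// Only the highly-trusted process on the master would call this.
func labelMasterNode(ctx context.Context, nodeName string) error {
	config, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}

	// Merge-patch the node to add the master role label.
	patch := []byte(`{"metadata":{"labels":{"node-role.kubernetes.io/master":""}}}`)
	_, err = clientset.CoreV1().Nodes().Patch(ctx, nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	// Placeholder node name.
	if err := labelMasterNode(context.Background(), "master-0.example.com"); err != nil {
		panic(err)
	}
}
```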
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
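Once glog is gone, the call will look roughly like this (a sketch, not code from this change):

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Registers klog's flags (-v, -logtostderr, ...) on the default FlagSet;
	// doing this today would collide with glog's identically named flags.
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	klog.Info("logging initialised")
}
```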
In 1.12 (kops & kubernetes):
* We default etcd-manager on
* We default to etcd3
* We default to full TLS for etcd (client and peer)
* We stop allowing external access to etcd
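A hypothetical sketch of what those defaults amount to (the type and field names are illustrative only, not the real kops API):

```go
package defaults

type etcdDefaults struct {
	Provider       string // which provisioner manages etcd
	Major          string // etcd major version
	ClientTLS      bool   // TLS for client connections
	PeerTLS        bool   // TLS for peer connections
	ExternalAccess bool   // whether etcd is reachable from outside the masters
}

func defaultEtcd() etcdDefaults {
	return etcdDefaults{
		Provider:       "etcd-manager", // etcd-manager on by default
		Major:          "3",            // etcd3 by default
		ClientTLS:      true,           // full TLS: client...
		PeerTLS:        true,           // ...and peer
		ExternalAccess: false,          // external access no longer allowed
	}
}
```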
- removed all the systemd unit creation and used the volume mount code from the kubelet (SafeFormatAndMount)
- added some documentation to highlight the feature and show how it might be used with both EBS and ephemeral storage
- updated the api specs and machinery
- added the dependencies on the services when the volume mounts are enabled (should probably disable this if they don't affect the docker filesystem)
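For reference, a minimal sketch of the kubelet helper in use (import paths have moved between releases; k8s.io/mount-utils is the current home, and the device and mount point below are placeholders):

```go
package main

import (
	mount "k8s.io/mount-utils"
	utilexec "k8s.io/utils/exec"
)

// mountVolume formats the device only if it is unformatted, then mounts it -
// the same behaviour the kubelet relies on.
func mountVolume(device, target, fstype string) error {
	mounter := &mount.SafeFormatAndMount{
		Interface: mount.New(""),
		Exec:      utilexec.New(),
	}
	return mounter.FormatAndMount(device, target, fstype, []string{"defaults"})
}

func main() {
	if err := mountVolume("/dev/xvdb", "/mnt/docker", "ext4"); err != nil {
		panic(err)
	}
}
```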
a) The current implementation uses a static kubelet credential which does not conform to the Node authorization mode (i.e. system:node:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
This PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and give us the DNS.
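To illustrate the per-node identity the Node authorizer expects, here is a sketch using only the standard library (the node name is a placeholder; nothing here comes from the PR itself):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
)

// nodeCSR builds a certificate request with the identity the Node authorizer
// expects: a unique username of the form system:node:<nodename> in the
// system:nodes group.
func nodeCSR(nodeName string) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	template := x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   fmt.Sprintf("system:node:%s", nodeName),
			Organization: []string{"system:nodes"},
		},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &template, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}), nil
}

func main() {
	csrPEM, err := nodeCSR("node-0.example.com") // placeholder node name
	if err != nil {
		panic(err)
	}
	fmt.Print(string(csrPEM))
}
```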
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap tokens are not enabled the behaviour stays the same; i.e. a kubelet configuration is brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (provided by a third party); see the sketch after this list.
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open additional ports between the two.
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add rather presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
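The timeout loop mentioned in the notes could look roughly like this (a sketch; the path, interval and deadline are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForBootstrapConfig polls until the kubelet bootstrap configuration has
// been dropped in place by the third party, giving up after the timeout.
func waitForBootstrapConfig(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	if err := waitForBootstrapConfig("/var/lib/kubelet/bootstrap-kubeconfig", 10*time.Minute); err != nil {
		panic(err)
	}
}
```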
I do have an additional PR which performs the entire thing. The process is a node_authorizer which runs on the master nodes via a daemonset; the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For aws, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
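A hypothetical sketch of that pluggable authorizer interface (the names are illustrative, not taken from the node_authorizer code itself):

```go
// Package authorizer is a hypothetical sketch of the pluggable authorizer
// described above; none of these names come from the actual implementation.
package authorizer

import "context"

// NodeRequest carries whatever a cloud-specific authorizer needs to verify
// that a bootstrapping node is legitimate, e.g. a signed AWS
// instance-identity document.
type NodeRequest struct {
	NodeName string
	Document []byte
}

// Authorizer decides whether a node should receive a bootstrap token.
// Implementations would include alwaysallow, aws and gce; the aws one would
// validate the instance-identity document much like Vault's AWS auth method.
type Authorizer interface {
	Authorize(ctx context.Context, req *NodeRequest) (bool, error)
}
```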