The current implementation does not put any transport security on the etcd cluster. The PR provides an optional flag to enable TLS on the etcd cluster (a rough client-TLS sketch follows the list below):
- cleaned up and fixed formatting issues along the way
- added two new certificates (server/client) for the etcd peers, and a client certificate for kube-apiserver and possibly other consumers (perhaps Calico?)
- disabled the protokube service on nodes entirely, as it is not required there; note this was first raised in https://github.com/kubernetes/kops/pull/3091, but it seemed easier to include here given how closely related the changes are
- updated the protokube codebase to reflect the changes, removing the master option as it is no longer required
- added additional integration tests for the protokube manifests
- note: documentation still needs to be added, but I'm opening the PR now to get feedback
- one outstanding issue is the migration from HTTP to HTTPS for pre-existing clusters; I'm going to ask on the CoreOS board for the best options
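For context, this is roughly how a client such as kube-apiserver could present the new etcd client certificate over TLS. It is a minimal Go sketch only; the certificate paths and the etcd endpoint/port are assumptions, not necessarily what kops generates:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"net/http"
)

// newEtcdClientTLS builds a tls.Config that presents the etcd client
// certificate and trusts the etcd CA. The file paths are illustrative only.
func newEtcdClientTLS(caFile, certFile, keyFile string) (*tls.Config, error) {
	caBytes, err := ioutil.ReadFile(caFile)
	if err != nil {
		return nil, fmt.Errorf("reading CA %q: %v", caFile, err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caBytes) {
		return nil, fmt.Errorf("no certificates found in %q", caFile)
	}
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("loading client keypair: %v", err)
	}
	return &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert},
	}, nil
}

func main() {
	tlsConfig, err := newEtcdClientTLS(
		"/srv/kubernetes/ca.crt",          // assumed CA path
		"/srv/kubernetes/etcd-client.crt", // assumed client cert path
		"/srv/kubernetes/etcd-client.key", // assumed client key path
	)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}
	resp, err := client.Get("https://127.0.0.1:4001/health") // assumed endpoint
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("etcd health status:", resp.Status)
}
```

The peer certificates would be wired up analogously for etcd's peer-to-peer traffic.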
We move everything into the models. We feature-flag it, because we probably want to change the names etc., and we aren't going to be able to offer smooth upgrades until that is done.
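As a rough illustration of what gating this behind a feature flag looks like (the environment variable and flag names below are made up for the sketch, not the ones kops actually uses):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// enabled reports whether a named flag appears in a comma-separated
// environment variable. Purely illustrative.
func enabled(name string) bool {
	for _, f := range strings.Split(os.Getenv("EXAMPLE_FEATURE_FLAGS"), ",") {
		if strings.TrimSpace(f) == name {
			return true
		}
	}
	return false
}

func main() {
	if enabled("EtcdInModels") {
		fmt.Println("building etcd manifests from the models")
	} else {
		fmt.Println("using the legacy etcd configuration")
	}
}
```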
Adds changes to support clustered etcd:
* Configure node names in DNS
* Parse annotations on the volume to infer the etcd configuration
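A minimal sketch of what "parse annotations on the volume to infer the etcd configuration" could look like; the tag prefix and the `<me>/<member1>,<member2>,...` value format are assumptions for illustration only:

```go
package main

import (
	"fmt"
	"strings"
)

// etcdSpec captures what is needed to build an etcd manifest for one
// cluster (e.g. "main" or "events").
type etcdSpec struct {
	ClusterKey string   // which etcd cluster this volume belongs to
	NodeName   string   // this member's name
	Members    []string // all member names in the cluster
}

// parseVolumeTags infers the etcd configuration from volume tags/annotations.
// The tag prefix and value format are illustrative assumptions.
func parseVolumeTags(tags map[string]string) ([]*etcdSpec, error) {
	const prefix = "k8s.io/etcd/"
	var specs []*etcdSpec
	for k, v := range tags {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		parts := strings.SplitN(v, "/", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("cannot parse etcd tag %q=%q", k, v)
		}
		specs = append(specs, &etcdSpec{
			ClusterKey: strings.TrimPrefix(k, prefix),
			NodeName:   parts[0],
			Members:    strings.Split(parts[1], ","),
		})
	}
	return specs, nil
}

func main() {
	tags := map[string]string{
		"k8s.io/etcd/main":   "etcd-a/etcd-a,etcd-b,etcd-c",
		"k8s.io/etcd/events": "etcd-events-a/etcd-events-a,etcd-events-b,etcd-events-c",
	}
	specs, err := parseVolumeTags(tags)
	if err != nil {
		panic(err)
	}
	for _, s := range specs {
		fmt.Printf("cluster=%s me=%s members=%v\n", s.ClusterKey, s.NodeName, s.Members)
	}
}
```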
Using annotations on the volumes to control which manifests are launched feels pretty powerful, though we could also just write the manifests to a central location (e.g. S3) and then sync them into the kubelet directory.
This also means we no longer have to directly spawn kubelet - we can now
just write the manifests.
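A minimal sketch of "just writing the manifests": drop a static pod manifest into the kubelet manifest directory atomically (temp file plus rename), so kubelet never sees a half-written file. The directory path is the conventional default and the manifest body is a placeholder:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// writeManifest atomically writes a static pod manifest into the kubelet
// manifest directory: write to a temp file in the same directory, then rename.
func writeManifest(dir, name string, data []byte) error {
	if err := os.MkdirAll(dir, 0755); err != nil {
		return err
	}
	tmp, err := ioutil.TempFile(dir, name+".tmp")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename has happened
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, name))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: etcd-server\n")
	if err := writeManifest("/etc/kubernetes/manifests", "etcd.manifest", manifest); err != nil {
		fmt.Fprintln(os.Stderr, "failed to write manifest:", err)
		os.Exit(1)
	}
}
```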
Working towards self-hosting of k8s, we will likely have to add some
features to kubelet, such as independent mounting of disks or copying of
resources from S3. protokube lets us develop those features prior to
moving them into kubelet.
In particular, today we need to mount an EBS volume on the master prior
to starting kubelet, if we want to run the master in an ASG.
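A simplified sketch of that mount step; the real protokube also has to locate and attach the EBS volume via the AWS API (and format it on first use), and the device name and mountpoint below are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// mountMasterVolume keeps trying to mount the master's data device until it
// succeeds, so kubelet is only started once the volume is available.
func mountMasterVolume(device, mountpoint string) error {
	if err := os.MkdirAll(mountpoint, 0755); err != nil {
		return err
	}
	for {
		out, err := exec.Command("mount", device, mountpoint).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Fprintf(os.Stderr, "mount %s failed: %v (%s); retrying\n", device, err, out)
		time.Sleep(10 * time.Second)
	}
}

func main() {
	// Device name and mountpoint are illustrative assumptions.
	if err := mountMasterVolume("/dev/xvdu", "/mnt/master-vol"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("master volume mounted; safe to start kubelet")
}
```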
protokube is a service that runs on boot, and it tries to mount the master
volume. Once the volume is mounted, it runs kubelet.
Currently it runs kubelet by looking at a directory
/etc/kubernetes/bootstrap; the intention is that we could actually have
multiple versions of kubelet in here (or other services) and then we
could automatically roll back from a failed update.
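A rough sketch of that boot flow: scan the bootstrap directory for kubelet versions, start the newest, and fall back to an older one if it fails. The per-version subdirectory layout is an assumption made for illustration:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"path/filepath"
	"sort"
)

// runFromBootstrapDir looks in the bootstrap directory for candidate kubelet
// installs and launches the newest one; falling back to an older entry on
// failure is where an automatic roll-back could hook in.
func runFromBootstrapDir(dir string) error {
	entries, err := ioutil.ReadDir(dir)
	if err != nil {
		return err
	}
	var versions []string
	for _, e := range entries {
		if e.IsDir() {
			versions = append(versions, e.Name())
		}
	}
	if len(versions) == 0 {
		return fmt.Errorf("no bootstrap entries found in %s", dir)
	}
	sort.Strings(versions)
	// Try the newest first; fall back to older entries if it fails.
	for i := len(versions) - 1; i >= 0; i-- {
		bin := filepath.Join(dir, versions[i], "kubelet")
		cmd := exec.Command(bin, "--pod-manifest-path=/etc/kubernetes/manifests")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		fmt.Println("starting", bin)
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s exited with error: %v; trying previous version\n", bin, err)
			continue
		}
		return nil
	}
	return fmt.Errorf("all kubelet versions in %s failed", dir)
}

func main() {
	if err := runFromBootstrapDir("/etc/kubernetes/bootstrap"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```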