We were doing this implicitly previously by creating the etcd records.
As etcd-manager doesn't need to create gossip records, we instead
create a zone explicitly.
The etcd-manager will (ideally) take over etcd management. To provide a
nice migration path, and because we want etcd backups, we're creating a
standalone image that just backs up etcd in the etcd-manager format.
This isn't really ready for actual usage, but should be harmless because
it runs as a sidecar container.
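For illustration only, a minimal sketch of how such a backup sidecar might be appended to the etcd pod manifest, using k8s.io/api/core/v1 types; the image name, command, and mount path are placeholders, not the real ones:

```go
package manifests

import (
	corev1 "k8s.io/api/core/v1"
)

// etcdBackupSidecar returns a container that could be appended to the etcd
// pod's containers. Everything below (image, command, volume name, path) is
// illustrative.
func etcdBackupSidecar(clusterName string) corev1.Container {
	return corev1.Container{
		Name:    "etcd-backup",
		Image:   "example.com/etcd-backup:latest", // placeholder image
		Command: []string{"/etcd-backup", "--data-dir=/var/etcd/data", "--cluster=" + clusterName},
		VolumeMounts: []corev1.VolumeMount{
			// a backup sidecar only needs read access to the etcd data volume
			{Name: "varetcdata", MountPath: "/var/etcd/data", ReadOnly: true},
		},
	}
}
```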
Fix up the local IP address discovery logic to recognize the newer en*
interface names, and to better log what it is doing. Plug it in for
bare-metal installations.
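A minimal sketch of what the discovery logic looks like, assuming the standard library net package and plain log output (the actual function names and logging in protokube may differ):

```go
package network

import (
	"fmt"
	"log"
	"net"
	"strings"
)

// findLocalIP walks the host's interfaces, skipping loopback, accepting the
// classic "eth*" names as well as the newer "en*" names (eno1, ens3, enp0s3,
// ...), and logging what it considers along the way.
func findLocalIP() (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, fmt.Errorf("error querying interfaces: %v", err)
	}

	for _, iface := range ifaces {
		name := iface.Name
		if iface.Flags&net.FlagLoopback != 0 {
			log.Printf("skipping loopback interface %q", name)
			continue
		}
		if !strings.HasPrefix(name, "eth") && !strings.HasPrefix(name, "en") {
			log.Printf("skipping interface %q: unrecognized name", name)
			continue
		}

		addrs, err := iface.Addrs()
		if err != nil {
			return nil, fmt.Errorf("error querying addresses on %q: %v", name, err)
		}
		for _, addr := range addrs {
			ipNet, ok := addr.(*net.IPNet)
			if !ok || ipNet.IP.To4() == nil {
				continue
			}
			log.Printf("found local address %s on interface %q", ipNet.IP, name)
			return ipNet.IP, nil
		}
	}
	return nil, fmt.Errorf("no suitable local IP address found")
}
```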
The current kube manifests redirect all the logs into host-located log files; this PR uses the tee command to pipe into both the local logs (retaining the current behaviour) and docker stdout (which will be picked up by journald or whichever logging you're using). Note this also permits us to view the logs via the kubectl command. (A minimal sketch of the command wrapping follows the list below.)
- renamed some of the files to make things cleaner
- redirecting the logs from the kubernetes components into both the local files and stdout
- cleaned up any vetting or linting errors I came across
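A minimal sketch of the command wrapping, assuming the binary and log path passed in (the real manifests carry the full flag set for each component):

```go
package manifests

import (
	"fmt"
	"strings"
)

// teeCommand wraps a component invocation so its output goes both to the
// existing host log file and to the container's stdout.
func teeCommand(binary, logFile string, args []string) []string {
	invocation := binary + " " + strings.Join(args, " ")
	// 2>&1 folds stderr into stdout; tee -a appends to the host log while
	// still writing to stdout for docker/journald (and `kubectl logs`).
	return []string{"/bin/sh", "-c", fmt.Sprintf("%s 2>&1 | tee -a %s", invocation, logFile)}
}
```

For example, teeCommand("/usr/local/bin/kube-apiserver", "/var/log/kube-apiserver.log", flags) produces a /bin/sh -c wrapper that keeps the host log file and also makes the output visible to `kubectl logs`.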
Automatic merge from submit-queue.
ETCD container mount /etc/hosts file
This PR just volume mounts the /etc/hosts file from the masters into the etcd-server containers. I'm not 100% sure this is the right approach to fixing the problem, but I was running into an issue where the etcd servers were no longer able to discover each other after performing a rolling-update. Running this version of protokube locally fixed that discovery issue for me after an update.
I saw that the kube-proxy manifest was already volume mounting the /etc/hosts file, so I figured this was kosher.
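For reference, a minimal sketch of the volume/mount pair this adds, expressed with k8s.io/api/core/v1 types (the volume name is arbitrary):

```go
package manifests

import corev1 "k8s.io/api/core/v1"

// addEtcHosts mounts the host's /etc/hosts read-only into the etcd-server
// container, so peer names resolved via /etc/hosts survive a rolling-update.
func addEtcHosts(pod *corev1.Pod, container *corev1.Container) {
	pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
		Name: "hosts",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
		},
	})
	container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
		Name:      "hosts",
		MountPath: "/etc/hosts",
		ReadOnly:  true,
	})
}
```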
- removing the StorageType on the etcd cluster spec (sticking with the Version field only)
- changed the protokube flag back to -etcd-image
- users have to explicitly set the etcd version now; the latest version in gcr.io is 3.0.17
- reverted the ordering on the populate spec
The current implementation runs etcd v2.2.1, which is two years old and end of life. This PR adds the ability to use etcd v3 and to set the version if required. Note that at the moment the image still comes from the gcr.io registry. As noted, much like TLS, there is presently no 'automated' migration path from v2 to v3. (A sketch of the version-to-image selection follows the list below.)
- the feature is gated behind the storageType of the etcd cluster spec; both the events and main clusters must use the same storage type
- the default for v2 is unchanged and pinned at v2.2.1, with v3 using v3.0.17
- @question: we should consider allowing the user to override the images, though I think this should be addressed more generically rather than as one-offs here and there. I know Chris is working on an asset registry?
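A sketch of how the image could be derived from the spec's etcd version; the defaults follow the values above (v2.2.1 pinned, gcr.io registry), but the exact image name is an assumption:

```go
package protokube

// etcdImage maps the cluster spec's etcd version to an image reference.
func etcdImage(version string) string {
	if version == "" {
		// v2 stays pinned at the previous default
		version = "2.2.1"
	}
	return "gcr.io/google_containers/etcd:" + version
}
```

So a spec that sets version 3.0.17 would run gcr.io/google_containers/etcd:3.0.17, while an unset version keeps today's behaviour.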
- added the master option back to protokube, updating the nodeup model and protokube code
- removed any comments not related to the PR, as suggested
- reverted the ordering of the mutex in the AWSVolumes in protokube
The current implementation does not put any transport security on the etcd cluster. This PR provides an optional flag to enable TLS on the etcd cluster. (A sketch of the extra etcd flags this implies follows the list below.)
- cleaned up and fixed any formatting issues along the way
- added two new certificates (server/client) for etcd peers and a client certificate for the kube-apiserver and perhaps others (calico?)
- disabled the protokube service for nodes completely as it is not required; note this was first raised in https://github.com/kubernetes/kops/pull/3091, but figured it would be easier to place in here given the relation
- updated the protokube codebase to reflect the changes, removing the master option as it's no longer required
- added additional integration tests for the protokube manifests
- note, still need to add documentation, but opening the PR to get feedback
- one outstanding issue is the migration from http -> https for preexisting clusters; I'm going to ask on the CoreOS board for the best options
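A sketch of the extra etcd flags that enabling TLS implies; the flags are the standard etcd TLS options, while the certificate paths are placeholders for wherever the new server/peer certificates land on the master:

```go
package protokube

// etcdTLSFlags returns the additional flags for a TLS-enabled etcd member.
func etcdTLSFlags(enableTLS bool) []string {
	if !enableTLS {
		return nil
	}
	return []string{
		// client-facing endpoint
		"--cert-file=/srv/kubernetes/etcd-server.pem",
		"--key-file=/srv/kubernetes/etcd-server-key.pem",
		"--trusted-ca-file=/srv/kubernetes/ca.crt",
		"--client-cert-auth",
		// peer-to-peer traffic
		"--peer-cert-file=/srv/kubernetes/etcd-peer.pem",
		"--peer-key-file=/srv/kubernetes/etcd-peer-key.pem",
		"--peer-trusted-ca-file=/srv/kubernetes/ca.crt",
		"--peer-client-cert-auth",
	}
}
```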
- new property is only used when KubernetesVersion is 1.6 or greater
- taints are passed to kubelet via the --register-with-taints flag (see the sketch after this list)
- Set a default NoSchedule taint on masters
- Set --register-schedulable=true when --register-with-taints is used
- Changed the log message in taints.go to be less alarming if taints are
found - since they are expected on 1.6.0+ clusters
- Added Taints section to the InstanceGroup docs
- Only default taints are allowed in the spec pre-1.6
- Custom taint validation happens as soon as IG specs are edited.
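A sketch of how the taints reach the kubelet command line; the default master taint shown is the usual 1.6-era node-role.kubernetes.io/master:NoSchedule, but treat the exact key as illustrative:

```go
package nodeup

import "strings"

// kubeletTaintFlags builds the taint-related kubelet flags for a node.
func kubeletTaintFlags(isMaster bool, taints []string) []string {
	if isMaster && len(taints) == 0 {
		// default NoSchedule taint on masters
		taints = []string{"node-role.kubernetes.io/master=:NoSchedule"}
	}
	if len(taints) == 0 {
		return nil
	}
	return []string{
		"--register-with-taints=" + strings.Join(taints, ","),
		// registering with taints requires the node to register itself
		"--register-schedulable=true",
	}
}
```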
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
* Change protokube to do `systemctl start kubelet` every sync round (sketched below)
** .. which takes a change to the systemd unit for protokube to mount in D-Bus
* Don't start kubelet in nodeup
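A minimal sketch of the per-sync-round check; `systemctl start` is idempotent, so calling it every loop just ensures kubelet is up without restarting it (it needs D-Bus visible inside the protokube container, hence the unit change):

```go
package protokube

import (
	"log"
	"os/exec"
)

// ensureKubeletRunning asks systemd to start kubelet; a no-op if it is
// already running.
func ensureKubeletRunning() error {
	output, err := exec.Command("systemctl", "start", "kubelet").CombinedOutput()
	if err != nil {
		log.Printf("error starting kubelet: %v: %s", err, output)
		return err
	}
	return nil
}
```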
If the instance restarted but lost the volume mount, there might be a
short or indefinite delay before protokube can mount the volume again.
But the etcd manifest would probably still be in
/etc/kubernetes/manifests from the previous run.
To ensure that kubelet doesn't run etcd until the volume is actually
mounted, we use a symlink to a directory on the volume itself. Thus
kubelet can't start etcd until we put the volume there. We can also
delete the symlink before mounting, so we have full control.
Issue #73
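A minimal sketch of the symlink step that runs once the volume is mounted (the delete of any stale link can equally happen before mounting); both paths are illustrative:

```go
package protokube

import (
	"os"
	"path/filepath"
)

// ensureManifestLink removes any stale symlink from a previous boot and then
// points the kubelet-watched path at a directory on the mounted volume, so
// the etcd manifest only becomes visible once the mount has succeeded.
func ensureManifestLink(volumeMountPoint string) error {
	target := filepath.Join(volumeMountPoint, "k8s-manifests")
	link := "/etc/kubernetes/manifests" // illustrative: the path kubelet watches

	// Delete the old symlink so kubelet cannot act on a leftover etcd manifest.
	if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(target, 0755); err != nil {
		return err
	}
	return os.Symlink(target, link)
}
```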
Adds changes to support clustered etcd:
* Configure node names in DNS
* Parse annotations on the volume to infer the etcd configuration (a parsing sketch follows below)
Using annotations on the volumes to control what manifests launch feels
pretty powerful. Though we could also just write the manifest to a
central location (e.g. S3) and then sync them into the kubelet
directory.
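For illustration, a sketch of the kind of tag/annotation parsing this implies; the "nodeName/peer1,peer2,peer3" value format is an assumption used here only to show the idea:

```go
package protokube

import (
	"fmt"
	"strings"
)

// etcdClusterSpec is the minimal information inferable from a volume tag.
type etcdClusterSpec struct {
	NodeName  string   // which member this volume belongs to
	NodeNames []string // all members of the cluster
}

// parseEtcdTag parses a value such as "etcd-1/etcd-1,etcd-2,etcd-3".
func parseEtcdTag(value string) (*etcdClusterSpec, error) {
	parts := strings.SplitN(value, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return nil, fmt.Errorf("unexpected etcd tag format: %q", value)
	}
	return &etcdClusterSpec{
		NodeName:  parts[0],
		NodeNames: strings.Split(parts[1], ","),
	}, nil
}
```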
This also means we no longer have to directly spawn kubelet - we can now
just write the manifests.
Working towards self-hosting of k8s, we will likely have to add some
features to kubelet, such as independent mounting of disks or copying of
resources from S3. protokube lets us develop those features prior to
moving them into kubelet.
In particular, today we need to mount an EBS volume on the master prior
to starting kubelet, if we want to run the master in an ASG.
protokube is a service that runs on boot, and it tries to mount the
master volume. Once it mounts the master volume, it runs kubelet.
Currently it runs kubelet by looking at a directory
/etc/kubernetes/bootstrap; the intention is that we could actually have
multiple versions of kubelet in here (or other services) and then we
could automatically roll back from a failed update.
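A highly simplified sketch of that boot behaviour, with the mount and kubelet launch abstracted behind placeholder functions:

```go
package protokube

import (
	"log"
	"time"
)

// bootLoop keeps trying to mount the master volume and only hands control to
// kubelet once the mount succeeds.
func bootLoop(mountMasterVolume func() (string, error), runKubelet func(bootstrapDir string) error) error {
	for {
		mountPoint, err := mountMasterVolume()
		if err == nil {
			log.Printf("master volume mounted at %s; starting kubelet", mountPoint)
			return runKubelet("/etc/kubernetes/bootstrap")
		}
		log.Printf("master volume not yet available (%v); retrying", err)
		time.Sleep(10 * time.Second)
	}
}
```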