- removed the StorageType from the etcd cluster spec (sticking with the Version field only)
- changed the protokube flag back to -etcd-image
- users now have to set the etcd version explicitly; the latest version in gcr.io is 3.0.17
- reverted the ordering on the populate spec
The current implementation runs etcd v2.2.1, which is two years old and end of life. This PR adds the ability to use etcd v3 and to set the version if required (a sketch follows the notes below). Note that at the moment the image still comes from the gcr.io registry. As a note, much like TLS, there is presently no 'automated' migration path from v2 to v3.
- the feature is gated behind the storageType of the etcd cluster; both clusters, events and main, must use the same storage type
- the version for v2 is unchanged and pinned at v2.2.1, with v3 using v3.0.17
- @question: we should consider allowing the user to override the images, though I think this should be addressed more generically rather than as one-offs here and there. I know Chris is working on an asset registry??
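A minimal sketch of selecting a version in the cluster spec, assuming the v1alpha2 API; the member and instance group names are illustrative:

```yaml
etcdClusters:
- name: main
  version: 3.0.17            # omit to keep the pinned v2.2.1 default
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
- name: events
  version: 3.0.17            # events and main should use the same version
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
```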
The current implementation does not put any transport security on the etcd cluster. This PR provides an optional flag to enable TLS on the etcd cluster (see the sketch after the notes below).
- cleaned up and fixed formatting issues along the way
- added two new certificates (server/client) for etcd peers, and a client certificate for kubeapi and potentially others (calico perhaps?)
- disabled the protokube service for nodes completely, as it is not required on them; note this was first raised in https://github.com/kubernetes/kops/pull/3091, but it seemed easier to include here given the relation
- updated the protokube codebase to reflect the changes, removing the master option as it's no longer required
- added additional integration tests for the protokube manifests
- note: still need to add documentation, but opening the PR now to get feedback
- one outstanding issue is the migration from http -> https for pre-existing clusters; I'm going to ask on the CoreOS board for the best options
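As a rough sketch, enabling TLS would look something like the following in the cluster spec, assuming a per-cluster enableEtcdTLS boolean as introduced here; the exact field name and layout may shift before merge:

```yaml
etcdClusters:
- name: main
  enableEtcdTLS: true        # peers and clients use the new server/client certificates
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
- name: events
  enableEtcdTLS: true
  etcdMembers:
  - name: a
    instanceGroup: master-us-east-1a
```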
When looking for a zone, we still match by name, but now only match private zones when running with --dns private, and public zones with --dns public (sketched below). We log if we find a zone that matches by name but not by type.
Requires https://github.com/kubernetes/kubernetes/pull/40197
Issue #1522
Issue #1468
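For context, --dns private corresponds to the private DNS topology in the cluster spec, which is the zone type the lookup now insists on; the zone name here is illustrative:

```yaml
spec:
  dnsZone: example.com   # matched by name
  topology:
    dns:
      type: Private      # and now also by type (--dns private)
```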
We move everything to the models. We feature-flag it, because we probably want to change the names etc., and we aren't going to be able to offer smooth upgrades until that is done.
Users reported kubelet flags not being passed; moved more of the flag handling to code.
Also found & fixed the likely root-cause issue: we have two copies of
the cluster spec and were not being precise about which one we wanted to
use at all times.
* Zones are now subnets (see the sketch after this list)
* Utility subnet is no longer part of Zone
* Bastion InstanceGroup type added instead
* Etcd clusters defined in terms of InstanceGroups, not zones
* AdminAccess split into SSHAccess & APIAccess
* Dropped unused Multizone flag
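In cluster spec terms, the new shape looks roughly like this; names, zones, and CIDRs are illustrative, and the access field names follow this change and may differ in detail:

```yaml
spec:
  subnets:                     # zones are now named subnets
  - name: us-east-1a
    zone: us-east-1a
    type: Private
  - name: utility-us-east-1a   # the utility subnet is its own entry
    zone: us-east-1a
    type: Utility
  sshAccess:                   # split out of the old adminAccess
  - 203.0.113.0/24
  apiAccess:
  - 203.0.113.0/24
  etcdClusters:
  - name: main
    etcdMembers:               # members reference instance groups, not zones
    - name: a
      instanceGroup: master-us-east-1a
---
kind: InstanceGroup            # a bastion is now just an instance group role
metadata:
  name: bastion
spec:
  role: Bastion
```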
It looks like it got lost in a refactor. Add a unit test, and move
initialization to code (and have the code self-check as well).
Also we can now have a fairly long code comment about the reasons why
this is such a mess...
Fixes #371
This is a breaking change for people using the API (sorry), but is
hopefully a simple search and replace:
"k8s.io/kops/upup/pkg/api"
-> api "k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/upup/pkg/api/registry"
-> "k8s.io/kops/pkg/apis/kops/registry"
This is the "correct" place for it in the k8s API infrastructure - we
are working towards a versioned API here.
This is the only remaining "magic" setting, which changes externally as new versions of k8s are released. (The AMIs are locked in the instance groups.)
We now record the current KubernetesVersion in the cluster spec. It can still easily be changed if needed, but this also means we don't need to make "fake" changes.
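Concretely, the recorded value sits in the cluster spec like any other field; the version number here is illustrative:

```yaml
spec:
  kubernetesVersion: 1.5.2   # recorded at creation; edit to upgrade
```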