We tend to build cloud, call some method, and then build cloud all over
again. It would be simpler to just pass the first instance along.
Passing cloud along would also make it easier to mock cloud in tests.
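A minimal sketch of the refactor, assuming a kops-style BuildCloud helper; the names below are illustrative, not the exact call sites:

    package example

    import "fmt"

    // Cloud stands in for the fi.Cloud interface; a narrow interface like
    // this is easy to substitute with a fake in tests.
    type Cloud interface {
        Region() string
    }

    // Before: each helper rebuilt cloud itself (expensive, and hard to mock).
    func validateClusterOld(clusterName string) error {
        cloud, err := buildCloud(clusterName)
        if err != nil {
            return err
        }
        fmt.Println("validating", clusterName, "in", cloud.Region())
        return nil
    }

    // After: the caller builds cloud once and passes it along.
    func validateCluster(cloud Cloud, clusterName string) error {
        fmt.Println("validating", clusterName, "in", cloud.Region())
        return nil
    }

    func buildCloud(clusterName string) (Cloud, error) { return nil, fmt.Errorf("elided") }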
Use provider-agnostic node definition for cas instead of AWS auto-discovery
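To illustrate the difference, a hedged sketch of building the provider-agnostic flags (--nodes is a real cluster-autoscaler flag; the InstanceGroup fields are stand-ins):

    package example

    import "fmt"

    // InstanceGroup is an illustrative stand-in for the kops instance group.
    type InstanceGroup struct {
        Name    string
        MinSize int
        MaxSize int
    }

    // casNodeArgs emits --nodes=<min>:<max>:<name> for each group, the
    // provider-agnostic alternative to --node-group-auto-discovery=asg:tag=...
    func casNodeArgs(groups []InstanceGroup) []string {
        var args []string
        for _, ig := range groups {
            args = append(args, fmt.Sprintf("--nodes=%d:%d:%s", ig.MinSize, ig.MaxSize, ig.Name))
        }
        return args
    }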
Validate clusterAutoscalerSpec
Add spec documentation
Add cas docs
Make CRDs
Apply suggestions from code review
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
Add enabled flag to cas config
Apply suggestions from code review
Co-authored-by: Guy Templeton <guyjtempleton@googlemail.com>
Add support for custom cas image
Support more k8s versions
Use full image names
When the PublicJWKS feature-flag is set, we expose the apiserver JWKS
document publicly (including enabling anonymous access). This is a
stepping stone to a more hardened configuration where we copy the JWKS
document to S3/GCS/etc.
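A sketch of what the gate looks like, assuming kops' pkg/featureflag package; the apiserver wiring below is illustrative, not the exact code path:

    package example

    import "k8s.io/kops/pkg/featureflag"

    // PublicJWKS mirrors the flag described above (off by default).
    var PublicJWKS = featureflag.New("PublicJWKS", featureflag.Bool(false))

    // apiServerFlags shows the kind of change the flag gates: anonymous auth
    // must be enabled so unauthenticated clients can fetch
    // /.well-known/openid-configuration and /openid/v1/jwks.
    func apiServerFlags() []string {
        var flags []string
        if PublicJWKS.Enabled() {
            flags = append(flags, "--anonymous-auth=true")
        }
        return flags
    }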
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
This is causing problems with the Kubernetes 1.19 code-generator.
A nil entry in these slices wouldn't be valid anyway, so this should have no impact.
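A guess at the shape of the change, with a placeholder type name:

    package example

    type HookSpec struct{ Name string }

    // Before: a slice of pointers, which the 1.19 code-generator chokes on.
    type specBefore struct {
        Hooks []*HookSpec `json:"hooks,omitempty"`
    }

    // After: a slice of values; a nil entry was never valid here anyway.
    type specAfter struct {
        Hooks []HookSpec `json:"hooks,omitempty"`
    }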
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
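A sketch of the collision: glog registers -v, -logtostderr, etc. on the global FlagSet in its init(), so calling klog.InitFlags while glog is still linked in registers the same names twice and panics with "flag redefined".

    package main

    import (
        "flag"

        "k8s.io/klog"
        // _ "github.com/golang/glog" // with glog still linked in, the
        // klog.InitFlags call below would panic: "flag redefined: v"
    )

    func main() {
        klog.InitFlags(nil) // safe only once nothing imports glog
        flag.Parse()
        klog.Info("logging via klog")
    }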
We still need the reflect helpers, but we allow clients to
register their own pretty-printers, which avoids the package
dependency for our pretty-printer. We register our pretty-printers in
an init function in the relevant package (in this case,
upup/pkg/fi/printers.go).
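A sketch of the registration pattern; the function names are illustrative rather than the exact kops API:

    package printers

    import (
        "fmt"
        "reflect"
    )

    // PrettyPrinter tries to render v; ok=false means "not mine, try the next".
    type PrettyPrinter func(v reflect.Value) (s string, ok bool)

    var registered []PrettyPrinter

    // Register is called from an init() in the package that owns the types
    // (e.g. upup/pkg/fi/printers.go), so the reflect helpers never have to
    // import that package.
    func Register(p PrettyPrinter) {
        registered = append(registered, p)
    }

    // Print falls back to fmt if no registered printer claims the value.
    func Print(v reflect.Value) string {
        for _, p := range registered {
            if s, ok := p(v); ok {
                return s
            }
        }
        return fmt.Sprintf("%v", v)
    }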
Fix #5551
We also have to stop passing the flag on ContainerOS, because it's set
in /etc/docker/default.json and it's now an error to pass the flag.
That in turn means we move those options into code; they are the last of
the legacy config options. (We still have a few tasks declaratively
defined, though.)
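A hedged sketch of the per-distro gating this implies; the distro constant and the flag name below are placeholders, not the real option:

    package example

    // Distribution is an illustrative stand-in for the nodeup distro type.
    type Distribution string

    const DistributionContainerOS Distribution = "containeros"

    func dockerFlags(distro Distribution) []string {
        var flags []string
        if distro != DistributionContainerOS {
            // On ContainerOS the option is already set in
            // /etc/docker/default.json, and passing it again is an error.
            flags = append(flags, "--placeholder-option=value")
        }
        return flags
    }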
- removed the StorageType on the etcd cluster spec (sticking with the Version field only; see the sketch below)
- changed the protokube flag back to -etcd-image
- users have to explicitly set the etcd version now; the latest version in gcr.io is 3.0.17
- reverted the ordering on the populate spec
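Illustrative shape of the spec fields after this change, trimmed down from kops' EtcdClusterSpec (treat this as a sketch, not the full type):

    package example

    // EtcdClusterSpec after this change: StorageType is gone, Version stays.
    type EtcdClusterSpec struct {
        Name    string `json:"name,omitempty"`
        Version string `json:"version,omitempty"` // must be set explicitly, e.g. "3.0.17"
    }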
The current implementation runs v2.2.1, which is two years old and end of life. This PR adds the ability to use etcd v3 and to set the version if required. Note that at the moment the image still comes from the gcr.io registry. Also note that, much like TLS, there is presently no 'automated' migration path from v2 to v3.
- the feature is gated behind the storageType of the etcd cluster; both the events and main clusters must use the same storage type (see the validation sketch after this list)
- the version for v2 is unchanged and pinned at v2.2.1, with v3 using v3.0.17
- @question: we should consider allowing the user to override the images, though I think this should be addressed more generically rather than as one-offs here and there. I know Chris is working on an asset registry??
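A guess at the "same storage type" validation the gating implies; the struct is a trimmed, illustrative version of the etcd cluster spec at this point:

    package example

    import "fmt"

    type etcdCluster struct {
        Name        string // "main" or "events"
        StorageType string // "etcd2" or "etcd3"
    }

    // validateEtcdStorage enforces that all clusters use the same storage type.
    func validateEtcdStorage(clusters []etcdCluster) error {
        if len(clusters) == 0 {
            return nil
        }
        want := clusters[0].StorageType
        for _, c := range clusters[1:] {
            if c.StorageType != want {
                return fmt.Errorf("etcd cluster %q uses storage type %q, but %q uses %q; they must match",
                    c.Name, c.StorageType, clusters[0].Name, want)
            }
        }
        return nil
    }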
The current implementation does not put any transport security on the etcd cluster. This PR provides an optional flag to enable TLS on the etcd cluster (flag wiring sketched after this list).
- cleaned up and fixed any formatting issues along the way
- added two new certificates (server/client) for etcd peers, and a client certificate for kube-apiserver and perhaps others (calico?)
- disabled the protokube service for nodes completely, as it is not required; note this was first raised in https://github.com/kubernetes/kops/pull/3091, but figured it would be easier to place it here given the relation
- updated the protokube codebase to reflect the changes, removing the master option as it's no longer required
- added additional integration tests for the protokube manifests
- note, documentation still needs to be added, but opening the PR now to get feedback
- one outstanding issue is the migration from http -> https for preexisting clusters; I'm going to ask on the CoreOS board for the best options
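A hedged sketch of the flag wiring when TLS is enabled; these are real etcd flags, but the paths and the gating are illustrative:

    package example

    // etcdTLSArgs returns the extra etcd flags when the optional TLS flag is on.
    func etcdTLSArgs(enableTLS bool) []string {
        if !enableTLS {
            return nil
        }
        return []string{
            // server certificate for client connections
            "--cert-file=/srv/kubernetes/etcd-server.pem",
            "--key-file=/srv/kubernetes/etcd-server-key.pem",
            "--client-cert-auth=true",
            "--trusted-ca-file=/srv/kubernetes/ca.crt",
            // peer certificates for cluster-internal traffic
            "--peer-cert-file=/srv/kubernetes/etcd-peer.pem",
            "--peer-key-file=/srv/kubernetes/etcd-peer-key.pem",
            "--peer-client-cert-auth=true",
            "--peer-trusted-ca-file=/srv/kubernetes/ca.crt",
        }
    }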
When looking for a zone, match by name, but only match private
zones if running with --dns private, and public zones with --dns public.
We log if we find a zone that matches by name but not by type (sketched below).
Requires https://github.com/kubernetes/kubernetes/pull/40197
Issue #1522
Issue #1468
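A sketch of the matching rule; Zone is an illustrative type, not the DNS provider's:

    package example

    import "log"

    type Zone struct {
        Name    string
        Private bool
    }

    // findZone matches by name, but only accepts a zone whose visibility
    // matches the --dns flag (wantPrivate), logging near-misses.
    func findZone(zones []Zone, name string, wantPrivate bool) *Zone {
        for i := range zones {
            z := &zones[i]
            if z.Name != name {
                continue
            }
            if z.Private != wantPrivate {
                log.Printf("zone %q matches by name but not by type (private=%v); skipping", z.Name, z.Private)
                continue
            }
            return z
        }
        return nil
    }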
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
User reports of kubelet flags not being passed; moved more to code.
Also found & fixed the likely root-cause issue: we have two copies of
the cluster spec and were not being precise about which one we wanted to
use at all times.