Generate and upload keys.json + discovery.json to public store
Don't enable anonymous auth on PublicJWKS
Remove tests that no longer work with the FS VFS
When the PublicJWKS feature flag is set, we expose the apiserver JWKS
document publicly (including enabling anonymous access). This is a
stepping stone to a more hardened configuration where we copy the JWKS
document to S3/GCS/etc.
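As a rough sketch of what a keys.json looks like (names, helpers and the key-ID convention here are illustrative assumptions, not the actual kops code), the JWKS document can be derived from the public half of the service-account signing key:

```go
// Minimal sketch: build a JWKS ("keys.json") document from an RSA public key.
// Illustrative only; not the kops implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"math/big"
)

type jwk struct {
	Kty string `json:"kty"`
	Alg string `json:"alg"`
	Use string `json:"use"`
	Kid string `json:"kid"`
	N   string `json:"n"`
	E   string `json:"e"`
}

type jwks struct {
	Keys []jwk `json:"keys"`
}

func keysJSON(pub *rsa.PublicKey) ([]byte, error) {
	// Key ID: hash of the DER-encoded public key (one common convention).
	der, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		return nil, err
	}
	sum := sha256.Sum256(der)

	b64 := base64.RawURLEncoding
	doc := jwks{Keys: []jwk{{
		Kty: "RSA",
		Alg: "RS256",
		Use: "sig",
		Kid: b64.EncodeToString(sum[:]),
		N:   b64.EncodeToString(pub.N.Bytes()),
		E:   b64.EncodeToString(big.NewInt(int64(pub.E)).Bytes()),
	}}}
	return json.MarshalIndent(doc, "", "  ")
}

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	out, _ := keysJSON(&key.PublicKey)
	fmt.Println(string(out))
	// This document, plus an OIDC discovery document pointing at it,
	// is what gets uploaded to the public store (e.g. S3/GCS) later.
}
```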
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
kube-apiserver doesn't expose the healthcheck via a dedicated
endpoint, instead relying on anonymous access being enabled. That
has previously forced us to enable the unauthenticated endpoint on
127.0.0.1:8080.
Instead we now run a small sidecar container, which proxies /healthz
and /readyz requests (only), adding appropriate authentication using a
client certificate.
This will also enable better load balancer checks in future, as these
have previously been hampered by the custom CA certificate.
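The shape of that sidecar, as a sketch (file paths, the listen port and the upstream address are assumptions for illustration, not the actual kops sidecar):

```go
// Tiny proxy: forwards only /healthz and /readyz to the apiserver,
// authenticating with a client certificate signed by the cluster CA.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
)

func main() {
	clientCert, err := tls.LoadX509KeyPair("/secrets/healthcheck.crt", "/secrets/healthcheck.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("/secrets/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	target, _ := url.Parse("https://127.0.0.1:443")
	proxy := httputil.NewSingleHostReverseProxy(target)
	proxy.Transport = &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{clientCert},
		RootCAs:      caPool, // trust the cluster's custom CA
	}}

	mux := http.NewServeMux()
	// Only the two health endpoints are exposed; everything else 404s.
	mux.Handle("/healthz", proxy)
	mux.Handle("/readyz", proxy)

	// Plain HTTP listener that a load balancer or local probe can hit
	// without needing the custom CA or any credentials.
	log.Fatal(http.ListenAndServe(":3990", mux))
}
```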
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
Previously, when setting the external cloud controller manager
configuration, the core components `kubelet`, `apiserver` and
`kube-controller-manager` were configured to use the external cloud
controller manager. Without setting the feature flag
EnableExternalCloudController, this led to a cluster in which the
masters had the cloud controller taint
`node.cloudprovider.kubernetes.io/uninitialized`, which prevents
essential pods, like dns-controller, from being scheduled and leaves a
cluster where worker nodes can't connect to the API server because they
cannot resolve its hostname.
Image names from 1.16 on include an architecture suffix,
e.g. "-amd64"; the generic alias continues to work when pulling, but
when loading from a tarball (i.e. when running in CI) we must use the
per-architecture name.
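A sketch of the naming rule (the repository/tag values are illustrative, not taken from the CI code):

```go
// When loading from a tarball we must reference the per-architecture
// image name; the generic alias only resolves when pulling.
package main

import (
	"fmt"
	"runtime"
)

// imageNameForLoad returns the name to use when loading from a tarball.
func imageNameForLoad(repo, tag string) string {
	// From 1.16 on the published images carry an architecture suffix,
	// e.g. kube-apiserver-amd64.
	return fmt.Sprintf("%s-%s:%s", repo, runtime.GOARCH, tag)
}

func main() {
	fmt.Println(imageNameForLoad("k8s.gcr.io/kube-apiserver", "v1.16.0"))
	// k8s.gcr.io/kube-apiserver-amd64:v1.16.0 (on amd64)
}
```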
a) The current implementation uses a static kubelet identity which does not conform to the Node authorization mode (i.e. system:node:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
This PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and give us the DNS name.
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration is brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (delivered by a third party); a rough sketch of that wait follows these notes.
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is that a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open ports between the two.
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add rather presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
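The node-side wait mentioned above, as a sketch (the kubeconfig path, timeout and polling interval are assumptions for illustration, not the actual nodeup code):

```go
// When bootstrap tokens are enabled, the node loops with a timeout,
// waiting for a third party to drop the bootstrap kubeconfig in place
// before the kubelet is started.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForBootstrapKubeconfig(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // configuration has appeared; kubelet can start
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	if err := waitForBootstrapKubeconfig("/var/lib/kubelet/bootstrap-kubeconfig", 10*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```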
I do have an additional PR which performs the entire flow. The process is a node_authorizer which runs on the master nodes via a daemonset; the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For AWS, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.