- switching to code rather than a template for the systemd unit creation, as requested in review (see the sketch after this list)
- as part of the review, changing the name of the CA from tls-ca to tls-client-ca
- changing the API from DisableAddressCheck to EnableAddressCheck, defaulting to true if not set
- fixing up the test for node-authorizer and, as suggested in review, shifting the parsing of the certificates into a method
- including the config only when there is something to include, i.e. no nulls please
- fixing up the pod security policies for the system:nodes group, which needs a mapping to permit manifests
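For reference, a minimal sketch of what the first and third items above can look like, assuming kops's pkg/systemd Manifest helper and the usual pointer-to-bool convention for optional API fields; the unit contents and function names here are illustrative, not the actual node-authorizer code:

```go
package model

import "k8s.io/kops/pkg/systemd"

// addressCheckEnabled implements the EnableAddressCheck default: a nil
// pointer (field unset) means the check is on. (Illustrative sketch.)
func addressCheckEnabled(enableAddressCheck *bool) bool {
	if enableAddressCheck == nil {
		return true
	}
	return *enableAddressCheck
}

// buildNodeAuthorizerUnit assembles the unit file section by section in
// code, rather than interpolating a text template. (Illustrative sketch.)
func buildNodeAuthorizerUnit() string {
	manifest := &systemd.Manifest{}
	manifest.Set("Unit", "Description", "Node Authorizer Service")
	manifest.Set("Service", "ExecStart", "/usr/bin/node-authorizer serve")
	manifest.Set("Service", "Restart", "always")
	manifest.Set("Install", "WantedBy", "multi-user.target")

	return manifest.Render()
}
```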
For AWS, the nodes are verified in the following manner:
the PKCS7-signed identity document, available only from the node metadata service, is passed in the request.
the instance is checked as running, inside the region, and part of the cluster via a tag, and then we check that the node is not already registered.
assuming all the checks pass, a bootstrap token is issued (this is auto-approval at present, though thinking about it, I should add it as an optional thing)
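A hedged sketch of the verification step described above, assuming the fullsailor/pkcs7 package; the type and function names are illustrative, not the actual node-authorizer code:

```go
package main

import (
	"crypto/x509"
	"encoding/json"
	"encoding/pem"
	"fmt"

	"github.com/fullsailor/pkcs7"
)

// identityDocument holds the fields of the EC2 instance identity document
// that the running/region/cluster checks rely on.
type identityDocument struct {
	InstanceID string `json:"instanceId"`
	Region     string `json:"region"`
}

// verifyIdentityDocument validates the PKCS7 signature against the AWS
// public certificate and returns the embedded identity document.
func verifyIdentityDocument(signed, awsCertPEM []byte) (*identityDocument, error) {
	// The metadata service returns the PKCS7 blob without PEM headers.
	wrapped := fmt.Sprintf("-----BEGIN PKCS7-----\n%s\n-----END PKCS7-----", signed)
	block, _ := pem.Decode([]byte(wrapped))
	if block == nil {
		return nil, fmt.Errorf("unable to decode PKCS7 payload")
	}
	p7, err := pkcs7.Parse(block.Bytes)
	if err != nil {
		return nil, err
	}
	certBlock, _ := pem.Decode(awsCertPEM)
	if certBlock == nil {
		return nil, fmt.Errorf("unable to decode AWS certificate")
	}
	cert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		return nil, err
	}
	// The document does not embed the signing certificate, so supply it.
	p7.Certificates = []*x509.Certificate{cert}
	if err := p7.Verify(); err != nil {
		return nil, err
	}
	doc := &identityDocument{}
	if err := json.Unmarshal(p7.Content, doc); err != nil {
		return nil, err
	}
	return doc, nil
}
```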
I've tried to break up the commits in a logical order to make reviewing easier.
- adding the node authorization api specification 5dbf275 (sketched after this list)
- consuming the node authorization api spec in nodeup binary 6e32f24
- adding the options builder to fill in the model
- adding the spec into the bootstrap config
- adding the node authorization service into kops ad5c654. note there is no code cross-over; it's a completely independent service and could technically live in another repo
- adding the dependencies for the node authorization service 2e3f279
- adding the node authorization addon deployment manifest ff31be1
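As a rough illustration of the API specification added above, in the style of the kops API types; the field names here are a sketch and may not match the merged spec exactly:

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// NodeAuthorizationSpec wires the node authorization service into the
// cluster spec. (Sketch; may differ from the merged types.)
type NodeAuthorizationSpec struct {
	NodeAuthorizer *NodeAuthorizerSpec `json:"nodeAuthorizer,omitempty"`
}

// NodeAuthorizerSpec defines the configuration for the node authorizer.
type NodeAuthorizerSpec struct {
	// Authorizer is the authorizer to use (e.g. alwaysallow, aws).
	Authorizer string `json:"authorizer,omitempty"`
	// Image is the container image for the node authorization service.
	Image string `json:"image,omitempty"`
	// NodeURL is the node authorization service URL the kubelets call.
	NodeURL string `json:"nodeURL,omitempty"`
	// Port is the port the service listens on on the masters.
	Port int `json:"port,omitempty"`
	// Timeout is the max time an authorization request may take.
	Timeout *metav1.Duration `json:"timeout,omitempty"`
	// TokenTTL is the max TTL of an issued bootstrap token.
	TokenTTL *metav1.Duration `json:"tokenTTL,omitempty"`
}
```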
This commit does the following two changes:
1. Changes the default calico mtu to 8198.
2. Enables setting the mtu explicitly in the config as:
```
networking:
  calico:
    mtu: 2048
```
Testing done:
1. Created a cluster on AWS with networking set to calico and no additional options provided. Verified that the mtu was set to 8198. Also verified that the FELIX_IPINIPMTU environment variable was set to 8198.
2. Created a cluster explicitly setting the calico mtu to 2048. Verified that the mtu for the 'cali*' interfaces inside the pods was set to 2048. Also, verified that the FELIX_IPINIPMTU environment variable was set to 2048.
3. make test passed.
Closes #4042
We delete old AWS LaunchConfigurations when we see that we have more
than 3. We add a feature flag, KeepLaunchConfigurations, to disable this
functionality for backwards compatibility.
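A hedged sketch of the pruning rule guarded by the new flag; the function and constant names are illustrative, and the featureflag wiring is assumed from kops's featureflag package:

```go
package awstasks

import (
	"sort"

	"github.com/aws/aws-sdk-go/service/autoscaling"

	"k8s.io/kops/pkg/featureflag"
)

// KeepLaunchConfigurations disables the deletion of old LaunchConfigurations
// when set, preserving the previous behaviour.
var KeepLaunchConfigurations = featureflag.New("KeepLaunchConfigurations", featureflag.Bool(false))

const retainLaunchConfigurations = 3

// launchConfigurationsToDelete returns everything beyond the newest 3,
// unless the feature flag asks us to keep them all. (Illustrative sketch.)
func launchConfigurationsToDelete(lcs []*autoscaling.LaunchConfiguration) []*autoscaling.LaunchConfiguration {
	if KeepLaunchConfigurations.Enabled() || len(lcs) <= retainLaunchConfigurations {
		return nil
	}
	// Sort newest first, then mark the tail for deletion.
	sort.Slice(lcs, func(i, j int) bool {
		return lcs[i].CreatedTime.After(*lcs[j].CreatedTime)
	})
	return lcs[retainLaunchConfigurations:]
}
```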
Fixes #329
The wait for this is very long by default (90s), which is long enough that many users may assume things are hanging if we don't say what they're waiting for.
- Not sure how I missed this, but the options builder is run before the validation, which will always cause an issue (we need to add a warning instead)
- given that if the user is already using the AdmissionControllers it's fixed later in the chain, we will only check the disabled plugins for now
a) The current implementation uses a static kubelet user which doesn't conform to the Node authorization mode (i.e. system:node:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
The PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset running on the masters on hostNetwork, reusing dns-controller to annotate the pods and give us the DNS.
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (written by a third party); see the sketch after these notes.
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open ports between the two
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add very presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
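A minimal sketch of that timeout loop; the path, polling interval, and function name are assumptions, not the actual nodeup code:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForKubeletConfig polls until the kubelet configuration has been
// written by a third party, giving up after the timeout.
func waitForKubeletConfig(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for kubelet config at %s", path)
		}
		time.Sleep(5 * time.Second)
	}
}
```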
I do have an additional PR which performs the entire thing. The process: a node_authorizer runs on the master nodes via a daemonset, and the service implements a series of authorizers (i.e. alwaysallow, aws, gce, etc.). For AWS, the process is similar to how Vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
We also have to stop passing the flag on ContainerOS, because it's set
in /etc/docker/default.json and it's now an error to pass the flag.
That in turn means we move those options to code; they were the last of
those legacy config options. (We still have a few tasks declaratively
defined, though.)