Cilium 1.7 requires Kubernetes 1.12 at minimum. Changed the templates so
that we can have different Cilium versions for different Kubernetes
versions.
This also means that this addon will behave like other addons with
respect to upgrades. Cilium used to pin a fixed version in the cluster
spec on cluster creation, so upgrades were slightly more manual. Now,
for new clusters, upgrades will happen implicitly with kops updates
unless `.Version` is added manually to the cluster spec.
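For example, pinning the version explicitly would look something like
the sketch below; the `version` field name is my reading of the Cilium
networking spec and should be verified against the kops API:
```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  networking:
    cilium:
      # Pinning a version here (illustrative value) opts out of the
      # implicit upgrades that new clusters otherwise get from kops updates.
      version: v1.7.0
```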
* Force cilium-operator to run on master nodes
* Add an option for setting the Cilium IPAM mode (see the sketch after this list)
* If the Cilium IPAM mode is `eni`, add additional IAM permissions to master nodes
* Allow NonMasqueradeCIDR to overlap with NetworkCIDR when Cilium ENI IPAM is enabled
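A hedged sketch of enabling ENI IPAM in the cluster spec; the `ipam`
and `disableMasquerade` field names are assumptions that should be
checked against the kops Cilium networking spec:
```yaml
spec:
  networking:
    cilium:
      # Assumed field name; switches Cilium to AWS ENI-based IPAM.
      ipam: eni
      # With ENI IPAM, pod IPs are VPC-routable, so masquerading is
      # typically disabled (assumed field name).
      disableMasquerade: true
```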
The authenticator binary uses glog, which requires write access to the
filesystem under /tmp.
On the scratch image /tmp doesn't exist, which caused a crash loop:
```
time="2020-02-14T02:06:00Z" level=info msg="creating event broadcaster"
time="2020-02-14T02:06:00Z" level=info msg="setting up event handlers"
W0214 02:06:00.358119 1 client_config.go:539] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
log: exiting because of error: log: cannot create log: open /tmp/aws-iam-authenticator.ip-X-X-X-X.aws-iam-authenticator.log.WARNING.20200214-020600.1: no such file or directory
```
Switching to debian-stretch fixed the issue, although it could really be
any of the other base images in the release [0].
[0] https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/tag/v0.5.0
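An alternative workaround, not the one taken here, would have been to
keep the scratch image and mount an emptyDir at /tmp so glog has a
writable directory. A hypothetical pod-spec fragment:
```yaml
# Hypothetical alternative to switching base images: give the
# scratch-based container a writable /tmp via an emptyDir volume.
spec:
  containers:
  - name: aws-iam-authenticator
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
```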
* Cilium needs to talk to the internal cluster API on public IPs instead of via the internal service
* Tell people explicitly that they have to disable kube-proxy so it won't conflict with NodePort (see the sketch after this list)
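Disabling kube-proxy in the cluster spec looks roughly like this,
assuming the standard `kubeProxy` config block:
```yaml
spec:
  kubeProxy:
    # Turn kube-proxy off so it doesn't conflict with Cilium's
    # NodePort handling.
    enabled: false
```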
Writing to a hostPath from a non-root container requires file
ownership changes, which are difficult to roll out today. See the
discussion in #8454.
We were primarily using the logfile for e2e diagnostics, so we're
going to look into collecting that information via other means instead.
We also haven't yet shipped this logfile in a released version (though
we have shipped it in beta releases).
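For context, a hypothetical manifest fragment illustrating the problem:
`fsGroup` is not applied to hostPath volumes, so a non-root container
cannot write to the mounted host directory unless its ownership is
changed on the host itself (names below are illustrative, not the
actual kops-controller manifest):
```yaml
# Illustrative only; not the real kops-controller manifest.
spec:
  securityContext:
    runAsUser: 10011   # non-root UID
    fsGroup: 10011     # NOTE: fsGroup does not apply to hostPath volumes
  containers:
  - name: app
    volumeMounts:
    - name: logdir
      mountPath: /var/log/app
  volumes:
  - name: logdir
    hostPath:
      path: /var/log/app   # root-owned on the host; writes fail for UID 10011
      type: DirectoryOrCreate
```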
The migration was first made in 1.16.0-alpha.1, which means two releases
have been out that set the replicas to zero.
This removal negatively impacts anyone who created a cluster from kops
HEAD between 1.15.0 and 1.16.0-alpha.1 and then upgraded kops directly
to the 1.16.0 release that includes this commit, without first having
upgraded to either of the alphas.
That seems like a small enough audience that this is safe to remove now.
Perhaps we should mention in the release notes that anyone using HEAD or
one of the alpha releases needs to run
`kubectl delete -n kube-system deployment kops-controller`
This makes it easier to get the kops-controller logs from e2e tests,
since they only dump log files from systemd services and files under
/var/log [0]
[0] ec0fe6bd36/kubetest/dump.go (L50-L74)