https://github.com/lyft/cni-ipvlan-vpc-k8s
This CNI solution is slightly different in that it doesn't require running a DaemonSet.
It requires:
* a config file in /etc/cni/net.d
* the binaries in /opt/cni/bin
* adding the --node-ip param to the kubelet
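A minimal sketch of that node-level setup, assuming the standard CNI paths and the EC2 instance metadata service; the actual conflist contents come from the cni-ipvlan-vpc-k8s README and are not reproduced here:

```sh
# Sketch only: paths are the standard CNI locations; the conflist contents
# are defined by the cni-ipvlan-vpc-k8s project.
ls /etc/cni/net.d/   # the cni-ipvlan-vpc-k8s conflist goes here
ls /opt/cni/bin/     # the plugin binaries go here

# The kubelet should advertise the instance's VPC address; one way (an
# assumption, not part of this patch) is to read it from EC2 instance
# metadata in the kubelet wrapper and pass it via --node-ip:
NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
exec kubelet --node-ip="${NODE_IP}" "$@"
```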
This code is modeled after the Amazon VPC CNI bits.
I've left the setup of the required subnets as an exercise for the reader.
Fixes #2606
Most of the changes are similar to the currently supported CNI networking
providers. Kube-router also supports an IPVS-based service proxy, which can
be used as a replacement for kube-proxy. The kube-router manifest included
with this patch therefore enables kube-router to provide pod-to-pod
networking, an IPVS-based service proxy, and an ingress pod firewall.
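For orientation, those three functions correspond to kube-router's run-mode flags; a rough sketch of the invocation the manifest's DaemonSet container ends up running (the kubeconfig path is illustrative, and the exact args in the shipped manifest may differ):

```sh
# --run-router         pod-to-pod networking (BGP)
# --run-service-proxy  IPVS-based replacement for kube-proxy
# --run-firewall       ingress pod firewall (network policy enforcement)
kube-router \
  --run-router=true \
  --run-service-proxy=true \
  --run-firewall=true \
  --kubeconfig=/var/lib/kube-router/kubeconfig
```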
Users reported kubelet flags not being passed; moved more of this into code.
Also found and fixed the likely root-cause issue: we have two copies of
the cluster spec and were not being precise about which one we wanted to
use at all times.
* Integrating Canal (Flannel + Calico) for CNI
Initial steps to integrate Canal as a CNI provider for kops
Removed CNI from the help text, as per chrislovecnm
* Integration tests, getting closer to working
- Added some integration tests for Canal
- Finding more places Canal needed to be added
- Sneaking in update to Calico Policy Controller
* Add updated conversion file
* Turned back on Canal integration tests
* Fixed some rebase issues
* Fixed tests and flannel version
* Fixed the Canal YAML and some rebasing errors
- Added some env vars to the install-cni container to get the proper
node name handed off (see the sketch after this list)
* Added resource limits
- Set resource limits on containers for Canal (also in the sketch after
this list)
- Ran through basic Calico tutorials to verify functionality
* Updating Calico parts to Calico 2.0.2
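As referenced in the bullets above, a rough sketch of those two manifest tweaks, expressed as a patch against the Canal DaemonSet: the node name is handed to the install-cni container via the Kubernetes downward API, and resource limits are set on the container. The DaemonSet name, namespace, env var name, and limit values here are illustrative assumptions, not the exact values in the kops Canal manifest.

```sh
# Illustrative only: object names, env var name, and limit values are
# assumptions; check the Canal manifest shipped with kops for the real ones.
kubectl -n kube-system patch daemonset canal --patch '
spec:
  template:
    spec:
      containers:
      - name: install-cni
        env:
        - name: KUBERNETES_NODE_NAME   # hands the node name to the container
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName # downward API: the node this pod runs on
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
'
```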