We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
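For reference, a minimal sketch (assumed, not the current kops code) of what the initialization will look like once glog is gone; while glog is still linked in, its init() already registers -v, -logtostderr, etc. on flag.CommandLine, so this call would panic with "flag redefined":

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Registers klog's flags (-v, -logtostderr, ...) on flag.CommandLine;
	// safe only once glog no longer registers the same flag names.
	klog.InitFlags(nil)
	flag.Parse()

	klog.Info("logging initialized via klog")
	klog.Flush()
}
```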
We recognize our primary location via string-matching, and we then
have a hard-coded list of mirrors for that location.
It didn't prove easy to make this much better, but we can hopefully improve it
iteratively (e.g. by fetching the mirror list via a URL).
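A minimal sketch of that approach in Go; the base URL and mirror hosts here are placeholders, not the actual hard-coded list:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical primary location and mirror list, for illustration only.
const primaryBase = "https://artifacts.example.com/kops/"

var mirrors = []string{
	"https://mirror-1.example.com/kops/",
	"https://mirror-2.example.com/kops/",
}

// findMirrors recognizes the primary location by string-matching the URL
// prefix and, if it matches, rewrites the URL onto each hard-coded mirror.
func findMirrors(u string) []string {
	if !strings.HasPrefix(u, primaryBase) {
		return nil
	}
	suffix := strings.TrimPrefix(u, primaryBase)
	var out []string
	for _, m := range mirrors {
		out = append(out, m+suffix)
	}
	return out
}

func main() {
	fmt.Println(findMirrors(primaryBase + "1.8.0/linux/amd64/nodeup"))
}
```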
https://github.com/lyft/cni-ipvlan-vpc-k8s
This CNI solution is slightly different in that it doesn't require running a DaemonSet.
It requires (see the sketch below):
* a config file in /etc/cni/net.d
* the binaries in /opt/cni/bin
* adding the --node-ip param to the kubelet
This code is modeled after the AmazonVPC CNI bits.
I've left the setup of the required subnets as an exercise for the reader.
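Not part of this change, but a hypothetical preflight check that captures the three requirements above (the paths and the --node-ip flag come from the list; the helper itself is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// checkIPvlanPrereqs is a hypothetical helper that verifies the three
// requirements listed above for cni-ipvlan-vpc-k8s.
func checkIPvlanPrereqs(kubeletArgs []string) error {
	// 1. a config file in /etc/cni/net.d
	confs, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil || len(confs) == 0 {
		return fmt.Errorf("no CNI config found in /etc/cni/net.d")
	}

	// 2. the binaries in /opt/cni/bin
	if _, err := os.Stat("/opt/cni/bin"); err != nil {
		return fmt.Errorf("CNI binaries not found in /opt/cni/bin: %v", err)
	}

	// 3. the --node-ip param passed to the kubelet
	for _, arg := range kubeletArgs {
		if strings.HasPrefix(arg, "--node-ip") {
			return nil
		}
	}
	return fmt.Errorf("kubelet is not started with --node-ip")
}

func main() {
	if err := checkIPvlanPrereqs(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cni-ipvlan-vpc-k8s prerequisites look OK")
}
```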
File assets and their SHA files are uploaded to the new location. When
users use S3, files are uploaded with public-read permissions. The copyfile
task uses only the existing SHA value.
This PR includes a major refactoring of how URLs are handled. Strings are no
longer concatenated; they are converted into a URL struct and path.Join is
used.
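As an example of the new style, a small sketch (the helper name is illustrative, not the actual function in this PR) of building a URL with url.Parse and path.Join instead of string concatenation:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// joinURL parses base, appends the path elements with path.Join, and
// returns the result; the name and URLs here are illustrative only.
func joinURL(base string, elem ...string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", fmt.Errorf("invalid base URL %q: %v", base, err)
	}
	u.Path = path.Join(append([]string{u.Path}, elem...)...)
	return u.String(), nil
}

func main() {
	s, err := joinURL("https://example-bucket.s3.amazonaws.com/kops", "1.8.0", "linux", "amd64", "nodeup")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
	// https://example-bucket.s3.amazonaws.com/kops/1.8.0/linux/amd64/nodeup
}
```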
A new values.go file is included so that we can start refactoring more
code out of the "fi" package.
Fixes #2606
Most of the changes are similar to those for the currently supported CNI
networking providers. Kube-router also supports an IPVS-based service proxy,
which can be used as a replacement for kube-proxy. The kube-router manifest
included with this patch therefore enables kube-router to provide pod-to-pod
networking, an IPVS-based service proxy, and an ingress pod firewall.
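A rough sketch of selecting kube-router through the kops Go API; the NetworkingSpec/KuberouterNetworkingSpec type and field names are assumed from memory of the API in this era and may differ between kops versions:

```go
package main

import (
	"fmt"

	"k8s.io/kops/pkg/apis/kops"
)

func main() {
	// Select kube-router as the networking provider; because kube-router
	// provides the IPVS-based service proxy itself, kube-proxy can then be
	// left out. Type/field names here are assumptions, not verified.
	cluster := &kops.Cluster{}
	cluster.Spec.Networking = &kops.NetworkingSpec{
		Kuberouter: &kops.KuberouterNetworkingSpec{},
	}
	fmt.Printf("networking: %+v\n", cluster.Spec.Networking)
}
```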
* Integrating Canal (Flannel + Calico) for CNI
Initial steps to integrate Canal as a CNI provider for kops
Removed CNI in help as per chrislovecnm
* Integration tests, getting closer to working
- Added some integration tests for Canal
- Finding more places Canal needed to be added
- Sneaking in update to Calico Policy Controller
* Add updated conversion file
* Turned back on Canal integration tests
* Fixed some rebase issues
* Fixed tests and Flannel version
* Fixed Canal YAML and some rebasing errors
- Added some env vars to the install-cni container to get the proper
node name handed off (see the sketch after this list)
* Added resource limits
- Set resource limits on containers for Canal
- Ran through basic Calico tutorials to verify functionality
* Updating Calico parts to Calico 2.0.2
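The node-name handoff to the install-cni container mentioned above is typically done with the Kubernetes downward API; a minimal sketch using the Go client types (the env var name is illustrative, not necessarily what the Canal manifest uses):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Pass the node's name into the container via the downward API so the
	// CNI installer can configure itself for the right node.
	env := corev1.EnvVar{
		Name: "KUBERNETES_NODE_NAME", // illustrative name
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				FieldPath: "spec.nodeName",
			},
		},
	}
	fmt.Printf("%+v\n", env)
}
```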
Adding the option to install Calico with the `--networking calico`
argument. This will currently deploy Calico v2.0 to the cluster.
Documentation has also been updated with information about Calico and
where one can go for more details or help.
This is a breaking change for people using the API (sorry), but is
hopefully a simple search and replace:
"k8s.io/kops/upup/pkg/api"
-> api "k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/upup/pkg/api/registry"
-> "k8s.io/kops/pkg/apis/kops/registry"
This is the "correct" place for it in the k8s API infrastructure - we
are working towards a versioned API here.
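For example, after the search and replace a consumer of the API looks roughly like this (Cluster is used only to show the new api alias in action):

```go
package main

import (
	"fmt"

	api "k8s.io/kops/pkg/apis/kops"        // was k8s.io/kops/upup/pkg/api
	_ "k8s.io/kops/pkg/apis/kops/registry" // was k8s.io/kops/upup/pkg/api/registry
)

func main() {
	cluster := &api.Cluster{}
	fmt.Printf("%T\n", cluster)
}
```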