We've done this in the API already, but we had a single CAStore
interface that did Keysets and SSHCredentials. Separate out
SSHCredentials into SSHCredentialStore, and start using API objects as
our primary representation.
Add a default NodeLabel with the InstanceGroup name
As requested in https://github.com/kubernetes/kops/issues/2999, this change just auto-populates new InstanceGroup specs with a default node label containing the name of the instance group. It would be really useful for those of us managing environments with multiple instance groups.
It allows an admin to easily view the instance groups using kubectl:
```
kubectl get nodes --label-columns kops.k8s.io/instancegroup
NAME                                           STATUS         AGE       VERSION   INSTANCEGROUP
ip-172-20-108-120.eu-west-1.compute.internal   Ready,node     3m        v1.7.4    xtra-large
ip-172-20-117-133.eu-west-1.compute.internal   Ready,master   14m       v1.7.4    master-eu-west-1c
ip-172-20-32-139.eu-west-1.compute.internal    Ready,master   14m       v1.7.4    master-eu-west-1a
ip-172-20-32-92.eu-west-1.compute.internal     Ready,node     12m       v1.7.4    nodes
ip-172-20-67-184.eu-west-1.compute.internal    Ready,master   13m       v1.7.4    master-eu-west-1b
```
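Once nodes carry the label it also works as an ordinary label selector, e.g. to list just the nodes of one instance group (group name taken from the example output above):
```
kubectl get nodes -l kops.k8s.io/instancegroup=xtra-large
```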
This lets us configure cross-project permissions while requiring only
minimal permissions ourselves, and it also gives us a nice hook for
future lockdown of object-level permissions.
Flannel: change default backend type
We still support udp, which has to stay the default for backwards
compatibility, but new clusters will now use vxlan.
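A quick way to check which backend a cluster ended up with; the spec layout shown in the comments is assumed here, not taken from this change:
```
kops get cluster --name my.example.com -o yaml | grep -A2 flannel
# expected for a newly created cluster:
#   flannel:
#     backend: vxlan
```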
This will allow us to set CIDRs for nodeport access, which in turn will
allow e2e tests that require nodeport access to pass.
Then add a feature-flagged flag to `kops create cluster` that allows
arbitrary setting of spec values; currently the only supported value is
cluster.spec.nodePortAccess.
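Until that flag graduates, a sketch of setting the value by editing the spec directly; the field name comes from the text above and the CIDR is a placeholder:
```
kops edit cluster --name my.example.com
# then add to the spec, for example:
#   nodePortAccess:
#   - 10.20.0.0/16
```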
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
The namespace is currently inferred from the cluster name. We map dots
to dashes, because of namespace limitations, which could yield
collisions, but we'll deal with this by simply preventing users from
creating conflicting cluster names - i.e. you simply won't be able to
create a.b.example.com and a-b.example.com
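A minimal illustration of the mapping and why the restriction is needed (this is just tr, not the actual kops code):
```
echo "a.b.example.com" | tr '.' '-'
# a-b-example-com
echo "a-b.example.com" | tr '.' '-'
# a-b-example-com   <- same namespace, hence the restriction on cluster names
```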
Fixes #2606
Most of the changes are similar to the currently supported CNI
networking providers. Kube-router also supports an IPVS-based service
proxy, which can be used as a replacement for kube-proxy, so the
kube-router manifest included with this patch enables kube-router to
provide pod-to-pod networking, an IPVS-based service proxy, and an
ingress pod firewall.
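Assuming kube-router is selected the same way as the other CNI providers, usage would look something like this (cluster name and zone are placeholders); the bundled manifest then provides the pod networking, IPVS service proxy and ingress firewall described above:
```
kops create cluster my.example.com \
  --zones eu-west-1a \
  --networking kube-router
```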
We don't want to "accidentally" enable HA. When users specify multiple
zones, but don't specify a master-count or master-zones, we interpret
that as master-count=1
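Concretely, an invocation like the following now gets a single master unless --master-count or --master-zones is passed explicitly (names are placeholders):
```
kops create cluster my.example.com \
  --zones eu-west-1a,eu-west-1b,eu-west-1c
# -> one master, even though three zones were specified
```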
* Add support of CoreDNS for vSphere provider.
* Add instructions about how to setup CoreDNS for vSphere provider.
* Address comments for CoreDNS support code.
Accept vSphere's server, datacenter, and cluster settings via the flags
"vsphere-server", "vsphere-datacenter", and "vsphere-resource-pool".
Username and password can be set by environment variables:
"VSPHERE_USERNAME" and "VSPHERE_PASSWORD".
- new property is only used when KubernetesVersion is 1.6 or greater
- taints are passed to the kubelet via the --register-with-taints flag (see the sketch after this list)
- Set a default NoSchedule taint on masters
- Set --register-schedulable=true when --register-with-taints is used
- Changed the log message in taints.go to be less alarming if taints are
found - since they are expected on 1.6.0+ clusters
- Added Taints section to the InstanceGroup docs
- Only default taints are allowed in the spec pre-1.6
- Custom taint validation happens as soon as IG specs are edited.
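A sketch of what the custom taint support looks like from the user side; the field layout on the InstanceGroup spec is assumed here and the taint itself is a placeholder:
```
kops edit ig nodes --name my.example.com
# then add to the spec, for example:
#   taints:
#   - dedicated=search:NoSchedule
```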
- Previous method would have caused issues with the way tags are used
for filtering resources.
- Updated docs and comments to only refer to instance groups, rather
than all AWS resources
- --cloud-labels will be applied to every kops-created resource (example below)
- Also ran apimachinery to regenerate the conversions for the new Cluster.ClusterLabels property.
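Example usage (label keys and values are placeholders; the key=value,key=value format is assumed):
```
kops create cluster my.example.com \
  --zones eu-west-1a \
  --cloud-labels "Team=platform,CostCenter=1234"
```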
* Integrating Canal (Flannel + Calico) for CNI
Initial steps to integrate Canal as a CNI provider for kops
Removed CNI in help as per chrislovecnm
* Integration tests, getting closer to working
- Added some integration tests for Canal
- Finding more places Canal needed to be added
- Sneaking in update to Calico Policy Controller
* Add updated conversion file
* Turned back on Canal integration tests
* Fixed some rebase issues
* Fixed tests and flannel version
* Fixed canal yaml, and some rebasing errors
- Added some env vars to the install-cni container to get the proper
node name handed off
* Added resource limits
- set resource limits on containers for Canal
- Ran through basic calico tutorials to verify functionality
* Updating Calico parts to Calico 2.0.2
* The master zones are the default set of zones unless explicitly set
* The master count is the number of master zones unless explicitly set
* We then round-robin around the zones
* We append a suffix -1, -2, -3 if there are more masters than zones
* We trim prefixes to keep etcd member names short
Fix #1653
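A worked example of the rules above (names are placeholders):
```
kops create cluster my.example.com \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --master-count 5
# the five masters round-robin across the three zones (2, 2, 1); the repeats
# get a -1 style suffix, and etcd member names are trimmed to stay short
```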
Rather than always setting it (incorrectly in many cases), we infer it
from the subnets.
Users can still set it; we just don't default it to a value we then
ignore.
Fix #1582
We were trying to call it, but the result was subtly different (because
of different defaulting). The two code paths make testing hard, so just
have one code path.
bastion-<clustername> is not necessarily in the same hosted zone, nor is
bastion-<dnszone>, and bastion-<dnszone> is not necessarily unique
across clusters.
Adding the option to install Calico with the `--networking calico`
argument. This will currently deploy Calico v2.0 to the cluster.
Documentation has also been updated with information about Calico and
where one can find more information or help.
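For example (cluster name and zone are placeholders):
```
kops create cluster my.example.com --zones eu-west-1a --networking calico
```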
By testing with data from various schema versions, we effectively check
that they are equivalent.
This also uncovered a few places where we were not strictly ordering
things, so add some sorts there.
* Zones are now subnets
* Utility subnet is no longer part of Zone
* Bastion InstanceGroup type added instead
* Etcd clusters defined in terms of InstanceGroups, not zones
* AdminAccess split into SSHAccess & APIAccess
* Dropped unused Multizone flag
- Fixing topology.md (linting after review)
- Adding an error message for a missing --networking cni flag on private topologies
- Adding troubleshooting to documentation
This is a breaking change for people using the API (sorry), but is
hopefully a simple search and replace:
"k8s.io/kops/upup/pkg/api"
-> api "k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/upup/pkg/api/registry"
-> "k8s.io/kops/pkg/apis/kops/registry"
This is the "correct" place for it in the k8s API infrastructure - we
are working towards a versioned API here.
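A rough shell sketch of that search and replace (the prefix rewrite also covers the registry path; you still need to alias the new package as api wherever code relied on the old package name):
```
grep -rl 'k8s.io/kops/upup/pkg/api' --include='*.go' . \
  | xargs sed -i 's|k8s.io/kops/upup/pkg/api|k8s.io/kops/pkg/apis/kops|g'
```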
This means it only needs to be specified during `kops create`. We
remove the option from `kops update` for consistency.
This will shortly be manageable using the secrets functionality.
Fix #221
We separate out the `create cluster` operation from the `update cluster`
operation. Now create cluster only creates the spec (unless you pass
--yes), and is only for new clusters.
`update cluster` works on new or existing clusters, and should be called
to apply changes.
`update` is not the best name, because it means something different in
kubectl, but I think it's a good start.
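The resulting two-step flow (cluster name and zone are placeholders):
```
kops create cluster my.example.com --zones eu-west-1a   # writes only the cluster spec
kops update cluster my.example.com --yes                # applies changes, new or existing cluster
```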
StateStore was highly orientated towards a VFS system; replace it with a
Registry abstraction that is more object based.
We also rationalize much of the CLI (cmd) command logic.
Users can still get HA master by explicitly specifying a list of
`--master-zones`.
But an HA master is not as well tested, is slower, needs more machines,
etc., and we probably shouldn't silently force it as the default.
Fix #33