Bastions, nodes, and API servers get a hop limit of 1.
API servers tend to run pods requiring metadata access. The hop limit
needed depends on the CNI, but all should work with a limit of 3.
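For illustration only, a minimal sketch of overriding the hop limit on an instance group; the `instanceMetadata` field names are assumptions mirroring the EC2 metadata options and should be checked against the instance group spec docs.

```bash
# Sketch: raise the IMDSv2 hop limit on an instance group whose pods need metadata access.
# Field names under instanceMetadata are assumptions, not verified against the spec.
kops edit ig nodes --name "$CLUSTER_NAME"
# then, under spec:
#   instanceMetadata:
#     httpTokens: required
#     httpPutResponseHopLimit: 3
```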
* refactor TargetLoadBalancer to use DNSTarget interface instead of LoadBalancer
* add LoadBalancerClass fields into api
* make api machinery
* WIP: Implemented API load balancer class, allowing NLB and ELB support on AWS for new clusters (a usage sketch follows this list).
* perform vendoring related tasks and apply fixes identified from hack/
* Disallow Spotinst + NLB
* Remove reflection in status_discovery.go
* Add pre-created additional security groups to the master nodes in the case of NLB
* Remove support for attaching individual instances to the NLB; rely only on ASG attachments
* Don't specify a Classic load balancer in the GCE integration test
* add utility function to the kops model context to make LoadBalancer comparisons simpler
* use DNSTarget interface when locating DNSName of API ELB
* wip: create target group task
* Consolidate TargetGroup tasks
* Use context helper for determining api load balancer type to avoid nil pointers
* Update NLB creation to use target group ARN from separate task rather than creating a TG in-line
* Address staticcheck and bazel failures
* Removing NLB Attachment tasks because they're not used since we switched to defining them as a part of the ASGs
* Address PR review feedback
* Only set LB Class field for AWS clusters, fix nil pointer
* Move target group attributes from NLB task to TG task, removing unused attributes
* Add terraform and cloudformation support for NLBs, listeners, and target groups
* Update integration test for NLB support
* Fix NLB name format to pass terraform validation
* Preserve security group rule names when switching ELB to NLB to reduce destructive terraform changes
* Use elbv2 enums and address some TODOs
* Set healthcheck values in target group
* Find TG tags, fix NLB name detection
* Fix more spurious changes reported by lifecycle integration test
* Fix spotinst validation, more code cleanup
* Address more PR feedback
* ReconcileTargetGroups unit test + more code simplification
* Address PR feedback: rename task awstasks.LoadBalancer -> awstasks.ClassicLoadBalancer
* Address PR feedback: rename ELBName() -> CLBName() and LinkToELB() -> LinkToCLB()
* Address PR feedback: various text changes
* fix export of kubecfg
* Address feedback: give the TargetGroup the same name as the NLB
* Fix an error when fetching tags caused by a missing ARN
* Update expected and crds
* Add feature table to NLB docs
* Address more feedback and remove some TODOs that aren't applicable anymore
* Update spotinst validation error message
Co-authored-by: Peter Rifel <pgrifel@gmail.com>
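As a rough sketch of how a new cluster would opt into an NLB for the API under these changes; the `--api-loadbalancer-class` flag and the `spec.api.loadBalancer.class` field names are assumptions and should be checked against the NLB docs added above.

```bash
# Sketch: request a Network Load Balancer for the Kubernetes API on a new AWS cluster.
# Flag and field names below are assumptions; verify with `kops create cluster --help`.
kops create cluster \
  --api-loadbalancer-type public \
  --api-loadbalancer-class network \
  --zones us-east-1a \
  "$CLUSTER_NAME"
# Roughly equivalent cluster spec fields:
#   api:
#     loadBalancer:
#       type: Public
#       class: Network
```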
This means we no longer have to individually hard-code the `kops set`
fields; however, it does rely on the field-path "language" we've now demonstrated.
We add tests to ensure we have parity with our existing (hard-coded)
setter logic.
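A rough sketch of the kind of invocation this enables; the exact command form and field-path syntax are assumptions, not confirmed here.

```bash
# Sketch: set an arbitrary spec field through the reflection-based setters.
# The field-path form is an assumption; check `kops set cluster --help` for the supported syntax.
kops set cluster "$CLUSTER_NAME" cluster.spec.kubernetesVersion=1.20.1
```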
No problem with COS per se, but these versions have the newer docker,
which includes the --storage flag. We fixed that in master in #5258,
but older versions of kops - including the currently released version
1.9.1 - don't have the fix.
Revert to fix the problem immediately, but opened #5358 to track a
more realistic fix.
Add --subnets and --utility-subnets to kops create cluster
This change adds two new options to `kops create cluster`.
When `--vpc` is specified, `--subnets` can be given as an unordered list of subnet IDs. Kops then looks up each subnet's availability zone to determine which zone the subnet ID should be added to.
If `--topology private` is also specified, `--utility-subnets` can similarly be specified.
~If a zone was specified but a subnet wasn't given that matches the zone, then the subnet will be allocated a CIDR with the current behaviour.~ This case fails validation here 7bd0a6a703/pkg/apis/kops/validation/validation.go (L151)
I can add unit tests and docs changes if required, but I am keen to get feedback before I proceed much further.
I have only added support for AWS.
I have tested this by running a command similar to this:
```bash
kops create cluster \
--zones=us-east-1a,us-east-1b,us-east-1c \
--topology private \
--master-zones=us-east-1a,us-east-1b,us-east-1c \
--vpc $vpc_id \
--subnets subnet-111111,subnet-222222,subnet-333333 \
--utility-subnets subnet-444444,subnet-555555,subnet-666666 \
$cluster_hosted_zone_name
```
And the cluster spec was as expected.
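For reference, a hedged sketch of roughly what the resulting subnet entries look like; the field names are assumptions about the cluster spec and the output is not verbatim.

```bash
# Rough sketch of the resulting subnet entries (shown as comments, not verbatim output).
kops get cluster "$cluster_hosted_zone_name" -o yaml
# subnets:
# - id: subnet-111111
#   type: Private
#   zone: us-east-1a
# - id: subnet-444444
#   type: Utility
#   zone: us-east-1a
# ...
```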
This will allow us to set CIDRs for nodeport access, which in turn will
allow e2e tests that require nodeport access to pass.
We then add a feature-flagged flag to `kops create cluster` to allow
arbitrary setting of spec values; currently the only supported value is
cluster.spec.nodePortAccess.
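As a sketch of the end result, the field can also be set by editing the cluster spec directly; the layout below is an assumption about the spec, and the feature-flagged flag itself is deliberately not shown.

```bash
# Sketch: grant NodePort access to a CIDR via the spec field named above.
# This edits the spec directly rather than using the feature-flagged create-cluster flag.
kops edit cluster "$CLUSTER_NAME"
# then, under spec:
#   nodePortAccess:
#   - 10.20.0.0/16
```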
- The new property is only used when KubernetesVersion is 1.6 or greater
- Taints are passed to the kubelet via the --register-with-taints flag
- Set a default NoSchedule taint on masters
- Set --register-schedulable=true when --register-with-taints is used
- Changed the log message in taints.go to be less alarming if taints are
found - since they are expected on 1.6.0+ clusters
- Added a Taints section to the InstanceGroup docs (a configuration sketch follows this list)
- Only default taints are allowed in the spec pre-1.6
- Custom taint validation happens as soon as IG specs are edited.
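A sketch of the instance group configuration this enables; the `taints` list and the `key=value:Effect` string form are assumptions based on the docs change above, and the key/value shown are made up.

```bash
# Sketch: add a custom taint to an instance group; kops would pass it to the kubelet
# as --register-with-taints=dedicated=gpu:NoSchedule (key/value are illustrative).
kops edit ig nodes --name "$CLUSTER_NAME"
# then, under spec:
#   taints:
#   - dedicated=gpu:NoSchedule
```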
- --cloud-labels will be applied to every kops-created resource (example below)
- Also ran apimachinery to regenerate the conversions for the new Cluster.ClusterLabels property.
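For example, something along these lines; the label keys and values are made up.

```bash
# Sketch: tag every resource kops creates for this cluster with the given cloud labels.
kops create cluster \
  --cloud-labels "Team=platform,CostCenter=1234" \
  --zones us-east-1a \
  "$CLUSTER_NAME"
```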
We probably should use a canned channel, but in the interim this is
probably the best option; otherwise, every time we update the stable
channel we break the tests.
* The master zones are the default set of zones unless explicitly set
* The master count is the number of master zones unless explicitly set
* We then round-robin around the zones
* We append a suffix (-1, -2, -3) if there are more masters than zones
* We trim prefixes to keep etcd member names short (a sketch follows this list)
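A hedged sketch of how the defaulting plays out; the resulting member names are described, not reproduced exactly.

```bash
# Sketch: three masters over two zones. Per the defaulting above, kops round-robins
# the zones (us-east-1a, us-east-1b, us-east-1a) and appends a numeric suffix to
# distinguish the two masters that land in us-east-1a.
kops create cluster \
  --zones us-east-1a,us-east-1b \
  --master-zones us-east-1a,us-east-1b \
  --master-count 3 \
  "$CLUSTER_NAME"
```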
Fix #1653
Rather than always setting it (incorrectly in many cases), we infer it
from the subnets.
Users can still set it; we just don't default it to a value we then
ignore.
Fix #1582
bastion-<clustername> is not necessarily in the same hosted zone, nor is
bastion-<dnszone>, and bastion-<dnszone> is not necessarily unique
across clusters.
By testing with data from various schema versions, we effectively check
that they are equivalent.
This also uncovered a few places where we were not strictly ordering
things, so we add some sorts there.