We need to open up the ipip protocol, which wasn't previously enabled.
Future work could construct the firewall rules in a common library,
and then adapt them to the various clouds.
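For reference, opening ipip on AWS comes down to authorizing IP protocol number 4 between the node security groups; a minimal sketch using the AWS SDK for Go (the group IDs are placeholders, and this is not the actual kops task wiring):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)

	// IP-in-IP (ipip) is IP protocol number 4; it has no port range.
	// The security group IDs below are placeholders for illustration only.
	_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String("sg-nodes-placeholder"),
		IpPermissions: []*ec2.IpPermission{
			{
				IpProtocol: aws.String("4"), // ipip
				UserIdGroupPairs: []*ec2.UserIdGroupPair{
					{GroupId: aws.String("sg-nodes-placeholder")},
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ipip allowed between nodes")
}
```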
* 'sv' may be 'nil' or hold another unexpected value when its corresponding error variable is not 'nil'
* Apply suggestions from code review
Co-authored-by: Ciprian Hacman <ciprian@hakman.dev>
Ensure apiserver role can only be used on AWS (because of firewalling)
Apply api-server label to CP as well
Consolidate node not ready validation message
Guard apiserver nodes with a feature flag
Rename Apiserver role to APIServer
Add an integration test for apiserver nodes
Rename Apiserver role to APIServer
Enumerate all roles in rolling update docs
Apply suggestions from code review
Co-authored-by: Steven E. Harris <seh@panix.com>
* refactor TargetLoadBalancer to use DNSTarget interface instead of LoadBalancer
* add LoadBalancerClass fields into api (see the sketch after this list)
* make api machinery
* WIP: Implemented API loadbalancer class, allowing NLB and ELB support on AWS for new clusters.
* perform vendoring related tasks and apply fixes identified from hack/
Disallow spotinst + NLB
remove reflection in status_discovery.go
Add precreated additional security groups to the Master nodes in case of NLB
Remove support for attaching individual instances to NLB; only rely on ASG attachments
Don't specify Classic loadbalancer in GCE integration test
* add utility function to the kops model context to make LoadBalancer comparisons simpler
* use DNSTarget interface when locating DNSName of API ELB
* wip: create target group task
* Consolidate TargetGroup tasks
* Use context helper for determining api load balancer type to avoid nil pointers
* Update NLB creation to use target group ARN from separate task rather than creating a TG in-line
* Address staticcheck and bazel failures
* Removing NLB Attachment tasks because they're not used since we switched to defining them as a part of the ASGs
* Address PR review feedback
* Only set LB Class field for AWS clusters, fix nil pointer
* Move target group attributes from NLB task to TG task, removing unused attributes
* Add terraform and cloudformation support for NLBs, listeners, and target groups
* Update integration test for NLB support
* Fix NLB name format to pass terraform validation
* Preserve security group rule names when switching ELB to NLB to reduce destructive terraform changes
* Use elbv2 enums and address some TODOs
* Set healthcheck values in target group
* Find TG tags, fix NLB name detection
* Fix more spurious changes reported by lifecycle integration test
* Fix spotinst validation, more code cleanup
* Address more PR feedback
* ReconcileTargetGroups unit test + more code simplification
* Addressing PR feedback: renaming task awstasks.LoadBalancer -> awstasks.ClassicLoadBalancer
* Addressing PR feedback: renaming ELBName() -> CLBName() / LinkToELB() -> LinkToCLB()
* Addressing PR feedback: Various text changes
* fix export of kubecfg
* Address feedback that the TargetGroup should have the same name as the NLB
* Address error when fetching tags due to missing ARN
* Update expected and crds
* Add feature table to NLB docs
* Address more feedback and remove some TODOs that aren't applicable anymore
* Update spotinst validation error message
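For orientation, the LoadBalancerClass field mentioned above surfaces in the cluster API roughly as follows; this is a sketch approximating the kops v1alpha2 types, not a verbatim copy:

```go
package kops

// LoadBalancerClass selects the AWS load balancer implementation for the API.
type LoadBalancerClass string

const (
	// LoadBalancerClassClassic requests a classic ELB (the previous behavior).
	LoadBalancerClassClassic LoadBalancerClass = "Classic"
	// LoadBalancerClassNetwork requests an NLB.
	LoadBalancerClassNetwork LoadBalancerClass = "Network"
)

// LoadBalancerAccessSpec configures the load balancer fronting the apiserver.
type LoadBalancerAccessSpec struct {
	// Class chooses between a Classic load balancer and an NLB; AWS only.
	Class LoadBalancerClass `json:"class,omitempty"`
	// Other existing fields (Type, IdleTimeoutSeconds, ...) are omitted here.
}
```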
Co-authored-by: Peter Rifel <pgrifel@gmail.com>
This should be much easier to start and to get under testing; it only
works with a load balancer, it sets the apiserver to allow
anonymous-auth, and it grants the anonymous user permission to read our
JWKS tokens. But it shouldn't need a second bucket or anything of that
nature.
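As an illustration of the anonymous JWKS access, granting it amounts to an RBAC rule over the discovery non-resource URLs; the sketch below uses the standard Kubernetes RBAC types, with made-up object names and paths that may not match what this change actually creates:

```go
package jwksaccess

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// anonymousJWKSRBAC returns a ClusterRole/ClusterRoleBinding pair that lets
// the anonymous user fetch the OIDC discovery document and JWKS keys.
// Names and URL paths are illustrative only.
func anonymousJWKSRBAC() (*rbacv1.ClusterRole, *rbacv1.ClusterRoleBinding) {
	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "anonymous-jwks-reader"},
		Rules: []rbacv1.PolicyRule{
			{
				Verbs:           []string{"get"},
				NonResourceURLs: []string{"/.well-known/openid-configuration", "/openid/v1/jwks"},
			},
		},
	}
	binding := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "anonymous-jwks-reader"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "anonymous-jwks-reader",
		},
		Subjects: []rbacv1.Subject{
			{APIGroup: rbacv1.GroupName, Kind: rbacv1.UserKind, Name: "system:anonymous"},
		},
	}
	return role, binding
}
```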
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
This adds any labels defined in the Cluster spec's CloudLabels to the tags of the following AWS resource types (a tag-merging sketch follows the list):
Elastic IP
Internet Gateway
NAT Gateway
Route Table
Security Group
Subnet
VPC DHCP Options
VPC
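A rough sketch of the tag merging this implies; the function and field names are illustrative rather than the exact kops code:

```go
package awsmodel

// cloudTags merges user-supplied cloudLabels with the tags kops already
// applies (Name plus the cluster ownership tag). kops-managed keys always
// win over user labels. Names here are illustrative.
func cloudTags(clusterName, resourceName string, cloudLabels map[string]string) map[string]string {
	tags := make(map[string]string, len(cloudLabels)+2)
	for k, v := range cloudLabels {
		tags[k] = v
	}
	// kops-managed tags override any colliding user labels.
	tags["Name"] = resourceName
	tags["kubernetes.io/cluster/"+clusterName] = "owned"
	return tags
}
```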
Add tests for no ssh key functionality
Add docs for setting no ssh key
Disable sshKey rendering for cloudformation if nosshkey is set
Fix broken test
make goimports
Fix
Formatting fix
Update kubernetes version for tests
Update expected test output
Fix imports in mesh.pb.go
Run hack/update-expected.sh
Change digital ocean logic to handle *string for SSHKeyName
Fix expected output
Missed a few
With etcd-manager the DNS names should only be used by the
etcd-manager pod itself, so we don't need to share /etc/hosts with the
host.
By not sharing we avoid:
(1) the temptation to address etcd directly
(2) the problem that concurrent updates to /etc/hosts are hard from within a container (because locking is difficult across bind mounts)
Introducing this with Kubernetes 1.17 to avoid changing the behavior of existing versions.
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
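For context, once glog is gone the change is essentially a single call; a minimal sketch of what it would look like (deferred for now precisely because glog registers the same flag names):

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Register klog's flags (-v, -logtostderr, ...) on the default flag set.
	// Doing this while glog is still linked in would fail with a
	// "flag redefined" panic, which is why the call is deferred for now.
	klog.InitFlags(nil)
	flag.Parse()

	klog.Info("klog flags initialized")
	klog.Flush()
}
```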
In 1.12 (kops & kubernetes), as illustrated in the sketch below:
* We default etcd-manager on
* We default to etcd3
* We default to full TLS for etcd (client and peer)
* We stop allowing external access to etcd
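A simplified sketch of what these defaults amount to; the struct below is a hypothetical stand-in for kops' EtcdClusterSpec, not the real type:

```go
package main

import "fmt"

// etcdClusterDefaults is a hypothetical, simplified stand-in for kops'
// EtcdClusterSpec, used only to illustrate the new 1.12 defaults.
type etcdClusterDefaults struct {
	Provider       string // "Manager" (etcd-manager) or "Legacy"
	Version        string // etcd version
	ClientTLS      bool   // TLS for client connections
	PeerTLS        bool   // TLS for peer connections
	ExternalAccess bool   // whether etcd is reachable from outside the masters
}

// applyKops112Defaults applies the behavior changes listed above.
func applyKops112Defaults(c *etcdClusterDefaults) {
	if c.Provider == "" {
		c.Provider = "Manager" // etcd-manager on by default
	}
	if c.Version == "" {
		c.Version = "3.x" // etcd3 by default; kops picks the exact patch release
	}
	c.ClientTLS = true       // full TLS for etcd clients...
	c.PeerTLS = true         // ...and peers
	c.ExternalAccess = false // no external access to etcd
}

func main() {
	c := etcdClusterDefaults{}
	applyKops112Defaults(&c)
	fmt.Printf("%+v\n", c)
}
```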
a) The current implementation uses a static kubelet kubeconfig which does not conform to the Node authorization mode (i.e. system:nodes:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
The PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and give us the DNS.
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap token is not enabled the behaviour will stay the same; i.e. a kubelet configuration brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (provided by a third party).
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is that a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open ports between the two
- much of the work was ported from @justinsb PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add very presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
I do have an additional PR which performs the entire thing. The process is a node_authorizer which runs on the master nodes via a daemonset; the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For aws, the process is similar to how vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
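To make the node-side artifact concrete, here is a rough sketch of writing a bootstrap kubeconfig with client-go; the token, server address, and file paths are placeholders, and distributing the token itself remains out of scope as noted above:

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Placeholder values; the real token and endpoint come from the cluster admin.
	const (
		server = "https://api.internal.example-cluster.k8s.local"
		token  = "abcdef.0123456789abcdef" // bootstrap token: <token-id>.<token-secret>
		path   = "/var/lib/kubelet/bootstrap-kubeconfig"
	)

	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["bootstrap"] = &clientcmdapi.Cluster{
		Server:               server,
		CertificateAuthority: "/srv/kubernetes/ca.crt",
	}
	cfg.AuthInfos["kubelet-bootstrap"] = &clientcmdapi.AuthInfo{Token: token}
	cfg.Contexts["bootstrap"] = &clientcmdapi.Context{
		Cluster:  "bootstrap",
		AuthInfo: "kubelet-bootstrap",
	}
	cfg.CurrentContext = "bootstrap"

	// The kubelet points --bootstrap-kubeconfig at this file to request a
	// signed client certificate and obtain a system:nodes:<nodename> identity.
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```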
* Stop setting the Name tag on a shared subnet/vpc
* Stop setting the legacy KubernetesCluster tag on a shared subnet/vpc
that is new enough (>=1.6); we rely on the shared tags instead
* Set tags on shared subnets; i.e. we _do_ set the shared tag on a
shared subnet; that is important for ELBs
* Set tags on shared VPCs; i.e. we _do_ set the shared tag on a shared
  VPC; that is not currently used, but is consistent with the subnets (see the sketch below).
* Add tests for shared subnet
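A small sketch of the owned-versus-shared distinction; the tag key follows the well-known kubernetes.io/cluster/<name> convention, and the function names are illustrative:

```go
package main

import "fmt"

// sharedSubnetTags returns the tags applied to a subnet kops does not own:
// no Name, no legacy KubernetesCluster tag, just the shared ownership tag so
// that ELBs can still discover the subnet.
func sharedSubnetTags(clusterName string) map[string]string {
	return map[string]string{
		"kubernetes.io/cluster/" + clusterName: "shared",
	}
}

// ownedSubnetTags returns the tags for a subnet kops created and manages.
func ownedSubnetTags(clusterName, subnetName string) map[string]string {
	return map[string]string{
		"Name":                                 subnetName,
		"KubernetesCluster":                    clusterName, // legacy tag, kept on owned resources
		"kubernetes.io/cluster/" + clusterName: "owned",
	}
}

func main() {
	fmt.Println(sharedSubnetTags("example.k8s.local"))
	fmt.Println(ownedSubnetTags("example.k8s.local", "us-east-1a.example.k8s.local"))
}
```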