We want to be able to use "dns=none" (without peer-to-peer gossip)
even for clusters that have the k8s.local extension. These were
previously called "gossip clusters", but that name describes an
implementation detail; what actually matters to users is that these
clusters don't rely on writing records into a DNS zone (such as Route53).
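As a sketch of what this enables (assuming the existing topology.dns
field, as set by "kops create cluster --dns=none"; the cluster name here
is illustrative):

```yaml
# A cluster that keeps the .k8s.local suffix but uses dns=none:
# no gossip, and no records written to a hosted DNS zone.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local   # a name that previously implied gossip
spec:
  topology:
    dns:
      type: None            # dns=none
```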
If we're not masquerading the pod IPs, we need an explicit firewall
rule for the pods to reach the kube-apiserver. Normally this is
permitted anyway, but if the apiserver has a locked-down CIDR range
(as the e2e tests do) then we need our own rule.
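For illustration, the extra rule amounts to something like the following,
shown as a YAML sketch of the GCP Firewall resource fields; every name,
tag, and CIDR here is hypothetical rather than the exact value kops
generates:

```yaml
# Hypothetical sketch: let non-masqueraded pod IPs reach the control-plane
# nodes on 443, even when the apiserver rule itself has a locked-down
# source CIDR.
name: example-pod-to-apiserver
network: projects/example-project/global/networks/example-network
direction: INGRESS
sourceRanges:
  - 100.96.0.0/11                     # pod CIDR (illustrative)
targetTags:
  - example-k8s-local-control-plane   # illustrative control-plane node tag
allowed:
  - IPProtocol: tcp
    ports: ["443"]
```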
* Add internal load balancers (ILBs), broadly following the AWS model.
  The following new capabilities are added for clusters on GCP (see the
  spec sketch after this list):
  * The cluster's spec.api.loadBalancer can be set to 'type: internal' on
    GCP. GCP can therefore now create:
    * regional backend services
    * regional (non-legacy) health checks
    * forwarding rules with the "internal" load-balancing scheme
    * firewall rules with IP addresses specified in dot notation
  * The cluster's spec.api.loadBalancer 'subnets' field functions as in
    the AWS model.
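A minimal sketch of the resulting cluster spec; the subnet name is
illustrative, and the casing of the type value follows the existing
AWS-style field rather than being confirmed here:

```yaml
# Illustrative cluster-spec fragment for an internal API load balancer.
spec:
  api:
    loadBalancer:
      type: Internal                      # 'type: internal' per the description above
      subnets:
        - name: us-west1-example-subnet   # hypothetical GCP subnet name
```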
A few incidental changes are included, either because this change
touched the relevant code or because my use case happened to trigger the
issues that are fixed here.
* The cluster's spec.networkID field can now be prefixed with a project,
  to support GCP's common cross-project (shared VPC) networking model;
  see the sketch after this list.
  * The presumption is that all specified subnets belong to this network,
    and therefore to this project.
* Add missing operation wait on forwarding rule creation.
* Some Terraform output improvements:
  * Permit files without ACLs in GCS buckets.
  * Enable marginally better cross-resource references.
  * Add the project to network and subnetwork literals.
  * Add Terraform output for backend services and health checks.
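For the cross-project networkID change above, a hedged sketch of what a
project-prefixed network reference might look like; the project and
network names, and the exact prefix syntax, are illustrative:

```yaml
# Illustrative only: "net-host-project" is a hypothetical project that owns
# the shared network; any subnets listed in the spec are presumed to belong
# to this network and project.
spec:
  networkID: net-host-project/example-shared-network
```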
Testing:
* Add mocks for backend services and health checks.
* Add a minimal integration test, copied from gce_private with an ILB added.
* Add golden files for update cluster.
Co-authored-by: Travis Reid <travis_reid@apple.com>

If we're using a cluster-level service account, we shouldn't try to
set bucket permissions at the per-IG level.
For compatibility with the existing behavior, we simply don't set any
permissions in this case.