These permissions are needed by protokube to create the kops-controller DNS record that allows nodes to bootstrap.
See these logs: https://storage.googleapis.com/kubernetes-jenkins/logs/e2e-kops-grid-scenario-public-jwks/1345956556562239488/artifacts/ip-172-20-48-1.sa-east-1.compute.internal/protokube.log
```
I0104 05:03:51.264472 6482 dnscache.go:74] querying all DNS zones (no cached results)
I0104 05:03:51.264570 6482 route53.go:53] AWS request: route53 ListHostedZones
W0104 05:03:51.389485 6482 dnscontroller.go:124] Unexpected error in DNS controller, will retry: error querying for zones: error querying for DNS zones: AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-kops-scenario-public-jwks.test-cncf-aws.k8s.io/i-05b1db10d1a5b8637 is not authorized to perform: route53:ListHostedZones
```
and the nodeup logs on nodes that couldn't join the cluster:
```
Jan 04 04:55:53.500187 ip-172-20-38-84 nodeup[2070]: W0104 04:55:53.500117 2070 executor.go:131] error running task "BootstrapClient/BootstrapClient" (9m52s remaining to succeed): Post "https://kops-controller.internal.e2e-kops-scenario-public-jwks.test-cncf-aws.k8s.io:3988/bootstrap": dial tcp: lookup kops-controller.internal.e2e-kops-scenario-public-jwks.test-cncf-aws.k8s.io on 127.0.0.53:53: no such host
```
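For illustration, here is a minimal sketch of the kind of IAM grant implied by the AccessDenied error above. The `Statement`/`Policy` types and the `addProtokubeDNSPermissions` helper are hypothetical stand-ins, not kOps's real IAM model code, and the exact action list is an assumption; only `route53:ListHostedZones` is confirmed by the log.
```go
package iam

// Statement and Policy are hypothetical stand-ins for kOps's IAM policy
// builder types; the real structs differ in detail.
type Statement struct {
	Effect   string
	Action   []string
	Resource []string
}

type Policy struct {
	Version   string
	Statement []*Statement
}

// addProtokubeDNSPermissions (hypothetical) grants the control-plane
// instance role the Route 53 calls protokube needs to publish the
// kops-controller record. route53:ListHostedZones, the call denied in the
// log above, does not support resource-level scoping, so it is granted on
// "*"; record changes are scoped to the cluster's hosted zone.
func addProtokubeDNSPermissions(p *Policy, hostedZoneARN string) {
	p.Statement = append(p.Statement,
		&Statement{
			Effect: "Allow",
			Action: []string{
				"route53:ChangeResourceRecordSets",
				"route53:ListResourceRecordSets",
				"route53:GetHostedZone",
			},
			Resource: []string{hostedZoneARN},
		},
		&Statement{
			Effect:   "Allow",
			Action:   []string{"route53:ListHostedZones", "route53:GetChange"},
			Resource: []string{"*"},
		},
	)
}
```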
Add io2 case and correct the IOPS minimum value check
Add gp3 case
Add io2 and gp3 parameter ratio validation logic
Add volumeThroughput parameter for disks that support it
Add volumeThroughput components throughout the EBS structs
Add volumeThroughput to the versioned API
Update API machinery and CRDs
Update apimachinery
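A hedged sketch of what the validation described above might look like; the function and parameter names (`validateVolumeSpec`, `sizeGB`, `iops`, `throughputMBps`) are illustrative rather than the actual kOps API, and the numeric limits are AWS's published EBS constraints:
```go
package validation

import "fmt"

// validateVolumeSpec sketches the io2/gp3 IOPS, ratio, and throughput
// checks. Parameter names mirror the new volumeIops/volumeThroughput
// fields but are not the exact kOps API.
func validateVolumeSpec(volumeType string, sizeGB, iops, throughputMBps int32) error {
	// volumeThroughput only applies to gp3 volumes.
	if throughputMBps != 0 && volumeType != "gp3" {
		return fmt.Errorf("volumeThroughput is not supported for %q volumes", volumeType)
	}

	switch volumeType {
	case "io1":
		// io1: at least 100 IOPS, at most 50 IOPS per GiB of volume size.
		if iops < 100 {
			return fmt.Errorf("io1 volumes require at least 100 provisioned IOPS")
		}
		if iops > 50*sizeGB {
			return fmt.Errorf("io1 IOPS may not exceed 50 per GiB (got %d IOPS for %d GiB)", iops, sizeGB)
		}
	case "io2":
		// io2: same 100 IOPS minimum, but a 500:1 IOPS-to-size ratio.
		if iops < 100 {
			return fmt.Errorf("io2 volumes require at least 100 provisioned IOPS")
		}
		if iops > 500*sizeGB {
			return fmt.Errorf("io2 IOPS may not exceed 500 per GiB (got %d IOPS for %d GiB)", iops, sizeGB)
		}
	case "gp3":
		// gp3: 3000-16000 IOPS, 125-1000 MiB/s throughput, and at most
		// 0.25 MiB/s of throughput per provisioned IOPS.
		if iops != 0 && (iops < 3000 || iops > 16000) {
			return fmt.Errorf("gp3 IOPS must be between 3000 and 16000")
		}
		if throughputMBps != 0 {
			if throughputMBps < 125 || throughputMBps > 1000 {
				return fmt.Errorf("gp3 throughput must be between 125 and 1000 MiB/s")
			}
			if iops != 0 && float64(throughputMBps) > 0.25*float64(iops) {
				return fmt.Errorf("gp3 throughput may not exceed 0.25 MiB/s per provisioned IOPS")
			}
		}
	}
	return nil
}
```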
When using an AWS NLB in front of the Kubernetes API servers, we can't
attach the EC2 security groups nominated in the Cluster
"spec.api.loadBalancer.additionalSecurityGroups" field directly to the
load balancer, as NLBs don't have associated security groups. Instead,
we intend to attach those nominated security groups to the machines
that will receive network traffic forwarded from the NLB's
listeners. Since the "kube-apiserver" program runs only on the master
or control plane machines, we need only attach those security groups
to those machines, by way of the ASG launch templates that come from
kOps InstanceGroups of role "master."
We were mistakenly including these security groups in launch templates
derived from InstanceGroups of all of our three current roles:
"bastion," "master," and "node." Instead, skip InstanceGroups of the
"bastion" and "node" roles and only target those of role "master."
kubetest2 doesn't download a kubectl client matching the version being tested, and the mismatch is causing test failures.
Until we can download a matching kubectl, we'll use the same minor version as the /usr/local/bin/kubectl binary baked into the prow image.