This commit also introduces support for adding token projection volumes for well-known SAs.
This is slightly less complicated than explicitly parsing the objects in a manifest.
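For reference, a projected service account token volume on a pod spec looks roughly like this (a sketch; the volume name, audience, and path are illustrative assumptions, not the exact values kops renders):
```yaml
# Sketch only: a serviceAccountToken projected volume as it might be
# attached for a well-known SA. Name, audience, and path are assumptions.
volumes:
- name: token-projection
  projected:
    sources:
    - serviceAccountToken:
        audience: api            # assumed audience
        expirationSeconds: 3600
        path: token
```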
Ensure apiserver role can only be used on AWS (because of firewalling)
Apply api-server label to CP as well
Consolidate node not ready validation message
Guard apiserver nodes with a feature flag
Rename Apiserver role to APIServer
Add an integration test for apiserver nodes
Enumerate all roles in rolling update docs
Apply suggestions from code review
Co-authored-by: Steven E. Harris <seh@panix.com>
This should not be needed; dns-controller should run on a control-plane
node, so there should not be a bootstrapping problem with the nodes.
Reverts #10529
These permissions are needed by protokube to create the kops-controller DNS record that allows nodes to bootstrap.
See these logs: https://storage.googleapis.com/kubernetes-jenkins/logs/e2e-kops-grid-scenario-public-jwks/1345956556562239488/artifacts/ip-172-20-48-1.sa-east-1.compute.internal/protokube.log
```
I0104 05:03:51.264472 6482 dnscache.go:74] querying all DNS zones (no cached results)
I0104 05:03:51.264570 6482 route53.go:53] AWS request: route53 ListHostedZones
W0104 05:03:51.389485 6482 dnscontroller.go:124] Unexpected error in DNS controller, will retry: error querying for zones: error querying for DNS zones: AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-kops-scenario-public-jwks.test-cncf-aws.k8s.io/i-05b1db10d1a5b8637 is not authorized to perform: route53:ListHostedZones
```
and the nodeup logs on nodes that couldn't join the cluster:
```
Jan 04 04:55:53.500187 ip-172-20-38-84 nodeup[2070]: W0104 04:55:53.500117 2070 executor.go:131] error running task "BootstrapClient/BootstrapClient" (9m52s remaining to succeed): Post "https://kops-controller.internal.e2e-kops-scenario-public-jwks.test-cncf-aws.k8s.io:3988/bootstrap": dial tcp: lookup kops-controller.internal.e2e-kops-scenario-public-jwks.test-cncf-aws.k8s.io on 127.0.0.53:53: no such host
```
This should be much easier to start with and to get under testing. It
only works with a load balancer; it sets the apiserver to allow
anonymous auth and grants the anonymous user permission to read our
JWKS documents. But it shouldn't need a second bucket or anything of
that nature.
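For context, granting the anonymous user read access to the issuer discovery endpoints is typically done with RBAC along these lines (a sketch; the binding name is an assumption and kops's actual manifest may differ):
```yaml
# Sketch: allow unauthenticated requests to read the issuer discovery
# endpoints (/.well-known/openid-configuration and /openid/v1/jwks).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-issuer-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
```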
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
* Force cilium-operator to run on master nodes
* Add an option for setting the Cilium IPAM mode (see the sketch after this list)
* If the Cilium IPAM mode is eni, add additional permissions to master nodes
* Allow NonMasqueradeCIDR to overlap with NetworkCIDR when Cilium ENI is enabled
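For reference, the IPAM option lands in the cluster spec roughly like this (a sketch following the kops Cilium docs):
```yaml
# Sketch: enabling Cilium's ENI IPAM mode in the kops cluster spec.
spec:
  networking:
    cilium:
      ipam: eni
```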
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
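Once glog is gone, the wiring would look roughly like this (a sketch of standard klog initialization, not the exact kops change):
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// InitFlags registers klog's flags (-v, -logtostderr, ...) on the
	// default FlagSet. Calling it while glog is still linked in would
	// register the same flag names twice and panic at startup.
	klog.InitFlags(nil)
	flag.Parse()

	klog.Info("logging initialized via klog")
}
```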
Although amazon-vpc-cni-k8s adds tags to ENIs, kops does not grant the
corresponding permission, so tagging does not work by default.
This patch adds the ec2:CreateTags permission on ENIs to the
amazon-vpc-cni-k8s nodes policy.
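The added statement is along these lines (a sketch; the exact resource scoping kops renders may differ):
```json
{
  "Effect": "Allow",
  "Action": ["ec2:CreateTags"],
  "Resource": ["arn:aws:ec2:*:*:network-interface/*"]
}
```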
https://github.com/lyft/cni-ipvlan-vpc-k8s
This CNI solution is slightly different in that it doesn't require running a DaemonSet.
It requires:
* a config file in /etc/cni/net.d
* the binaries in /opt/cni/bin
* adding the --node-ip param to the kubelet
This code is modeled after the AmazonVPC cni bits.
I've left the setup of the required subnets as an exercise for the reader.
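To illustrate the --node-ip requirement above, the kubelet ends up being invoked with something like this (the IP is a placeholder for the instance's primary private address):
```
# Sketch: pass the primary private IP so the kubelet registers a
# predictable node address; the IP here is a placeholder.
kubelet --node-ip=10.0.12.34 ...
```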
Without this permission, controller-manager gets the following error:
```
failed to ensure load balancer for service XXX: Error trying to
deregister targets in target group:
"AccessDenied: User: arn:aws:sts::XXX:assumed-role/masters...
is not authorized to perform: elasticloadbalancing:DeregisterTargets
on resource: arn:aws:elasticloadbalancing:XXX
```
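The fix is to allow the ELBv2 target actions in the masters policy, roughly like this (a sketch; pairing RegisterTargets alongside is an assumption, and kops's ELB statements typically use a wildcard resource):
```json
{
  "Effect": "Allow",
  "Action": [
    "elasticloadbalancing:RegisterTargets",
    "elasticloadbalancing:DeregisterTargets"
  ],
  "Resource": ["*"]
}
```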
The etcd-manager will (ideally) take over etcd management. To provide a
nice migration path, and because we want etcd backups, we're creating a
standalone image that just backs up etcd in the etcd-manager format.
This isn't really ready for actual usage, but should be harmless because
it runs as a sidecar container.
The current implementation, from when etcd TLS was added, does not support using Calico, as the configuration and client certificates are not present. This PR updates the Calico manifests and adds distribution of the client certificate.
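For reference, calico/node consumes the certificates through its etcd TLS environment variables, roughly like this (a sketch; the paths the certificates are distributed to are assumptions):
```yaml
# Sketch: etcd TLS settings on the calico/node container; the
# certificate paths below are illustrative assumptions.
env:
- name: ETCD_CA_CERT_FILE
  value: /srv/kubernetes/ca.crt
- name: ETCD_CERT_FILE
  value: /srv/kubernetes/calico-client.crt
- name: ETCD_KEY_FILE
  value: /srv/kubernetes/calico-client.key
```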
Automatic merge from submit-queue
add autoscaling:DescribeLaunchConfigurations permission
As of 0.6.1, Cluster Autoscaler supports [scaling node groups from/to 0](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws#scaling-a-node-group-to-0), but requires the `autoscaling:DescribeLaunchConfigurations` permission.
It'd be great to have this in kops since this permission needs to be re-added to the master policy every time the cluster is updated.
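Concretely, the statement to add is along these lines (Describe* calls don't support resource-level restrictions, hence the wildcard):
```json
{
  "Effect": "Allow",
  "Action": ["autoscaling:DescribeLaunchConfigurations"],
  "Resource": ["*"]
}
```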
Automatic merge from submit-queue
Limit the IAM EC2 policy for the master nodes
Related to: https://github.com/kubernetes/kops/pull/3158
The EC2 policy for the master nodes is quite open currently, allowing them to create/delete/modify resources that are not associated with the cluster the node originates from. I've come up with a potential solution using condition keys to validate that the `ec2:ResourceTag/KubernetesCluster` tag matches the cluster name.
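As a sketch of the approach, mutating EC2 actions get a tag condition so they only apply to resources carrying the cluster's tag (the action list and cluster name here are illustrative, not the full policy):
```json
{
  "Effect": "Allow",
  "Action": [
    "ec2:DeleteVolume",
    "ec2:ModifyInstanceAttribute"
  ],
  "Resource": ["*"],
  "Condition": {
    "StringEquals": {
      "ec2:ResourceTag/KubernetesCluster": "my.cluster.example.com"
    }
  }
}
```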