Automatic merge from submit-queue
remove --cluster-cidr from kube-router's manifest.
Kube-router was using the --cluster-cidr flag to discover the subnet allocated
for pod CIDRs. Kube-router can now infer the CIDR allocated for pods internally,
by reading the node specs from the Kubernetes API server.
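For illustration, this is the per-node field kube-router now reads instead of relying on the flag (a hedged sketch using kubectl; kube-router itself talks to the API server directly):
```bash
# The pod CIDR is stored in each Node object's spec.podCIDR field.
# (Node names do not always match the hostname; this is illustrative.)
kubectl get node "$(hostname)" -o jsonpath='{.spec.podCIDR}'
```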
Automatic merge from submit-queue
Inline Component Configuration Fix
The current implementation does not guard against bash interpolating variables in the content. This PR wraps the various spec content in a quoted 'EOF' heredoc delimiter so that all interpolation is ignored; a minimal illustration follows the list below. All changes were tested on a working cluster.
- updated the tests to reflect the changes
- wrapped the component configuration in a quoted 'EOF' heredoc to ensure interpolation is ignored
- dropped the t.Log debug line
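For context, this is the behavioural difference a quoted heredoc delimiter makes in bash (a minimal standalone sketch, not the actual nodeup code):
```bash
# Unquoted delimiter: bash expands variables and command substitutions.
cat <<EOF
home is $HOME
EOF

# Quoted delimiter: the content is emitted verbatim, with no interpolation.
cat <<'EOF'
home is $HOME
EOF
```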
Automatic merge from submit-queue
Docker Default Ulimits
The current implementation does not permit us to set default ulimits on the docker daemon (currently a requirement for our Elasticsearch). This PR adds the DefaultUlimit option to the DockerConfig.
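As a rough sketch of what the new option maps to (the ulimit values here are illustrative, not defaults from this PR), the docker daemon already accepts a repeatable flag:
```bash
# Each DefaultUlimit entry corresponds to a --default-ulimit daemon flag,
# applied to every container unless overridden per container.
dockerd --default-ulimit nofile=65535:65535 --default-ulimit memlock=-1:-1
```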
Kube-router was using --cluster-cidr flag to get the subnet allocated
for pod CIDR's. But now kube-router has the ability internally to infer
the CIDR allocated for the pod's by getting the information from
kubernetes API server node spec's
Automatic merge from submit-queue
Allow the strict IAM policies to be optional
The stricter IAM policies could potentially cause regressions for some edge cases, or may rely on nodeup image changes that haven't yet been deployed / tagged officially (currently the case on the master branch since PR https://github.com/kubernetes/kops/pull/3158 was merged).
This PR simply gates the new IAM policy rules behind a cluster spec flag, `EnableStrictIAM`, so kops will default to the original behaviour (where the S3 policies were completely open); a usage sketch follows below. It could also be used to gate PR https://github.com/kubernetes/kops/pull/3186 if that progresses any further.
- Or we could reject this and have the policies always strict! :)
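A hypothetical usage sketch (only the `EnableStrictIAM` name comes from this PR; the exact field casing and placement in the cluster spec are assumed, and the cluster name is illustrative):
```bash
# Opt in to the stricter IAM policies, then re-run the update.
kops edit cluster dev.example.com    # set enableStrictIAM: true in the spec
kops update cluster dev.example.com --yes
```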
Automatic merge from submit-queue
Cluster / InstanceGroup File Assets
@chrislovecnm @justinsb ...
The current implementation does not make it easy to fully customize nodes before the kube install. This PR adds the ability to include file assets in the Cluster and InstanceGroup specs, which can be consumed by nodeup, allowing those who need it (i.e. me :-)) greater flexibility around their nodes; a sketch follows the notes below. Note: nothing is enforced, so unless you specify file assets everything stays the same.
- updated the cluster_spec.md to reflect the changes
- permit users to place inline files into the cluster and instance group specs
- added the ability to template the files, the Cluster and InstanceGroup specs are passed into context
- cleaned up missed comments, unordered imports, etc. along the journey
Notes: in addition to this, we need to look at detecting changes in the cluster and instance group specs. Thinking out loud, perhaps using a last_known_configuration annotation, similar to Kubernetes.
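A hypothetical sketch of what an inline file asset could look like (field names are assumed from this PR's description; see the updated cluster_spec.md for the authoritative schema), written with a quoted heredoc so nothing is interpolated:
```bash
# Append an illustrative fileAssets section to a cluster spec manifest.
cat <<'EOF' >> cluster.yaml
  fileAssets:
  - name: example-sysctl                 # assumed field: asset name
    path: /etc/sysctl.d/99-example.conf  # assumed field: destination path on the node
    roles: [Master, Node]                # assumed field: which roles receive the file
    content: |
      net.ipv4.ip_forward = 1
EOF
```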
Automatic merge from submit-queue
Create cluster requirements for DigitalOcean
Initial changes required to create a cluster state. Running `kops update cluster --yes` does not work yet.
Note that DO has already adopted cloud controller managers (https://github.com/digitalocean/digitalocean-cloud-controller-manager) so we set `--cloud-provider=external`. This will end up being the case for aws, gce and vsphere over the next couple of releases.
https://github.com/kubernetes/kops/issues/2150
```bash
$ kops create cluster --cloud=digitalocean --name=dev.asykim.com --zones=tor1
I0821 18:47:06.302218 28623 create_cluster.go:845] Using SSH public key: /Users/AndrewSyKim/.ssh/id_rsa.pub
I0821 18:47:06.302293 28623 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet tor1
Previewing changes that will be made:
I0821 18:47:11.457696 28623 executor.go:91] Tasks: 0 done / 27 total; 27 can run
I0821 18:47:12.113133 28623 executor.go:91] Tasks: 27 done / 27 total; 0 can run
Will create resources:
  Keypair/kops
    Subject o=system:masters,cn=kops
    Type client

  Keypair/kube-controller-manager
    Subject cn=system:kube-controller-manager
    Type client

  Keypair/kube-proxy
    Subject cn=system:kube-proxy
    Type client

  Keypair/kube-scheduler
    Subject cn=system:kube-scheduler
    Type client

  Keypair/kubecfg
    Subject o=system:masters,cn=kubecfg
    Type client

  Keypair/kubelet
    Subject o=system:nodes,cn=kubelet
    Type client

  Keypair/kubelet-api
    Subject cn=kubelet-api
    Type client

  Keypair/master
    Subject cn=kubernetes-master
    Type server
    AlternateNames [100.64.0.1, 127.0.0.1, api.dev.asykim.com, api.internal.dev.asykim.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]

  ManagedFile/dev.asykim.com-addons-bootstrap
    Location addons/bootstrap-channel.yaml

  ManagedFile/dev.asykim.com-addons-core.addons.k8s.io
    Location addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/dev.asykim.com-addons-dns-controller.addons.k8s.io-k8s-1.6
    Location addons/dns-controller.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
    Location addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-kube-dns.addons.k8s.io-k8s-1.6
    Location addons/kube-dns.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
    Location addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/dev.asykim.com-addons-limit-range.addons.k8s.io
    Location addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/dev.asykim.com-addons-storage-aws.addons.k8s.io
    Location addons/storage-aws.addons.k8s.io/v1.6.0.yaml

  Secret/admin
  Secret/kube
  Secret/kube-proxy
  Secret/kubelet
  Secret/system:controller_manager
  Secret/system:dns
  Secret/system:logging
  Secret/system:monitoring
  Secret/system:scheduler
Must specify --yes to apply changes
Cluster configuration has been created.
Suggestions:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster dev.asykim.com
* edit your node instance group: kops edit ig --name=dev.asykim.com nodes
* edit your master instance group: kops edit ig --name=dev.asykim.com master-tor1
Finally configure your cluster with: kops update cluster dev.asykim.com --yes
```
Automatic merge from submit-queue
Add proxy client support
This PR adds support for the `--proxy-client-cert-file` and `--proxy-client-key-file` command line args that the apiserver now accepts.
/cc @chrislovecnm @blakebarnett
This enables external admission controller webhooks, API aggregation,
and anything else that relies on the
--proxy-client-cert-file/--proxy-client-key-file apiserver args.
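For reference, these are the upstream kube-apiserver flags being wired up (the certificate paths below are illustrative, and the many other flags a real apiserver invocation needs are omitted):
```bash
# Client credentials the apiserver presents when calling aggregated API
# servers and admission webhooks through the proxy.
kube-apiserver \
  --proxy-client-cert-file=/srv/kubernetes/proxy-client.pem \
  --proxy-client-key-file=/srv/kubernetes/proxy-client-key.pem
```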
Automatic merge from submit-queue
Improving etcd volume detection logic, ensuring that root volumes are not mounted
Fixes: https://github.com/kubernetes/kops/issues/3167
When an AWS account has functionality that automatically adds an EC2 instance's tags to a volume, protokube can attempt to mount the root volume. This PR tightens the logic for detecting etcd volumes. Also, the two devices that AWS defines as root volume devices are never mounted. Added a unit test, which required refactoring the code into a separate method.
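To make the skip rule concrete (the device names are assumed from common AWS root-device defaults, not quoted from the PR):
```bash
# AWS root volumes typically appear as /dev/sda1 or /dev/xvda; protokube
# must never treat these as etcd data volumes, whatever tags they carry.
case "$device" in
  /dev/sda1|/dev/xvda) echo "root volume device, never mounting" ;;
  *)                   echo "candidate etcd volume: $device" ;;
esac
```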
Automatic merge from submit-queue
Delete old tags when cloudLabels / labels / taints are removed
If you remove custom cloudLabels/labels/taints from the cluster configuration, kops does not correctly update the AWS resources to delete the tags. This seems to be because it only calls the AWS API method `CreateOrUpdateTags`, which won't remove tags that aren't in the supplied list.
The current behaviour is that every `kops update cluster` will show a tag difference but never successfully apply the changes (remove the extra tags).
This PR will perform a diff of the current and expected tags, and call the `DeleteTags` API if there are any tags to delete.
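A rough sketch of that reconciliation in AWS CLI terms (kops itself calls the Go SDK; the resource ID and tag keys are illustrative):
```bash
# Apply tags that are missing or changed...
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=team,Value=web
# ...and, new in this PR, remove tags that are no longer in the expected set.
aws ec2 delete-tags --resources i-0123456789abcdef0 --tags Key=obsolete-label
```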