Automatic merge from submit-queue
Create Keyset API type
A Keyset holds a set of keypairs or other secret cluster material.
It is a set to support rotation of keys.
This will allow us to store secrets on kops-server (and it is also a step towards
separating where we manage secrets from how we communicate them to running
clusters, which will allow bare-metal or KMS support).
Starting with just the API objects.
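As a rough sketch only (the field names here are illustrative guesses from the description above, not the final API), a Keyset holding rotating keypairs might look something like:
```yaml
# Hypothetical sketch; field names are illustrative, not the final API.
apiVersion: kops.k8s.io/v1alpha2
kind: Keyset
metadata:
  name: ca
spec:
  type: Keypair
  keys:
  # a *set* of keys, so new material can be added and old material
  # retired without breaking a running cluster
  - id: "1"
    publicMaterial: <base64-encoded certificate>
    privateMaterial: <base64-encoded private key>
```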
Automatic merge from submit-queue
Adds DNSControllerSpec and WatchIngress flag
This PR is in reference to #2496 and #2468, and the issues referenced therein relating to use of the watch-ingress flag.
This PR attempts to rectify this situation and gives users who want it the option to turn on watch-ingress without forcing it on them. It also prints a warning to the logs about potential side effects.
Includes notes in `docs/cluster_spec.md` to explain.
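A minimal sketch of enabling it in the cluster spec (assuming the flag is exposed under `externalDns`, per the notes in `docs/cluster_spec.md`):
```yaml
spec:
  externalDns:
    # lets dns-controller watch Ingress resources; mind the warning
    # about potential side effects before turning this on
    watchIngress: true
```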
Automatic merge from submit-queue
Additional Kubelet Options
This PR adds additional options to the kubelet spec, allowing users to set --runtime-request-timeout and --volume-stats-agg-period.
Related to issue https://github.com/kubernetes/kops/issues/3265
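Assuming the flags map onto kubelet spec fields in the usual camelCase way, setting them might look like:
```yaml
spec:
  kubelet:
    # durations are illustrative values, not recommendations
    runtimeRequestTimeout: 15m0s
    volumeStatsAggPeriod: 1m0s
```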
Automatic merge from submit-queue
Kubelet Readonly Port
The current implementation does not permit the user to specify the kubelet read-only port (which defaults to 10255 when unset). For security reasons we need this port switched off, i.e. set to 0. This PR retains the default behavior but adds readOnlyPort as an option for those who need to override it.
```diff
podInfraContainerImage: gcr.io/google_containers/pause-amd64:3.0
podManifestPath: /etc/kubernetes/manifests
+ readOnlyPort: 0
registerSchedulable: false
requireKubeconfig: true
```
And tested on the box
```shell
core@ip-10-250-34-23 ~ $ egrep -o 'read-only-port=[0-9]+' /etc/sysconfig/kubelet
read-only-port=0
```
Automatic merge from submit-queue
Allow user defined endpoint to host action for Canal
Adds the ability to define `Networking.Canal.DefaultEndpointToHostAction` in the Cluster Spec. This allows you to customise the behaviour of traffic routing from a pod to the host (after the Calico iptables chains have been processed). `ACCEPT` is the default value and is left as-is.
> If you want to allow some or all traffic from endpoint to host, set this parameter to "RETURN" or "ACCEPT". Use "RETURN" if you have your own rules in the iptables "INPUT" chain; Calico will insert its rules at the top of that chain, then "RETURN" packets to the "INPUT" chain once it has completed processing workload endpoint egress policy.
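Setting it in the cluster spec would look something like this (a minimal sketch, with RETURN as the non-default choice):
```yaml
spec:
  networking:
    canal:
      # default is ACCEPT; RETURN hands packets back to the INPUT chain
      defaultEndpointToHostAction: RETURN
```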
Automatic merge from submit-queue
Docker Default Ulimits
The current implementation does not permit us to set the default ulimits on the docker daemon (currently a requirement for our elasticsearch and logstash). This PR adds the DefaultUlimit option to the DockerConfig.
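Assuming the option surfaces under `docker` in the cluster spec, usage might look like this (values illustrative):
```yaml
spec:
  docker:
    # passed through to the docker daemon's --default-ulimit flags
    defaultUlimit:
    - "nofile=65536:65536"
    - "memlock=-1"
```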
Automatic merge from submit-queue
Allow the strict IAM policies to be optional
The stricter IAM policies could potentially cause regression for some edge-cases, or may rely on nodeup image changes that haven't yet been deployed / tagged officially (currently the case on master branch since PR https://github.com/kubernetes/kops/pull/3158 was merged in).
This PR just gates the new IAM policy rules behind a cluster spec flag, `EnableStrictIAM`, so it will default to the original behaviour (where the S3 policies were completely open). It could also be used to wrap PR https://github.com/kubernetes/kops/pull/3186 if it progresses any further.
- Or we could reject this and have the policies always strict! :)
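If the flag serializes in the obvious camelCase way (an assumption; the PR only names `EnableStrictIAM`), opting in would look like:
```yaml
spec:
  # defaults to false, preserving the original open S3 policies
  enableStrictIAM: true
```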
Automatic merge from submit-queue
Cluster / InstanceGroup File Assets
@chrislovecnm @justinsb ...
The current implementation does not make it easy to fully customize nodes before the kube install. This PR adds the ability to include file assets in the cluster and instanceGroup specs, which can be consumed by nodeup, allowing those who need it (i.e. me :-)) greater flexibility around their nodes (a spec sketch follows the notes below). Note: nothing is enforced; unless you specify file assets, everything stays the same.
- updated the cluster_spec.md to reflect the changes
- permit users to place inline files into the cluster and instance group specs
- added the ability to template the files, the Cluster and InstanceGroup specs are passed into context
- cleaned up missed comments, unordered imports etc. along the journey
Note: in addition to this, we need to look at detecting changes in the cluster and instance group specs. Thinking out loud: perhaps use a last_known_configuration annotation, similar to Kubernetes.
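A sketch of an inline file asset (field names assumed from the bullets above; per the templating note, content may also reference the Cluster and InstanceGroup specs):
```yaml
spec:
  fileAssets:
  - name: my-config
    path: /etc/sysconfig/my-config
    roles:
    - Master
    - Node
    content: |
      # rendered onto the node by nodeup
      SOME_SETTING=true
```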
Automatic merge from submit-queue
Create cluster requirements for DigitalOcean
Initial changes required to create a cluster state. Running `kops update cluster --yes` does not work yet.
Note that DO has already adopted cloud controller managers (https://github.com/digitalocean/digitalocean-cloud-controller-manager) so we set `--cloud-provider=external`. This will end up being the case for aws, gce and vsphere over the next couple of releases.
https://github.com/kubernetes/kops/issues/2150
```bash
$ kops create cluster --cloud=digitalocean --name=dev.asykim.com --zones=tor1
I0821 18:47:06.302218 28623 create_cluster.go:845] Using SSH public key: /Users/AndrewSyKim/.ssh/id_rsa.pub
I0821 18:47:06.302293 28623 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet tor1
Previewing changes that will be made:
I0821 18:47:11.457696 28623 executor.go:91] Tasks: 0 done / 27 total; 27 can run
I0821 18:47:12.113133 28623 executor.go:91] Tasks: 27 done / 27 total; 0 can run
Will create resources:
  Keypair/kops
        Subject                 o=system:masters,cn=kops
        Type                    client
  Keypair/kube-controller-manager
        Subject                 cn=system:kube-controller-manager
        Type                    client
  Keypair/kube-proxy
        Subject                 cn=system:kube-proxy
        Type                    client
  Keypair/kube-scheduler
        Subject                 cn=system:kube-scheduler
        Type                    client
  Keypair/kubecfg
        Subject                 o=system:masters,cn=kubecfg
        Type                    client
  Keypair/kubelet
        Subject                 o=system:nodes,cn=kubelet
        Type                    client
  Keypair/kubelet-api
        Subject                 cn=kubelet-api
        Type                    client
  Keypair/master
        Subject                 cn=kubernetes-master
        Type                    server
        AlternateNames          [100.64.0.1, 127.0.0.1, api.dev.asykim.com, api.internal.dev.asykim.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]
  ManagedFile/dev.asykim.com-addons-bootstrap
        Location                addons/bootstrap-channel.yaml
  ManagedFile/dev.asykim.com-addons-core.addons.k8s.io
        Location                addons/core.addons.k8s.io/v1.4.0.yaml
  ManagedFile/dev.asykim.com-addons-dns-controller.addons.k8s.io-k8s-1.6
        Location                addons/dns-controller.addons.k8s.io/k8s-1.6.yaml
  ManagedFile/dev.asykim.com-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
        Location                addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml
  ManagedFile/dev.asykim.com-addons-kube-dns.addons.k8s.io-k8s-1.6
        Location                addons/kube-dns.addons.k8s.io/k8s-1.6.yaml
  ManagedFile/dev.asykim.com-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
        Location                addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml
  ManagedFile/dev.asykim.com-addons-limit-range.addons.k8s.io
        Location                addons/limit-range.addons.k8s.io/v1.5.0.yaml
  ManagedFile/dev.asykim.com-addons-storage-aws.addons.k8s.io
        Location                addons/storage-aws.addons.k8s.io/v1.6.0.yaml
  Secret/admin
  Secret/kube
  Secret/kube-proxy
  Secret/kubelet
  Secret/system:controller_manager
  Secret/system:dns
  Secret/system:logging
  Secret/system:monitoring
  Secret/system:scheduler
Must specify --yes to apply changes
Cluster configuration has been created.
Suggestions:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster dev.asykim.com
* edit your node instance group: kops edit ig --name=dev.asykim.com nodes
* edit your master instance group: kops edit ig --name=dev.asykim.com master-tor1
Finally configure your cluster with: kops update cluster dev.asykim.com --yes
```
This enables external admission controller webhooks, api aggregation,
and anything else that relies on the
--proxy-client-cert-file/--proxy-client-key-file apiserver args.
- removed the Mode field from the FileAsset spec
- removed the ability to template the content
- removed the need to specify the Path, instead defaulting to /srv/kubernetes/assets/<name>
- changed the FileAssets from []*FileAssets to []FileAssets (a sketch of the simplified format follows)
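With those changes a minimal file asset is just a name and content, landing under the default path; a sketch:
```yaml
spec:
  fileAssets:
  - name: my-config
    # no path given: defaults to /srv/kubernetes/assets/my-config
    content: |
      SOME_SETTING=true
```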
Automatic merge from submit-queue
Cluster Hooks Enhancement
The current implementation is limited to docker exec, without ordering or any bells and whistles. This PR extends the functionality of the hook spec as follows (a sketch of the resulting spec follows the list):
- adds ordering to the hooks, allowing users to set the Requires and Before of the unit
- cleaned up the manifest code, added tests and permitted setting a section raw
- added the ability to filter hooks via master and node roles
- updated the documentation to reflect the changes
- extended the hooks to permit adding hooks per instanceGroup as well as per cluster
- @note, instanceGroup hooks are permitted to override the cluster-wide ones, for ease of testing
- on the journey, tried to fix Go idioms such as import ordering and comments for global exports
- @question: v1alpha1 doesn't appear to have Subnet fields; are these different versions being used anywhere?
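A sketch of an enhanced hook using the new fields (names assumed from the bullets above):
```yaml
spec:
  hooks:
  - name: disable-transparent-hugepages
    # role filtering: run this hook only on these instance roles
    roles:
    - Master
    - Node
    # systemd unit ordering
    requires:
    - network.target
    before:
    - docker.service
    # raw systemd section, instead of the docker exec default
    manifest: |
      Type=oneshot
      ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
```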
- removed the StorageType from the etcd cluster spec (sticking with the Version field only)
- changed the protokube flag back to -etcd-image
- users now have to explicitly set the etcd version; the latest version in gcr.io is 3.0.17
- reverted the ordering on the populate spec
The current implementation runs v2.2.1, which is two years old and end-of-life. This PR adds the ability to use etcd v3 and set the version if required (a spec sketch follows the notes below). Note that at the moment the image still comes from the gcr.io registry and, much like TLS, there is presently no 'automated' migration path from v2 to v3.
- the feature is gated behind the storageType of the etcd cluster; both clusters, events and main, must use the same storage type
- the version for v2 is unchanged and pinned at v2.2.1, with v3 using v3.0.17
- @question: we should consider allowing the user to override the images, though I think this should be addressed more generically rather than in one-offs here and there. I know Chris is working on an asset registry??
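A sketch of explicitly pinning the etcd version in the cluster spec (member and instance group names illustrative):
```yaml
spec:
  etcdClusters:
  # events and main must use the same storage type/version
  - name: main
    version: 3.0.17
    etcdMembers:
    - name: a
      instanceGroup: master-us-east-1a
  - name: events
    version: 3.0.17
    etcdMembers:
    - name: a
      instanceGroup: master-us-east-1a
```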
- switched to using an array of roles rather than boolean flags for node selection
- fixed up the README to reflect the changes
- added the docker.service as a Requires to all docker exec hooks
The present implementation of hooks only performs docker exec, which isn't that flexible. This PR permits the user to further customize systemd units on the instances.
- cleaned up some of the vetting issues
The current implementation does not permit the user to order the hooks. This PR adds optional Requires, Before and Documentation fields to the HookSpec, which are added to the systemd unit if specified.
Another step towards working totally offline (which may never be fully
achievable, because of the need to hash assets). But this should ensure that
when we update the stable channel, we are testing against that version
in the tests; otherwise it is easy to break master.