The current implementation does not permit us to set the default ulimit on the docker daemon (currently a requirement for our logstash). This PR adds the DefaultUlimit option to the DockerConfig.
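A minimal sketch of how this might look in the cluster spec, assuming the option surfaces as `defaultUlimit` under `docker` (the field placement and values here are illustrative):

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  docker:
    # each entry uses docker's --default-ulimit format: <type>=<soft>:<hard>
    defaultUlimit:
    - "nofile=65536:65536"
    - "nproc=8192:8192"
```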
Automatic merge from submit-queue
Cluster / InstanceGroup File Assets
@chrislovecnm @justinsb ...
The current implementation does not make it easy to fully customize nodes before the kube install. This PR adds the ability to include file assets in the cluster and instanceGroup specs, which can be consumed by nodeup, allowing those who need it (i.e. me :-)) greater flexibility around their nodes. Note: nothing is enforced, so unless you specify anything, everything stays the same.
- updated the cluster_spec.md to reflect the changes
- permit users to place inline files into the cluster and instance group specs
- added the ability to template the files; the Cluster and InstanceGroup specs are passed into the context
- cleaned up missed comments, unordered imports, etc. along the journey
Note: in addition to this, we need to look at detecting changes in the cluster and instance group specs. Thinking out loud, perhaps using a last_known_configuration annotation, similar to Kubernetes.
This enables external admission controller webhooks, api aggregation,
and anything else that relies on the
--proxy-client-cert-file/--proxy-client-key-file apiserver args.
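The corresponding apiserver settings might surface in the cluster spec roughly as follows; the field names and certificate paths are assumptions for illustration, not confirmed by this commit:

```yaml
spec:
  kubeAPIServer:
    # assumed field names mirroring the apiserver flags above
    proxyClientCertFile: /srv/kubernetes/apiserver-aggregator.cert
    proxyClientKeyFile: /srv/kubernetes/apiserver-aggregator.key
```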
- removed the Mode field from the FileAsset spec
- removed the ability to template the content
- removed the need to specify the Path, instead defaulting to /srv/kubernetes/assets/<name>
- changed the FileAssets from []*FileAssets to []FileAssets
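A hedged sketch of what a file asset might look like after these changes; the field names (`fileAssets`, `roles`, `content`) and the example values are assumed for illustration:

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  fileAssets:
  - name: sysctl-tuning            # written to /srv/kubernetes/assets/sysctl-tuning by default
    roles:                         # which instance roles receive the file
    - Master
    - Node
    content: |
      net.ipv4.ip_forward = 1
```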
Automatic merge from submit-queue
Cluster Hooks Enhancement
The current implementation is limited to docker exec, without ordering or any bells and whistles. This PR extends the functionality of the hook spec (see the sketch after this list) by:
- added ordering to the hooks, with users able to set the requires and before of the unit
- cleaned up the manifest code, added tests, and permitted setting a section raw
- added the ability to filter hooks via master and node roles
- updated the documentation to reflect the changes
- extended the hooks to permit adding hooks per instanceGroup as well as cluster-wide
- @note, instanceGroups are permitted to override the cluster-wide hooks for ease of testing
- on the journey, tried to fix up Go idioms such as import ordering and comments on global exports
- @question: v1alpha1 doesn't appear to have Subnet fields; are these different versions being used anywhere?
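A hedged sketch of a hook using the ordering and role filtering described above; the field names (`requires`, `before`, `roles`, `manifest`) are taken from the description, and the unit content is illustrative only:

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  hooks:
  - name: disable-thp.service      # hypothetical hook name
    roles:                         # only run on these instance roles
    - Master
    - Node
    requires:                      # becomes Requires= in the generated unit
    - network.target
    before:                        # becomes Before=; run prior to docker starting
    - docker.service
    manifest: |
      Type=oneshot
      ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
```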
- removed the StorageType from the etcd cluster spec (sticking with the Version field only)
- changed the protokube flag back to -etcd-image
- users have to explicitly set the etcd version now; the latest version in gcr.io is 3.0.17
- reverted the ordering on the populate spec
The current implementation is running v2.2.1, which is two years old and end of life. This PR adds the ability to use etcd v3 and set the version if required. Note, at the moment the image is still using the gcr.io registry image. Also note that, much like TLS, there is presently no 'automated' migration path from v2 to v3.
- the feature is gated behind the storageType of the etcd cluster; both clusters, events and main, must use the same storage type
- the version for v2 is unchanged and pinned at v2.2.1, with v3 using v3.0.17
- @question: we should consider allowing the user to override the images, though I think this should be addressed more generically rather than as one-offs here and there. I know Chris is working on an asset registry?
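For illustration, pinning the etcd version per the bullets above might look like this in the cluster spec (member definitions omitted for brevity):

```yaml
spec:
  etcdClusters:
  - name: main
    version: 3.0.17   # must be set explicitly to opt in to v3
  - name: events
    version: 3.0.17   # both clusters must use the same storage type
```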
- switched to using an array of roles rather than boolean flags for node selection
- fixed up the README to reflect the changes
- added the docker.service as a Requires to all docker exec hooks
- extended the hooks to permit adding hooks per instanceGroup as well as cluster-wide
- @note, instanceGroups are permitted to override the cluster-wide hooks for ease of testing
- updated the documentation to reflect the changes
- on the journey, tried to fix up Go idioms such as import ordering and comments on global exports
- @question: v1alpha1 doesn't appear to have Subnet fields; are these different versions being used anywhere?
The present implementation of hooks only performs docker exec, which isn't that flexible. This PR permits the user to more fully customize systemd units on the instances.
- cleaned up the manifest code, added tests and permit setting a section raw
- added the ability to filter hooks via master and node roles
- updated the documentation to reflect the changes
- cleaned up some of the vetting issues
The current implementation does not permit the user to order the hooks. This PR adds optional Requires, Before and Documentation to the HookSpec, which are added to the systemd unit if specified.
Automatic merge from submit-queue
Kubelet API Certificate
A while back, options to permit securing kube-apiserver to kubelet api comms were added in [PR2831](https://github.com/kubernetes/kops/pull/2831), using the server.cert and server.key as a testing ground. This PR formalizes the options and generates a client certificate on the user's behalf (note, the server{.cert,key} can no longer be used post 1.7 as the certificate usage is checked, i.e. it's not a client cert). Users now only need to add anonymousAuth: false (sketched after the notes below) to enable secure api-to-kubelet comms. I'd like to make this the default for all new builds; I'm not sure where to place it.
- updated the security.md to reflect the changes
- issue a new kubelet-api client certificate, used to secure and authorize comms between the apiserver and kubelet
- fixed any formatting issues I came across on the journey
- fixed the various issues highlighted in https://github.com/kubernetes/kops/pull/3125
- changed the documentation to make more sense
- changed the logic of the UseSecureKubelet to return early
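For illustration, the opt-in described above sits in the cluster spec roughly like this (placement under `spec.kubelet` assumed):

```yaml
spec:
  kubelet:
    # disables anonymous access to the kubelet api; the apiserver then
    # authenticates using the new kubelet-api client certificate
    anonymousAuth: false
```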
Automatic merge from submit-queue
Etcd TLS Options
The current implementation does not put any transport security on the etcd cluster. This PR provides an optional flag to enable TLS on the etcd cluster (sketched after the notes below).
- cleaned up and fixed any formatting issues on the journey
- added two new certificates (server/client) for etcd peers and a client certificate for kube-apiserver and perhaps others (calico?)
- disabled the protokube service for nodes completely as it is not required; note this was first raised in https://github.com/kubernetes/kops/pull/3091, but figured it would be easier to place in here given the relation
- updated the protokube codebase to reflect the changes, removing the master option as it's no longer required
- added additional integration tests for the protokube manifests
- note, still need to add documentation, but opening the PR to get feedback
- one outstanding issue is the migration from http -> https for preexisting clusters; I'm going to ask on the CoreOS board for the best options
- added the master option back to protokube, updating the nodeup model and protokube code
- removed any comments not related to the PR, as suggested
- reverted the ordering of the mutex in the AWSVolumes in protokube
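A hedged sketch of the opt-in, assuming the flag surfaces as `enableEtcdTLS` on each etcd cluster (flag name assumed, not confirmed by this PR text):

```yaml
spec:
  etcdClusters:
  - name: main
    enableEtcdTLS: true   # peers and clients then use the new certificates
  - name: events
    enableEtcdTLS: true
```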
Previously the configuration was written after docker had been started, so it
was actually only applied after a reboot.
Manually reload systemd and restart docker to ensure the configuration has been
applied.
Automatic merge from submit-queue
Configure docker on CoreOS/ContainerOS
While the installation of docker should be skipped, docker should still be
configured to allow overriding the docker config using kops.
Fixes https://github.com/kubernetes/kops/issues/3057
//cc @aledbf
Automatic merge from submit-queue
Add `kops create secret dockerconfig` feature
This adds a well-known secret name `dockerconfig` which will automatically
be used if present to create `/root/.docker/config.json` on all nodes. This will
allow private registries to be used for kops hooks as well as any k8s images
without the need to define `imagePullSecrets` in every namespace.
closes https://github.com/kubernetes/kops/issues/2505
- fixed any of the vetting / formatting issues that I came across on the update
- removed the commented-out lines from the componentconfig, as it makes it increasingly difficult to find what is supported, what is not, and the difference between them
- added SerializeImagePulls, RegisterSchedulable to kubelet (by default they are ignored)
- added FeatureGates to the kube-proxy
Out of interest, can someone point me to where these multi-versioned componentconfigs are being used?
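For illustration, the newly wired-up settings might be used roughly like this; the feature gate shown is a hypothetical example value:

```yaml
spec:
  kubelet:
    serializeImagePulls: false      # pull images in parallel
    registerSchedulable: true
  kubeProxy:
    featureGates:
      SupportIPVSProxyMode: "true"  # hypothetical example gate
```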
At present a number of secrets are downloaded to the /srv/kubernetes directory regardless of role (master, node). This limits
the node role to only download the ca.crt; the rest are for master nodes only.
- removes basic_auth.csv, ca.key, known_tokens.csv, server.cert and server.key, leaving only the ca.crt
fixes #2606
Most of the changes are similar to the currently supported CNI networking
providers. Kube-router also supports an IPVS-based service proxy which can
be used as a replacement for kube-proxy. So the manifest for kube-router
included with this patch enables kube-router to provide pod-to-pod
networking, an IPVS-based service proxy, and an ingress pod firewall.
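Assuming kube-router is selected like the other CNI providers in the cluster spec (the `kuberouter` key and the kube-proxy toggle are assumptions for illustration):

```yaml
spec:
  networking:
    kuberouter: {}      # pod networking, service proxy and ingress firewall come from kube-router
  kubeProxy:
    enabled: false      # assumed: kube-router's IPVS proxy replaces kube-proxy
```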
Fixed an oops I created in #2494 where log rotation does not function
as expected.
The kube-apiserver first has to rename the existing audit log prior to a new one
being created. Renaming is not possible when the audit file is mounted
directly as the host path; kube-apiserver will return a 'Device or
resource busy' error when it tries to do so. So instead, we mount the
directory containing the file rather than the file itself. Also removed the
creation of an empty audit log file, as that is no longer necessary now that
Docker mounts a directory.
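A hedged sketch of the idea (not the exact kops manifest): mounting the log's parent directory lets the apiserver rename files inside it during rotation. The image tag and paths here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/kube-apiserver:v1.6.0   # illustrative tag
    command:
    - /usr/local/bin/kube-apiserver
    - --audit-log-path=/var/log/kube-apiserver-audit.log
    volumeMounts:
    - name: logdir
      mountPath: /var/log            # mount the directory, not the file, so renames succeed
  volumes:
  - name: logdir
    hostPath:
      path: /var/log
```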
"If an audit log file already exists, Kubernetes appends new audit logs
to that file. Otherwise, Kubernetes creates an audit log file at the
location you specified in audit-log-path. If the audit log file exceeds
the size you specify in audit-log-maxsize, Kubernetes will rename the
current log file by appending the current timestamp on the file name
(before the file extension) and create a new audit log file. Kubernetes
may delete old log files when creating a new log file; you can configure
how many files are retained and how old they can be by specifying the
audit-log-maxbackup and audit-log-maxage options."
Source: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
Tested this on Kubernetes 1.6 with the audit log path specified
as:
/var/log/kube-apiserver-audit.log
The kube-apiserver container has this mounted:
/dev/xvda1 on /var/log type ext4 (rw,relatime,data=ordered)
This commit exposes kube-apiserver's audit log to the host as a host
mapping.
PR #1872 gave users the ability to define a custom log path for the
apiserver to write its audit logs to. Prior to this commit, the log file
would stay within the container's filesystem, and getting access to it from
outside the container was a nuisance.
This change allows a logging aggregator, like fluentd, to be able
to read and tail this log from outside the kube-apiserver container.
Stop using the networking-plugin-dir flag, and replace with the
cni-bin-dir and cni-conf-dir flags, set appropriately.
Thanks for spotting, @prachetasp.
Issue #2267
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
We build a statically linked version and distribute it with kops.
Note that our version of socat does not include libssl, but kubernetes
does not use it anyway.
* Detect CoreOS
* Move key manifests to code, to tolerate read-only mounts
* Misc refactorings so more code can be shared
* Change lots of ints to int32s in the models
* Run nodeup as a oneshot systemd service, rather than relying on
cloud-init behaviour which varies across distros
So the flow is that we recommend (or strongly recommend) a new kops
version when one is required for a new k8s version, and then the new kops
version will recommend (or strongly recommend) a new k8s version.
We don't have a notion of multiple recommended k8s versions per kops
version - that is what channels are for.
Users are always free to disregard updates, even "required" ones, by
setting a flag.