As discussed during yesterday's office hours.
This should be cherry-picked back to 1.16.
We'll need to bump the minor version in the 1.17 and master branches.
Once we officially drop support we could make this a sliding window that uses the `kops.Version` variable.
I think we could have deprecation warnings for the 2 oldest versions still supported by the kops version, announcing that the n+2 Kops version will no longer support the specified k8s version.
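If we go that way, a rough sketch of the sliding-window check is below; the window width, warning text, and helper names are assumptions, and the minor version would really come from `kops.Version`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kopsMinor would be derived from the kops.Version variable in practice.
const kopsMinor = 17

// supportWindow is an assumed number of k8s minor versions each kops version supports.
const supportWindow = 5

// warnIfNearlyUnsupported prints a deprecation warning when the requested
// k8s version is one of the two oldest versions still inside the window.
func warnIfNearlyUnsupported(k8sVersion string) {
	parts := strings.Split(k8sVersion, ".")
	if len(parts) < 2 {
		return
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return
	}
	oldestSupported := kopsMinor - supportWindow + 1
	if minor <= oldestSupported+1 {
		fmt.Printf("kubernetes %s is deprecated; kops 1.%d will no longer support it\n",
			k8sVersion, kopsMinor+2)
	}
}

func main() {
	warnIfNearlyUnsupported("1.13.7")
}
```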
Like we recently did with kops-controller, this means we aren't
reliant on pushing a docker container for development. This should
also fix the e2e tests, which otherwise break whenever we make an
incompatible change to dns-controller.
We then `docker load` it when using a KOPS_BASE_URL.
This should simplify the development process (particularly once we
also do this for dns-controller; at that point we won't need a
registry at all).
This should also fix the problems in CI, where the kops-controller
image isn't available. We've been getting away with testing with the
previous version for dns-controller, which changes pretty slowly. But
that's not a good idea for kops-controller, which is likely to be more
critical and evolve more rapidly.
You don't need to add an SSH public key; this change allows that
combination to work on AWS.
Basically, if a key name is set on the spec and there's no admin key
file, the key name will be used and the key will not be managed in
terraform.
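A minimal sketch of that decision, with hypothetical parameter and helper names rather than the real model-builder code:

```go
package main

import "fmt"

// useExistingKeyPair reports which AWS key pair name to use and whether
// terraform should manage a key pair at all.
func useExistingKeyPair(specSSHKeyName, adminPublicKeyPath string) (keyName string, manageInTerraform bool) {
	if specSSHKeyName != "" && adminPublicKeyPath == "" {
		// A key name is set on the spec and no admin key file was provided:
		// reuse the pre-existing AWS key pair and leave it out of terraform.
		return specSSHKeyName, false
	}
	// Otherwise fall back to importing a key pair from the public key file.
	return specSSHKeyName, true
}

func main() {
	name, manage := useExistingKeyPair("kubernetes.cluster.example.com", "")
	fmt.Printf("key name %q, managed in terraform: %v\n", name, manage)
}
```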
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
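For reference, a minimal sketch of what the wiring could look like once glog is gone; doing this today would hit the flag redefinition error described above:

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Registers -v, -logtostderr, etc. on the default flag set; calling this
	// while glog is still linked in would redefine the same flags and fail.
	klog.InitFlags(nil)
	flag.Parse()
	klog.Info("logging via klog")
}
```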
We recognize our primary location via string-matching, and we then
have a hard-coded list of mirrors for that location.
It didn't prove easy to make this much better, but we can hopefully do so
iteratively (e.g. fetching mirrors via URL).
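A hypothetical sketch of the string-matching approach; the primary location and mirror URLs below are placeholders, not our real list:

```go
package main

import (
	"fmt"
	"strings"
)

// mirrorsFor returns a hard-coded mirror list when the asset URL matches the
// known primary location, otherwise just the original URL.
func mirrorsFor(assetURL string) []string {
	const primary = "https://artifacts.example.com/kops/" // placeholder primary location
	if !strings.HasPrefix(assetURL, primary) {
		return []string{assetURL}
	}
	suffix := strings.TrimPrefix(assetURL, primary)
	return []string{
		assetURL,
		"https://mirror-1.example.com/kops/" + suffix, // placeholder mirrors
		"https://mirror-2.example.com/kops/" + suffix,
	}
}

func main() {
	fmt.Println(mirrorsFor("https://artifacts.example.com/kops/1.11.0/linux/amd64/nodeup"))
}
```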
The launch configuration test exposed that our integration tests don't
retry for very long, and wait a long time in between retries.
Create a RunTasksOptions type to hold the parameters, in particular the
max task time and the amount of time we wait when all tasks have
failed.
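A rough sketch of the shape of that type; the field names and defaults here are approximations, not the exact ones in the change:

```go
package main

import (
	"fmt"
	"time"
)

// RunTasksOptions holds the retry parameters for running cloudup tasks.
type RunTasksOptions struct {
	// MaxTaskDuration is the total time we keep retrying tasks before giving up.
	MaxTaskDuration time.Duration
	// WaitAfterAllTasksFailed is how long we pause when every remaining task failed.
	WaitAfterAllTasksFailed time.Duration
}

// InitDefaults sets the production defaults; integration tests can override
// these with much shorter values so failures surface quickly.
func (o *RunTasksOptions) InitDefaults() {
	o.MaxTaskDuration = 10 * time.Minute
	o.WaitAfterAllTasksFailed = 10 * time.Second
}

func main() {
	var opts RunTasksOptions
	opts.InitDefaults()
	fmt.Printf("%+v\n", opts)
}
```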
We also have to stop passing the flag on ContainerOS, because it's set
in /etc/docker/default.json and it's now an error to pass the flag.
That in turn means we move those options to code, which are the last of
those legacy config options. (We still have a few tasks declaratively
defined though)
We previously needed them to allow list operations; however we now use a
keyset.yaml file instead of listing keys. That should be the sole use,
so we should no longer need this permission.
If not, we can re-enable the code easily.
Automatic merge from submit-queue.
work on using file assets
Basic MVP for file assets.
- using file asset builder
- able to upload files
- using URL structs instead of strings everywhere
File assets and the SHA files are uploaded to the new location. When a
user uses S3, files are uploaded with the public-read ACL. The copyfile
task uses only the existing SHA value.
This PR includes major refactoring of the use of URLs. Strings are no
longer concatenated, but converted into a URL struct, and path.Join is
utilized.
A new values.go file is included so that we can start refactoring more
code out of the "fi" package.
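The pattern, roughly, is to parse once into a `url.URL` and join path segments with `path.Join` rather than concatenating strings; the base URL below is a placeholder:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// fileAssetURL joins path segments onto a base URL without string concatenation.
func fileAssetURL(base string, segments ...string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	u.Path = path.Join(append([]string{u.Path}, segments...)...)
	return u.String(), nil
}

func main() {
	s, err := fileAssetURL("https://example-state-store.s3.amazonaws.com/cluster.example.com", "assets", "nodeup.sha1")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```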
We've done this in the API already, but we had a single CAStore
interface that did Keysets and SSHCredentials. Separate out
SSHCredentials into SSHCredentialStore, and start using API objects as
our primary representation.
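Roughly, the split looks like the sketch below; the method sets shown are simplified, not the exact interfaces:

```go
package fi

import "k8s.io/kops/pkg/apis/kops"

// CAStore keeps the keyset-related responsibilities (simplified).
type CAStore interface {
	FindKeyset(name string) (*kops.Keyset, error)
	StoreKeyset(name string, keyset *kops.Keyset) error
}

// SSHCredentialStore takes over the SSH credential responsibilities,
// expressed in terms of the kops.SSHCredential API object (simplified).
type SSHCredentialStore interface {
	FindSSHPublicKeys(name string) ([]*kops.SSHCredential, error)
	AddSSHPublicKey(name string, pubKey []byte) error
	DeleteSSHCredential(item *kops.SSHCredential) error
}
```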
This lets us configure cross-project permissions while needing only
minimal permissions ourselves, and it also gives us a nice hook for
future lockdown of object-level permissions.
This ensures that the cluster can read the kops state store files, even
if the GCS bucket is in a different project.
We automatically set up an IAM access policy that grants access.
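A minimal sketch, using the cloud.google.com/go/storage client, of the kind of grant being set up; the bucket and service-account names are placeholders:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/storage"
)

// grantStateStoreRead gives the cluster's service account read access to the
// state store bucket, even when the bucket lives in a different project.
func grantStateStoreRead(ctx context.Context, bucket, serviceAccount string) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	handle := client.Bucket(bucket).IAM()
	policy, err := handle.Policy(ctx)
	if err != nil {
		return err
	}
	// Object-level lockdown could be layered on later; for now grant bucket-wide read.
	policy.Add("serviceAccount:"+serviceAccount, iam.RoleName("roles/storage.objectViewer"))
	return handle.SetPolicy(ctx, policy)
}

func main() {
	ctx := context.Background()
	if err := grantStateStoreRead(ctx, "example-kops-state-store", "cluster-sa@other-project.iam.gserviceaccount.com"); err != nil {
		log.Fatal(err)
	}
}
```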
Automatic merge from submit-queue.
Implement DigitalOcean Droplet FI Task
Implements cloudup fi tasks for DigitalOcean droplets. It makes a few assumptions to reduce the size of this PR; those will be addressed in future PRs.
Also does some cleanup in the DigitalOcean `dns` package.
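For illustration, a hedged sketch of the create step using the godo client; the request fields are placeholders, and the real fi task wraps this in the usual task lifecycle:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/digitalocean/godo"
	"golang.org/x/oauth2"
)

func main() {
	token := os.Getenv("DIGITALOCEAN_ACCESS_TOKEN")
	client := godo.NewClient(oauth2.NewClient(context.Background(),
		oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})))

	// Placeholder droplet spec; the real task derives these from the cluster spec.
	req := &godo.DropletCreateRequest{
		Name:   "master-example-cluster",
		Region: "nyc3",
		Size:   "s-2vcpu-4gb",
		Image:  godo.DropletCreateImage{Slug: "ubuntu-18-04-x64"},
	}
	droplet, _, err := client.Droplets.Create(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created droplet %d", droplet.ID)
}
```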
The current implementation does not place any limitation on the dns annotations which the dns-controller can consume. In a multi-tenanted environment we have to ensure certain safeguards are met, so users can't by accident or intentionally alter our internal dns. Note: the current behaviour has not been changed;
- added the --watch-namespace option to the dns-controller and WatchNamespace to the spec (see the sketch after this list)
- cleaned up areas of the code where possible or related
- fixed vetting issues that I came across on the journey
- renamed the dns-controller watcher files
- fixed any vetting / formatting issues that I came across on the update
- removed the commented-out lines from the componentconfig, as they make it increasingly difficult to find what is supported, what is not, and the difference between them.
- added SerializeImagePulls, RegisterSchedulable to kubelet (by default they are ignored)
- added FeatureGates to the kube-proxy
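A hedged sketch of how a --watch-namespace flag can scope the controller's informers; the wiring below is illustrative, not a copy of the dns-controller code:

```go
package main

import (
	"flag"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	watchNamespace := flag.String("watch-namespace", "", "limit the dns annotations watched to this namespace (empty = all namespaces)")
	flag.Parse()

	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// When a namespace is given, only objects in that namespace are watched,
	// so tenants cannot alter records via annotations elsewhere.
	factory := informers.NewSharedInformerFactoryWithOptions(
		clientset, 10*time.Minute, informers.WithNamespace(*watchNamespace))
	_ = factory // the real controller registers its event handlers here
}
```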
Out of interest, can someone point me to where these multi-versioned componentconfigs are being used?
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
The namespace is currently inferred from the cluster name. We map dots
to dashes, because of namespace limitations, which could yield
collisions, but we'll deal with this by preventing users from creating
conflicting cluster names - i.e. you simply won't be able to create
both a.b.example.com and a-b.example.com.
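A small sketch of that mapping, which also shows why the two example names collide:

```go
package main

import (
	"fmt"
	"strings"
)

// namespaceForCluster maps a cluster name to a namespace by replacing dots with dashes.
func namespaceForCluster(clusterName string) string {
	return strings.Replace(clusterName, ".", "-", -1)
}

func main() {
	fmt.Println(namespaceForCluster("a.b.example.com")) // a-b-example-com
	fmt.Println(namespaceForCluster("a-b.example.com")) // a-b-example-com (collision)
}
```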
The AttachISO task creates the user-data/meta-data cloud-init files and builds a cloud-init.iso file using the "genisoimage" tool. It then uploads the ISO to the datastore where the master/worker VM resides and inserts it into the CD-ROM device of the master/worker VM. When the master/worker VM powers on, the cloud-init package in it runs the bootstrap script that downloads nodeup and runs it.
Also removed redundant VirtualMachineModelBuilder that does nothing.
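For illustration, a hedged sketch of the genisoimage invocation; the directory and output name are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// buildCloudInitISO packs user-data and meta-data into an ISO that the
// cloud-init NoCloud datasource will recognise (volume label "cidata").
func buildCloudInitISO(dir string) error {
	cmd := exec.Command("genisoimage",
		"-output", "cloud-init.iso",
		"-volid", "cidata",
		"-joliet", "-rock",
		"user-data", "meta-data")
	cmd.Dir = dir
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("genisoimage failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := buildCloudInitISO("./cloudinit"); err != nil {
		log.Fatal(err)
	}
}
```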
Testing done:
1. Tested end to end that the master and worker VMs execute the cloud-init script successfully.
2. "make ci" is successful.
Accept vSphere's server, datacenter, and cluster settings via the flags
"vsphere-server", "vsphere-datacenter", and "vsphere-resource-pool".
Username and password can be set by environment variables:
"VSPHERE_USERNAME" and "VSPHERE_PASSWORD".
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
We build a statically linked version and distribute it with kops.
Note that our version of socat does not include libssl, but kubernetes
does not use it anyway.
User reports of kubelet flags not being passed; moved more to code.
Also found & fixed the likely root-cause issue: we have two copies of
the cluster spec and were not being precise about which one we wanted to
use at all times.