AWS will sometimes return an error like "resource not found" when a
DescribeTags or CreateTags call immediately follows creation of the
resource. Introduce a retry loop when we get an error of the
appropriate type.
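A rough sketch of the retry, assuming the aws-sdk-go v1 error types;
the helper name, attempt count, and the exact error-code check are
illustrative rather than the real ones:

    package awsretry

    import (
        "strings"
        "time"

        "github.com/aws/aws-sdk-go/aws/awserr"
    )

    // retryOnNotFound retries fn for a short window when AWS reports a
    // freshly created resource as not found, which can happen because
    // the API is eventually consistent.
    func retryOnNotFound(fn func() error) error {
        var err error
        for attempt := 0; attempt < 10; attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            awsErr, ok := err.(awserr.Error)
            if !ok || !strings.Contains(awsErr.Code(), "NotFound") {
                // Not the eventual-consistency case; fail immediately.
                return err
            }
            time.Sleep(2 * time.Second)
        }
        return err
    }

The tagging call is then wrapped, e.g.
retryOnNotFound(func() error { _, err := ec2Client.CreateTags(request); return err }),
where ec2Client and request are whatever the caller already has in hand.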
This avoids spurious changes, and is also more intuitive for the user:
whatever name the user gave the image, if it resolves to the same
image, that is the name we will use.
AWS reformats them (inserting lots of whitespace), so a naive string
comparison reports spurious differences. Instead we parse both as JSON
and do a reflect.DeepEqual check; if they are equal then we pretend the
actual value was the expected value.
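A minimal sketch of that comparison (the function name is ours):

    package policyjson

    import (
        "encoding/json"
        "reflect"
    )

    // equivalentJSON reports whether two JSON documents encode the same
    // value, ignoring formatting differences such as the whitespace AWS
    // inserts.
    func equivalentJSON(expected, actual string) bool {
        var e, a interface{}
        if err := json.Unmarshal([]byte(expected), &e); err != nil {
            return false
        }
        if err := json.Unmarshal([]byte(actual), &a); err != nil {
            return false
        }
        return reflect.DeepEqual(e, a)
    }

When the two documents are equivalent, the actual value is reported
back as the expected value so no change is detected.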
Fix both the calculation itself, so it matches AWS's unusual
fingerprint algorithm, and the comparison logic by which we infer that
if the fingerprint matches, the public key matches as well.
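For reference, AWS fingerprints an imported public key as the MD5 of
its DER (PKIX) encoding, printed as colon-separated hex, not the usual
OpenSSH fingerprint (AWS-generated keys use a different scheme again).
A rough sketch of that calculation, with an illustrative function name:

    package awsfingerprint

    import (
        "crypto/md5"
        "crypto/x509"
        "encoding/hex"
        "fmt"
        "strings"

        "golang.org/x/crypto/ssh"
    )

    // awsKeyFingerprint computes the fingerprint AWS reports for an
    // imported public key: the MD5 of the PKIX/DER encoding of the key,
    // formatted as colon-separated hex pairs.
    func awsKeyFingerprint(authorizedKey string) (string, error) {
        sshPub, _, _, _, err := ssh.ParseAuthorizedKey([]byte(authorizedKey))
        if err != nil {
            return "", fmt.Errorf("parsing public key: %v", err)
        }
        cryptoPub, ok := sshPub.(ssh.CryptoPublicKey)
        if !ok {
            return "", fmt.Errorf("unexpected key type %T", sshPub)
        }
        der, err := x509.MarshalPKIXPublicKey(cryptoPub.CryptoPublicKey())
        if err != nil {
            return "", fmt.Errorf("encoding public key: %v", err)
        }
        sum := md5.Sum(der)
        parts := make([]string, len(sum))
        for i, b := range sum {
            parts[i] = hex.EncodeToString([]byte{b})
        }
        return strings.Join(parts, ":"), nil
    }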
We call the Render methods on Tasks by reflection, and some of them
don't care about the Target, but do care about the Context (e.g. the
PKI tasks, which only care about the CAStore).
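A rough sketch of that dispatch, under the assumption that Render's
final return value (if any) is an error; the helper name is ours:

    package tasks

    import (
        "fmt"
        "reflect"
    )

    // invokeRender calls task.Render via reflection, supplying each
    // parameter from whichever of context or target is assignable to
    // it, so tasks only declare the arguments they actually need.
    func invokeRender(task, context, target interface{}) error {
        method := reflect.ValueOf(task).MethodByName("Render")
        if !method.IsValid() {
            return fmt.Errorf("task %T has no Render method", task)
        }
        var args []reflect.Value
        for i := 0; i < method.Type().NumIn(); i++ {
            param := method.Type().In(i)
            switch {
            case reflect.TypeOf(context).AssignableTo(param):
                args = append(args, reflect.ValueOf(context))
            case reflect.TypeOf(target).AssignableTo(param):
                args = append(args, reflect.ValueOf(target))
            default:
                return fmt.Errorf("cannot satisfy Render parameter of type %v", param)
            }
        }
        results := method.Call(args)
        if n := len(results); n > 0 {
            if err, ok := results[n-1].Interface().(error); ok && err != nil {
                return err
            }
        }
        return nil
    }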
Remove a bunch of inconsistencies so that the reflective walk is not
surprising, and also rename it to ReflectRecursive.
Then use this for dry-run change printing.
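As an illustration of the kind of walk this enables for dry-run output
(a simplified sketch, not the actual ReflectRecursive):

    package tasks

    import (
        "fmt"
        "reflect"
    )

    // printChanges walks a changes struct recursively and prints every
    // field that is set, producing a dry-run style summary of pending
    // modifications.
    func printChanges(prefix string, v reflect.Value) {
        switch v.Kind() {
        case reflect.Ptr, reflect.Interface:
            if !v.IsNil() {
                printChanges(prefix, v.Elem())
            }
        case reflect.Struct:
            t := v.Type()
            for i := 0; i < v.NumField(); i++ {
                printChanges(prefix+"."+t.Field(i).Name, v.Field(i))
            }
        default:
            if v.CanInterface() && !v.IsZero() {
                fmt.Printf("  %s = %v\n", prefix, v.Interface())
            }
        }
    }

Calling printChanges("Task", reflect.ValueOf(changes)) then prints only
the fields that are due to change.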
* GCE support only
* Key and secret generation
* "Direct mode" makes API calls
* "Dry run mode" previews the changes
* Terraform output (though key generation not working for master ip)
* cloud-init output (though debian image does not ship with cloud-init)
This change supports running the kubernetes master on Ubuntu Trusty.
It uses pure cloud-config and shell scripts, and completely gets
rid of saltstack and the release salt tarball.
The version of Salt we're running doesn't do a good job of detecting
systemd. Inspired by https://github.com/saltstack/salt/issues/13926,
I added a provider-force to the services.
With this change, salt-call -l debug state.highstate succeeds, even for
repeated invocations.
The issue was (probably) benign, but definitely caused noise (e.g. #11297).
To build the python image, BUILD_PYTHON_IMAGE should be set during make.
When the addon script is running, it will check whether python is
installed on the machine; if not, it will use the python image that was
built previously.
This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry itself over the network, especially
given its large size (we should be able to dramatically shrink the
image). On GCE we create a PD and use that for storage; otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
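The per-node proxy is just a dumb forwarder to the registry service.
A minimal Go sketch of the idea (the real addon runs a containerized
proxy, and the service address below is an assumption):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Assumed cluster-internal address of the registry service.
        upstream, err := url.Parse("http://kube-registry.kube-system.svc.cluster.local:5000")
        if err != nil {
            log.Fatal(err)
        }
        // Forward every request received on the node's port 5000 to the
        // registry, so images can be pushed and pulled via
        // localhost:5000 on any node.
        proxy := httputil.NewSingleHostReverseProxy(upstream)
        log.Fatal(http.ListenAndServe(":5000", proxy))
    }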
Instead of hard-coding kube-cert and /srv/kubernetes, allow these to be
overridden by environment variables. / is immutable on some systems,
so /srv is not a possible location to store data.