We previously needed them to allow list operations; however, we now use a
keyset.yaml file instead of listing keys. Listing should have been the sole
use, so we should no longer need this permission.
If not, we can re-enable the code easily.
Automatic merge from submit-queue.
Work on using file assets
Basic MVP for file assets.
- using the file asset builder
- able to upload files
- using URL structs instead of strings everywhere
File assets and the SHA files are uploaded to the new location. When a
user uses S3, files are uploaded as public-read only. The copyfile task
uses only the existing SHA value.
This PR includes a major refactoring of how URLs are used. Strings are
no longer concatenated; they are converted into a URL struct, and
path.Join is utilized.
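A minimal sketch of that pattern, not the actual kops helpers (the
`joinURL` name and the bucket path are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// joinURL joins path elements onto a copy of base, so the caller's URL
// is never mutated and separators are normalized by path.Join.
func joinURL(base *url.URL, elem ...string) *url.URL {
	u := *base
	u.Path = path.Join(append([]string{u.Path}, elem...)...)
	return &u
}

func main() {
	base, _ := url.Parse("s3://my-state-store/cluster.example.com")
	fmt.Println(joinURL(base, "assets", "nodeup.sha1"))
	// s3://my-state-store/cluster.example.com/assets/nodeup.sha1
}
```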
A new values.go file is included so that we can start refactoring more
code out of the "fi" package.
We've done this in the API already, but we had a single CAStore
interface that handled both Keysets and SSHCredentials. Separate
SSHCredentials out into an SSHCredentialStore, and start using API
objects as our primary representation.
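A hedged sketch of the split; the method names and placeholder types
below are illustrative, not the real kops interfaces:

```go
package store

// Placeholder types standing in for the real kops API objects.
type Keyset struct{ Name string }
type SSHCredential struct{ PublicKey string }

// KeysetStore manages certificate/key material only.
type KeysetStore interface {
	FindKeyset(name string) (*Keyset, error)
	StoreKeyset(name string, keyset *Keyset) error
}

// SSHCredentialStore manages SSH public keys, split out of the old
// combined CAStore.
type SSHCredentialStore interface {
	FindSSHPublicKeys(name string) ([]*SSHCredential, error)
	AddSSHPublicKey(name string, data []byte) error
}
```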
This lets us configure cross-project permissions while needing only
minimal permissions ourselves, but it also gives us a nice hook for
future lockdown of object-level permissions.
This ensures that the cluster can read the kops state store files, even
if the GCS bucket is in a different project.
We automatically set up an IAM access policy that grants access.
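A minimal sketch of granting that access, assuming the
cloud.google.com/go/storage client; the bucket and service-account
names are illustrative:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	bucket := client.Bucket("my-kops-state-store")

	policy, err := bucket.IAM().Policy(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Grant object read access to the cluster's service account,
	// even though it lives in a different project.
	policy.Add("serviceAccount:cluster-sa@other-project.iam.gserviceaccount.com",
		"roles/storage.objectViewer")
	if err := bucket.IAM().SetPolicy(ctx, policy); err != nil {
		log.Fatal(err)
	}
}
```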
Automatic merge from submit-queue.
Implement DigitalOcean Droplet FI Task
Implements cloudup fi tasks for DigitalOcean droplets. It makes a few assumptions to reduce the size of this PR; those will be addressed in future PRs.
Also does some cleanup in the DigitalOcean `dns` package.
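For context, a hedged sketch of what such a task drives underneath,
assuming a recent godo client (the droplet name, region, size, and
image slug are illustrative):

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/digitalocean/godo"
)

func main() {
	ctx := context.Background()
	client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_ACCESS_TOKEN"))

	req := &godo.DropletCreateRequest{
		Name:   "kops-master-0",
		Region: "nyc3",
		Size:   "s-2vcpu-4gb",
		Image:  godo.DropletCreateImage{Slug: "ubuntu-20-04-x64"},
	}
	droplet, _, err := client.Droplets.Create(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created droplet %d (%s)", droplet.ID, droplet.Name)
}
```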
The current implementation does not place any limitation on the DNS annotations which the dns-controller can consume. In a multi-tenanted environment we have to ensure certain safeguards are met, so users can't accidentally or intentionally alter our internal DNS. Note: the current behaviour has not been changed;
- added the --watch-namespace option to the dns-controller and WatchNamespace to the spec (see the sketch after this list)
- cleaned up areas of the code where possible or related
- fixed any vetting issues that I came across on the journey
- renamed the dns-controller watcher files
- fixed any vetting / formatting issues that I came across on the update
- removed the commented-out lines from the componentconfig, as they make it increasingly difficult to find what is supported, what is not, and the difference between them.
- added SerializeImagePulls, RegisterSchedulable to kubelet (by default they are ignored)
- added FeatureGates to the kube-proxy
Out of interest, can someone point me to where these multi-versioned componentconfigs are being used?
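A minimal sketch of the --watch-namespace behaviour, assuming client-go
informers (flag and variable names are illustrative, not the actual
dns-controller code):

```go
package main

import (
	"flag"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	watchNamespace := flag.String("watch-namespace", "",
		"limit the scope of the dns-controller to a single namespace")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", "")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	opts := []informers.SharedInformerOption{}
	if *watchNamespace != "" {
		// Restrict all informers to the given namespace; an empty
		// value keeps the old cluster-wide behaviour.
		opts = append(opts, informers.WithNamespace(*watchNamespace))
	}
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 5*time.Minute, opts...)
	_ = factory // register Service/Ingress informers here
}
```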
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
The namespace is currently inferred from the cluster name. We map dots
to dashes, because of namespace naming limitations, which could yield
collisions; we'll deal with this by preventing users from creating
conflicting cluster names - i.e. you simply won't be able to create
both a.b.example.com and a-b.example.com
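A tiny sketch of that mapping (the helper name is illustrative), which
also shows why the two names above would collide:

```go
package main

import (
	"fmt"
	"strings"
)

// namespaceForCluster maps a cluster name to a namespace name:
// dots are not allowed in namespace names, so they become dashes.
func namespaceForCluster(clusterName string) string {
	return strings.Replace(clusterName, ".", "-", -1)
}

func main() {
	fmt.Println(namespaceForCluster("a.b.example.com")) // a-b-example-com
	fmt.Println(namespaceForCluster("a-b.example.com")) // a-b-example-com (collision)
}
```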
The AttachISO task creates the user-data/meta-data cloud-init files, builds a cloud-init.iso from them using the "genisoimage" tool, uploads the ISO to the datastore where the master/worker VM resides, and inserts it into the CD-ROM device of that VM. When the VM powers on, the cloud-init package in it runs the bootstrap script, which downloads nodeup and runs it.
Also removed the redundant VirtualMachineModelBuilder, which did nothing.
Testing done:
1. Tested end to end that the master and worker VMs execute the cloud-init script successfully.
2. "make ci" is successful.
Accept the vSphere server, datacenter, and cluster settings via the flags
"vsphere-server", "vsphere-datacenter", and "vsphere-resource-pool".
The username and password can be set by the environment variables
"VSPHERE_USERNAME" and "VSPHERE_PASSWORD".
We move everything to the models. We feature-flag it, because we
probably want to change the names etc., and we aren't going to be able
to offer smooth upgrades until that is done.
We build a statically linked version and distribute it with kops.
Note that our version of socat does not include libssl, but kubernetes
does not use it anyway.
Users reported kubelet flags not being passed; moved more of them to code.
Also found & fixed the likely root-cause issue: we have two copies of
the cluster spec and were not being precise about which one we wanted to
use at all times.
So the flow is that we recommend (or strongly recommend) a new kops
version when one is required for a new k8s version, and then the new
kops version will recommend (or strongly recommend) a new k8s version.
We don't have a notion of multiple recommended k8s versions per kops
version - that is what channels are for.
Users are always free to disregard updates, even "required" ones, by
setting a flag.
This helps us treat protokube as being paired with nodeup, and is a step
towards registry-less deployments (and isolated deployments) along with
moving away from our deprecated gcr.io usage.
* Zones are now subnets
* Utility subnet is no longer part of Zone
* Bastion InstanceGroup type added instead
* Etcd clusters defined in terms of InstanceGroups, not zones
* AdminAccess split into SSHAccess & APIAccess
* Dropped unused Multizone flag
* kopsrepo/master:
gcs-upload: Use a no-clobber copy instead
gcs-upload: Fix cache-control on other files as well
changes from code review
doc updates
unit tests with fakes
it is working in alpha
working on the start of validate
Starting work on node lookup and validation
starting porting node code
Fix retries for AutoScalingGroup pending delete
Apply gofmt to pkg directory
Avoid tests hitting kubernetes stable.txt HTTP file
Fix printing of max size on instance group
Disable kubelet from starting until after volume mounts
Fix Cluster parsing error message
bumping stable channel to k8s 1.4.6
support more zones (cn-north-1a/b) for cloud provider guess
This:
- reworks how retries are handled in fi/executor.go to a time-based scheme
- changes the single-task limit to 10m (from about 30s of no-progress)
- eliminates the inner IAM propagation retry for LaunchConfigurations,
because the task itself will just be redriven for a while. This also
eliminates any long-pole delay caused by this error (since task Run()
should be 'fast').
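A hedged sketch of the time-based scheme (durations and names are
illustrative, not the exact fi/executor.go code): a failed task keeps
being redriven until a per-task deadline, instead of counting attempts:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// runTaskWithDeadline redrives a task until it succeeds or the
// per-task deadline expires.
func runTaskWithDeadline(run func() error, deadline time.Duration) error {
	start := time.Now()
	for {
		err := run()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("task did not succeed within %v: %v", deadline, err)
		}
		time.Sleep(10 * time.Second) // back off before redriving the task
	}
}

func main() {
	attempts := 0
	err := runTaskWithDeadline(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("IAM instance profile not yet propagated")
		}
		return nil
	}, 10*time.Minute)
	fmt.Println(err, "after", attempts, "attempts")
}
```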
This isn't ideal, because it isn't versioned, but there is an important
bugfix - otherwise pods are allocated a .255 IP, which is reserved for
broadcast.
Issue #724
This is a breaking change for people using the API (sorry), but is
hopefully a simple search and replace:
"k8s.io/kops/upup/pkg/api"
-> api "k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/upup/pkg/api/registry"
-> "k8s.io/kops/pkg/apis/kops/registry"
This is the "correct" place for it in the k8s API infrastructure - we
are working towards a versioned API here.
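The replacement in code form, assuming the same alias as before
(whether api.Cluster is the symbol you use will depend on your code):

```go
package example

import (
	api "k8s.io/kops/pkg/apis/kops" // was "k8s.io/kops/upup/pkg/api"

	_ "k8s.io/kops/pkg/apis/kops/registry" // was "k8s.io/kops/upup/pkg/api/registry"
)

// The types are unchanged; only the import path moved.
var _ = api.Cluster{}
```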
A managed file is templated kops-side, but then stored in the S3 bucket
(aka state store)
This will be used to pass the channel containing the core addons.
Beginnings of a mock for the AWSCloud, so that hopefully we aren't
calling out to AWS at all in the tests. We will likely start mocking
the actual EC2 APIs in future, but this seems a good starting point.
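A hedged sketch of the approach, assuming aws-sdk-go's ec2iface (the
mock below is illustrative, not the kops mock): tests program an
in-memory implementation instead of calling AWS.

```go
package mockaws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// MockEC2 satisfies ec2iface.EC2API; embedding the interface means we
// only implement the calls a test actually exercises.
type MockEC2 struct {
	ec2iface.EC2API
	Vpcs []*ec2.Vpc
}

// DescribeVpcs returns the canned fixtures instead of calling AWS.
func (m *MockEC2) DescribeVpcs(input *ec2.DescribeVpcsInput) (*ec2.DescribeVpcsOutput, error) {
	return &ec2.DescribeVpcsOutput{Vpcs: m.Vpcs}, nil
}

// Fixture is an example in-memory cloud: one VPC, no AWS calls.
var Fixture = &MockEC2{
	Vpcs: []*ec2.Vpc{{
		VpcId:     aws.String("vpc-12345678"),
		CidrBlock: aws.String("10.0.0.0/16"),
	}},
}
```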
Fix #425