Because the primary use-case is S3-style stores, we haven't really used
directories. If we have a use-case, we can always pass a boolean
parameter or create an alternative function.
File assets and the SHA files are uploaded to the new location. When users
use S3, files are uploaded with a public-read ACL. The copyfile task
uses only the existing SHA value.
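For illustration, an S3 upload with a public-read ACL looks roughly like this
with the AWS SDK; the bucket and key are placeholders, not the actual task code:

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// Placeholder asset; the real code uploads file assets and their SHA files.
	f, err := os.Open("kubelet.sha1")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Objects written to S3 are given a public-read ACL so they can be fetched.
	_, err = svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("my-asset-bucket"),
		Key:    aws.String("assets/kubelet.sha1"),
		Body:   f,
		ACL:    aws.String("public-read"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```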
This PR includes a major refactoring of how URLs are handled. Strings are no
longer concatenated; instead they are converted into a URL struct and
path.Join is used to build paths.
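Roughly, the pattern is to parse into a url.URL and apply path.Join to the
path component; a minimal sketch (the helper name is made up):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// joinURL is a hypothetical helper illustrating the approach: parse the base
// into a url.URL and use path.Join on the path component, instead of
// concatenating strings (which risks duplicate or missing slashes).
func joinURL(base string, elem ...string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	u.Path = path.Join(append([]string{u.Path}, elem...)...)
	return u.String(), nil
}

func main() {
	s, _ := joinURL("s3://my-bucket/cluster.example.com", "cluster.spec")
	fmt.Println(s) // s3://my-bucket/cluster.example.com/cluster.spec
}
```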
A new values.go file is included so that we can start refactoring more
code out of the "fi" package.
This lets us configure cross-project permissions while requiring only minimal
permissions ourselves, and it also gives us a nice hook for future lockdown
of object-level permissions.
This ensures that the cluster can read the kops state store files, even
if the GCS bucket is in a different project.
We automatically set up an IAM access policy that grants access.
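Sketch of the idea using the cloud.google.com/go/storage client; the bucket
name, service account and role here are assumptions for illustration, not
necessarily exactly what kops configures:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical values: the state-store bucket and the cluster's
	// service account, which may live in a different project.
	bucket := client.Bucket("my-kops-state-store")
	member := "serviceAccount:my-cluster@other-project.iam.gserviceaccount.com"

	handle := bucket.IAM()
	policy, err := handle.Policy(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Grant read access to the state store objects (role chosen for illustration).
	policy.Add(member, iam.RoleName("roles/storage.objectViewer"))
	if err := handle.SetPolicy(ctx, policy); err != nil {
		log.Fatal(err)
	}
}
```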
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
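Very roughly, the shape we are going for is something like the following; the
type and method names are illustrative, not the actual kops API:

```go
package main

// Cluster and ClusterList stand in for the real kops API types.
type Cluster struct {
	Name string
}

type ClusterList struct {
	Items []Cluster
}

// ClusterInterface mirrors the per-resource interface of a generated clientset.
type ClusterInterface interface {
	Get(name string) (*Cluster, error)
	Create(cluster *Cluster) (*Cluster, error)
	Update(cluster *Cluster) (*Cluster, error)
	List() (*ClusterList, error)
}

// Clientset hands out namespaced resource interfaces, so the VFS-backed
// implementation and a real API-server-backed implementation are swappable.
type Clientset interface {
	Clusters(namespace string) ClusterInterface
}

func main() {}
```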
The namespace is currently inferred from the cluster name. We map dots
to dashes, because of namespace limitations, which could yield
collisions, but we'll deal with this by simply preventing users from
creating conflicting cluster names - i.e. you simply won't be able to
create a.b.example.com and a-b.example.com
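Sketch of the inference (the helper name is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// namespaceForCluster is a hypothetical helper showing the inference: the
// cluster name becomes the namespace, with dots mapped to dashes because
// namespace names cannot contain dots.
func namespaceForCluster(clusterName string) string {
	return strings.Replace(clusterName, ".", "-", -1)
}

func main() {
	// Both of these map to the same namespace, which is why conflicting
	// cluster names have to be rejected.
	fmt.Println(namespaceForCluster("a.b.example.com")) // a-b-example-com
	fmt.Println(namespaceForCluster("a-b.example.com")) // a-b-example-com
}
```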
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
In cases where the user is the bucket owner, an initial call to
s3.GetBucketLocation will succeed. If it returns an error we
fall back to the brute-force method.
This effectively makes the behaviour unchanged from previous versions
for bucket owners.
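The flow is roughly as follows (illustrative names, with the brute-force
lookup stubbed out; see the sketch further below):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bucketRegion tries the cheap path first: GetBucketLocation works when the
// caller owns the bucket. On error it falls back to the brute-force probe.
func bucketRegion(svc *s3.S3, bucket string) (string, error) {
	out, err := svc.GetBucketLocation(&s3.GetBucketLocationInput{
		Bucket: aws.String(bucket),
	})
	if err == nil {
		// An empty LocationConstraint means the bucket is in us-east-1.
		if out.LocationConstraint == nil {
			return "us-east-1", nil
		}
		return *out.LocationConstraint, nil
	}
	return bruteForceBucketRegion(bucket)
}

// bruteForceBucketRegion is a placeholder for the concurrent per-region probe.
func bruteForceBucketRegion(bucket string) (string, error) {
	return "", fmt.Errorf("brute-force lookup not shown in this sketch")
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	region, err := bucketRegion(s3.New(sess), "my-kops-state-store")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(region)
}
```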
Minor refactor: the request was originally created one level up because
I had added two separate steps for initially determining whether we have
to use the brute-force method. However, that was a premature optimisation
and is unnecessary given the concurrency behaviour we have now.
The AWS API makes it difficult to retrieve S3 bucket locations from shared buckets
with bucket-policy based access delegations. This introduces a workaround for the
issue.
AWS is aware of the issue, but for the time being they cannot say when
it will be fixed.
See #1247 for more information.
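The workaround is essentially a concurrent probe across regions, taking the
first one that answers. A simplified sketch, not the exact kops
implementation (the region list is abbreviated, and HeadBucket stands in for
whatever per-region call the real code makes):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bruteForceBucketRegion probes every candidate region concurrently and
// returns the first region where the bucket answers successfully.
func bruteForceBucketRegion(bucket string) (string, error) {
	regions := []string{"us-east-1", "us-west-1", "us-west-2", "eu-west-1", "eu-central-1", "ap-southeast-2"}

	found := make(chan string, len(regions))
	for _, region := range regions {
		go func(region string) {
			sess := session.Must(session.NewSession(&aws.Config{Region: aws.String(region)}))
			_, err := s3.New(sess).HeadBucket(&s3.HeadBucketInput{Bucket: aws.String(bucket)})
			if err == nil {
				found <- region
			}
		}(region)
	}

	select {
	case region := <-found:
		return region, nil
	case <-time.After(30 * time.Second):
		return "", fmt.Errorf("could not determine region for bucket %q", bucket)
	}
}

func main() {
	region, err := bruteForceBucketRegion("my-kops-state-store")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(region)
}
```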
When sharing S3 buckets across accounts it may be necessary to override ACLs
per object to avoid locking out different accounts.
This commit lets users specify a `KOPS_STATE_S3_ACL` environment variable which
(if specified) overrides the ACL in the PutObject request.
Fixes #907
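Conceptually the override is just: read the environment variable and, if set,
use it as the ACL on the PutObject request. A minimal sketch with placeholder
bucket, key and body:

```go
package main

import (
	"log"
	"os"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	input := &s3.PutObjectInput{
		// Placeholder bucket/key/body for illustration.
		Bucket: aws.String("shared-kops-state-store"),
		Key:    aws.String("cluster.example.com/config"),
		Body:   strings.NewReader("..."),
	}

	// If KOPS_STATE_S3_ACL is set, it overrides the default ACL, e.g.
	// "bucket-owner-full-control" when writing into another account's bucket.
	if acl := os.Getenv("KOPS_STATE_S3_ACL"); acl != "" {
		input.ACL = aws.String(acl)
	}

	if _, err := svc.PutObject(input); err != nil {
		log.Fatal(err)
	}
}
```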