This lets us configure cross-project permissions while requiring only
minimal permissions ourselves, and it also gives us a nice hook for
future lockdown of object-level permissions.
This ensures that the cluster can read the kops state store files, even
if the GCS bucket is in a different project.
We automatically set up an IAM access policy that grants access.
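As a rough sketch of what that grant could look like with the cloud.google.com/go/storage client - the role, member format, and function name below are illustrative assumptions, not necessarily what kops does verbatim:

```go
package gce

import (
	"context"
	"fmt"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/storage"
)

// grantStateStoreRead adds a bucket-level IAM binding so the cluster's
// service account can read the state store, even when the bucket lives
// in a different project. Role and member are illustrative.
func grantStateStoreRead(ctx context.Context, client *storage.Client, bucket, serviceAccount string) error {
	handle := client.Bucket(bucket).IAM()
	policy, err := handle.Policy(ctx)
	if err != nil {
		return fmt.Errorf("error fetching IAM policy for gs://%s: %v", bucket, err)
	}
	policy.Add("serviceAccount:"+serviceAccount, iam.RoleName("roles/storage.objectViewer"))
	if err := handle.SetPolicy(ctx, policy); err != nil {
		return fmt.Errorf("error updating IAM policy for gs://%s: %v", bucket, err)
	}
	return nil
}
```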
Extending the current implementation of toolbox template to include multiple files and snippets. Note: I've removed the requirement for defaults, as I think people should be forced to pass them explicitly.
- fixing the go vet issues by renaming the method YamlToJson -> YAMLToJSON
- adding a safety check to ensure templates don't reference an unknown value
- extending the unit tests to ensure the above works on the main template and snippets
- including the ability to specify multiple configuration files, useful for common.yaml, prod.yaml etc. (see the sketch after this list)
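A rough sketch of how the value-file merging and the unknown-value check could hang together - renderTemplate and the shallow top-level merge are illustrative, not the exact kops code; `Option("missingkey=error")` is the safety check that fails the render when a template references a value that was never supplied:

```go
package templater

import (
	"bytes"
	"io/ioutil"
	"text/template"

	"github.com/Masterminds/sprig"
	yaml "gopkg.in/yaml.v2"
)

// renderTemplate renders the main template with values merged from
// multiple configuration files; later files override earlier ones.
func renderTemplate(templatePath string, valueFiles []string) (string, error) {
	values := map[string]interface{}{}
	for _, path := range valueFiles {
		data, err := ioutil.ReadFile(path)
		if err != nil {
			return "", err
		}
		overrides := map[string]interface{}{}
		if err := yaml.Unmarshal(data, &overrides); err != nil {
			return "", err
		}
		// shallow last-one-wins merge over the top-level keys
		for k, v := range overrides {
			values[k] = v
		}
	}
	content, err := ioutil.ReadFile(templatePath)
	if err != nil {
		return "", err
	}
	tmpl, err := template.New(templatePath).
		Funcs(sprig.TxtFuncMap()).
		Option("missingkey=error"). // reject references to unknown values
		Parse(string(content))
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, values); err != nil {
		return "", err
	}
	return out.String(), nil
}
```

Calling it with `[]string{"common.yaml", "prod.yaml"}` gives prod.yaml the last word on any shared top-level keys.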
Requested Changes - Toolbox Templating
Added the requested changes
- moved the templater into its own package rather than using the base util package
- moved to using the sprig library for additional template functions
- @note: I couldn't find a native way in sprig to do snippets. I've also overloaded indent, as it appears to indent all lines rather than only the lines after a newline, meaning I'd have to shift my first line back by the indent to get it to work, which seems ugly; a sketch of the overload follows this list.
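A minimal sketch of the overload, assuming sprig's `indent(spaces, string)` argument order:

```go
package templater

import (
	"strings"
	"text/template"

	"github.com/Masterminds/sprig"
)

// templateFuncs returns sprig's function map with indent overloaded so
// that only the lines following a newline are padded; the first line
// stays where the template placed it.
func templateFuncs() template.FuncMap {
	funcs := sprig.TxtFuncMap()
	funcs["indent"] = func(spaces int, content string) string {
		pad := strings.Repeat(" ", spaces)
		return strings.Replace(content, "\n", "\n"+pad, -1)
	}
	return funcs
}
```

With this version, `indent 2` turns "a\nb" into "a\n  b" rather than "  a\n  b", so the first line no longer has to be shifted back.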
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
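A hypothetical sketch of the shared shape (kops' generated clientset interfaces carry more methods than this):

```go
package simple

// Cluster stands in for the kops Cluster API object.
type Cluster struct {
	Name string
}

// ClusterInterface is the shape both implementations satisfy: one backed
// by files under a VFS path, one backed by a real clientset.
type ClusterInterface interface {
	Get(name string) (*Cluster, error)
	Create(cluster *Cluster) (*Cluster, error)
	Update(cluster *Cluster) (*Cluster, error)
	List() ([]*Cluster, error)
}
```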
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
The namespace is currently inferred from the cluster name. We map dots
to dashes, because of namespace limitations, which could yield
collisions, but we'll deal with this by simply preventing users from
creating conflicting cluster names - i.e. you won't be able to
create both a.b.example.com and a-b.example.com.
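The mapping itself is a one-liner; the function name here is illustrative:

```go
package registry

import "strings"

// namespaceForCluster derives the namespace from the cluster name by
// mapping dots to dashes (dots are not valid in namespace names).
// Note the collision: "a.b.example.com" and "a-b.example.com" both
// become "a-b-example-com", which is why conflicting names are rejected.
func namespaceForCluster(clusterName string) string {
	return strings.Replace(clusterName, ".", "-", -1)
}
```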
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
In cases where the user is the bucket owner, an initial call to
s3.GetBucketLocation will succeed. If it does return an error, we
fall back to the bruteforce method.
This effectively makes the behaviour unchanged from previous versions
for bucket owners.
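Sketched with aws-sdk-go, assuming a bruteforceBucketRegion helper along the lines of the sketch after the next message:

```go
package vfs

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bucketRegion tries the direct lookup first - unchanged behaviour for
// bucket owners - and falls back to the bruteforce probe on any error.
func bucketRegion(svc *s3.S3, bucket string) (string, error) {
	out, err := svc.GetBucketLocation(&s3.GetBucketLocationInput{
		Bucket: aws.String(bucket),
	})
	if err != nil {
		return bruteforceBucketRegion(bucket)
	}
	region := aws.StringValue(out.LocationConstraint)
	if region == "" {
		// an empty LocationConstraint means us-east-1
		region = "us-east-1"
	}
	return region, nil
}
```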
Minor refactor: the request was originally created one level up because
I had added two separate steps for initially determining whether we
have to use the bruteforce method. However, that was a premature
optimisation and is unnecessary given the concurrency behaviour we've
got now.
The AWS API makes it difficult to retrieve S3 bucket locations from shared buckets
with bucket-policy based access delegations. This introduces a workaround for the
issue.
AWS is aware of the issue, but for the time being they cannot provide
information about when it will be fixed.
See #1247 for more information.
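A sketch of the workaround's probe - the region list, timeout, and function name are illustrative (the real implementation may discover regions dynamically):

```go
package vfs

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bruteforceBucketRegion issues GetBucketLocation against every candidate
// region concurrently and returns the first successful answer; with
// bucket-policy based delegation, only the right regional endpoint responds.
func bruteforceBucketRegion(bucket string) (string, error) {
	regions := []string{
		"us-east-1", "us-west-1", "us-west-2",
		"eu-west-1", "eu-central-1", "ap-southeast-1", // illustrative subset
	}
	found := make(chan string, len(regions))
	for _, region := range regions {
		go func(region string) {
			sess := session.Must(session.NewSession(aws.NewConfig().WithRegion(region)))
			out, err := s3.New(sess).GetBucketLocation(&s3.GetBucketLocationInput{
				Bucket: aws.String(bucket),
			})
			if err == nil {
				loc := aws.StringValue(out.LocationConstraint)
				if loc == "" {
					loc = "us-east-1"
				}
				found <- loc
			}
		}(region)
	}
	select {
	case region := <-found:
		return region, nil
	case <-time.After(5 * time.Second):
		return "", fmt.Errorf("could not determine the region of bucket %q", bucket)
	}
}
```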
When sharing S3 buckets across accounts it may be necessary to override ACLs
per object to avoid locking out different accounts.
This commit lets users set a `KOPS_STATE_S3_ACL` environment variable
which, if set, overrides the ACL in the PutObject request.
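A minimal sketch of the override, assuming the S3 path builds its PutObjectInput somewhere like this (the function name is hypothetical):

```go
package vfs

import (
	"bytes"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newPutObjectRequest applies the KOPS_STATE_S3_ACL override when the
// environment variable is set, e.g.
// KOPS_STATE_S3_ACL=bucket-owner-full-control for cross-account buckets.
func newPutObjectRequest(bucket, key string, data []byte) *s3.PutObjectInput {
	input := &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(data),
	}
	if acl := os.Getenv("KOPS_STATE_S3_ACL"); acl != "" {
		input.ACL = aws.String(acl)
	}
	return input
}
```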
Fixes #907