Refactor to start centralizing the env-var configuration for system
components, and start adding test coverage so we can be sure we
haven't broken things!
We need a region to start from to make AWS calls. us-east-1 works for
most credentials, but not for cn-north-1 credentials. Instead, we get
the current region from the instance metadata when running on EC2, and
we continue to fall back to us-east-1 otherwise.
For CLI commands (kops) the user will still have to set AWS_REGION,
but for system binaries (nodeup, etcd-manager), this should default
appropriately.
Note that the region doesn't have to be the actual region of the
bucket, just a region we can access.
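A minimal sketch of that resolution order, using the aws-sdk-go ec2metadata package; the helper name and wiring here are illustrative rather than the actual kops code:

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

// resolveRegion is a hypothetical helper showing the order described above:
// an explicit AWS_REGION wins, then EC2 instance metadata, then us-east-1.
func resolveRegion() string {
	if region := os.Getenv("AWS_REGION"); region != "" {
		return region
	}
	sess := session.Must(session.NewSession())
	metadata := ec2metadata.New(sess)
	if metadata.Available() {
		if region, err := metadata.Region(); err == nil && region != "" {
			return region
		}
	}
	// The fallback only needs to be a region we can reach, not the region
	// the bucket actually lives in.
	return "us-east-1"
}

func main() {
	fmt.Println(resolveRegion())
}
```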
Issue #6098
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
* Bump gophercloud sha to f29afc2
* Add a prereq check for bazel and dep which is needed by `make dep-ensure`
* Document the process to add a vendored dependency
We still need the reflect helpers, but we allow for clients to
register their own pretty-printers, which avoids the package
dependency for our pretty-printer. We register our pretty printers in
an init function in the relevant package (in this case,
upup/pkg/fi/printers.go)
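A sketch of the registration pattern; the import path, `RegisterPrettyPrinter` name and its signature are assumptions for illustration, not the actual helper API:

```go
// Package fi: sketch of registering a pretty-printer from an init function,
// so the reflect-helper package never needs to import fi itself.
package fi

import (
	"fmt"

	"k8s.io/kops/util/pkg/reflectutils" // import path assumed for illustration
)

func init() {
	// RegisterPrettyPrinter and its signature are hypothetical; the real
	// helper API may differ. The point is that the registration lives here,
	// next to the types being printed.
	reflectutils.RegisterPrettyPrinter(func(o interface{}) (string, bool) {
		if s, ok := o.(fmt.Stringer); ok {
			return s.String(), true
		}
		return "", false // fall through to the default reflect-based output
	})
}
```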
Fix #5551
a) The current implementation uses a static kubelet credential which doesn't conform to the Node authorization mode (i.e. system:nodes:<nodename>)
b) At present the kubeconfig is static and reused across all the masters and nodes
The PR firstly introduces the ability for users to use bootstrap tokens and secondly, when enabled, ensures the kubelets for the masters have unique usernames. Note, this PR does not attempt to address the distribution of the bootstrap tokens themselves; that's for cluster admins. One solution for this would be a daemonset on the masters running on hostNetwork, reusing dns-controller to annotate the pods and provide the DNS.
Notes:
- the master nodes do not use bootstrap tokens; instead, given they have access to the CA anyhow, we generate certificates for each.
- when bootstrap tokens are not enabled the behaviour stays the same, i.e. a kubelet configuration is brought down from the store.
- when bootstrap tokens are enabled, the nodes sit in a timeout loop waiting for the configuration to appear (placed there by a third party); see the sketch after these notes.
- given the nodeup docker and manifests builders are executed before the kubelet builder, the assumption here is a unit file kicks off a custom container to bootstrap the rest.
- the current firewall rules between the masters and nodes are fairly open, so there is no need to open additional ports between the two.
- much of the work was ported from @justinsb's PR [here](https://github.com/kubernetes/kops/pull/4134/)
- we add rather presumptuous server and client certificates for use with an authorizer (node-bootstrap-internal.dns_zone)
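The waiting behaviour mentioned above is roughly the following; the path and intervals are illustrative, not nodeup's real values:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForKubeletConfig sketches the node behaviour when bootstrap tokens are
// enabled: poll until a third party has placed the bootstrap kubeconfig on
// disk, or give up after a timeout. Path and intervals are illustrative.
func waitForKubeletConfig(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the configuration has appeared; the kubelet can start
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for bootstrap kubeconfig at %q", path)
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	if err := waitForKubeletConfig("/var/lib/kubelet/bootstrap-kubeconfig", 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```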
I do have an additional PR which performs the entire thing. The process is a node_authorizer which runs on the master nodes via a daemonset; the service implements a series of authorizers (i.e. alwaysallow, aws, gce etc). For aws, the process is similar to how vault authorizes nodes [here](https://www.vaultproject.io/docs/auth/aws.html). Nodeup then calls out to the node_authorizer on bootstrap and provisions the kubelet.
Because the primary use-case is S3-style stores, we haven't really used
directories. If we have a use-case, we can always pass a boolean
parameter or create an alternative function.
File assets and the SHA files are uploaded to the new location. When
the user uses S3, the files are uploaded with the public-read ACL. The
copyfile task uses only the existing SHA value.
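Roughly, the public-read upload looks like the following sketch; the bucket, key, region and SDK wiring are placeholders rather than the actual task code:

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// uploadPublicRead sketches uploading a file asset with the public-read ACL
// when the file store is S3. Bucket, key and region are placeholders.
func uploadPublicRead(bucket, key, localPath string) error {
	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()

	svc := s3.New(session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1"))))
	_, err = svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   f,
		ACL:    aws.String("public-read"), // file assets need to be world-readable
	})
	return err
}

func main() {
	if err := uploadPublicRead("example-file-store", "kops/assets/nodeup.sha1", "./nodeup.sha1"); err != nil {
		log.Fatal(err)
	}
}
```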
This PR includes major refactoring of the use of URLs. Strings are no
longer concatenated, but converted into a URL struct, and path.Join is
utilized.
A new values.go file is included so that we can start refactoring more
code out of the "fi" package.
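For example, rather than concatenating strings, the base is parsed into a url.URL and path.Join is applied to its path component (a minimal illustration, not the kops helper itself):

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// joinURL shows the pattern: parse the base into a url.URL and use path.Join
// on the path component, rather than concatenating strings.
func joinURL(base string, elem ...string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	u.Path = path.Join(append([]string{u.Path}, elem...)...)
	return u.String(), nil
}

func main() {
	s, _ := joinURL("https://example-bucket.s3.amazonaws.com/cluster", "addons", "manifest.yaml")
	fmt.Println(s) // https://example-bucket.s3.amazonaws.com/cluster/addons/manifest.yaml
}
```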
This lets us configure cross-project permissions while needing only
minimal permissions ourselves, but it also gives us a nice hook for
future lockdown of object-level permissions.
This ensures that the cluster can read the kops state store files, even
if the GCS bucket is in a different project.
We automatically set up an IAM access policy that grants access.
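A sketch of the grant, using the cloud.google.com/go/storage IAM handle; the bucket name, service account and role shown are illustrative:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/storage"
)

// grantStateStoreRead sketches adding an IAM binding so the cluster's service
// account can read the kops state store, even when the bucket lives in a
// different project. The role and member below are illustrative.
func grantStateStoreRead(ctx context.Context, bucket, serviceAccount string) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	handle := client.Bucket(bucket).IAM()
	policy, err := handle.Policy(ctx)
	if err != nil {
		return err
	}
	policy.Add("serviceAccount:"+serviceAccount, iam.RoleName("roles/storage.objectViewer"))
	return handle.SetPolicy(ctx, policy)
}

func main() {
	// Hypothetical bucket and service account names.
	err := grantStateStoreRead(context.Background(),
		"example-kops-state-store", "nodes@other-project.iam.gserviceaccount.com")
	if err != nil {
		log.Fatal(err)
	}
}
```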
Extending the current implementation of toolbox template to include multiple files and snippets. Note, I've removed the requirement for defaults as I think people should be forced to pass them explicitly.
- fixing the vetting issues with the method YamlToJson -> YAMLToJSON
- adding a safety check to ensure templates don't reference an unknown value
- extending the unit test to ensure the above works on main and snippets
- include the ability to specify multiple configuration files, useful for common.yaml and prod.yaml etc
Requested Changes - Toolbox Templating
Added the requested changes
- moved the templater into its own package rather than using base util
- moved to using the sprig library for additional template functions
- @note: I couldn't find a native way in sprig to do snippets; also, I've overloaded indent, as sprig's version indents all lines rather than only lines after a newline, meaning I'd have to shift my first line back by the indent to get it to work, which seems ugly
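A minimal sketch of the templater wiring, assuming text/template plus sprig's function map, with associated templates standing in for snippets; the names and the missingkey option are illustrative of the "unknown value" safety check, not the exact implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"

	"github.com/Masterminds/sprig"
)

// render sketches the templater: sprig's functions plus associated templates
// acting as snippets. The missingkey option stands in for the "unknown value"
// safety check; names and wiring are illustrative.
func render(mainTmpl string, snippets map[string]string, values map[string]interface{}) (string, error) {
	tmpl := template.New("main").Funcs(sprig.TxtFuncMap()).Option("missingkey=error")

	for name, body := range snippets {
		if _, err := tmpl.New(name).Parse(body); err != nil {
			return "", err
		}
	}
	if _, err := tmpl.Parse(mainTmpl); err != nil {
		return "", err
	}

	var out bytes.Buffer
	if err := tmpl.Execute(&out, values); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	out, err := render(`hello {{ template "who" . }}`,
		map[string]string{"who": `{{ .name | upper }}`},
		map[string]interface{}{"name": "world"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // hello WORLD
}
```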
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
The namespace is currently inferred from the cluster name. We map dots
to dashes, because of namespace limitations, which could yield
collisions, but we'll deal with this by simply preventing users from
creating conflicting cluster names - i.e. you simply won't be able to
create a.b.example.com and a-b.example.com
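The inference itself is simple, something along these lines (a sketch, not the exact kops code):

```go
package main

import (
	"fmt"
	"strings"
)

// namespaceForCluster sketches the mapping: dots in the cluster name become
// dashes, since namespace names cannot contain dots. a.b.example.com and
// a-b.example.com would collide, which is why conflicting names are rejected.
func namespaceForCluster(clusterName string) string {
	return strings.Replace(clusterName, ".", "-", -1)
}

func main() {
	fmt.Println(namespaceForCluster("a.b.example.com")) // a-b-example-com
}
```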
We move everything to the models. We feature-flag it, because we
probably want to change the names etc, and we aren't going to be able to
offer smooth upgrades until that is done.
In cases where the user is the bucket owner, an initial call to
s3.GetBucketLocation will succeed. If it does return an error, we
fall back to the bruteforce method.
This effectively makes the behaviour unchanged from previous versions
for bucket owners.
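A sketch of that order: try s3.GetBucketLocation first, and only probe regions if it fails; the region list and probing details are simplified for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bucketRegion sketches the lookup order: GetBucketLocation succeeds when the
// caller owns the bucket; otherwise we fall back to probing regions. The
// region list and probing details are simplified for illustration.
func bucketRegion(bucket string) (string, error) {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))

	out, err := s3.New(sess).GetBucketLocation(&s3.GetBucketLocationInput{Bucket: aws.String(bucket)})
	if err == nil {
		if out.LocationConstraint == nil {
			return "us-east-1", nil // an empty location constraint means us-east-1
		}
		return aws.StringValue(out.LocationConstraint), nil
	}

	// Bruteforce: try each candidate region until a HeadBucket succeeds.
	for _, region := range []string{"us-east-1", "us-west-2", "eu-west-1", "cn-north-1"} {
		regional := s3.New(sess, aws.NewConfig().WithRegion(region))
		if _, err := regional.HeadBucket(&s3.HeadBucketInput{Bucket: aws.String(bucket)}); err == nil {
			return region, nil
		}
	}
	return "", fmt.Errorf("could not determine region for bucket %q", bucket)
}

func main() {
	region, err := bucketRegion("example-state-store")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(region)
}
```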
Minor refactor, the request was created one level up originally
because I had added two separate steps for initially determining
whether we have to use the bruteforce method.
However this is a premature optimisation and unnecessary due to the
concurrency behaviour we've got now.
The AWS API makes it difficult to retrieve S3 bucket locations from shared buckets
with bucket-policy based access delegations. This introduces a workaround for the
issue.
AWS is aware of the issue but for the time being they can not provide information
about when it will be fixed.
See #1247 for more information.
When sharing S3 buckets across accounts it may be necessary to override ACLs
per object to avoid locking out different accounts.
This commit lets users specify a `KOPS_STATE_S3_ACL` environment variable which
(if specified) overrides the ACL in the PutObject request.
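The override is roughly the following sketch; the surrounding PutObject wiring is simplified and the names are placeholders:

```go
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// applyACLOverride sketches the behaviour: if KOPS_STATE_S3_ACL is set, it
// overrides the ACL on the PutObject request; otherwise the default is kept.
func applyACLOverride(input *s3.PutObjectInput) {
	if acl := os.Getenv("KOPS_STATE_S3_ACL"); acl != "" {
		input.ACL = aws.String(acl) // e.g. "bucket-owner-full-control"
	}
}

func main() {
	input := &s3.PutObjectInput{
		Bucket: aws.String("example-state-store"), // placeholder names
		Key:    aws.String("cluster.example.com/config"),
	}
	applyACLOverride(input)
	fmt.Println(aws.StringValue(input.ACL))
}
```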
Fixes #907