We tend to build cloud, call some method, and then build cloud all over
again. It would be easier to just pass the first one along.
Passing cloud along would also make it easier to mock it.
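A minimal sketch of the idea; the Cloud interface and findInstances
helper here are illustrative stand-ins, not the actual kops API:

    package cloudpass

    // Cloud is a hypothetical stand-in for the cloud object that each
    // method used to rebuild for itself.
    type Cloud interface {
        Region() string
    }

    // findInstances takes the cloud built by the caller instead of
    // building its own.
    func findInstances(cloud Cloud, clusterName string) ([]string, error) {
        _ = cloud.Region() // use the passed-in cloud; no rebuilding
        return nil, nil
    }

    // mockCloud shows the testing benefit: a test can pass this in
    // instead of a real cloud.
    type mockCloud struct{ region string }

    func (m *mockCloud) Region() string { return m.region }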
We will be managing cluster addons using CRDs, so we want to be able to
apply arbitrary objects as part of cluster bringup.
Start by allowing arbitrary objects to be specified, behind a
feature flag.
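A rough sketch of one way to do this, decoding manifests as
unstructured objects; the UseAddonObjects flag and parseObject helper
are hypothetical names:

    package addons

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "sigs.k8s.io/yaml"
    )

    // UseAddonObjects is a hypothetical feature flag guarding the new path.
    var UseAddonObjects = false

    // parseObject decodes an arbitrary manifest as an unstructured object,
    // so no compiled-in type is needed for each addon CRD.
    func parseObject(manifest []byte) (*unstructured.Unstructured, error) {
        if !UseAddonObjects {
            return nil, fmt.Errorf("arbitrary objects are behind a feature flag")
        }
        u := &unstructured.Unstructured{}
        if err := yaml.Unmarshal(manifest, &u.Object); err != nil {
            return nil, fmt.Errorf("error parsing object: %v", err)
        }
        return u, nil
    }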
Co-authored-by: John Gardiner Myers <jgmyers@proofpoint.com>
The client-go signatures for most methods add a context.Context
argument, and also make Options mandatory. Feed a context.Context
through many of our methods (but use context.TODO to stop it from
getting totally out of hand!)
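For reference, the newer client-go call shape (getNode is just an
illustrative wrapper):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func getNode(clientset kubernetes.Interface, name string) error {
        // ctx is now the first argument, and GetOptions is mandatory.
        ctx := context.TODO() // placeholder until callers plumb a real context
        _, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        return err
    }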
We don't call klog.InitFlags yet, because that will cause a flag
redefinition error until we get everyone to stop using glog. That
will happen when we update to k8s 1.13.
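Once glog is gone, the initialization would look roughly like this
(a sketch):

    package main

    import (
        "flag"

        "k8s.io/klog"
    )

    func main() {
        // Registers -v, -logtostderr, etc. on the global flag set. glog
        // registers the same flags in its init(), so doing this while
        // glog is still linked in panics with "flag redefined".
        klog.InitFlags(nil)
        flag.Parse()

        klog.Info("logging initialized")
    }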
File assets and the SHA files are uploaded to the new location. When
users use S3, files are uploaded as public-read. The copyfile task
uses only the existing SHA value.
This PR includes a major refactoring of how URLs are used. Strings are
no longer concatenated; they are converted into a URL struct, and
path.Join is used.
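Roughly the pattern (the joinURL helper here is illustrative, not the
actual kops code):

    package vfsutil

    import (
        "net/url"
        "path"
    )

    // joinURL parses the base once and joins path elements with
    // path.Join, instead of concatenating strings (which risks doubled
    // or missing slashes).
    func joinURL(base string, elem ...string) (string, error) {
        u, err := url.Parse(base)
        if err != nil {
            return "", err
        }
        u.Path = path.Join(append([]string{u.Path}, elem...)...)
        return u.String(), nil
    }

For example, joinURL("https://example.com/kops/", "1.2", "linux")
yields "https://example.com/kops/1.2/linux".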
A new values.go file is included so that we can start refactoring more
code out of the "fi" package.
This lets us configure cross-project permissions while needing only
minimal permissions ourselves, but it also gives us a nice hook for
future lockdown of object-level permissions.
We modelled our VFS clientset (for API objects backed by a VFS path)
after the "real" clientsets, so now it is relatively easy to add a
second implementation that will be backed by a real clientset.
The snafu here is that we weren't really using namespaces previously.
Namespaces do seem to be the primary RBAC scoping mechanism though, so
we start using them with the real clientset.
The namespace is currently inferred from the cluster name. We map dots
to dashes because of namespace naming restrictions, which could yield
collisions, but we'll deal with this by preventing users from creating
conflicting cluster names - i.e. you simply won't be able to create
both a.b.example.com and a-b.example.com
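A sketch of the inference (namespaceForCluster is a hypothetical
helper; the real code may differ):

    package registry

    import "strings"

    // namespaceForCluster maps a cluster name to a namespace name.
    // Namespace names may not contain dots, so dots become dashes; the
    // possible collision (a.b vs a-b) is prevented at cluster-creation
    // time instead.
    func namespaceForCluster(clusterName string) string {
        return strings.Replace(clusterName, ".", "-", -1)
    }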
So the flow is that we recommend (or strongly recommend) a new kops
version when one is required for a new k8s version, and then the new
kops version will recommend (or strongly recommend) a new k8s version.
We don't have a notion of multiple recommended k8s versions per kops
version - that is what channels are for.
Users are always free to disregard updates, even "required" ones, by
setting a flag.
Not only is this lower-impact, but it also avoids a bug: the subnets
were considered "shared", and thus we would no longer manage the
route table.
This is a breaking change for people using the API (sorry), but is
hopefully a simple search and replace:
"k8s.io/kops/upup/pkg/api"
-> api "k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/upup/pkg/api/registry"
-> "k8s.io/kops/pkg/apis/kops/registry"
This is the "correct" place for it in the k8s API infrastructure - we
are working towards a versioned API here.
Beginnings of a mock for AWSCloud, so that hopefully we aren't calling
out to AWS at all in the tests. We will likely start mocking the
actual EC2 APIs in the future, but this seems like a good starting point.
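A sketch of the shape; vpcDescriber and FindVPC are illustrative
stand-ins, not the real AWSCloud interface:

    package mockaws

    // vpcDescriber is a stand-in for the slice of the AWSCloud interface
    // that a task actually needs; the real interface is much larger.
    type vpcDescriber interface {
        FindVPC(id string) (string, error)
    }

    // mockAWSCloud returns canned answers instead of calling AWS.
    type mockAWSCloud struct {
        vpcs map[string]string
    }

    func (m *mockAWSCloud) FindVPC(id string) (string, error) {
        return m.vpcs[id], nil
    }

    // compile-time check that the mock satisfies the interface
    var _ vpcDescriber = &mockAWSCloud{}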
Fix #425
The old upgrade command (which was only called as part of a kube-up ->
kops upgrade) is now `kops toolbox convert-imported`. The docs are
updated, but this is normally only called once per import, so it should
not be high-impact.
The upgrade command now looks for things that need upgrading. Currently
only `upgrade cluster` is implemented, and it only checks the
KubernetesVersion. If the KubernetesVersion is out of date, it will be
printed, and if --yes is specified the cluster spec will be set to the
next value.
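Conceptually, the check looks something like this (names hypothetical;
a real implementation would compare semantic versions, not strings):

    package upgrade

    import "fmt"

    // proposeKubernetesVersion prints the proposed change and, only
    // when yes is set, returns the new value to store in the cluster
    // spec.
    func proposeKubernetesVersion(current, recommended string, yes bool) string {
        if current == recommended {
            return current // already up to date
        }
        fmt.Printf("KubernetesVersion: %q -> %q\n", current, recommended)
        if !yes {
            return current // dry run: print only
        }
        return recommended // --yes: update the cluster spec
    }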