This is needed for bootstrapping the control plane:
because it's a CRD, it can't be registered until the control plane is running.
It's also quite nice because we might want to review the contents of the
host CRD, e.g. to verify the key out-of-band.
This warning isn't particularly actionable: it is expected behavior for `kops create cluster`, and any `kops update cluster` that experiences this (due to a broken cluster) will proceed as normal.
The user's subsequent `kops validate cluster` would surface any such errors.
This supports workflows that modify the local kubeconfig for advanced configurations,
which were accidentally broken by trying to always generate the config.
Issue #17262
Previously this would always fail in a confusing way,
regardless of whether we had connectivity,
because we tried to connect to an empty-string host.
Now we are more explicit about the error,
and will at least try to connect directly.
The rolling-update requires the apiserver (when called without --cloudonly),
so reconcile should wait for the apiserver to start responding.
Implement this by reusing "validate cluster", but filtering to only the instance groups
and pods that we expect to be online.
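From the CLI side, the same waiting behavior can be sketched with the existing `kops validate cluster --wait` flag (the timeout value is illustrative):

```shell
# Block until the cluster (including the apiserver) validates,
# giving up after 10 minutes.
kops validate cluster --wait 10m
```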
add kindnet as an experimental network addon
containerd added the requirement to use the loopback CNI plugin;
kindnet provides that capability, and containerd itself no longer
requires it since containerd/containerd/pull/10238
Change-Id: I1397a90186885b02e98b5ffa444fe629c1046757
Some kOps actions require connecting to the cluster, but
we don't always have a kubeconfig available.
This commit adds a function to generate a client config on the fly
(including a certificate) when needed.
This should allow us to build our own rest config in future,
rather than relying on the kubeconfig being configured correctly.
To do this, we need to stop sharing the factory between the channels
and kops commands.
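For reference, the generated client config carries the same fields a minimal kubeconfig would; a sketch with placeholder names and values (the cluster name and base64 data are not from this change):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example.k8s.local
  cluster:
    server: https://api.example.k8s.local
    certificate-authority-data: <base64 CA>
users:
- name: example.k8s.local-admin
  user:
    client-certificate-data: <base64 cert>
    client-key-data: <base64 key>
contexts:
- name: example.k8s.local
  context:
    cluster: example.k8s.local
    user: example.k8s.local-admin
current-context: example.k8s.local
```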
This all-in-one command is a replacement for having to run multiple commands,
while still respecting the version skew policy.
It does the same thing as `kops update cluster --reconcile`:
* Updates the control plane nodes
* Does a rolling update of the control plane nodes
* Updates "normal" nodes and bastion nodes
* Does a rolling update of these nodes
* Prunes old resources that are no longer used
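A hedged usage sketch, assuming the all-in-one command is invoked as `kops reconcile cluster` (the cluster name is a placeholder):

```shell
# Replaces the update/rolling-update sequence
# while still respecting the version skew policy.
kops reconcile cluster example.k8s.local --yes
```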
Kubernetes 1.31 now prevents nodes from joining a cluster if the minor version
of the node is greater than the minor version of the control plane.
The addition of the instance-group-roles flag to update means that we
can now update / rolling-update the control plane first. However, we
must now issue four commands:
* Update control plane
* Rolling update control plane
* Update nodes
* Rolling update nodes
This adds a flag to automate this process. It is implemented by
executing those 4 steps in sequence.
Update is also smart enough to not update the nodes if this would
violate the skew policy, but we do this explicitly in the reconcile
command to be clearer and safer.
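The four manual steps could be sketched as the following sequence (flag values are illustrative; the exact role names accepted by --instance-group-roles may differ):

```shell
# 1-2: update and roll the control plane first
kops update cluster --instance-group-roles=control-plane --yes
kops rolling-update cluster --instance-group-roles=control-plane --yes
# 3-4: then update and roll the remaining nodes
kops update cluster --yes
kops rolling-update cluster --yes
```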
Currently it relies on us updating the channel version in two places,
which makes `kops upgrade cluster` inconsistent with `kops update cluster`:
`kops update cluster` tells us to run `kops upgrade cluster`,
which then might not recommend an upgrade.
This lets us use labels (or annotations), meaning we can experiment
with different clouds without changing the API.
We also add initial (experimental/undocumented) support for exposing a "Metal" provider.