Merge pull request #2520 from fabriziopandini/upgdate-kep0015

Update KEP0015 kubeadm join --master
k8s-ci-robot 2018-08-21 06:41:52 -07:00 committed by GitHub
commit f938973edc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1 changed file with 108 additions and 99 deletions


# kubeadm join --control-plane workflow

## Metadata

```yaml
---
kep-number: 15
title: kubeadm join --control-plane workflow
status: accepted
authors:
  - "@fabriziopandini"
see-also:
```

## Table of Contents
<!-- TOC -->
- [kubeadm join --control-plane workflow](#kubeadm-join---control-plane-workflow)
  - [Metadata](#metadata)
  - [Table of Contents](#table-of-contents)
  - [Summary](#summary)
    - [Challenges and Open Questions](#challenges-and-open-questions)
  - [Proposal](#proposal)
    - [User Stories](#user-stories)
      - [Create a cluster with more than one control plane instance (static workflow)](#create-a-cluster-with-more-than-one-control-plane-instance-static-workflow)
      - [Add a new control-plane instance (dynamic workflow)](#add-a-new-control-plane-instance-dynamic-workflow)
    - [Implementation Details](#implementation-details)
      - [Initialize the Kubernetes cluster](#initialize-the-kubernetes-cluster)
      - [Preparing for execution of kubeadm join --control-plane](#preparing-for-execution-of-kubeadm-join---control-plane)
      - [The kubeadm join --control-plane workflow](#the-kubeadm-join---control-plane-workflow)
        - [Dynamic workflow (advertise-address == `controlplaneAddress`)](#dynamic-workflow-advertise-address--controlplaneaddress)
        - [Static workflow (advertise-address != `controlplaneAddress`)](#static-workflow-advertise-address--controlplaneaddress)
      - [Strategies for deploying control plane components](#strategies-for-deploying-control-plane-components)
## Summary

We are extending the kubeadm distinctive `init` and `join` workflow, introducing the
capability to add more than one control plane instance to an existing cluster by means of the
new `kubeadm join --control-plane` option (in the alpha release the flag will be named
`--experimental-control-plane`).

As a consequence, kubeadm will provide a best-practice, “fast path” for creating a
minimum viable, conformant Kubernetes cluster with one or more nodes hosting control-plane
instances and zero or more worker nodes; as detailed in the following paragraphs, please note
that this proposal doesn't solve every possible use case or even the full end-to-end flow
automatically.
user stories for creating a highly available Kubernetes cluster, but instead
focuses on:

- Defining a generic and extensible flow for bootstrapping a cluster with multiple control
  plane instances, the `kubeadm join --control-plane` workflow.
- Providing a solution *only* for well defined user stories. See
  [User Stories](#user-stories) and [Non-goals](#non-goals).
- Enabling higher-level tools integration.

  We expect higher-level tools to leverage kubeadm for creating HA clusters;
  accordingly, the `kubeadm join --control-plane` workflow should provide support for
  the following operational practices used by higher-level tools:

  - Parallel node creation.

    Higher-level tools could create nodes in parallel (both nodes hosting control-plane
    instances and workers) to reduce the overall cluster startup time.
    `kubeadm join --control-plane` should natively support this practice without requiring
    the implementation of any synchronization mechanics by higher-level tools.

- Providing support both for dynamic and static bootstrap flows.

  At the time a user is running `kubeadm init`, they might not know what
  the cluster setup will look like eventually. For instance, the user may start with
  only one control plane instance + n nodes, and then add further control plane instances with
  `kubeadm join --control-plane` or add more worker nodes with `kubeadm join` (in any order).
  This kind of workflow, where the user doesn't know in advance the final layout of the control
  plane instances, is referred to in this document as the “dynamic bootstrap workflow”.

  Nevertheless, kubeadm should also support a more “static bootstrap flow”, where a user knows
  in advance the target layout of the control plane instances (the number, the names and the
  IPs of the nodes hosting control plane instances).

- Supporting different etcd deployment scenarios, and more specifically running control plane
  components and the etcd cluster on the same machines (stacked control plane nodes) or
  running the etcd cluster on dedicated machines.

### Non-goals

- Installing a control-plane instance on an existing worker node.
  Nodes must be created as control plane instances or as workers, and are then supposed to
  stick to the assigned role for their entire life cycle.
- This proposal doesn't include a solution for etcd cluster management (but nothing in this
  proposal should prevent addressing this in the future).
explicitly prevent reconsidering this in the future as well).
- This proposal doesn't provide an automated solution for transferring the CA key and other
  required certs from one control-plane instance to the other. More specifically, this
  proposal doesn't address the ongoing discussion about storage of kubeadm TLS assets in
  secrets, and it is not planned to provide support for clusters with TLS stored in secrets
  (but nothing in this proposal should explicitly prevent reconsidering this in the future).
- Create a cluster without knowing its final layout.

  Supporting a dynamic workflow implies that some information about the cluster is
  not available at init time, e.g. the number of control plane instances, the IPs of
  the nodes that are candidates for hosting control-plane instances, etc.

  - _How to configure a Kubernetes cluster in order to easily adapt to future changes
    of its own control plane layout, e.g. adding a new control-plane instance or removing a
    control plane instance?_
  - _What are the "pivotal" cluster settings that must be defined before initializing
    the cluster?_
  - _How to combine into a single UX support for both static and dynamic bootstrap
    workflows?_
### User Stories

#### Create a cluster with more than one control plane instance (static workflow)

As a kubernetes administrator, I want to create a Kubernetes cluster with more than one
control-plane instance\*, of which I know in advance the names and the IPs.

\* A new "control plane instance" is a new kubernetes node with the
`node-role.kubernetes.io/master=""` label and the
`node-role.kubernetes.io/master:NoSchedule` taint; a new instance of the control plane
components will be deployed on the new node.

As described in goals/non-goals, in this first release of the proposal
creating a new control plane instance doesn't trigger the creation of a new etcd member on the
same machine.

#### Add a new control-plane instance (dynamic workflow)

As a kubernetes administrator, (_at any time_) I want to add a new control-plane instance\* to
an existing Kubernetes cluster.
### Implementation Details

#### Initialize the Kubernetes cluster

As of today, a Kubernetes cluster is initialized by running `kubeadm init` on a
first node, afterwards referred to as the bootstrap control plane.
In order to support the `kubeadm join --control-plane` workflow, a new Kubernetes cluster is
expected to satisfy the following conditions:

- The cluster must have a stable `controlplaneAddress` endpoint (aka the IP/DNS of the
  external load balancer).
- The cluster must use an external etcd cluster.
All the above conditions/settings could be set by passing a configuration file to `kubeadm init`.
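
For illustration, a minimal sketch of such a configuration file, assuming the `v1alpha2`
`MasterConfiguration` format accepted by kubeadm at the time of this KEP (the load balancer
address, etcd endpoints and client certificate paths are placeholders, not values mandated by
this proposal):

```bash
# Sketch only: write a kubeadm config that satisfies both conditions above,
# then initialize the bootstrap control plane with it.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
api:
  # the stable controlplaneAddress endpoint (IP/DNS of the external load balancer)
  controlPlaneEndpoint: "lb.example.com:6443"
etcd:
  external:
    # the cluster uses an external etcd
    endpoints:
    - "https://10.0.0.10:2379"
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/etcd/client.crt
    keyFile: /etc/kubernetes/pki/etcd/client.key
EOF

kubeadm init --config kubeadm-config.yaml
```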
#### Preparing for execution of kubeadm join --control-plane

Before invoking `kubeadm join --control-plane`, the user/higher-level tools
should copy control plane certificates from an existing control plane instance, e.g. the
bootstrap control plane.

> NB. kubeadm is limited to executing actions *only*
> on the machine where it is running, so it is not possible to automatically copy
> certificates from remote locations.

Please note that, strictly speaking, only the ca and front-proxy-ca certificates and the
service account key pair are required to be equal among all control plane instances.
Accordingly:
- `kubeadm join --control-plane` will check for the mandatory certificates and fail fast if
  they are missing
- given the required certificates exist, if some/all of the other certificates are provided
  by the user as well, `kubeadm join --control-plane` will use them without further checks
- if any other certificates are missing, `kubeadm join --control-plane` will create them

> See the "Strategies for distributing cluster certificates" paragraph for
> additional info about this step.
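
By way of illustration, a minimal sketch of this manual copy step (the ssh user, the node
address and the exact file list beyond the mandatory ones are assumptions, not part of this
proposal):

```bash
# Sketch: from the bootstrap control plane, copy the certificates that must be
# shared to a node that is about to run `kubeadm join --control-plane`.
SSH_USER=ubuntu            # assumption: ssh user on the joining node
NODE=10.0.0.12             # assumption: address of the joining node

# ca, front-proxy-ca and the service account key pair are the mandatory ones
FILES="ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key"

ssh "${SSH_USER}@${NODE}" "sudo mkdir -p /etc/kubernetes/pki"
for f in ${FILES}; do
    scp "/etc/kubernetes/pki/${f}" "${SSH_USER}@${NODE}:/tmp/${f}"
    ssh "${SSH_USER}@${NODE}" "sudo mv /tmp/${f} /etc/kubernetes/pki/${f}"
done
```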
#### The kubeadm join --control-plane workflow

The `kubeadm join --control-plane` workflow will be implemented as an extension of the
existing `kubeadm join` flow.

`kubeadm join --control-plane` will accept an additional parameter, that is the apiserver
advertise address of the joining node; as detailed in the following paragraphs, the value
assigned to this parameter depends on the user's choice between a dynamic bootstrap workflow
and a static bootstrap workflow.
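
For illustration only, an invocation might look as follows (token, CA cert hash and addresses
are placeholders; per the summary above, the alpha spelling of the flag is
`--experimental-control-plane`):

```bash
# Sketch: join a node as a new control plane instance. The advertise address
# chosen here is what selects between the dynamic and the static workflow.
kubeadm join lb.example.com:6443 \
    --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-of-the-cluster-ca> \
    --experimental-control-plane \
    --apiserver-advertise-address 10.0.0.12   # node-specific IP => static workflow
```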
The updated join workflow will be the following:
1. Discover cluster info [No changes to this step]

   > NB. This step waits for a first instance of the kube-apiserver to become ready
   > (the bootstrap control plane), and thus it acts as an embedded mechanism for handling the
   > sequence of `kubeadm init` and `kubeadm join` actions in case of parallel node creation.

2. Execute the kubelet TLS bootstrap process [No changes to this step]

3. In case of `join --control-plane` [New step]:

   1. Using the bootstrap token as identity, read the `kubeadm-config` configMap
      in the `kube-system` namespace.
      > This requires granting access to the above configMap for the
      > `system:bootstrappers` group (a minimal sketch of such a grant follows this list).

   2. Check if the cluster/the node is ready for joining a new control plane instance:

      a. Check if the cluster has a stable `controlplaneAddress`
      b. Check if the cluster uses an external etcd
      c. Check if the mandatory certificates exist on the file system

   3. Prepare the node for hosting a control plane instance:

      a. Create missing certificates (if any).

      > Please note that by creating missing certificates kubeadm can adapt seamlessly
5. Apply the master taint and label to the node.

6. Update the `kubeadm-config` configMap with the information about the new control plane
   instance.
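
As anticipated in step 3.1, a minimal sketch of the RBAC objects that would grant the
`system:bootstrappers` group read access to the `kubeadm-config` configMap (the Role and
RoleBinding names are illustrative; this proposal does not mandate specific object names):

```bash
# Sketch: allow members of the system:bootstrappers group to read the
# kubeadm-config configMap in the kube-system namespace.
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubeadm-config-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubeadm-config"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubeadm-config-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm-config-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
EOF
```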
#### Dynamic workflow (advertise-address == `controlplaneAddress`)

There are many ways to configure a highly available cluster.

Among them, the approach best suited for a dynamic bootstrap workflow requires the
user to set the `--apiserver-advertise-address` of each kube-apiserver instance, including the
one on the bootstrap control plane, _equal to the `controlplaneAddress` endpoint_ provided
during kubeadm init (the IP/DNS of the external load balancer).

By using the same advertise address for all the kube-apiserver instances, `kubeadm init` can
create a unique API server serving certificate that can be shared across many control plane
instances; no changes will be required to this certificate when adding/removing kube-apiserver
instances.

Please note that:

- if the user is not planning to distribute the apiserver serving certificate among control
  plane instances, kubeadm will generate a new apiserver serving certificate “almost equal” to
  the certificate created on the bootstrap control plane (it differs only in the domain name
  of the joining node)
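
To make the consequence concrete, the shared serving certificate can be inspected with
standard tooling; in the dynamic workflow its SANs contain the `controlplaneAddress`, so the
same certificate stays valid on every instance (the path is the kubeadm default):

```bash
# Sketch: list the SANs of the apiserver serving certificate; in the dynamic
# workflow they include the controlplaneAddress (the load balancer IP/DNS).
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
    | grep -A1 "Subject Alternative Name"
```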
#### Static workflow (advertise-address != `controlplaneAddress`)

In case of a static bootstrap workflow the final layout of the control plane - the number, the
names and the IPs of the control plane nodes - is known in advance.

Given such information, the user can choose a different approach where each kube-apiserver
instance has a specific apiserver advertise address, different from the `controlplaneAddress`.

Please note that:

- if the user is not planning to distribute the apiserver certificate among control plane
  instances, kubeadm will generate a new apiserver serving certificate with the SANs required
  for the joining control plane instance
- if the user is planning to distribute the apiserver certificate among control plane
  instances, the operator is required to provide during `kubeadm init` the list of IP
  addresses of all the kube-apiserver instances as alternative names for the API server
  certificate, thus allowing the proper functioning of all the API server instances that will
  join
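
A minimal sketch of providing such alternative names at `kubeadm init` time, reusing the
configuration file from above (`apiServerCertSANs` is the field name in the config format
assumed earlier; all addresses are placeholders):

```bash
# Sketch: static workflow with a shared apiserver certificate. The SAN list
# enumerates the advertise address of every planned kube-apiserver instance.
cat <<EOF >> kubeadm-config.yaml
apiServerCertSANs:
- "lb.example.com"    # the controlplaneAddress endpoint
- "10.0.0.11"         # advertise address of control plane instance 1
- "10.0.0.12"         # advertise address of control plane instance 2
- "10.0.0.13"         # advertise address of control plane instance 3
EOF
```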
#### Strategies for deploying control plane components

As of today kubeadm supports two solutions for deploying control plane components:

1. Control plane deployed as static pods
2. Self-hosted control plane (currently alpha)

The proposed solution for case 1, "Control plane deployed as static pods", assumes
that the `kubeadm join --control-plane` flow will take care of creating the required
kubeconfig files and the required static pod manifests.

As stated above, support for a self-hosted control plane is a non-goal for this proposal.
#### Strategies for distributing cluster certificates

As of today kubeadm supports two solutions for storing cluster certificates:

1. Cluster certificates stored on file system
2. Cluster certificates stored in secrets

The proposed solution for case 1, "Cluster certificates stored on file system",
requires the user/the higher-level tools to execute an additional action _before_
invoking `kubeadm join --control-plane`.

More specifically, in case of a cluster with "cluster certificates stored on file
system", before invoking `kubeadm join --control-plane`, the user/higher-level tools
should copy control plane certificates from an existing node, e.g. the bootstrap control
plane.

> NB. kubeadm is limited to executing actions *only*
> on the machine where it is running, so it is not possible to automatically copy
> certificates from remote locations.

Then, the `kubeadm join --control-plane` flow will take care of checking certificate
existence and conformance.

As stated above, support for cluster certificates stored in secrets is a non-goal
for this proposal.
#### `kubeadm upgrade` for HA clusters

The `kubeadm upgrade` workflow as of today is composed of two high-level phases: upgrading
the control plane and upgrading the nodes.

The above high-level workflow will remain the same also in case of clusters with more than
one control plane instance, but with a new sub-step to be executed on secondary control-plane
instances:

1. Upgrade the control plane
   1. Run `kubeadm upgrade apply` on a first control plane instance [No changes to this step]
   2. Run `kubeadm upgrade node experimental-control-plane` on secondary control-plane
      instances [new step]
2. Upgrade nodes/kubelet [No changes to this step]
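
Sketched as concrete commands (the target version is a placeholder, and the sub-command
spelling follows the experimental naming given in the step above):

```bash
# Sketch of the HA upgrade sequence described above.

# 1. On a first control plane instance:
kubeadm upgrade apply v1.12.0                  # placeholder target version

# 2. On each secondary control plane instance:
kubeadm upgrade node experimental-control-plane

# 3. Upgrade nodes/kubelet as usual (no changes to this step).
```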
Further detail might be provided in a subsequent release of this KEP when all the details
of the `v1beta1` release of the kubeadm API are available (including a proper modeling
of many control plane instances).

## Graduation Criteria
## Drawbacks

The `kubeadm join --control-plane` workflow requires that some conditions are satisfied at
`kubeadm init` time, that is, to use a `controlplaneAddress` and to use an external etcd.
## Alternatives

1) Execute `kubeadm init` on many nodes

The approach based on the execution of `kubeadm init` on each node candidate for hosting a
control plane instance was considered as well, but not chosen because it seems to have several
drawbacks:

- There is no real control on the parameters passed to `kubeadm init` executed on different
  nodes, and this might lead to unpredictable, inconsistent configurations.
- The init sequence for the above nodes won't go through the TLS bootstrap process,
  and this might be perceived as a security concern.
- The init sequence executes a lot of steps which are unnecessary (on an existing cluster);
  those steps are currently mostly idempotent, so basically no harm is done by executing
  them two or three times. Nevertheless, maintaining this contract in the future could be
  complex.

Additionally, by having a separate `kubeadm join --control-plane` workflow instead of a single
`kubeadm init` workflow we can provide better support for:

- Steps that should be done in a slightly different way on a secondary control plane instance
  with respect to the bootstrap control plane (e.g. updating the kubeadm-config configMap by
  adding info about the new control plane instance instead of creating a new configMap from
  scratch).
- Checking that the cluster/the kubeadm-config is properly configured for many control plane
  instances.
- Blocking users trying to create secondary control plane instances on clusters with
  configurations we don't want to support as a SIG (e.g. HA with a self-hosted control plane).