Reword “Creating a single control-plane cluster with kubeadm” (#18939)

* Consolidate words of caution about Pod network

* Tweak wording

- use tooltips
- fix a TODO hyperlink
- adopt style guidelines

* Revise prerequisites for kubeadm

* Rework page structure

- Replace some headings with anchor elements (preserving inbound links)
- Use a "discussion" section for the discussion part of the page.
- Make Feedback part of the What's Next section
- Skip mentioning Docker in a logging context; provide generic
  signposting instead.
- Update overview
- Document limitations and fix link to HA topology
- Fixes for styling

* Redo network plugin info

* Use glossary tooltips to introduce terms
Tim Bannister 2020-02-19 20:49:45 +00:00 committed by GitHub
parent fefda3e4ea
commit 6a3c364706
2 changed files with 159 additions and 133 deletions


@@ -12,7 +12,7 @@ weight: 10
{{% capture overview %}}
{{< feature-state state="alpha" >}}
{{< warning >}}Alpha features change rapidly. {{< /warning >}}
{{< caution >}}Alpha features can change rapidly. {{< /caution >}}
Network plugins in Kubernetes come in a few flavors:


@@ -8,63 +8,50 @@ weight: 30
{{% capture overview %}}
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">**kubeadm** helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification). Kubeadm also supports other cluster
lifecycle functions, such as upgrades, downgrades, and managing [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/).
<img src="https://raw.githubusercontent.com/kubernetes/kubeadm/master/logos/stacked/color/kubeadm-stacked-color.png" align="right" width="150px">The `kubeadm` tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
`kubeadm` also supports other cluster
lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
Because you can install kubeadm on various types of machine (e.g. laptop, server,
Raspberry Pi, etc.), it's well suited for integration with provisioning systems
such as Terraform or Ansible.
The `kubeadm` tool is good if you need:
kubeadm's simplicity means it can serve a wide range of use cases:
- A simple way for you to try out Kubernetes, possibly for the first time.
- A way for existing users to automate setting up a cluster and test their application.
- A building block in other ecosystem and/or installer tools with a larger
scope.
- New users can start with kubeadm to try Kubernetes out for the first time.
- Users familiar with Kubernetes can spin up clusters with kubeadm and test their applications.
- Larger projects can include kubeadm as a building block in a more complex system that can also include other installer tools.
kubeadm is designed to be a simple way for new users to start trying
Kubernetes out, possibly for the first time, a way for existing users to
test their application on and stitch together a cluster easily, and also to be
a building block in other ecosystem and/or installer tool with a larger
scope.
You can install _kubeadm_ very easily on operating systems that support
installing deb or rpm packages. The responsible SIG for kubeadm,
[SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provides these packages pre-built for you,
but you may also build them from source for other OSes.
### kubeadm maturity
kubeadm's overall feature state is **GA**. Some sub-features, like the configuration
file API are still under active development. The implementation of creating the cluster
may change slightly as the tool evolves, but the overall implementation should be pretty stable.
Any commands under `kubeadm alpha` are by definition, supported on an alpha level.
### Support timeframes
Kubernetes releases are generally supported for nine months, and during that
period a patch release may be issued from the release branch if a severe bug or
security issue is found. Here are the latest Kubernetes releases and the support
timeframe, which also applies to `kubeadm`.
| Kubernetes version | Release month  | End-of-life month |
|--------------------|----------------|-------------------|
| v1.13.x            | December 2018  | September 2019    |
| v1.14.x            | March 2019     | December 2019     |
| v1.15.x            | June 2019      | March 2020        |
| v1.16.x            | September 2019 | June 2020         |
You can install and use `kubeadm` on various machines: your laptop, a set
of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
cloud or on-premises, you can integrate `kubeadm` into provisioning systems such
as Ansible or Terraform.
{{% /capture %}}
{{% capture prerequisites %}}
- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
- 2 GB or more of RAM per machine. Any less leaves little room for your
To follow this guide, you need:
- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
- 2 GiB or more of RAM per machine--any less leaves little room for your
apps.
- 2 CPUs or more on the control-plane node
- Full network connectivity among all machines in the cluster. A public or
private network is fine.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a
public or a private network.
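
As a quick, illustrative sanity check, you can confirm each machine's CPU count and memory before you begin (both commands are standard on most Linux distributions):

```bash
nproc    # CPU count: expect at least 2 on the control-plane node
free -h  # memory: expect 2 GiB or more
```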
You also need to use a version of `kubeadm` that can deploy the version
of Kubernetes that you want to use in your new cluster.
[Kubernetes' version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
Check that policy to learn about what versions of Kubernetes and `kubeadm`
are supported. This page is written for Kubernetes {{< param "version" >}}.
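
To check which version of `kubeadm` is installed, you can run the following (the `-o short` output format is optional; plain `kubeadm version` also works):

```bash
kubeadm version -o short
```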
The `kubeadm` tool's overall feature state is General Availability (GA). Some sub-features are
still under active development. The implementation of creating the cluster may change
slightly as the tool evolves, but the overall implementation should be pretty stable.
{{< note >}}
Any commands under `kubeadm alpha` are, by definition, supported on an alpha level.
{{< /note >}}
{{% /capture %}}
@@ -94,27 +81,29 @@ After you initialize your control-plane, the kubelet runs normally.
### Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including
etcd (the cluster database) and the API server (which the kubectl CLI
{{< glossary_tooltip term_id="etcd" >}} (the cluster database) and the
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}
(which the {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} command line tool
communicates with).
1. (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster
1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
to high availability, you should specify the `--control-plane-endpoint` flag to set the shared endpoint
for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load balancer.
1. Choose a Pod network add-on, and verify whether it requires any arguments to
be passed to kubeadm initialization. Depending on which
be passed to `kubeadm init`. Depending on which
third-party provider you choose, you might need to set the `--pod-network-cidr` to
a provider-specific value. See [Installing a Pod network add-on](#pod-network).
1. (Optional) Since version 1.14, kubeadm will try to detect the container runtime on Linux
1. (Optional) Since version 1.14, `kubeadm` tries to detect the container runtime on Linux
by using a list of well-known domain socket paths. To use a different container runtime, or
if more than one is installed on the provisioned node, specify the `--cri-socket`
argument to `kubeadm init`. See [Installing runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, kubeadm uses the network interface associated
1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
with the default gateway to set the advertise address for this particular control-plane node's API server.
To use a different network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
to `kubeadm init`. To deploy an IPv6 Kubernetes cluster, you
must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`.
1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
connectivity to gcr.io registries.
connectivity to the gcr.io container image registry.
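
Putting these options together, a representative `kubeadm init` invocation might look like the following sketch. The endpoint name and CIDR are illustrative assumptions; substitute values that match your environment and your chosen network add-on.

```bash
# "cluster-endpoint" is an assumed DNS name that resolves to your control-plane
# address or load balancer; the CIDR shown happens to match Calico's default.
sudo kubeadm init \
  --control-plane-endpoint=cluster-endpoint \
  --pod-network-cidr=192.168.0.0/16
```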
To initialize the control-plane node, run:
@@ -258,26 +247,43 @@ created, and deleted with the `kubeadm token` command. See the
### Installing a Pod network add-on {#pod-network}
{{< caution >}}
This section contains important information about installation and deployment order. Read it carefully before proceeding.
This section contains important information about networking setup and
deployment order.
Read all of this advice carefully before proceeding.
**You must deploy a
{{< glossary_tooltip text="Container Network Interface" term_id="cni" >}}
(CNI) based Pod network add-on so that your Pods can communicate with each other.
Cluster DNS (CoreDNS) will not start up before a network is installed.**
- Take care that your Pod network does not overlap with any of the host
networks: you are likely to see problems if there is any overlap.
(If you find a collision between your network plugin's preferred Pod
network and some of your host networks, you should think of a suitable
CIDR block to use instead, then use that during `kubeadm init` with
`--pod-network-cidr` and as a replacement in your network plugin's YAML.)
- By default, `kubeadm` sets up your cluster to use and enforce use of
[RBAC](/docs/reference/access-authn-authz/rbac/) (role based access
control).
Make sure that your Pod network plugin supports RBAC, and so do any manifests
that you use to deploy it.
- If you want to use IPv6--either dual-stack or single-stack IPv6-only
networking--for your cluster, make sure that your Pod network plugin
supports IPv6.
IPv6 support was added to CNI in [v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
{{< /caution >}}
You must install a Pod network add-on so that your Pods can communicate with
each other.
Several external projects provide Kubernetes Pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/).
**The network must be deployed before any applications. Also, CoreDNS will not start up before a network is installed.
kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).**
See the list of available
[networking and network policy add-ons](https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy).
Several projects provide Kubernetes Pod networks using CNI, some of which also
support [Network Policy](/docs/concepts/services-networking/networkpolicies/). See the [add-ons page](/docs/concepts/cluster-administration/addons/) for a complete list of available network add-ons.
- IPv6 support was added in [CNI v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0). See each plugin's documentation to see if it supports IPv6.
Note that kubeadm sets up a more secure cluster by default and enforces use of [RBAC](/docs/reference/access-authn-authz/rbac/).
Make sure that your network manifest supports RBAC.
Also, beware that your Pod network must not overlap with any of the host networks as this can cause issues.
If you find a collision between your network plugin's preferred Pod network and some of your host networks, you should think of a suitable CIDR replacement and use that during `kubeadm init` with `--pod-network-cidr` and as a replacement in your network plugin's YAML.
You can install a Pod network add-on with the following command on the control-plane node or a node that has the kubeconfig credentials:
You can install a Pod network add-on with the following command on the
control-plane node or a node that has the kubeconfig credentials:
```bash
kubectl apply -f <add-on.yaml>
@@ -291,7 +297,7 @@ Below you can find installation instructions for some popular Pod network plugin
{{% tab name="Calico" %}}
[Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer. Calico works on several architectures, including `amd64`, `arm64`, and `ppc64le`.
By default, Calico uses `192.168.0.0/16` as the Pod network CIDR, though this can be configured in the calico.yaml file. For Calico to work correctly, you need to pass this same CIDR to the kubeadm init command using the `--pod-network-cidr=192.168.0.0/16` flag or via the kubeadm configuration.
By default, Calico uses `192.168.0.0/16` as the Pod network CIDR, though this can be configured in the `calico.yaml` file. For Calico to work correctly, you need to pass this same CIDR to the `kubeadm init` command using the `--pod-network-cidr=192.168.0.0/16` flag or via kubeadm's configuration.
```shell
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
@@ -340,13 +346,11 @@ For `flannel` to work correctly, you must pass `--pod-network-cidr=10.244.0.0/16
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
please see [Network Plugin Requirements](/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
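
To make that setting survive reboots, one common approach is a `sysctl.d` drop-in file (a sketch; the `k8s.conf` file name is arbitrary):

```bash
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system  # reload settings from all sysctl configuration files
```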
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network.
see [here
](https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls).
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network. The [Firewall](https://coreos.com/flannel/docs/latest/troubleshooting.html#firewalls) section of Flannel's troubleshooting guide explains this in more detail.
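
For example, on hosts that use `ufw` as their firewall, rules along these lines would open those ports (illustrative; adapt to whatever firewall tooling you actually run):

```bash
sudo ufw allow 8285/udp  # flannel's udp backend
sudo ufw allow 8472/udp  # flannel's vxlan backend
```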
Note that `flannel` works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` under Linux.
Flannel works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` architectures under Linux.
Windows (`amd64`) is claimed to be supported in v0.11.0, but this usage is undocumented.
```shell
@@ -360,23 +364,23 @@ For more information about `flannel`, see [the CoreOS flannel repository on GitH
{{% tab name="Kube-router" %}}
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
please see [Network Plugin Requirements](/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
Kube-router relies on kube-controller-manager to allocate Pod CIDR for the nodes. Therefore, use `kubeadm init` with the `--pod-network-cidr` flag.
Kube-router provides Pod networking, network policy, and a high-performing IP Virtual Server (IPVS)/Linux Virtual Server (LVS) based service proxy.
For information on setting up Kubernetes cluster with Kube-router using kubeadm, please see official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md).
For information on using the `kubeadm` tool to set up a Kubernetes cluster with Kube-router, please see the official [setup guide](https://github.com/cloudnativelabs/kube-router/blob/master/docs/kubeadm.md).
{{% /tab %}}
{{% tab name="Weave Net" %}}
Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1`
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
please see [Network Plugin Requirements](/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
The official Weave Net set-up guide is [here](https://www.weave.works/docs/net/latest/kube-addon/).
For more information on setting up your Kubernetes cluster with Weave Net, please see [Integrating Kubernetes via the Addon](https://www.weave.works/docs/net/latest/kube-addon/).
Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` without any extra action required.
Weave Net works on `amd64`, `arm`, `arm64` and `ppc64le` platforms without any extra action required.
Weave Net sets hairpin mode by default. This allows Pods to access themselves via their Service IP address
if they don't know their PodIP.
@@ -389,10 +393,12 @@ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl versio
Once a Pod network has been installed, you can confirm that it is working by
checking that the CoreDNS Pod is Running in the output of `kubectl get pods --all-namespaces`.
checking that the CoreDNS Pod is `Running` in the output of `kubectl get pods --all-namespaces`.
Once the CoreDNS Pod is up and running, you can continue by joining your nodes.
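
For example, you can watch the CoreDNS Pods until they report `Running`; the `k8s-app=kube-dns` label shown here is the one that kubeadm-deployed CoreDNS typically carries (kept for compatibility with kube-dns):

```bash
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns --watch
```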
If your network is not working or CoreDNS is not in the Running state, checkout our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
If your network is not working or CoreDNS is not in the `Running` state, check out the
[troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)
for `kubeadm`.
### Control plane node isolation
@@ -424,19 +430,19 @@ The nodes are where your workloads (containers and Pods, etc) run. To add new no
* Become root (e.g. `sudo su -`)
* Run the command that was output by `kubeadm init`. For example:
``` bash
```bash
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
```
If you do not have the token, you can get it by running the following command on the control-plane node:
``` bash
```bash
kubeadm token list
```
The output is similar to this:
``` console
```console
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system:
signing token generated by bootstrappers:
@@ -447,26 +453,26 @@ TOKEN TTL EXPIRES USAGES DESCRIPTION
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
you can create a new token by running the following command on the control-plane node:
``` bash
```bash
kubeadm token create
```
The output is similar to this:
``` console
```console
5didvk.d09sbcov8ph2amjw
```
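
As a convenience, `kubeadm` can also generate a fresh token and print the complete matching `kubeadm join` command in one step:

```bash
kubeadm token create --print-join-command
```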
If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the control-plane node:
``` bash
```bash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
```
The output is similar to this:
The output is similar to:
``` console
```console
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
```
@@ -498,7 +504,7 @@ In order to get a kubectl on some other computer (e.g. laptop) to talk to your
cluster, you need to copy the administrator kubeconfig file from your control-plane node
to your workstation like this:
``` bash
```bash
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```
@@ -529,11 +535,18 @@ kubectl --kubeconfig ./admin.conf proxy
You can now access the API Server locally at `http://localhost:8001/api/v1`
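
As a quick, illustrative check that the proxy is working, you can query that endpoint with `curl`:

```bash
curl http://localhost:8001/api/v1/namespaces/default/pods
```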
## Tear down {#tear-down}
## Clean up {#tear-down}
To undo what kubeadm did, you should first [drain the
node](/docs/reference/generated/kubectl/kubectl-commands#drain) and make
sure that the node is empty before shutting it down.
If you used disposable servers for your cluster, for testing, you can
switch those off and do no further cleanup. You can use
`kubectl config delete-cluster` to delete your local references to the
cluster.
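
For example (`kubernetes` is the default cluster name that `kubeadm` writes into the admin kubeconfig; adjust it if yours differs):

```bash
kubectl config delete-cluster kubernetes
```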
However, if you want to deprovision your cluster more cleanly, you should
first [drain the node](/docs/reference/generated/kubectl/kubectl-commands#drain)
and make sure that the node is empty, then deconfigure the node.
### Remove the node
Talking to the control-plane node with the appropriate credentials, run:
@@ -542,7 +555,7 @@ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
```
Then, on the node being removed, reset all kubeadm installed state:
Then, on the node being removed, reset all `kubeadm` installed state:
```bash
kubeadm reset
@@ -563,47 +576,55 @@ ipvsadm -C
If you wish to start over, simply run `kubeadm init` or `kubeadm join` with the
appropriate arguments.
More options and information about the
[`kubeadm reset command`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/).
### Clean up the control plane
## Maintaining a cluster {#lifecycle}
You can use `kubeadm reset` on the control plane host to trigger a best-effort
cleanup.
Instructions for maintaining kubeadm clusters (e.g. upgrades, downgrades) can be found [here](/docs/tasks/administer-cluster/kubeadm/).
See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
reference documentation for more information about this subcommand and its
options.
## Explore other add-ons {#other-addons}
{{% /capture %}}
See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to explore other add-ons,
including tools for logging, monitoring, network policy, visualization &
control of your Kubernetes cluster.
{{% capture discussion %}}
## What's next {#whats-next}
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
* Learn about kubeadm's advanced usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/user-guide/kubectl-overview/).
* Configure log rotation. You can use **logrotate** for that. When using Docker, you can specify log rotation options for Docker daemon, for example `--log-driver=json-file --log-opt=max-size=10m --log-opt=max-file=5`. See [Configure and troubleshoot the Docker daemon](https://docs.docker.com/engine/admin/) for more details.
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
explore other add-ons, including tools for logging, monitoring, network policy, visualization &
control of your Kubernetes cluster.
* Configure how your cluster handles logs for cluster events and from
applications running in Pods.
See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
an overview of what is involved.
## Feedback {#feedback}
### Feedback {#feedback}
* For bugs, visit [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit kubeadm Slack Channel:
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/)
* General SIG Cluster Lifecycle Development Slack Channel:
* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
* For support, visit the
[#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
* General SIG Cluster Lifecycle development Slack channel:
[#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* SIG Cluster Lifecycle [SIG information](#TODO)
* SIG Cluster Lifecycle Mailing List:
* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
* SIG Cluster Lifecycle mailing list:
[kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
## Version skew policy {#version-skew-policy}
The kubeadm CLI tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
kubeadm CLI vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
The `kubeadm` tool of version vX.Y may deploy clusters with a control plane of version vX.Y or vX.(Y-1).
`kubeadm` vX.Y can also upgrade an existing kubeadm-created cluster of version vX.(Y-1).
Because we can't see into the future, kubeadm CLI vX.Y may or may not be able to deploy vX.(Y+1) clusters.
Example: kubeadm v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
Example: `kubeadm` v1.8 can deploy both v1.7 and v1.8 clusters and upgrade v1.7 kubeadm-created clusters to
v1.8.
These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:
@@ -611,7 +632,24 @@ These resources provide more information on supported version skew between kubel
* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
* Kubeadm-specific [installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
## kubeadm works on multiple platforms {#multi-platform}
## Limitations {#limitations}
### Cluster resilience {#resilience}
The cluster created here has a single control-plane node, with a single etcd database
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.
Workarounds:
* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node;
a minimal backup sketch follows this list.
* Use multiple control-plane nodes. You can read
[Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
topology that provides higher availability.
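
A minimal backup sketch, assuming the `etcdctl` client is installed on the control-plane node and that kubeadm's default certificate layout is in use:

```bash
# Snapshot etcd over its local client endpoint, authenticating with the
# client certificate that kubeadm generates for health checks.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
```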
### Platform compatibility {#multi-platform}
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
following the [multi-platform
@@ -623,20 +661,8 @@ Only some of the network providers offer solutions for all platforms. Please con
network providers above or the documentation from each provider to figure out whether the provider
supports your chosen platform.
## Limitations {#limitations}
The cluster created here has a single control-plane node, with a single etcd database
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.
Workarounds:
* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html). The
etcd data directory configured by kubeadm is at `/var/lib/etcd` on the control-plane node.
* Use multiple control-plane nodes by completing the
[HA setup](/docs/setup/independent/ha-topology) instead.
## Troubleshooting {#troubleshooting}
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
{{% /capture %}}