mirror of https://github.com/rancher/rke1-docs.git
Convert tabs to Docusaurus syntax
parent 472920a357
commit 1ce49bba80

@@ -4,6 +4,9 @@ description: By default, RKE deploys the NGINX ingress controller. Learn how to
 weight: 262
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 ### Default Ingress

 By default, RKE deploys the NGINX ingress controller on all schedulable nodes.
@@ -106,14 +109,14 @@ ingress:

 ### Configuring network options

-{{% tabs %}}
-{{% tab "v1.3.x" %}}
+<Tabs>
+<TabItem value="v1.3.x">
 For Kubernetes v1.21 and up, the NGINX ingress controller no longer runs in `hostNetwork: true` but uses hostPorts for port `80` and port `443`. This was done so the admission webhook can be configured to be accessed using ClusterIP so it can only be reached inside the cluster. If you want to change the mode and/or the ports, see the options below.
-{{% /tab %}}
-{{% tab "v1.1.11 and up & v1.2.x" %}}
+</TabItem>
+<TabItem value="v1.1.11 and up & v1.2.x">
 By default, the nginx ingress controller is configured using `hostNetwork: true` on the default ports `80` and `443`. If you want to change the mode and/or the ports, see the options below.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

 Configure the nginx ingress controller using `hostPort` and override the default ports:

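The example that sentence leads into is truncated by the hunk boundary. For context, a `hostPort` override in `cluster.yml` looks roughly like the following sketch; the port numbers are illustrative, not values from this commit:

```yaml
ingress:
  provider: nginx
  # Run the controller with hostPort instead of hostNetwork
  network_mode: hostPort
  extra_args:
    # Illustrative overrides for the default ports 80/443
    http-port: 8080
    https-port: 8443
```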
@@ -3,10 +3,13 @@ title: Example Scenarios
 weight: 4
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 These example scenarios for backup and restore are different based on your version of RKE.

-{{% tabs %}}
-{{% tab "RKE v0.2.0+" %}}
+<Tabs>
+<TabItem value="RKE v0.2.0+">

 This walkthrough will demonstrate how to restore an etcd cluster from a local snapshot with the following steps:

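The step list is truncated at the hunk boundary. The walkthrough builds toward a local restore, which, as a sketch with a hypothetical snapshot name, is invoked as:

```
$ rke etcd snapshot-restore --config cluster.yml --name mysnapshot
```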
@@ -92,15 +95,15 @@ If you want to directly retrieve the snapshot from S3, add in the [S3 options](#
 The `rke etcd snapshot-restore` command triggers `rke up` using the new `cluster.yml`. Confirm that your Kubernetes cluster is functional by checking the pods on your cluster.

 ```
 > kubectl get pods
 NAME                     READY     STATUS    RESTARTS   AGE
 nginx-65899c769f-kcdpr   1/1       Running   0          17s
 nginx-65899c769f-pc45c   1/1       Running   0          17s
 nginx-65899c769f-qkhml   1/1       Running   0          17s
 ```

-{{% /tab %}}
-{{% tab "RKE before v0.2.0" %}}
+</TabItem>
+<TabItem value="RKE before v0.2.0">

 This walkthrough will demonstrate how to restore an etcd cluster from a local snapshot with the following steps:

@@ -171,7 +174,7 @@ rke remove --config rancher-cluster.yml
 <a id="retrieve-the-backup-and-place-it-on-a-new-node-rke-before-v0.2.0"></a>
 ### 5. Retrieve the Backup and Place it On a New Node

-Before restoring etcd and running `rke up`, we need to retrieve the backup saved on S3 to a new node, e.g. `node3`.
+Before restoring etcd and running `rke up`, we need to retrieve the backup saved on S3 to a new node, e.g. `node3`.

 ```
 # Make a Directory
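The commands are cut off after `# Make a Directory`. A sketch of what this step typically involves, assuming the AWS CLI is available; the bucket and object names are hypothetical:

```
# Make a directory for the snapshot on the new node
mkdir -p /opt/rke/etcd-snapshots

# Copy the snapshot and the certificate bundle down from S3
# (bucket and object names are hypothetical)
aws s3 cp s3://backup-bucket/snapshot.db /opt/rke/etcd-snapshots/snapshot.db
aws s3 cp s3://backup-bucket/pki.bundle.tar.gz /opt/rke/etcd-snapshots/pki.bundle.tar.gz
```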
@@ -238,12 +241,12 @@ $ rke up --config cluster.yml
 Confirm that your Kubernetes cluster is functional by checking the pods on your cluster.

 ```
 > kubectl get pods
 NAME                     READY     STATUS    RESTARTS   AGE
 nginx-65899c769f-kcdpr   1/1       Running   0          17s
 nginx-65899c769f-pc45c   1/1       Running   0          17s
 nginx-65899c769f-qkhml   1/1       Running   0          17s
 ```

-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

@@ -3,10 +3,13 @@ title: One-time Snapshots
 weight: 1
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 One-time snapshots are handled differently depending on your version of RKE.

-{{% tabs %}}
-{{% tab "RKE v0.2.0+" %}}
+<Tabs>
+<TabItem value="RKE v0.2.0+">

 To save a snapshot of etcd from each etcd node in the cluster config file, run the `rke etcd snapshot-save` command.

@@ -19,7 +22,7 @@ The one-time snapshot can be uploaded to a S3 compatible backend by using the ad
 To create a local one-time snapshot, run:

 ```
-$ rke etcd snapshot-save --config cluster.yml --name snapshot-name
+$ rke etcd snapshot-save --config cluster.yml --name snapshot-name
 ```

 **Result:** The snapshot is saved in `/opt/rke/etcd-snapshots`.
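The hunk header above mentions uploading the one-time snapshot to an S3-compatible backend. As a sketch using RKE's S3 flags, with placeholder credentials and bucket name:

```
$ rke etcd snapshot-save --config cluster.yml --name snapshot-name \
  --s3 --access-key S3_ACCESS_KEY --secret-key S3_SECRET_KEY \
  --bucket-name s3-bucket-name --s3-endpoint s3.amazonaws.com
```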
@@ -93,8 +96,8 @@ Below is an [example IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuid

 For details on giving an application access to S3, refer to the AWS documentation on [Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html)

-{{% /tab %}}
-{{% tab "RKE before v0.2.0" %}}
+</TabItem>
+<TabItem value="RKE before v0.2.0">

 To save a snapshot of etcd from each etcd node in the cluster config file, run the `rke etcd snapshot-save` command.

@@ -105,7 +108,7 @@ RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, i
 To create a local one-time snapshot, run:

 ```
-$ rke etcd snapshot-save --config cluster.yml --name snapshot-name
+$ rke etcd snapshot-save --config cluster.yml --name snapshot-name
 ```

 **Result:** The snapshot is saved in `/opt/rke/etcd-snapshots`.
@@ -119,5 +122,5 @@ $ rke etcd snapshot-save --config cluster.yml --name snapshot-name
 | `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK](config-options/#ssh-agent) |
 | `--ignore-docker-version` | [Disable Docker version check](config-options/#supported-docker-versions) |

-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

@@ -3,10 +3,13 @@ title: Recurring Snapshots
 weight: 2
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 Recurring snapshots are handled differently based on your version of RKE.

-{{% tabs %}}
-{{% tab "RKE v0.2.0+"%}}
+<Tabs>
+<TabItem value="RKE v0.2.0+">

 To schedule automatic recurring etcd snapshots, you can enable the `etcd-snapshot` service with [extra configuration options](#options-for-the-etcd-snapshot-service). `etcd-snapshot` runs in a service container alongside the `etcd` container. By default, the `etcd-snapshot` service takes a snapshot for every node that has the `etcd` role and stores them to local disk in `/opt/rke/etcd-snapshots`.

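In RKE v0.2.0+, the recurring snapshot service is configured under `services.etcd.backup_config` in `cluster.yml`. A minimal sketch, with illustrative interval and retention values:

```yaml
services:
  etcd:
    backup_config:
      interval_hours: 12  # take a snapshot every 12 hours
      retention: 6        # keep the 6 most recent snapshots
```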
@@ -95,8 +98,8 @@ services:
 -----END CERTIFICATE-----
 ```

-{{% /tab %}}
-{{% tab "RKE before v0.2.0"%}}
+</TabItem>
+<TabItem value="RKE before v0.2.0">

 To schedule automatic recurring etcd snapshots, you can enable the `etcd-snapshot` service with [extra configuration options](#options-for-the-local-etcd-snapshot-service). `etcd-snapshot` runs in a service container alongside the `etcd` container. By default, the `etcd-snapshot` service takes a snapshot for every node that has the `etcd` role and stores them to local disk in `/opt/rke/etcd-snapshots`.

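For RKE before v0.2.0, the legacy snapshot service is configured directly on the `etcd` service. A sketch consistent with the `retention: 24h` line visible in the next hunk; the creation interval shown is illustrative:

```yaml
services:
  etcd:
    snapshot: true   # enable the etcd-snapshot service container
    creation: 5m0s   # how often a snapshot is taken
    retention: 24h   # how long snapshots are kept on disk
```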
@@ -134,5 +137,5 @@ services:
     retention: 24h
 ```

-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

@@ -3,12 +3,15 @@ title: Restoring from Backup
 weight: 3
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 The details of restoring your cluster from backup are different depending on your version of RKE.

-{{% tabs %}}
-{{% tab "RKE v0.2.0+"%}}
+<Tabs>
+<TabItem value="RKE v0.2.0+">

 If there is a disaster with your Kubernetes cluster, you can use `rke etcd snapshot-restore` to recover your etcd. This command reverts etcd to a specific snapshot and should be run on an etcd node of the specific cluster that has suffered the disaster.

 The following actions will be performed when you run the command:

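The list of actions is truncated at the hunk boundary. For reference, the command itself is invoked roughly as follows; the snapshot name is hypothetical:

```
$ rke etcd snapshot-restore --config cluster.yml --name mysnapshot
```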
@@ -73,10 +76,10 @@ $ rke etcd snapshot-restore \
 | `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK](config-options/#ssh-agent) |
 | `--ignore-docker-version` | [Disable Docker version check](config-options/#supported-docker-versions) |

-{{% /tab %}}
-{{% tab "RKE before v0.2.0"%}}
+</TabItem>
+<TabItem value="RKE before v0.2.0">

 If there is a disaster with your Kubernetes cluster, you can use `rke etcd snapshot-restore` to recover your etcd. This command reverts etcd to a specific snapshot and should be run on an etcd node of the specific cluster that has suffered the disaster.

 The following actions will be performed when you run the command:

@@ -115,5 +118,5 @@ The `pki.bundle.tar.gz` file is also expected to be in the same location.
 | `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK](config-options/#ssh-agent) |
 | `--ignore-docker-version` | [Disable Docker version check](config-options/#supported-docker-versions) |

-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

@@ -2,41 +2,9 @@
 title: Requirements
 weight: 5
 ---
-**In this section:**
-
-<!-- TOC -->
-- [Operating System](#operating-system)
-  - [General Linux Requirements](#general-linux-requirements)
-  - [SUSE Linux Enterprise Server (SLES) / openSUSE](#suse-linux-enterprise-server-sles-opensuse)
-    - [Using Upstream Docker](#using-upstream-docker)
-    - [Using SUSE/openSUSE packaged Docker](#using-suse-opensuse-packaged-docker)
-      - [Adding the Software Repository for Docker](#adding-the-software-repository-for-docker)
-  - [openSUSE MicroOS/Kubic (Atomic)](#opensuse-microos-kubic-atomic)
-    - [openSUSE MicroOS](#opensuse-microos)
-    - [openSUSE Kubic](#opensuse-kubic)
-  - [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel-oracle-linux-ol-centos)
-    - [Using upstream Docker](#using-upstream-docker-1)
-    - [Using RHEL/CentOS packaged Docker](#using-rhel-centos-packaged-docker)
-  - [Red Hat Atomic](#red-hat-atomic)
-    - [OpenSSH version](#openssh-version)
-    - [Creating a Docker Group](#creating-a-docker-group)
-  - [Flatcar Container Linux](#flatcar-container-linux)
-- [Software](#software)
-  - [OpenSSH](#openssh)
-  - [Kubernetes](#kubernetes)
-  - [Docker](#docker)
-    - [Installing Docker](#installing-docker)
-    - [Checking the Installed Docker Version](#checking-the-installed-docker-version)
-- [Hardware](#hardware)
-  - [Worker Role](#worker-role)
-  - [Large Kubernetes Clusters](#large-kubernetes-clusters)
-  - [Etcd clusters](#etcd-clusters)
-- [Ports](#ports)
-  - [Opening port TCP/6443 using `iptables`](#opening-port-tcp-6443-using-iptables)
-  - [Opening port TCP/6443 using `firewalld`](#opening-port-tcp-6443-using-firewalld)
-- [SSH Server Configuration](#ssh-server-configuration)
-
-<!-- /TOC -->
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';

 ## Operating System

@@ -225,8 +193,8 @@ By default, Atomic hosts do not come with a Docker group. You can update the own

 When using Flatcar Container Linux nodes, it is required to use the following configuration in the cluster configuration file:

-{{% tabs %}}
-{{% tab "Canal"%}}
+<Tabs>
+<TabItem value="Canal">

 ```yaml
 rancher_kubernetes_engine_config:
@@ -241,9 +209,9 @@ rancher_kubernetes_engine_config:
     extra_args:
       flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
 ```
-{{% /tab %}}
+</TabItem>

-{{% tab "Calico"%}}
+<TabItem value="Calico">

 ```yaml
 rancher_kubernetes_engine_config:
@@ -258,8 +226,8 @@ rancher_kubernetes_engine_config:
     extra_args:
       flex-volume-plugin-dir: /opt/kubernetes/kubelet-plugins/volume/exec/
 ```
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

 It is also required to enable the Docker service; you can enable the Docker service using the following command:

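The command itself is cut off by the hunk boundary. On a systemd-based OS such as Flatcar Container Linux, it is presumably the standard:

```
systemctl enable docker.service
```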
@@ -3,10 +3,13 @@ title: How Upgrades Work
 weight: 1
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 In this section, you'll learn what happens when you edit or upgrade your RKE Kubernetes cluster. The below sections describe how each type of node is upgraded by default when a cluster is upgraded using `rke up`.

-{{% tabs %}}
-{{% tab "RKE v1.1.0+" %}}
+<Tabs>
+<TabItem value="RKE v1.1.0+">

 The following features are new in RKE v1.1.0:

@@ -50,7 +53,7 @@ When each node in a batch returns to a Ready state, the next batch of nodes begi

 RKE scans the cluster before starting the upgrade to find the powered down or unreachable hosts. The upgrade will stop if that number matches or exceeds the maximum number of unavailable nodes.

-RKE will cordon each node before upgrading it, and uncordon the node afterward. RKE can also be configured to [drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) nodes before upgrading them.
+RKE will cordon each node before upgrading it, and uncordon the node afterward. RKE can also be configured to [drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) nodes before upgrading them.

 RKE will handle all worker node upgrades before upgrading any add-ons. As long as the maximum number of unavailable worker nodes is not reached, RKE will attempt to upgrade the [addons.](#upgrades-of-addons) For example, if a cluster has two worker nodes and one worker node fails, but the maximum unavailable worker nodes is greater than one, the addons will still be upgraded.

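The cordon/drain and unavailability behavior described in this hunk is driven by the `upgrade_strategy` block in `cluster.yml`. A minimal sketch, with illustrative values:

```yaml
upgrade_strategy:
  max_unavailable_worker: 10%      # how many worker nodes may be upgraded at once
  max_unavailable_controlplane: 1  # control plane nodes are upgraded one at a time
  drain: true                      # drain nodes before upgrading instead of only cordoning
```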
@@ -64,8 +67,8 @@ For more information on configuring the number of replicas for each addon, refer

 For an example showing how to configure the addons, refer to the [example cluster.yml.](upgrades/configuring-strategy/#example-cluster-yml)

-{{% /tab %}}
-{{% tab "RKE before v1.1.0" %}}
+</TabItem>
+<TabItem value="RKE before v1.1.0">

 When a cluster is upgraded with `rke up`, using the default options, the following process is used:

@@ -86,5 +89,5 @@ Worker nodes are upgraded simultaneously, in batches of either 50 or the total n

 When a worker node is upgraded, it restarts several Docker processes, including the `kubelet` and `kube-proxy`. When `kube-proxy` comes up, it flushes `iptables`. When this happens, pods on this node can’t be accessed, resulting in downtime for the applications.

-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

@@ -3,6 +3,9 @@ title: Upgrades
 weight: 100
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 After RKE has deployed Kubernetes, you can upgrade the versions of the components in your Kubernetes cluster, the [definition of the Kubernetes services](config-options/services/) or the [add-ons](config-options/add-ons/).

 The default Kubernetes version for each RKE version can be found in the release notes accompanying [the RKE download](https://github.com/rancher/rke/releases/). RKE v1.x should be used.
@@ -22,20 +25,20 @@ In [this section,](upgrades/how-upgrades-work) you'll learn what happens when yo
 - Ensure that any `system_images` configuration is absent from the `cluster.yml`. The Kubernetes version should only be listed under the `system_images` directive if an [unsupported version](#using-an-unsupported-kubernetes-version) is being used. Refer to [Kubernetes version precedence](#kubernetes-version-precedence) for more information.
 - Ensure that the correct files to manage [Kubernetes cluster state](installation/#kubernetes-cluster-state) are present in the working directory. Refer to the tabs below for the required files, which differ based on the RKE version.

-{{% tabs %}}
-{{% tab "RKE v0.2.0+" %}}
+<Tabs>
+<TabItem value="RKE v0.2.0+">
 The `cluster.rkestate` file contains the current state of the cluster including the RKE configuration and the certificates.

 This file is created in the same directory that has the cluster configuration file `cluster.yml`.

 It is required to keep the `cluster.rkestate` file to perform any operation on the cluster through RKE, or when upgrading a cluster last managed via RKE v0.2.0 or later.
-{{% /tab %}}
-{{% tab "RKE before v0.2.0" %}}
+</TabItem>
+<TabItem value="RKE before v0.2.0">
 Ensure that the `kube_config_cluster.yml` file is present in the working directory.

 RKE saves the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates or changes the state, and saves a new secret. The `kube_config_cluster.yml` file is required for upgrading a cluster last managed via RKE v0.1.x.
-{{% /tab %}}
-{{% /tabs %}}
+</TabItem>
+</Tabs>

 ### Upgrading Kubernetes

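The section body is cut off here. As a sketch of the upgrade flow it introduces (the version string is a placeholder; pick one from the release notes):

```
# In cluster.yml, set the desired version, e.g.:
#   kubernetes_version: "v1.21.9-rancher1-1"
# Then apply the change:
$ rke up --config cluster.yml
```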