mirror of https://github.com/rancher/rke1-docs.git
Convert h1 to h2
h1 is intended for the page title. h1s don't generate an entry in the table of contents.
This commit is contained in:
parent e1891f1460
commit b6a591d3cf
@@ -22,7 +22,7 @@ There are a few things worth noting:
- [Add-on Placement](#add-on-placement)
- [Tolerations](#tolerations)

-# Critical and Non-Critical Add-ons
+## Critical and Non-Critical Add-ons

As of version v0.1.7, add-ons are split into two categories:
@@ -30,7 +30,7 @@ As of version v0.1.7, add-ons are split into two categories:
- **Non-critical add-ons:** If these add-ons fail to deploy, RKE will only log a warning and continue deploying any other add-ons. [User-defined add-ons](config-options/add-ons/user-defined-add-ons/) are considered non-critical.

-# Add-on Deployment Jobs
+## Add-on Deployment Jobs

RKE uses Kubernetes jobs to deploy add-ons. In some cases, add-on deployment takes longer than expected. As of version v0.1.7, RKE provides an option to control the job check timeout in seconds. This timeout is set at the cluster level.
@@ -38,7 +38,7 @@ RKE uses Kubernetes jobs to deploy add-ons. In some cases, add-ons deployment ta
addon_job_timeout: 30
```

-# Add-on Placement
+## Add-on Placement

_Applies to v0.2.3 and higher_
@@ -53,7 +53,7 @@ _Applies to v0.2.3 and higher_
| nginx-ingress | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |
| metrics-server | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |

-# Tolerations
+## Tolerations

_Available as of v1.2.4_
@@ -2,7 +2,7 @@
title: DNS providers
---

-# Available DNS Providers
+## Available DNS Providers

RKE provides the following DNS providers that can be deployed as add-ons:
@@ -23,7 +23,7 @@ If you switch from one DNS provider to another, the existing DNS provider will b
:::

-# Disabling Deployment of a DNS Provider
+## Disabling Deployment of a DNS Provider

_Available as of v0.2.0_
@@ -34,7 +34,7 @@ dns:
provider: none
```

-# CoreDNS
+## CoreDNS

_Available as of v0.2.5_
@@ -119,7 +119,7 @@ kubectl -n kube-system get deploy coredns -o jsonpath='{.spec.template.spec.tole
kubectl -n kube-system get deploy coredns-autoscaler -o jsonpath='{.spec.template.spec.tolerations}'
```

-# kube-dns
+## kube-dns

RKE will deploy kube-dns as a Deployment with the default replica count of 1. The pod consists of 3 containers: `kubedns`, `dnsmasq` and `sidecar`. RKE will also deploy kube-dns-autoscaler as a Deployment, which will scale the kube-dns Deployment by using the number of cores and nodes. Please see [Linear Mode](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler#linear-mode) for more information about this logic.
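Selecting kube-dns as the cluster DNS provider is done under the `dns` directive in `cluster.yml`. A minimal sketch, assuming the `upstreamnameservers` option; the resolver addresses are placeholders:

```yaml
dns:
  provider: kube-dns
  # optional: upstream resolvers the DNS service forwards to (placeholder addresses)
  upstreamnameservers:
    - 1.1.1.1
    - 8.8.4.4
```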
@@ -206,7 +206,7 @@ kubectl get deploy kube-dns-autoscaler -n kube-system -o jsonpath='{.spec.templa

-# NodeLocal DNS
+## NodeLocal DNS

_Available as of v1.1.0_
@@ -15,19 +15,19 @@ After you launch the cluster, you cannot change your network provider. Therefore
:::

-# Changing the Default Network Plug-in
+## Changing the Default Network Plug-in

By default, the network plug-in is `canal`. If you want to use another network plug-in, you need to specify which network plug-in to enable at the cluster level in the `cluster.yml`.

```yaml
-# Setting the flannel network plug-in
+## Setting the flannel network plug-in
network:
plugin: flannel
```

The images used for network plug-ins are under the [`system_images` directive](config-options/system-images/). For each Kubernetes version, there are default images associated with each network plug-in, but these can be overridden by changing the image tag in `system_images`.
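As an illustration of such an override, a `cluster.yml` fragment might look like the sketch below; the image name and tag are placeholders, and the exact `system_images` keys vary by RKE version.

```yaml
system_images:
  # override the default flannel image for this Kubernetes version (placeholder tag)
  flannel: rancher/coreos-flannel:v0.13.0
```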
-# Disabling Deployment of a Network Plug-in
+## Disabling Deployment of a Network Plug-in

You can disable deploying a network plug-in by specifying `none` to the network `plugin` directive in the cluster configuration.
@@ -36,7 +36,7 @@ network:
plugin: none
```

-# Network Plug-in Options
+## Network Plug-in Options

Besides the different images that could be used to deploy network plug-ins, certain network plug-ins support additional options that can be used to customize their behavior.
@@ -45,7 +45,7 @@ Besides the different images that could be used to deploy network plug-ins, cert
- [Calico](#calico)
- [Weave](#weave)

-# Canal
+## Canal

### Canal Network Plug-in Options
@@ -91,7 +91,7 @@ To check for applied tolerations on the `calico-kube-controllers` Deployment, us
kubectl -n kube-system get deploy calico-kube-controllers -o jsonpath='{.spec.template.spec.tolerations}'
```

-# Flannel
+## Flannel
### Flannel Network Plug-in Options

```yaml
@@ -110,7 +110,7 @@ By setting the `flannel_iface`, you can configure the interface to use for inter
The `flannel_backend_type` option allows you to specify the type of [flannel backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md) to use. By default the `vxlan` backend is used.
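A rough sketch of how these flannel options fit together in `cluster.yml`; the interface name below is a placeholder for whatever host interface carries inter-host traffic in your environment:

```yaml
network:
  plugin: flannel
  options:
    # host interface used for inter-host communication (placeholder name)
    flannel_iface: eth1
    # flannel backend type; vxlan is the default
    flannel_backend_type: vxlan
```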
-# Calico
+## Calico

### Calico Network Plug-in Options
@@ -157,7 +157,7 @@ To check for applied tolerations on the `calico-kube-controllers` Deployment, us
kubectl -n kube-system get deploy calico-kube-controllers -o jsonpath='{.spec.template.spec.tolerations}'
```

-# Weave
+## Weave
### Weave Network Plug-in Options

```yaml
@@ -174,6 +174,6 @@ network:

Weave encryption can be enabled by passing a string password to the network provider config.
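A minimal sketch of what that looks like in `cluster.yml`, assuming the `weave_network_provider` block; the password value is a placeholder and should be replaced with your own secret:

```yaml
network:
  plugin: weave
  weave_network_provider:
    # enables encryption between Weave peers (placeholder value)
    password: "changeme-weave-password"
```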
-# Custom Network Plug-ins
+## Custom Network Plug-ins

It is possible to add a custom network plug-in by using the [user-defined add-on functionality](config-options/add-ons/user-defined-add-ons/) of RKE. In the `addons` field, you can add the add-on manifest of a cluster that has the network plug-in that you want, as shown in [this example.](config-options/add-ons/network-plugins/custom-network-plugin-example)
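As a hedged illustration, a `cluster.yml` could disable the built-in plug-ins and pull in a custom CNI manifest through the user-defined add-on mechanism; the manifest URL below is a placeholder:

```yaml
network:
  plugin: none
addons_include:
  # path or URL to the custom CNI manifest deployed as a user-defined add-on (placeholder)
  - https://example.com/custom-cni-manifest.yaml
```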
@@ -14,7 +14,7 @@ This documentation reflects the new vSphere Cloud Provider configuration schema
:::

-# vSphere Configuration Example
+## vSphere Configuration Example

Given the following:
@@ -45,7 +45,8 @@ rancher_kubernetes_engine_config:
resourcepool-path: /eu-west-1/host/hn1/resources/myresourcepool
```

-# Configuration Options
+## Configuration Options

The vSphere configuration options are divided into 5 groups:
@@ -15,12 +15,12 @@ When provisioning Kubernetes using RKE CLI or using [RKE clusters](https://ranch
- **Storage:** If you are setting up storage, see the [official vSphere documentation on storage for Kubernetes,](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) or the [official Kubernetes documentation on persistent volumes.](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) If you are using Rancher, refer to the [Rancher documentation on provisioning storage in vSphere.](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/vsphere-storage#docusaurus_skipToContent_fallback)
- **For Rancher users:** Refer to the Rancher documentation on [creating vSphere Kubernetes clusters](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-ranchernode-pools/vsphere) and [provisioning storage.](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/provisioning-storage-examples/vsphere-storage#docusaurus_skipToContent_fallback)

-# Prerequisites
+## Prerequisites

- **Credentials:** You'll need the credentials of a vCenter/ESXi user account with privileges allowing the cloud provider to interact with the vSphere infrastructure to provision storage. Refer to [this document](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/vcp-roles.html) to create and assign a role with the required permissions in vCenter.
- **VMware Tools** must be running in the Guest OS for all nodes in the cluster.
- **Disk UUIDs:** All nodes must be configured with disk UUIDs. This is required so that attached VMDKs present a consistent UUID to the VM, allowing the disk to be mounted properly. See the section on [enabling disk UUIDs.](config-options/cloud-providers/vsphere/enabling-uuid)

-# Enabling the vSphere Provider with the RKE CLI
+## Enabling the vSphere Provider with the RKE CLI

To enable the vSphere Cloud Provider in the cluster, you must add the top-level `cloud_provider` directive to the cluster configuration file, set the `name` property to `vsphere`, and add the `vsphereCloudProvider` directive containing the configuration matching your infrastructure. See the [configuration reference](config-options/cloud-providers/vsphere/config-reference) for the gory details.
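A heavily abbreviated sketch of that structure; the vCenter hostname, credentials, datacenter, datastore and folder below are all placeholders, and the full set of options is in the configuration reference:

```yaml
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    virtual_center:
      # placeholder vCenter hostname and credentials
      vc.example.com:
        user: vc-user
        password: vc-password
        datacenters: /eu-west-1
    workspace:
      server: vc.example.com
      datacenter: /eu-west-1
      default-datastore: mydatastore
      folder: myvmfolder
```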
@@ -4,7 +4,7 @@ title: Nodes

The `nodes` directive is the only required section in the `cluster.yml` file. It's used by RKE to specify the cluster node(s), the SSH credentials used to access the node(s), and which roles these nodes will have in the Kubernetes cluster.

-# Node Configuration Example
+## Node Configuration Example

The following example shows node configuration in an example `cluster.yml`:
@@ -52,7 +52,7 @@ nodes:
app: ingress
```

-# Kubernetes Roles
+## Kubernetes Roles

You can specify the list of roles that you want the node to have in the Kubernetes cluster. Three roles are supported: `controlplane`, `etcd` and `worker`. Node roles are not mutually exclusive. It's possible to assign any combination of roles to any node. It's also possible to change a node's role using the upgrade process.
@@ -82,7 +82,7 @@ Taint Key | Taint Value | Taint Effect

With this role, any workloads or pods that are deployed will land on these nodes.

-# Node Options
+## Node Options

Within each node, there are multiple directives that can be used.
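A short, non-exhaustive sketch of a node entry using several of these directives; the addresses, user and key path are placeholders:

```yaml
nodes:
  - address: 203.0.113.10        # placeholder public address
    internal_address: 10.0.0.10  # placeholder private address
    user: ubuntu
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa
    hostname_override: node-1
    labels:
      app: ingress
```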
@@ -40,7 +40,8 @@ services:
- identity: {}
```

-# Managed At-Rest Data Encryption
+## Managed At-Rest Data Encryption

Enabling and disabling at-rest data encryption in Kubernetes is a relatively complex process that requires several steps to be performed by the Kubernetes cluster administrator. The managed configuration aims to reduce this overhead and provides a simple abstraction layer to manage the process.
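Enabling the managed configuration is a one-line switch under the `kube-api` service in `cluster.yml`; a minimal sketch:

```yaml
services:
  kube-api:
    secrets_encryption_config:
      # when true, RKE generates and manages the encryption provider configuration
      enabled: true
```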
@@ -81,8 +82,8 @@ Once encryption is disabled in `cluster.yml`, RKE will perform the following [ac
- Update `kube-apiserver` arguments to remove the encryption provider configuration and restart the `kube-apiserver`.
- Remove the provider configuration file.

-# Key Rotation
+## Key Rotation

Sometimes there is a need to rotate the encryption config in your cluster, for example when a key is compromised. There are two ways to rotate the keys: with an RKE CLI command, or by disabling and re-enabling encryption in `cluster.yml`.

### Rotating Keys with the RKE CLI
@@ -114,7 +115,8 @@ This command will perform the following actions:

For a cluster with encryption enabled, you can rotate the encryption keys by updating `cluster.yml`. If you disable and re-enable the data encryption in the `cluster.yml`, RKE will not reuse old keys. Instead, it will generate new keys every time, yielding the same result as a key rotation with the RKE CLI.

-# Custom At-Rest Data Encryption Configuration
+## Custom At-Rest Data Encryption Configuration

With managed configuration, RKE provides the user with a very simple way to enable and disable encryption with minimal interaction and configuration. However, it doesn't allow for any customization to the configuration.

With custom encryption configuration, RKE allows the user to provide their own configuration. Although RKE will help the user to deploy the configuration and rewrite the secrets if needed, it doesn't validate the configuration on the user's behalf. It's the user's responsibility to make sure their configuration is valid.
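A sketch of what a custom configuration could look like under `custom_config`; the provider list, key name and base64 secret are purely illustrative and must be replaced with your own valid `EncryptionConfiguration`:

```yaml
services:
  kube-api:
    secrets_encryption_config:
      enabled: true
      custom_config:
        apiVersion: apiserver.config.k8s.io/v1
        kind: EncryptionConfiguration
        resources:
          - resources:
              - secrets
            providers:
              # illustrative base64-encoded AES key; generate your own
              - aescbc:
                  keys:
                    - name: key1
                      secret: c2VjcmV0LWlzLXNlY3VyZQ==
              - identity: {}
```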
@@ -12,19 +12,19 @@ RKE can upload your snapshots to a S3 compatible backend.

**Note:** As of RKE v0.2.0, the `pki.bundle.tar.gz` file is no longer required because of a change in how the [Kubernetes cluster state is stored](installation/#kubernetes-cluster-state).

-# Backing Up a Cluster
+## Backing Up a Cluster

You can create [one-time snapshots](etcd-snapshots/one-time-snapshots) to back up your cluster, and you can also configure [recurring snapshots](etcd-snapshots/recurring-snapshots).

-# Restoring a Cluster from Backup
+## Restoring a Cluster from Backup

You can use RKE to [restore your cluster from backup](etcd-snapshots/restoring-from-backup).

-# Example Scenarios
+## Example Scenarios

These [example scenarios](etcd-snapshots/example-scenarios) for backup and restore are different based on your version of RKE.

-# How Snapshots Work
+## How Snapshots Work

For each etcd node in the cluster, the etcd cluster health is checked. If the node reports that the etcd cluster is healthy, a snapshot is created from it and optionally uploaded to S3.
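A rough sketch of how recurring snapshots with S3 upload are configured under the `etcd` service; the bucket, region, folder and credentials are placeholders:

```yaml
services:
  etcd:
    backup_config:
      # take a snapshot every 12 hours and keep the last 6
      interval_hours: 12
      retention: 6
      s3backupconfig:
        access_key: S3_ACCESS_KEY        # placeholder credentials
        secret_key: S3_SECRET_KEY
        bucket_name: rke-etcd-snapshots  # placeholder bucket
        region: us-east-1
        endpoint: s3.amazonaws.com
        folder: mycluster
```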