mirror of https://github.com/rancher/rke1-docs.git
Convert admonitions to Docusaurus syntax
This commit is contained in:
parent 7660fc3c45
commit 19474b77f7
@@ -5,7 +5,11 @@ weight: 150

_Available as of v0.2.0_

> **Note:** This is not "TLS Certificates management in Kubernetes". Refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) and RKE [cluster.yaml example](example-yamls/) for more details.

:::note

This is not "TLS Certificates management in Kubernetes". Refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/) and RKE [cluster.yaml example](example-yamls/) for more details.

:::

Certificates are an important part of Kubernetes clusters and are used for all Kubernetes cluster components. RKE has a `rke cert` command to help work with certificates.
@@ -46,13 +50,13 @@ To rotate the service certificates for all the Kubernetes services, run the foll

```
$ rke cert rotate
INFO[0000] Initiating Kubernetes cluster
INFO[0000] Rotating Kubernetes cluster certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Kube Controller certificates
INFO[0000] [certificates] Generating Kube Scheduler certificates
INFO[0001] [certificates] Generating Kube Proxy certificates
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0001] [certificates] Generating etcd-xxxxx certificate and key
@@ -72,9 +76,9 @@ Example of rotating the certificate for only the `kubelet`:

```
$ rke cert rotate --service kubelet
INFO[0000] Initiating Kubernetes cluster
INFO[0000] Rotating Kubernetes cluster certificates
INFO[0000] [certificates] Generating Node certificate
INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
INFO[0000] Rebuilding Kubernetes cluster with rotated certificates
.....
@@ -92,16 +96,16 @@ Rotating the CA certificate will result in restarting other system pods, that wi

- KubeDNS pods

```
$ rke cert rotate --rotate-ca
INFO[0000] Initiating Kubernetes cluster
INFO[0000] Rotating Kubernetes cluster certificates
INFO[0000] [certificates] Generating CA kubernetes certificates
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Kube Controller certificates
INFO[0000] [certificates] Generating Kube Scheduler certificates
INFO[0000] [certificates] Generating Kube Proxy certificates
INFO[0000] [certificates] Generating Node certificate
INFO[0000] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0001] [certificates] Generating etcd-xxxxx certificate and key
@@ -18,7 +18,11 @@ RKE provides the following DNS providers that can be deployed as add-ons:

CoreDNS was made the default in RKE v0.2.5 when using Kubernetes 1.14 and higher. If you are using an RKE version lower than v0.2.5, kube-dns will be deployed by default.

> **Note:** If you switch from one DNS provider to another, the existing DNS provider will be removed before the new one is deployed.

:::note

If you switch from one DNS provider to another, the existing DNS provider will be removed before the new one is deployed.

:::

# Disabling Deployment of a DNS Provider
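The DNS provider that the note above refers to is selected through the `dns` directive in `cluster.yml`. A minimal sketch (the provider name is one of the supported values; `none` disables deployment of a DNS provider altogether):

```yaml
dns:
  provider: coredns   # or kube-dns; set to "none" to skip deploying a DNS provider
```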
@@ -207,11 +211,13 @@ kubectl get deploy kube-dns-autoscaler -n kube-system -o jsonpath='{.spec.templa

_Available as of v1.1.0_

> **Note:** The option to enable NodeLocal DNS is available for:
>
> * Kubernetes v1.15.11 and up
> * Kubernetes v1.16.8 and up
> * Kubernetes v1.17.4 and up

:::note The option to enable NodeLocal DNS is available for:

- Kubernetes v1.15.11 and up
- Kubernetes v1.16.8 and up
- Kubernetes v1.17.4 and up

:::

NodeLocal DNS is an additional component that can be deployed on each node to improve DNS performance. It is not a replacement for the `provider` parameter, you will still need to have one of the available DNS providers configured. See [Using NodeLocal DNSCache in Kubernetes clusters](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) for more information on how NodeLocal DNS works.
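For reference, NodeLocal DNS is enabled under the same `dns` directive that selects the provider; a minimal sketch (the link-local IP shown is the same one used elsewhere in this doc):

```yaml
dns:
  provider: coredns
  nodelocal:
    ip_address: "169.254.20.10"
```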
@@ -228,7 +234,11 @@ dns:

ip_address: "169.254.20.10"
```

> **Note:** When enabling NodeLocal DNS on an existing cluster, pods that are currently running will not be modified, the updated `/etc/resolv.conf` configuration will take effect only for pods started after enabling NodeLocal DNS.

:::note

When enabling NodeLocal DNS on an existing cluster, pods that are currently running will not be modified, the updated `/etc/resolv.conf` configuration will take effect only for pods started after enabling NodeLocal DNS.

:::

### NodeLocal Priority Class Name
@@ -248,4 +258,8 @@ dns:

By removing the `ip_address` value, NodeLocal DNS will be removed from the cluster.

> **Warning:** When removing NodeLocal DNS, a disruption to DNS can be expected. The updated `/etc/resolv.conf` configuration will take effect only for pods that are started after removing NodeLocal DNS. In general pods using the default `dnsPolicy: ClusterFirst` will need to be re-deployed.

:::caution

When removing NodeLocal DNS, a disruption to DNS can be expected. The updated `/etc/resolv.conf` configuration will take effect only for pods that are started after removing NodeLocal DNS. In general pods using the default `dnsPolicy: ClusterFirst` will need to be re-deployed.

:::
@@ -11,11 +11,19 @@ import TabItem from '@theme/TabItem';

By default, RKE deploys the NGINX ingress controller on all schedulable nodes.

> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes.

:::note

As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes.

:::

RKE will deploy the ingress controller as a DaemonSet with `hostNetwork: true`, so ports `80` and `443` will be opened on each node where the controller is deployed.

> **Note:** As of v1.1.11, the network options of the ingress controller are configurable. See [Configuring network options](#configuring-network-options).

:::note

As of v1.1.11, the network options of the ingress controller are configurable. See [Configuring network options](#configuring-network-options).

:::

The images used for the ingress controller are set under the [`system_images` directive](config-options/system-images/). For each Kubernetes version, there are default images associated with the ingress controller, but these can be overridden by changing the image tag in `system_images`.
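A sketch of the configurable network options mentioned in the note above (the port values are illustrative, and `hostPort` is one of the supported modes):

```yaml
ingress:
  provider: nginx
  network_mode: hostPort
  http_port: 9090
  https_port: 9443
```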
@@ -105,7 +113,11 @@ ingress:

default_backend: false
```

> **What happens if the field is omitted?** The value of `default_backend` will default to `true`. This maintains behavior with older versions of `rke`. However, a future version of `rke` will change the default value to `false`.

:::info What happens if the field is omitted?

The value of `default_backend` will default to `true`. This maintains behavior with older versions of `rke`. However, a future version of `rke` will change the default value to `false`.

:::

### Configuring network options
@@ -156,10 +168,12 @@ When configuring an ingress object with TLS termination, you must provide it wit

Setting up a default certificate is especially helpful in environments where a wildcard certificate is used, as the certificate can be applied in multiple subdomains.

>**Prerequisites:**
>
>- Access to the `cluster.yml` used to create the cluster.
>- The PEM encoded certificate you will use as the default certificate.

:::note Prerequisites

- Access to the `cluster.yml` used to create the cluster.
- The PEM encoded certificate you will use as the default certificate.

:::

1. Obtain or generate your certificate key pair in a PEM encoded form.
@@ -10,7 +10,11 @@ RKE provides the following network plug-ins that are deployed as add-ons:

- Canal
- Weave

> After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn’t allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.

:::caution

After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn’t allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you to tear down the entire cluster and all its applications.

:::

# Changing the Default Network Plug-in
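The plug-in itself is chosen with the `network` directive in `cluster.yml`; a minimal sketch (Canal shown as an example of the supported plug-ins):

```yaml
network:
  plugin: canal
```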
@@ -10,7 +10,11 @@ There are two ways that you can specify an add-on.

- [In-line Add-ons](#in-line-add-ons)
- [Referencing YAML Files for Add-ons](#referencing-yaml-files-for-add-ons)

> **Note:** When using user-defined add-ons, you *must* define a namespace for *all* your resources, otherwise they will end up in the `kube-system` namespace.

:::note

When using user-defined add-ons, you *must* define a namespace for *all* your resources, otherwise they will end up in the `kube-system` namespace.

:::

RKE uploads the YAML manifest as a configmap to the Kubernetes cluster. Then, it runs a Kubernetes job that mounts the configmap and deploys the add-on using `kubectl apply -f`.
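A minimal sketch of an in-line add-on that respects the namespace requirement described in the note above (the resource names are placeholders):

```yaml
addons: |-
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: example-addons
  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: example-config
    namespace: example-addons
  data:
    greeting: "hello"
```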
@@ -41,7 +45,7 @@ addons: |-

```

## Referencing YAML files for Add-ons
Use the `addons_include` directive to reference a local file or a URL for any user-defined add-ons.

```yaml
addons_include:
@@ -9,7 +9,11 @@ The vSphere cloud provider must be enabled to allow dynamic provisioning of volu

For more details on deploying a Kubernetes cluster on vSphere, refer to the [official cloud provider documentation.](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html)

> **Note:** This documentation reflects the new vSphere Cloud Provider configuration schema introduced in Kubernetes v1.9 which differs from previous versions.

:::note

This documentation reflects the new vSphere Cloud Provider configuration schema introduced in Kubernetes v1.9 which differs from previous versions.

:::

# vSphere Configuration Example
@@ -84,7 +88,11 @@ Each vCenter is defined by adding a new entry under the `virtual_center` directi

| datacenters | string | * | Comma-separated list of all datacenters in which cluster nodes are running. |
| soap-roundtrip-count | uint | | Round tripper count for API requests to the vCenter (num retries = value - 1). |

> The following additional options (introduced in Kubernetes v1.11) are not yet supported in RKE.

:::note

The following additional options (introduced in Kubernetes v1.11) are not yet supported in RKE.

:::

| virtual_center Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:-------|
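A minimal sketch of the `virtual_center` directive these options belong to (the vCenter address, credentials and datacenter name are placeholders):

```yaml
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    virtual_center:
      vc.example.com:
        user: vc-user
        password: vc-password
        port: 443
        datacenters: dc-1
```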
@@ -57,7 +57,11 @@ nodes:

You can specify the list of roles that you want the node to be as part of the Kubernetes cluster. Three roles are supported: `controlplane`, `etcd` and `worker`. Node roles are not mutually exclusive. It's possible to assign any combination of roles to any node. It's also possible to change a node's role using the upgrade process.

> **Note:** Before v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes.

:::note

Before v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes.

:::

### etcd
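The roles described above are assigned per node in `cluster.yml`; a minimal sketch (addresses and the SSH user are placeholders):

```yaml
nodes:
  - address: 203.0.113.11
    user: ubuntu
    role:
      - controlplane
      - etcd
  - address: 203.0.113.12
    user: ubuntu
    role:
      - worker
```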
@@ -95,7 +99,11 @@ The `internal_address` provides the ability to have nodes with multiple addresse

The `hostname_override` is used to be able to provide a friendly name for RKE to use when registering the node in Kubernetes. This hostname doesn't need to be a routable address, but it must be a valid [Kubernetes resource name](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). If the `hostname_override` isn't set, then the `address` directive is used when registering the node in Kubernetes.

> **Note:** When [cloud providers](config-options/cloud-providers/) are configured, you may need to override the hostname in order to use the cloud provider correctly. There is an exception for the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws), where the `hostname_override` field will be explicitly ignored.

:::note

When [cloud providers](config-options/cloud-providers/) are configured, you may need to override the hostname in order to use the cloud provider correctly. There is an exception for the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws), where the `hostname_override` field will be explicitly ignored.

:::

### SSH Port
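A sketch of a node entry combining the address fields discussed above (all values are placeholders):

```yaml
nodes:
  - address: 203.0.113.21        # public address used for SSH
    internal_address: 10.0.0.21  # address used for intra-cluster traffic
    hostname_override: worker-1  # friendly name registered in Kubernetes
    user: ubuntu
    role:
      - worker
```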
@@ -109,7 +117,11 @@ For each node, you specify the `user` to be used when connecting to this node. T

For each node, you specify the path, i.e. `ssh_key_path`, for the SSH private key to be used when connecting to this node. The default key path for each node is `~/.ssh/id_rsa`.

> **Note:** If you have a private key that can be used across all nodes, you can set the [SSH key path at the cluster level](config-options/#cluster-level-ssh-key-path). The SSH key path set in each node will always take precedence.

:::note

If you have a private key that can be used across all nodes, you can set the [SSH key path at the cluster level](config-options/#cluster-level-ssh-key-path). The SSH key path set in each node will always take precedence.

:::

### SSH Key
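A sketch of how the cluster-level and node-level key paths interact (paths are placeholders; the per-node value wins where both are set):

```yaml
ssh_key_path: ~/.ssh/cluster_key   # used for every node unless overridden
nodes:
  - address: 203.0.113.31
    user: ubuntu
    role:
      - worker
    ssh_key_path: ~/.ssh/worker_key  # takes precedence for this node
```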
@@ -3,7 +3,7 @@ title: Private Registries

weight: 215
---

RKE supports the ability to configure multiple private Docker registries in the `cluster.yml`. By passing in your registry and credentials, it allows the nodes to pull images from these private registries.

```yaml
private_registries:
@@ -17,7 +17,11 @@ private_registries:

If you are using a Docker Hub registry, you can omit the `url` or set it to `docker.io`.

> **Note:** Although the directive is named `url`, there is no need to prefix the host or IP address with `https://`.

:::note

Although the directive is named `url`, there is no need to prefix the host or IP address with `https://`.

:::

Valid `url` examples include:
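Beyond the bare `url` forms listed next, a full registry entry in `cluster.yml` looks roughly like this (host and credentials are placeholders; note the bare host name with no `https://` prefix):

```yaml
private_registries:
  - url: registry.example.com
    user: Username
    password: password
```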
@@ -37,21 +41,21 @@ private_registries:

- url: registry.com
user: Username
password: password
is_default: true # All system images will be pulled using this registry.
```

### Air-gapped Setups

By default, all system images are being pulled from DockerHub. If you are on a system that does not have access to DockerHub, you will need to create a private registry that is populated with all the required [system images](config-options/system-images/).

As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images](config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry.

Before v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images](config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name.

### Amazon Elastic Container Registry (ECR) Private Registry Setup

[Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) is an AWS managed container image registry service that is secure, scalable, and reliable. There are two ways in which to provide ECR credentials to set up your ECR private registry: using an instance profile or adding a configuration snippet, which are hard-coded credentials in environment variables for the `kubelet` and credentials under the `ecrCredentialPlugin`.

- **Instance Profile**: An instance profile is the preferred and more secure approach to provide ECR credentials (when running in EC2, etc.). The instance profile will be autodetected and used by default. For more information on configuring an instance profile with ECR permissions, go [here](https://docs.aws.amazon.com/AmazonECR/latest/userguide/security-iam.html).
@@ -61,12 +65,14 @@ Before v0.1.10, you had to configure your private registry credentials **and** u

- Node is an EC2 instance but does not have an instance profile configured
- Node is an EC2 instance and has an instance profile configured but has no permissions for ECR

> **Note:** The ECR credentials are only used in the `kubelet` and `ecrCredentialPlugin` areas. This is important to remember if you have issues while creating a new cluster or when pulling images during reconcile/upgrades.
>
> - Kubelet: For add-ons, custom workloads, etc., the instance profile or credentials are used by the
> downstream cluster nodes
> - Pulling system images (directly via Docker): For bootstrap, upgrades, reconcile, etc., the instance profile
> or credentials are used by nodes running RKE or running the Rancher pods.

:::note

The ECR credentials are only used in the `kubelet` and `ecrCredentialPlugin` areas. This is important to remember if you have issues while creating a new cluster or when pulling images during reconcile/upgrades.

- Kubelet: For add-ons, custom workloads, etc., the instance profile or credentials are used by the downstream cluster nodes
- Pulling system images (directly via Docker): For bootstrap, upgrades, reconcile, etc., the instance profile or credentials are used by nodes running RKE or running the Rancher pods.

:::

```
# Configuration snippet to be used when the instance profile is unavailable.
@@ -78,8 +84,7 @@ Before v0.1.10, you had to configure your private registry credentials **and** u

private_registries:
- url: ACCOUNTID.dkr.ecr.REGION.amazonaws.com
is_default: true
ecrCredentialPlugin:
aws_access_key_id: "ACCESSKEY"
aws_secret_access_key: "SECRETKEY"
```
@@ -120,7 +120,11 @@ With managed configuration, RKE provides the user with a very simple way to enab

With custom encryption configuration, RKE allows the user to provide their own configuration. Although RKE will help the user to deploy the configuration and rewrite the secrets if needed, it doesn't provide configuration validation on the user's behalf. It's the user's responsibility to make sure their configuration is valid.

>**Warning:** Using invalid Encryption Provider Configuration could cause several issues with your cluster, ranging from crashing the Kubernetes API service, `kube-api`, to completely losing access to encrypted data.

:::caution

Using invalid Encryption Provider Configuration could cause several issues with your cluster, ranging from crashing the Kubernetes API service, `kube-api`, to completely losing access to encrypted data.

:::

### Example: Using Custom Encryption Configuration with User Provided 32-byte Random Key
@@ -5,7 +5,11 @@ weight: 232

By default, RKE will launch etcd servers, but RKE also supports being able to use an external etcd. RKE only supports connecting to a TLS enabled etcd setup.

> **Note:** RKE will not accept having external etcd servers in conjunction with [nodes](config-options/nodes/) with the `etcd` role.

:::note

RKE will not accept having external etcd servers in conjunction with [nodes](config-options/nodes/) with the `etcd` role.

:::

```yaml
services:
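# A sketch of how an external etcd configuration typically continues under
# "services:" above; the endpoint URL is a placeholder and the TLS material
# is elided:
  etcd:
    path: /etcdcluster
    external_urls:
      - https://etcd.example.com:2379
    ca_cert: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    cert: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----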
@@ -6,9 +6,13 @@ weight: 230

To deploy Kubernetes, RKE deploys several core components or services in Docker containers on the nodes. Based on the roles of the node, the containers deployed may be different.

>**Note:** All services support <b>additional custom arguments, Docker mount binds, and extra environment variables.</b>
>
>To configure advanced options for Kubernetes services such as `kubelet`, `kube-controller`, and `kube-apiserver` that are not documented below, see the [`extra_args` documentation](config-options/services/services-extras/) for more details.

:::note

All services support <b>additional custom arguments, Docker mount binds, and extra environment variables.</b>

To configure advanced options for Kubernetes services such as `kubelet`, `kube-controller`, and `kube-apiserver` that are not documented below, see the [`extra_args` documentation](config-options/services/services-extras/) for more details.

:::

| Component | Services key name in cluster.yml |
|-------------------------|----------------------------------|
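A sketch of what those per-service knobs look like in `cluster.yml` (the flag, bind mount and proxy variable are illustrative values, not required settings):

```yaml
services:
  kubelet:
    extra_args:
      max-pods: "250"
    extra_binds:
      - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
    extra_env:
      - "HTTP_PROXY=http://proxy.example.com:3128"
```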
@@ -31,7 +35,11 @@ By default, RKE will deploy a new etcd service, but you can also run Kubernetes

## Kubernetes API Server

> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File](https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration#rke-cluster-config-file-reference) when creating [Rancher Launched Kubernetes](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-rancher), the names of services should contain underscores only: `kube_api`. This only applies to Rancher v2.0.5 and v2.0.6.

:::note Note for Rancher 2 users

If you are configuring Cluster Options using a [Config File](https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration#rke-cluster-config-file-reference) when creating [Rancher Launched Kubernetes](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-rancher), the names of services should contain underscores only: `kube_api`. This only applies to Rancher v2.0.5 and v2.0.6.

:::

The [Kubernetes API](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) REST service, which handles requests and data for all Kubernetes objects and provides shared state for all the other Kubernetes components.
@@ -63,7 +71,11 @@ RKE supports the following options for the `kube-api` service :

- **Secrets Encryption Config** (`secrets_encryption_config`) - Manage Kubernetes at-rest data encryption. Documented [here](../secrets-encryption/secrets-encryption.md)

## Kubernetes Controller Manager

> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File](https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration#rke-cluster-config-file-reference) when creating [Rancher Launched Kubernetes](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-rancher), the names of services should contain underscores only: `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.

:::note Note for Rancher 2 users

If you are configuring Cluster Options using a [Config File](https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration#rke-cluster-config-file-reference) when creating [Rancher Launched Kubernetes](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-rancher), the names of services should contain underscores only: `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.

:::

The [Kubernetes Controller Manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) service is the component responsible for running Kubernetes main control loops. The controller manager monitors the cluster desired state through the Kubernetes API server and makes the necessary changes to the current state to reach the desired state.
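A sketch of the underscore naming that these two notes describe, as it would appear in a Rancher v2.0.5/v2.0.6 cluster config file (the CIDR values are illustrative):

```yaml
services:
  kube_api:
    service_cluster_ip_range: 10.43.0.0/16
  kube_controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
```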
@@ -2,19 +2,23 @@

title: System Images
weight: 225
---

When RKE is deploying Kubernetes, there are several images that are pulled. These images are used as Kubernetes system components as well as helping to deploy these system components.

As of `v0.1.6`, the functionality of a couple of the system images was consolidated into a single `rancher/rke-tools` image to simplify and speed the deployment process.

You can configure the [network plug-ins](config-options/add-ons/network-plugins/), [ingress controller](config-options/add-ons/ingress-controllers/) and [dns provider](config-options/add-ons/dns/) as well as the options for these add-ons separately in the `cluster.yml`.

Below is an example of the list of system images used to deploy Kubernetes through RKE. The default versions of Kubernetes are tied to specific versions of system images.

- For RKE v0.2.x and below, the map of versions and the system image versions is located here: https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go

- For RKE v0.3.0 and above, the map of versions and the system image versions is located here: https://github.com/rancher/kontainer-driver-metadata/blob/release-v2.7/rke/k8s_rke_system_images.go

> **Note:** As versions of RKE are released, the tags on these images will no longer be up to date. This list is specific for `v1.10.3-rancher2`.

:::note

As versions of RKE are released, the tags on these images will no longer be up to date. This list is specific for `v1.10.3-rancher2`.

:::

```yaml
system_images:
@@ -76,7 +76,11 @@ nodes:

### 4. Restore etcd on the New Node from the Backup

> **Prerequisite:** If the snapshot was created using RKE v1.1.4 or higher, the cluster state file should be included in the snapshot. The cluster state file will be automatically extracted and used for the restore. If the snapshot was created using RKE v1.1.3 or lower, please ensure your `cluster.rkestate` is present before starting the restore, because this contains your certificate data for the cluster.

:::note Prerequisite

If the snapshot was created using RKE v1.1.4 or higher, the cluster state file should be included in the snapshot. The cluster state file will be automatically extracted and used for the restore. If the snapshot was created using RKE v1.1.3 or lower, please ensure your `cluster.rkestate` is present before starting the restore, because this contains your certificate data for the cluster.

:::

After the new node is added to the `cluster.yml`, run the `rke etcd snapshot-restore` to launch `etcd` from the backup:
@@ -88,7 +92,11 @@ The snapshot is expected to be saved at `/opt/rke/etcd-snapshots`.

If you want to directly retrieve the snapshot from S3, add in the [S3 options](#options-for-rke-etcd-snapshot-restore).

> **Note:** As of v0.2.0, the file `pki.bundle.tar.gz` is no longer required for the restore process because the certificates required to restore are preserved within the `cluster.rkestate`.

:::note

As of v0.2.0, the file `pki.bundle.tar.gz` is no longer required for the restore process because the certificates required to restore are preserved within the `cluster.rkestate`.

:::

### 5. Confirm that Cluster Operations are Restored
@@ -191,7 +199,11 @@ root@node3:~# s3cmd get \

/opt/rke/etcd-snapshots/pki.bundle.tar.gz
```

> **Note:** If you had multiple etcd nodes, you would have to manually sync the snapshot and `pki.bundle.tar.gz` across all of the etcd nodes in the cluster.

:::note

If you had multiple etcd nodes, you would have to manually sync the snapshot and `pki.bundle.tar.gz` across all of the etcd nodes in the cluster.

:::

<a id="add-a-new-etcd-node-to-the-kubernetes-cluster-rke-before-v0.2.0"></a>
### 6. Add a New etcd Node to the Kubernetes Cluster
@@ -22,7 +22,11 @@ The following actions will be performed when you run the command:

- Creates a new cluster by running `rke up`.
- Restarts cluster system pods.

>**Warning:** You should back up any important data in your cluster before running `rke etcd snapshot-restore` because the command deletes your current Kubernetes cluster and replaces it with a new one.

:::danger

You should back up any important data in your cluster before running `rke etcd snapshot-restore` because the command deletes your current Kubernetes cluster and replaces it with a new one.

:::

The snapshot used to restore your etcd cluster can either be stored locally in `/opt/rke/etcd-snapshots` or from a S3 compatible backend.
@@ -93,7 +97,11 @@ Before you run this command, you must:

After the restore, you must rebuild your Kubernetes cluster with `rke up`.

>**Warning:** You should back up any important data in your cluster before running `rke etcd snapshot-restore` because the command deletes your current etcd cluster and replaces it with a new one.

:::danger

You should back up any important data in your cluster before running `rke etcd snapshot-restore` because the command deletes your current etcd cluster and replaces it with a new one.

:::

### Example of Restoring from a Local Snapshot
@@ -7,7 +7,11 @@ aliases:

There are lots of different [configuration options](config-options/) that can be set in the cluster configuration file for RKE. Here are some examples of files:

> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File](https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration#rke-cluster-config-file-reference) when creating [Rancher Launched Kubernetes](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-rancher), the names of services should contain underscores only: `kube_api` and `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.

:::note Note for Rancher 2 users

If you are configuring Cluster Options using a [Config File](https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration#rke-cluster-config-file-reference) when creating [Rancher Launched Kubernetes](https://ranchermanager.docs.rancher.com/pages-for-subheaders/launch-kubernetes-with-rancher), the names of services should contain underscores only: `kube_api` and `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6.

:::

## Minimal `cluster.yml` example
@@ -87,7 +87,11 @@ $ port upgrade rke

The Kubernetes cluster components are launched using Docker on a Linux distro. You can use any Linux you want, as long as you can install Docker on it.

> For information on which Docker versions were tested with your version of RKE, refer to the [terms of service](https://rancher.com/support-maintenance-terms) for installing Rancher on RKE.

:::tip

For information on which Docker versions were tested with your version of RKE, refer to the [terms of service](https://rancher.com/support-maintenance-terms) for installing Rancher on RKE.

:::

Review the [OS requirements](/os) and configure each node appropriately.
@@ -151,12 +155,19 @@ INFO[0101] Finished building Kubernetes cluster successfully

The last line should read `Finished building Kubernetes cluster successfully` to indicate that your cluster is ready to use. As part of the Kubernetes creation process, a `kubeconfig` file has been created and written at `kube_config_cluster.yml`, which can be used to start interacting with your Kubernetes cluster.

> **Note:** If you have used a different file name from `cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.

:::note

If you have used a different file name from `cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.

:::

## Save Your Files

> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.

:::note Important

The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.

:::

Save a copy of the following files in a secure location:
@@ -164,7 +175,11 @@ Save a copy of the following files in a secure location:

- `kube_config_cluster.yml`: The [Kubeconfig file](kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `cluster.rkestate`: The [Kubernetes Cluster State file](#kubernetes-cluster-state), this file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._

> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.

:::note

The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.

:::

### Kubernetes Cluster State
@@ -11,12 +11,16 @@ For more details on how kubeconfig and kubectl work together, see the [Kubernete

When you deployed Kubernetes, a kubeconfig is automatically generated for your RKE cluster. This file is created and saved as `kube_config_cluster.yml`.

>**Note:** By default, kubectl checks `~/.kube/config` for a kubeconfig file, but you can use any directory you want using the `--kubeconfig` flag. For example:
>
>```

:::note

By default, kubectl checks `~/.kube/config` for a kubeconfig file, but you can use any directory you want using the `--kubeconfig` flag. For example:

```
kubectl --kubeconfig /custom/path/kube.config get pods
```

:::

Confirm that kubectl is working by checking the version of your Kubernetes cluster

```
@@ -20,13 +20,21 @@ After you've made changes to add/remove nodes, run `rke up` with the updated `cl

You can add/remove only worker nodes by running `rke up --update-only`. This will ignore everything else in the `cluster.yml` except for any worker nodes.

> **Note:** When using `--update-only`, other actions that do not specifically relate to nodes may be deployed or updated, for example [addons](../config-options/add-ons/add-ons.md).

:::note

When using `--update-only`, other actions that do not specifically relate to nodes may be deployed or updated, for example [addons](../config-options/add-ons/add-ons.md).

:::

### Removing Kubernetes Components from Nodes

In order to remove the Kubernetes components from nodes, you use the `rke remove` command.

> **Warning:** This command is irreversible and will destroy the Kubernetes cluster, including etcd snapshots on S3. If there is a disaster and your cluster is inaccessible, refer to the process for [restoring your cluster from a snapshot](etcd-snapshots/#etcd-disaster-recovery).

:::danger

This command is irreversible and will destroy the Kubernetes cluster, including etcd snapshots on S3. If there is a disaster and your cluster is inaccessible, refer to the process for [restoring your cluster from a snapshot](etcd-snapshots/#etcd-disaster-recovery).

:::

The `rke remove` command does the following to each node in the `cluster.yml`:
@@ -40,7 +48,11 @@ The `rke remove` command does the following to each node in the `cluster.yml`:

The cluster's etcd snapshots are removed, including both local snapshots and snapshots that are stored on S3.

> **Note:** Pods are not removed from the nodes. If the node is re-used, the pods will automatically be removed when the new Kubernetes cluster is created.

:::note

Pods are not removed from the nodes. If the node is re-used, the pods will automatically be removed when the new Kubernetes cluster is created.

:::

- Clean each host from the directories left by the services:
  - /etc/kubernetes/ssl
@@ -18,7 +18,11 @@ RKE runs on almost any Linux OS with Docker installed. For details on which OS a

usermod -aG docker <user_name>
```

> **Note:** Users added to the `docker` group are granted effective root permissions on the host by means of the Docker API. Only choose a user that is intended for this purpose and has its credentials and access properly secured.

:::note

Users added to the `docker` group are granted effective root permissions on the host by means of the Docker API. Only choose a user that is intended for this purpose and has its credentials and access properly secured.

:::

See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) to see how you can configure access to Docker without using the `root` user.
@@ -30,7 +34,11 @@ RKE runs on almost any Linux OS with Docker installed. For details on which OS a

- Canal (Combination Calico and Flannel)
- [Weave](https://www.weave.works/docs/net/latest/install/installing-weave/)

> **Note:** If you or your cloud provider are using a custom minimal kernel, some required (network) kernel modules might not be present.

:::note

If you or your cloud provider are using a custom minimal kernel, some required (network) kernel modules might not be present.

:::

- Following sysctl settings must be applied
@@ -116,18 +124,23 @@ https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/

If using Red Hat Enterprise Linux, Oracle Linux or CentOS, you cannot use the `root` user as [SSH user](config-options/nodes/#ssh-user) due to [Bugzilla 1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). Please follow the instructions below on how to set up Docker correctly, based on the way you installed Docker on the node.

>**Note:** In RHEL 8.4, two extra services are included on the NetworkManager: `nm-cloud-setup.service` and `nm-cloud-setup.timer`. These services add a routing table that interferes with the CNI plugin's configuration. If these services are enabled, you must disable them using the command below, and then reboot the node to restore connectivity:
>
> ```
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
reboot
```
>
> In addition, the default firewall settings of RHEL 8.4 prevent RKE1 pods from reaching out to Rancher to connect to the cluster agent. To allow Docker containers to reach out to the internet and connect to Rancher, make the following updates to the firewall settings:
> ```

:::note

In RHEL 8.4, two extra services are included on the NetworkManager: `nm-cloud-setup.service` and `nm-cloud-setup.timer`. These services add a routing table that interferes with the CNI plugin's configuration. If these services are enabled, you must disable them using the command below, and then reboot the node to restore connectivity:

```
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
reboot
```

In addition, the default firewall settings of RHEL 8.4 prevent RKE1 pods from reaching out to Rancher to connect to the cluster agent. To allow Docker containers to reach out to the internet and connect to Rancher, make the following updates to the firewall settings:

```
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --reload
```

:::

#### Using upstream Docker

If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing:
@@ -106,7 +106,11 @@ For RKE before v0.3.0, the service defaults are located [here](https://github.co

[Services](config-options/services/) can be upgraded by changing any of the services arguments or `extra_args` and running `rke up` again with the updated configuration file.

> **Note:** The following arguments, `service_cluster_ip_range` or `cluster_cidr`, cannot be changed as any changes to these arguments will result in a broken cluster. Currently, network pods are not automatically upgraded.

:::note

The following arguments, `service_cluster_ip_range` or `cluster_cidr`, cannot be changed as any changes to these arguments will result in a broken cluster. Currently, network pods are not automatically upgraded.

:::

### Upgrading Nodes Manually