Sync community pr #338

Signed-off-by: Sunil Singh <sunil.singh@suse.com>
This commit is contained in:
Sunil Singh 2025-08-01 13:44:54 -07:00
parent 86391a99da
commit fa3c6df9f5
No known key found for this signature in database
GPG Key ID: B63745F5C803DA80
3 changed files with 1 addition and 437 deletions

@@ -11,7 +11,6 @@
** xref:reference/capiprovider.adoc[CAPIProvider]
** xref:reference/clusterctlconfig.adoc[ClusterctlConfig]
* User Guide
** xref:user/clusters.adoc[Provision a CAPI cluster]
** xref:user/clusterclass.adoc[Provision a CAPI cluster with ClusterClass]
** xref:user/fleet.adoc[Create a cluster using Fleet]
** xref:user/delete-cluster.adoc[Delete an imported cluster]

@@ -39,7 +39,7 @@ Contains the reference documentation for all {product_name} custom resources.
=== User Guide
Learn how to use {product_name} to xref:./user/clusters.adoc[manage your CAPI clusters with Rancher], and use more advanced features like xref:./user/clusterclass.adoc[`ClusterClass`].
Learn how to use {product_name} to xref:./user/clusterclass.adoc[manage your CAPI clusters with Rancher, using ClusterClass].
=== Operator Guide

@@ -1,435 +0,0 @@
:doctype: book
= Create & import a cluster using CAPI providers
This guide goes over the process of creating and importing CAPI clusters with a selection of the officially certified providers.
Keep in mind that most Cluster API Providers are upstream projects maintained by the Kubernetes open-source community.
== Prerequisites
[tabs]
======
AWS::
+
--
* Rancher Manager cluster with {product_name} installed
* Cluster API Providers: you can find a guide on how to install a provider using the `CAPIProvider` resource xref:../reference/capiprovider.adoc[here]
** https://github.com/kubernetes-sigs/cluster-api-provider-aws/[Infrastructure provider for AWS]. The following is an example AWS provider installation; follow the provider documentation if you need to customize any options:
+
[source,yaml]
----
---
apiVersion: v1
kind: Namespace
metadata:
name: capa-system
---
apiVersion: v1
kind: Secret
metadata:
name: aws
namespace: capa-system
type: Opaque
stringData:
AWS_B64ENCODED_CREDENTIALS: xxx
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: aws
namespace: capa-system
spec:
type: infrastructure
----
** If using RKE2 or Kubeadm, you must have the https://github.com/rancher/cluster-api-provider-rke2[Bootstrap/Control Plane provider for RKE2] (installed by default) or the https://github.com/kubernetes-sigs/cluster-api[Bootstrap/Control Plane provider for Kubeadm]. The following is an example Kubeadm installation:
+
[source,yaml]
----
---
apiVersion: v1
kind: Namespace
metadata:
name: capi-kubeadm-bootstrap-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: kubeadm-bootstrap
namespace: capi-kubeadm-bootstrap-system
spec:
name: kubeadm
type: bootstrap
---
apiVersion: v1
kind: Namespace
metadata:
name: capi-kubeadm-control-plane-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: kubeadm-control-plane
namespace: capi-kubeadm-control-plane-system
spec:
name: kubeadm
type: controlPlane
----
--
GCP GKE::
+
--
* Rancher Manager cluster with {product_name} installed
* Cluster API Providers: you can find a guide on how to install a provider using the `CAPIProvider` resource xref:../reference/capiprovider.adoc[here]
** https://github.com/kubernetes-sigs/cluster-api-provider-gcp/[Infrastructure provider for GCP]. The following is an example GCP provider installation; follow the provider documentation if you need to customize any options:
+
[source,bash]
----
export GCP_B64ENCODED_CREDENTIALS=$( base64 < /path/to/gcp-credentials.json | tr -d '\n' )
----
+
[source,yaml]
----
---
apiVersion: v1
kind: Namespace
metadata:
name: capg-system
---
apiVersion: v1
kind: Secret
metadata:
name: gcp
namespace: capg-system
type: Opaque
stringData:
GCP_B64ENCODED_CREDENTIALS: "${GCP_B64ENCODED_CREDENTIALS}"
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: gcp
namespace: capg-system
spec:
type: infrastructure
----
--
Docker RKE2/Kubeadm::
+
--
* You can follow the installation guide xref:../user/clusterclass.adoc[here]
--
vSphere RKE2/Kubeadm::
+
--
* Rancher Manager cluster with {product_name} installed
* Cluster API Providers: you can find a guide on how to install a provider using the `CAPIProvider` resource xref:../reference/capiprovider.adoc[here]
** https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/[Infrastructure provider for vSphere]. The following is an example vSphere provider installation; follow the provider documentation if you need to customize any options:
+
[source,yaml]
----
---
apiVersion: v1
kind: Namespace
metadata:
name: capv-system
---
apiVersion: v1
kind: Secret
metadata:
name: vsphere
namespace: capv-system
type: Opaque
stringData:
VSPHERE_USERNAME: xxx
VSPHERE_PASSWORD: xxx
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: vsphere
namespace: capv-system
spec:
type: infrastructure
----
** https://github.com/rancher/cluster-api-provider-rke2[Bootstrap/Control Plane provider for RKE2] (installed by default) or the https://github.com/kubernetes-sigs/cluster-api[Bootstrap/Control Plane provider for Kubeadm]. The following is an example Kubeadm installation:
+
[source,yaml]
----
---
apiVersion: v1
kind: Namespace
metadata:
name: capi-kubeadm-bootstrap-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: kubeadm-bootstrap
namespace: capi-kubeadm-bootstrap-system
spec:
name: kubeadm
type: bootstrap
---
apiVersion: v1
kind: Namespace
metadata:
name: capi-kubeadm-control-plane-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
name: kubeadm-control-plane
namespace: capi-kubeadm-control-plane-system
spec:
name: kubeadm
type: controlPlane
----
--
======
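Both the AWS and GCP secrets above expect credentials as a single-line base64 string with no trailing newline. A minimal, provider-agnostic sketch of producing such a value (the JSON content and file path are placeholders, not real credentials):

```shell
# Stand-in for a real credentials file such as /path/to/gcp-credentials.json
creds_file=$(mktemp)
printf '{"type": "service_account"}' > "$creds_file"

# base64-encode and strip newlines so the value fits on a single line,
# as expected by fields like AWS_B64ENCODED_CREDENTIALS or GCP_B64ENCODED_CREDENTIALS
encoded=$(base64 < "$creds_file" | tr -d '\n')

# round-trip check: decoding must return the original file content
decoded=$(printf '%s' "$encoded" | base64 -d)
printf '%s\n' "$decoded"

rm -f "$creds_file"
```

The same `base64 | tr -d '\n'` idiom appears in the GCP prerequisites above; `tr -d '\n'` matters because some `base64` implementations wrap long output across lines.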
== Create Your Cluster Definition
:kubernetes-version: v1.31.4
:cluster-name: cluster1
:namespace: capi-clusters
:worker-machine-count: 1
:control-plane-machine-count: 1
[tabs]
======
AWS EC2 RKE2::
+
--
* You can follow the installation guide xref:../user/clusterclass.adoc[here]
--
AWS EC2 Kubeadm::
+
--
* You can follow the installation guide xref:../user/clusterclass.adoc[here]
--
Docker RKE2::
+
--
* You can follow the installation guide xref:../user/clusterclass.adoc[here]
--
Docker Kubeadm::
+
--
* You can follow the installation guide xref:../user/clusterclass.adoc[here]
--
vSphere RKE2::
+
--
Before creating a vSphere+RKE2 workload cluster, you must have a VM template with the necessary RKE2 binaries and dependencies. If operating in an air-gapped environment, the template should already include the RKE2 binaries, following the https://docs.rke2.io/install/airgap#tarball-method[tarball method]. You can find additional configuration details in the https://github.com/rancher/cluster-api-provider-rke2/tree/main/samples/vmware[CAPRKE2 repository].
To generate the YAML for the cluster, first set the following environment variables:
[source,bash,subs=attributes+]
----
export CLUSTER_NAME={cluster-name}
export NAMESPACE={namespace}
export CONTROL_PLANE_MACHINE_COUNT={control-plane-machine-count}
export WORKER_MACHINE_COUNT={worker-machine-count}
export VSPHERE_USERNAME="<username>"
export VSPHERE_PASSWORD="<password>"
export VSPHERE_SERVER="10.0.0.1"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="vm"
export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
export CONTROL_PLANE_ENDPOINT_IP="192.168.9.230"
export VSPHERE_TLS_THUMBPRINT="..."
export EXP_CLUSTER_RESOURCE_SET="true"
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
export CPI_IMAGE_K8S_VERSION="v1.31.0"
export KUBERNETES_VERSION={kubernetes-version}
----
. Open a terminal and run the following:
+
[source,bash]
----
curl -s https://raw.githubusercontent.com/rancher/turtles/refs/tags/v0.21.0/test/e2e/data/cluster-templates/vsphere-rke2.yaml | envsubst > cluster1.yaml
----
. Open **cluster1.yaml** and review the generated resources, making any changes you need.
+
> The Cluster API quickstart guide contains more detail. Read the steps related to this section https://cluster-api.sigs.k8s.io/user/quick-start.html#required-configuration-for-common-providers[here].
. Create the cluster using kubectl
+
[source,bash]
----
kubectl create namespace ${NAMESPACE}
kubectl apply -f cluster1.yaml
----
--
vSphere Kubeadm::
+
--
Before creating a vSphere+kubeadm workload cluster, you must have a VM template with the necessary kubeadm binaries and dependencies. If operating in an air-gapped environment, the template should already include kubeadm, kubelet, and kubectl, built with the https://github.com/kubernetes-sigs/image-builder[image-builder project]. You can find additional configuration details in the https://github.com/kubernetes-sigs/cluster-api-provider-vsphere[CAPV repository].
A list of published machine images (OVAs) is available https://github.com/kubernetes-sigs/image-builder#kubernetes-versions-with-published-ovas[here].
To generate the YAML for the cluster, first set the following environment variables:
[source,bash,subs=attributes+]
----
export CLUSTER_NAME={cluster-name}
export NAMESPACE={namespace}
export CONTROL_PLANE_MACHINE_COUNT={control-plane-machine-count}
export WORKER_MACHINE_COUNT={worker-machine-count}
export VSPHERE_USERNAME="<username>"
export VSPHERE_PASSWORD="<password>"
export VSPHERE_SERVER="10.0.0.1"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="vm"
export VSPHERE_TEMPLATE="ubuntu-1804-kube-vxxx"
export CONTROL_PLANE_ENDPOINT_IP="192.168.9.230"
export VSPHERE_TLS_THUMBPRINT="..."
export EXP_CLUSTER_RESOURCE_SET="true"
export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
export CPI_IMAGE_K8S_VERSION="v1.31.0"
export KUBERNETES_VERSION={kubernetes-version}
----
. Open a terminal and run the following:
+
[source,bash]
----
curl -s https://raw.githubusercontent.com/rancher/turtles/refs/tags/v0.21.0/test/e2e/data/cluster-templates/vsphere-kubeadm.yaml | envsubst > cluster1.yaml
----
. Open **cluster1.yaml** and review the generated resources, making any changes you need.
+
> The Cluster API quickstart guide contains more detail. Read the steps related to this section https://cluster-api.sigs.k8s.io/user/quick-start.html#required-configuration-for-common-providers[here].
. Create the cluster using kubectl
+
[source,bash]
----
kubectl create namespace ${NAMESPACE}
kubectl apply -f cluster1.yaml
----
--
AWS EKS::
+
--
To generate the YAML for the cluster, first set the following environment variables:
[source,bash,subs=attributes+]
----
export CLUSTER_NAME={cluster-name}
export NAMESPACE={namespace}
export WORKER_MACHINE_COUNT={worker-machine-count}
export KUBERNETES_VERSION={kubernetes-version}
----
. Open a terminal and run the following:
+
[source,bash]
----
curl -s https://raw.githubusercontent.com/rancher/turtles/refs/tags/v0.21.0/test/e2e/data/cluster-templates/aws-eks-mmp.yaml | envsubst > cluster1.yaml
----
. Open **cluster1.yaml** and review the generated resources, making any changes you need.
+
> The Cluster API quickstart guide contains more detail. Read the steps related to this section https://cluster-api.sigs.k8s.io/user/quick-start.html#required-configuration-for-common-providers[here].
. Create the cluster using kubectl
+
[source,bash]
----
kubectl create namespace ${NAMESPACE}
kubectl apply -f cluster1.yaml
----
--
GCP GKE::
+
--
To generate the YAML for the cluster, first set the following environment variables:
[source,bash,subs=attributes+]
----
export CLUSTER_NAME={cluster-name}
export NAMESPACE={namespace}
export GCP_PROJECT=cluster-api-gcp-project
export GCP_REGION=us-east4
export GCP_NETWORK_NAME=default
export WORKER_MACHINE_COUNT={worker-machine-count}
----
. Open a terminal and run the following:
+
[source,bash]
----
curl -s https://raw.githubusercontent.com/rancher/turtles/refs/tags/v0.21.0/test/e2e/data/cluster-templates/gcp-gke.yaml | envsubst > cluster1.yaml
----
. Open **cluster1.yaml** and review the generated resources, making any changes you need.
+
> The Cluster API quickstart guide contains more detail. Read the steps related to this section https://cluster-api.sigs.k8s.io/user/quick-start.html#required-configuration-for-common-providers[here].
. Create the cluster using kubectl
+
[source,bash]
----
kubectl create namespace ${NAMESPACE}
kubectl apply -f cluster1.yaml
----
--
======
[TIP]
====
After your cluster is provisioned, you can check the status of the workload cluster using `kubectl`:
[source,bash]
----
kubectl describe cluster cluster1
----
Remember that clusters are namespaced resources. These examples provision clusters in the `capi-clusters` namespace; specify your own namespace if you use a different one.
====
== Mark Namespace or Cluster for Auto-Import
To automatically import a CAPI cluster into Rancher Manager, there are two options:
. Label a namespace so all clusters contained in it are imported.
. Label an individual cluster definition so that it's imported.
Labeling a namespace:
[source,bash]
----
export NAMESPACE=default
kubectl label namespace $NAMESPACE cluster-api.cattle.io/rancher-auto-import=true
----
Labeling an individual cluster definition:
[source,bash]
----
export CLUSTER_NAME=cluster1
export NAMESPACE=default
kubectl label cluster.cluster.x-k8s.io -n $NAMESPACE $CLUSTER_NAME cluster-api.cattle.io/rancher-auto-import=true
----