import {versions} from '@site/src/fleetVersions';
import CodeBlock from '@theme/CodeBlock';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Installation Details

The installation is broken up into two different use cases: single and multi-cluster.
The single cluster install is for using GitOps to manage a single cluster, in which case
you do not need a centralized manager cluster. In the multi-cluster use case you will set up
a centralized manager cluster to which you can register other clusters.
If you are just learning Fleet, the single cluster install is the recommended starting
point. You can move from a single cluster to a multi-cluster setup later on.

Single-cluster is the default installation. The same cluster will run both the Fleet
manager and the Fleet agent. The cluster will communicate with the Git server to
deploy resources to the local cluster. This is the simplest setup and is very
useful for dev/test and small scale deployments. It is also supported as a valid
configuration for production.
## Prerequisites
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and is
fairly straightforward to use. To install the Helm 3 CLI, follow the official install instructions.
Fleet is a controller running on a Kubernetes cluster, so an existing cluster is required. For the
single cluster use case you will install Fleet to the cluster which you intend to manage with GitOps.
Any community supported version of Kubernetes will work; in practice this means {versions.next.kubernetes} or greater.
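As a quick sanity check, assuming `helm` and `kubectl` are already on your `PATH` and your kubeconfig points at the target cluster, you can verify both prerequisites:

```bash
# Confirm a Helm 3 CLI is installed
helm version --short

# Confirm the cluster is reachable and check its Kubernetes version
kubectl version
kubectl cluster-info
```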
## Default Install
Install the following two Helm charts.
:::caution Fleet in Rancher
Rancher has separate helm charts for Fleet and uses a different repository.
:::
First add Fleet's Helm repository.

<CodeBlock language="bash">
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
</CodeBlock>

Second install the Fleet CustomResourceDefinitions.

<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait fleet-crd \\
  fleet/fleet-crd`}
</CodeBlock>

Third install the Fleet controllers.

<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait fleet \\
  fleet/fleet`}
</CodeBlock>

Fleet should now be ready to use for the single cluster use case. You can check the status of the Fleet controller pods by
running the commands below.
```bash
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```
NAME                                READY   STATUS    RESTARTS   AGE
fleet-controller-64f49d756b-n57wq   1/1     Running   0          3m21s
```
You can now [register some git repos](./gitrepo-add.md) in the `fleet-local` namespace to start deploying Kubernetes resources.
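For example, a minimal `GitRepo` could look like the sketch below; the name shown is a placeholder, and the public `rancher/fleet-examples` repository with its `simple` path merely stands in for your own repository:

```bash
# A minimal GitRepo sketch; replace the repository URL and path with your own.
kubectl apply -n fleet-local -f - <<EOF
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
    - simple
EOF
```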
## Tweaking your Fleet install
### Controller and agent replicas
Starting with v0.13, Fleet charts expose new Helm values setting replica counts for each type of controller and the
agent:
* `controller.replicas` for the `fleet-controller` deployment reconciling bundles, bundle deployments, clusters and
cluster groups
* `gitjob.replicas` for the gitOps controller reconciling `GitRepo` resources
* `helmops.replicas` for the experimental HelmOps controller
* `agent.replicas` for the agent.
Each of them defaults to 1.
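For example, assuming Fleet was installed as the `fleet` release from the Helm repository added above, the following sketch would run two replicas of the bundle controller and of the gitOps controller, leaving the rest at their defaults:

```bash
# Hypothetical replica counts; adjust to your own scaling needs.
helm -n cattle-fleet-system upgrade --install fleet fleet/fleet \
  --set controller.replicas=2 \
  --set gitjob.replicas=2
```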
## Multi-controller install: sharding
### Deployment
From 0.10 onwards, Fleet supports static sharding.
Each shard is defined by its shard ID.
Optionally, a shard can have a [node
selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector), instructing Fleet to
create all controller pods and jobs for that shard on nodes matching that selector.
The Fleet controller chart can be installed with the following arguments:
* `--set shards[$index].id=$shard_id`
* `--set shards[$index].nodeSelector.$key=$value`
This will result in:
* as many Fleet controller and gitjob deployments as there are unique shard IDs specified,
* plus the usual unsharded Fleet controller pod. The latter pod will be the only one containing agent management and
cleanup containers.
For instance:
```bash
$ helm -n cattle-fleet-system install --create-namespace --wait fleet fleet/fleet \
    --set shards[0].id=foo \
    --set shards[0].nodeSelector."kubernetes\.io/hostname"=k3d-upstream-server-0 \
    --set shards[1].id=bar \
    --set shards[1].nodeSelector."kubernetes\.io/hostname"=k3d-upstream-server-1 \
    --set shards[2].id=baz \
    --set shards[2].nodeSelector."kubernetes\.io/hostname"=k3d-upstream-server-2

$ kubectl -n cattle-fleet-system get pods -l app=fleet-controller \
-o=custom-columns='Name:.metadata.name,Shard-ID:.metadata.labels.fleet\.cattle\.io/shard-id,Node:spec.nodeName'
Name                                          Shard-ID   Node
fleet-controller-b4c469c85-rj2q8                         k3d-upstream-server-2
fleet-controller-shard-bar-5f5999958f-nt4bm   bar        k3d-upstream-server-1
fleet-controller-shard-baz-75c8587898-2wkk9   baz        k3d-upstream-server-2
fleet-controller-shard-foo-55478fb9d8-42q2f   foo        k3d-upstream-server-0

$ kubectl -n cattle-fleet-system get pods -l app=gitjob \
-o=custom-columns='Name:.metadata.name,Shard-ID:.metadata.labels.fleet\.cattle\.io/shard-id,Node:spec.nodeName'
Name                                Shard-ID   Node
gitjob-8498c6d78b-mdhgh                        k3d-upstream-server-1
gitjob-shard-bar-8659ffc945-9vtlx   bar        k3d-upstream-server-1
gitjob-shard-baz-6d67f596dc-fsz9m   baz        k3d-upstream-server-2
gitjob-shard-foo-8697bb7f67-wzsfj   foo        k3d-upstream-server-0
```
### How it works
With sharding in place, each Fleet controller will process resources bearing its own shard ID. This also holds for the
unsharded controller, which has no set shard ID and will therefore process all unsharded resources.
To deploy a GitRepo for a specific shard, add the label `fleet.cattle.io/shard-ref` with your desired shard ID as its
value.
Here is an example, assigning a GitRepo to shard `foo` (the name, repository and path shown are placeholders):
```bash
$ kubectl apply -n fleet-local -f - <<EOF
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sharding-test
  labels:
    fleet.cattle.io/shard-ref: foo
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
    - simple
EOF
```
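To confirm the label is in place, and hence which shard will reconcile the GitRepo, you can inspect its labels:

```bash
kubectl -n fleet-local get gitrepos --show-labels
```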
## Multi-Cluster Install

In this use case you will set up a centralized Fleet manager to which you can register downstream clusters.
The downstream cluster agents connect to the Kubernetes API server of the manager cluster, so you need that
cluster's API server URL and CA certificate.

### Prerequisites

#### Extract CA Certificate

Retrieve the CA certificate from your kubeconfig.
If you have `jq` and `base64` available, the following one-liner will pull all CA certificates from your
`KUBECONFIG` and place them in a file named `ca.pem`.
```shell
kubectl config view -o json --raw | jq -r '.clusters[].cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
Or, if your `KUBECONFIG` contains multiple clusters, you can use this command to select a specific one:
```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
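Optionally, if `openssl` is available, you can inspect the first certificate in `ca.pem` to confirm it looks plausible:

```bash
# Print subject, issuer and validity dates of the extracted CA certificate
openssl x509 -in ca.pem -noout -subject -issuer -dates
```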
#### Extract API Server URL
Extract the API server URL from your kubeconfig. If your `KUBECONFIG` contains multiple clusters, select the right
one by name:
```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
API_SERVER_URL=$(kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["server"]')
# Leave empty if your API server is signed by a well known CA
API_SERVER_CA="ca.pem"
```
#### Validate
First validate the server URL is correct.
```shell
curl -fLk "$API_SERVER_URL/version"
```
The output of this command should be JSON with the version of the Kubernetes server, or a `401 Unauthorized` error.
If you do not get either of these results, then please ensure you have the correct URL. The API server port is typically
6443 for Kubernetes.
Next, validate that the CA certificate is correct by running the command below. If your API server is signed by a
well-known CA, omit the `--cacert "$API_SERVER_CA"` part of the command.
```shell
curl -fL --cacert "$API_SERVER_CA" "$API_SERVER_URL/version"
```
If you get a valid JSON response or a `401 Unauthorized` error, then it worked. The Unauthorized error occurs
only because the curl command is not setting proper credentials, but it validates that the TLS
connection works and that `ca.pem` is correct for this URL. If you get an `SSL certificate problem` error, then
`ca.pem` is not correct. The contents of the `$API_SERVER_CA` file should look similar to the below:
```pem title="ca.pem"
-----BEGIN CERTIFICATE-----
MIIBVjCB/qADAgECAgEAMAoGCCqGSM49BAMCMCMxITAfBgNVBAMMGGszcy1zZXJ2
ZXItY2FAMTU5ODM5MDQ0NzAeFw0yMDA4MjUyMTIwNDdaFw0zMDA4MjMyMTIwNDda
MCMxITAfBgNVBAMMGGszcy1zZXJ2ZXItY2FAMTU5ODM5MDQ0NzBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABDXlQNkXnwUPdbSgGz5Rk6U9ldGFjF6y1YyF36cNGk4E
0lMgNcVVD9gKuUSXEJk8tzHz3ra/+yTwSL5xQeLHBl+jIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0cAMEQCIFMtZ5gGDoDs
ciRyve+T4xbRNVHES39tjjup/LuN4tAgAiAteeB3jgpTMpZyZcOOHl9gpZ8PgEcN
KDs/pb3fnMTtpA==
-----END CERTIFICATE-----
```
### Install for Multi-Cluster
In the following example it is assumed that the API server URL from the `KUBECONFIG` is `https://example.com:6443`
and that the CA certificate is in the file `ca.pem`. If your API server URL is signed by a well-known CA, you can
omit the `apiServerCA` parameter below or just create an empty `ca.pem` file (i.e. `touch ca.pem`).
Set up the environment with your specific values, e.g.:
```shell
API_SERVER_URL="https://example.com:6443"
API_SERVER_CA="ca.pem"
```
Once you have validated the API server URL and API server CA parameters, install the following two
Helm charts.
First add Fleet's Helm repository.

<CodeBlock language="bash">
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
</CodeBlock>

Second install the Fleet CustomResourceDefinitions.

<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet-crd fleet/fleet-crd`}
</CodeBlock>

Third install the Fleet controllers.

<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  --set apiServerURL="$API_SERVER_URL" \\
  --set-file apiServerCA="$API_SERVER_CA" \\
  fleet fleet/fleet`}
</CodeBlock>

Fleet should now be ready to use. You can check the status of the Fleet controller pods by running the commands below.
```bash
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```
NAME                                READY   STATUS    RESTARTS   AGE
fleet-controller-64f49d756b-n57wq   1/1     Running   0          3m21s
```
At this point the Fleet manager should be ready. You can now [register clusters](./cluster-registration.md) and [git repos](./gitrepo-add.md#create-gitrepo-instance) with
the Fleet manager.