Add versioned docs for 0.4, 0.5

* fix image urls
* create json for versioned_sidebars
This commit is contained in:
Mario Manno 2022-10-18 16:58:38 +02:00
parent e6bcc1fe0e
commit c7f1aeceb5
58 changed files with 4739 additions and 9 deletions

View File

@ -1,6 +1,6 @@
# Architecture
![](../static/img/arch.png)
![](/img/arch.png)
Fleet has two primary components. The Fleet manager and the cluster agents. These
components work in a two-stage pull model. The Fleet manager will pull from git and the

View File

@ -9,10 +9,10 @@ The bundled charts may have some objects that are amended at runtime, for exampl
This leads the status of the bundle and associated GitRepo to be reported as "Modified"
![](../static/img/ModifiedGitRepo.png)
![](/img/ModifiedGitRepo.png)
Associated Bundle
![](../static/img/ModifiedBundle.png)
![](/img/ModifiedBundle.png)
Fleet bundles support the ability to specify a custom [jsonPointer patch](http://jsonpatch.com/).
@ -131,7 +131,7 @@ In summary, we need to ignore the fields `rules` and `clientConfig.caBundle` in
The field webhook in the ValidatingWebhookConfiguration spec is an array, so we need to address the elements by their index values.
![](../static/img/WebhookConfigurationSpec.png)
![](/img/WebhookConfigurationSpec.png)
Based on this information, our diff patch would look as follows:

View File

@ -1,6 +1,6 @@
# Overview
![](../static/img/arch.png)
![](/img/arch.png)
### What is Fleet?

View File

@ -1,5 +1,5 @@
# Multi-cluster Install
![](../static/img/arch.png)
![](/img/arch.png)
**Note:** Downstream clusters in Rancher are automatically registered in Fleet. Users can access Fleet under `Continuous Delivery` on Rancher.

View File

@ -1,5 +1,5 @@
# Single Cluster Install
![](../static/img/single-cluster.png)
![](/img/single-cluster.png)
In this use case you have only one cluster. The cluster will run both the Fleet
manager and the Fleet agent. The cluster will communicate with Git server to

View File

@ -33,7 +33,7 @@ You can configure [TLS](https://kubernetes.io/docs/concepts/services-networking/
### 2. Go to your webhook provider and configure the webhook callback url. Here is a Github example.
![](../static/img/webhook.png)
![](/img/webhook.png)
Configuring a secret is optional. This is used to validate the webhook payload as the payload should not be trusted by default.
If your webhook server is publicly accessible to the Internet, then it is recommended to configure the secret. If you do configure the

View File

@ -27,6 +27,12 @@ module.exports = {
src: 'img/logo.svg',
},
items: [
{
type: 'docsVersionDropdown',
position: 'right',
dropdownItemsAfter: [{to: '/versions', label: 'All versions'}],
dropdownActiveClassDisabled: true,
},
{
type: 'doc',
docId: 'index',
@ -58,6 +64,14 @@ module.exports = {
sidebarPath: require.resolve('./sidebars.js'),
showLastUpdateTime: true,
editUrl: 'https://github.com/rancher/fleet-docs/edit/main/',
versions: {
current: {
label: 'Next 🚧',
},
'0.4': {
banner: 'none',
},
},
},
blog: false, // Optional: disable the blog plugin
// ...

View File

@ -0,0 +1,13 @@
# Advanced Users
Note that using Fleet outside of Rancher is highly discouraged for users who do not need to perform advanced actions. However, there are some advanced use cases that may need to be performed outside of Rancher, also known as Standalone Fleet, or Fleet without Rancher. This section highlights such use cases.
The following are examples of advanced use cases:
- Nested GitRepo CRs
>Managing Fleet within Fleet (nested GitRepo usage) is not currently supported. We will update the documentation if support becomes available.
- [Single cluster installation](./single-cluster-install.md)
- [Multi-cluster installation](./multi-cluster-install.md)
Please refer to the [installation](./installation.md) and the [uninstall](./uninstall.md) documentation for additional information.

View File

@ -0,0 +1,171 @@
# Agent Initiated
Refer to the [overview page](./cluster-overview.md#agent-initiated-registration) for background information on the agent initiated registration style.
## Cluster Registration Token and Client ID
A downstream cluster is registered using the **cluster registration token** and optionally a **client ID** or **cluster labels**.
The **cluster registration token** is a credential that authorizes the downstream cluster agent to
initiate the registration process. This is required. Refer to the [cluster registration token page](./cluster-tokens.md) for more information
on how to create tokens and obtain the values. The cluster registration token is manifested as a `values.yaml` file that will
be passed to the `helm install` process.
There are two styles of registering an agent. You can have the cluster for this agent dynamically created, in which
case you will probably want to specify **cluster labels** upon registration. Or you can have the agent register to a predefined
cluster in the Fleet manager, in which case you will need a **client ID**. The former approach is typically the easiest.
## Install agent for a new Cluster
The Fleet agent is installed as a Helm chart. The following explains how to determine and set its parameters.
First, follow the [cluster registration token page](./cluster-tokens.md) to obtain the `values.yaml` which contains
the registration token to authenticate against the Fleet cluster.
Second, you can optionally define labels that will be assigned to the newly created cluster upon registration. After
registration is completed, an agent cannot change the labels of the cluster. To add cluster labels, add
`--set-string labels.KEY=VALUE` to the Helm command below. For example, to add the labels `foo=bar` and `bar=baz`, you would
add `--set-string labels.foo=bar --set-string labels.bar=baz` to the command line.
```shell
# Leave blank if you do not want any labels
CLUSTER_LABELS="--set-string labels.example=true --set-string labels.env=dev"
```
Third, set variables with the Fleet cluster's API server URL and CA certificate, which the downstream cluster will use to connect.
```shell
API_SERVER_URL=https://...
API_SERVER_CA=...
```
The value for `API_SERVER_CA` can be obtained from a `.kube/config` file with valid data to connect to the upstream cluster
(under the `certificate-authority-data` key). Alternatively, it can be obtained from within the upstream cluster itself,
by looking up the default ServiceAccount secret name (typically prefixed with `default-token-`, in the default namespace),
under the `ca.crt` key.
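As a sketch of how these values could be collected, assuming `kubectl` currently points at the upstream (Fleet manager) cluster, that the upstream cluster is the first entry in your kubeconfig, and that the chart expects the PEM-encoded CA:
```shell
# API server URL of the upstream cluster, taken from the kubeconfig
API_SERVER_URL=$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.server}')
# PEM-encoded CA certificate, decoded from certificate-authority-data
API_SERVER_CA=$(kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode)
```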
:::caution
__Use proper namespace and release name__:
For the agent chart the namespace must be `cattle-fleet-system` and the release name `fleet-agent`
:::
:::warning
__Ensure you are installing to the right cluster__:
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
to change which cluster Helm is installing to.
:::
Finally, install the agent using Helm.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
  ${CLUSTER_LABELS} \
  --values values.yaml \
  --set apiServerCA=${API_SERVER_CA} \
  --set apiServerURL=${API_SERVER_URL} \
  fleet-agent https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-agent-0.4.0.tgz
```
The agent should now be deployed. You can check the status of the Fleet pods by running the commands below.
```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n cattle-fleet-system logs -l app=fleet-agent
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
```
Additionally, you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed at the Fleet
manager when running this command.
```shell
kubectl -n clusters get clusters.fleet.cattle.io
```
```
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
cluster-ab13e54400f1 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
```
## Install agent for a predefined Cluster
Client IDs are used to predefine clusters in the Fleet manager with existing labels and repos targeted to them.
A client ID is not required and is just one approach to managing clusters.
The **client ID** is a unique string that identifies the cluster.
This string is user generated and opaque to the Fleet manager and agent; it is assumed to be sufficiently unique. For security reasons, the value should not be easy to guess,
as one cluster could otherwise impersonate another. The client ID is optional; if not specified, the UID field of the `kube-system` namespace
resource is used as the client ID. Upon registration, if the client ID is found on a `Cluster` resource in the Fleet manager, the agent is associated
with that `Cluster`. If no `Cluster` resource is found with that client ID, a new `Cluster` resource is created with that
client ID.
The Fleet agent is installed as a Helm chart. The only parameters to the Helm chart installation should be the cluster registration token, which
is represented by the `values.yaml` file, and the (optional) client ID.
First, create a `Cluster` in the Fleet Manager with the random client ID you have chosen.
```yaml
kind: Cluster
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: my-cluster
  namespace: clusters
spec:
  clientID: "really-random"
```
Second, follow the [cluster registration token page](./cluster-tokens.md) to obtain the `values.yaml` file to be used.
Third, set up your environment to use the client ID.
```shell
CLUSTER_CLIENT_ID="really-random"
```
:::note
__Use proper namespace and release name__:
For the agent chart the namespace must be `cattle-fleet-system` and the release name `fleet-agent`
:::
:::note
__Ensure you are installing to the right cluster__:
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
to change which cluster Helm is installing to.
:::
Finally, install the agent using Helm.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
  --set clientID="${CLUSTER_CLIENT_ID}" \
  --values values.yaml \
  fleet-agent https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-agent-0.4.0.tgz
```
The agent should now be deployed. You can check the status of the Fleet pods by running the commands below.
```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n cattle-fleet-system logs -l app=fleet-agent
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
```
Additionally, you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed at the Fleet
manager when running this command.
```shell
kubectl -n clusters get clusters.fleet.cattle.io
```
```
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
my-cluster 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
```

View File

@ -0,0 +1,40 @@
# Architecture
![](/img/arch.png)
Fleet has two primary components: the Fleet manager and the cluster agents. These
components work in a two-stage pull model. The Fleet manager will pull from git and the
cluster agents will pull from the Fleet manager.
## Fleet Manager
The Fleet manager is a set of Kubernetes controllers running in any standard Kubernetes
cluster. The only API exposed by the Fleet manager is the Kubernetes API; there is no
custom API for the fleet controller.
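Because the manager's API surface is just Kubernetes resources, it can be inspected with standard tooling. For example, assuming `kubectl` points at the Fleet manager cluster:
```shell
# List the custom resource types that Fleet adds to the cluster
kubectl api-resources --api-group=fleet.cattle.io
```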
## Cluster Agents
One cluster agent runs in each cluster and is responsible for talking to the Fleet manager.
The only communication from cluster to Fleet manager is by this agent and all communication
goes from the managed cluster to the Fleet manager. The fleet manager does not initiate
connections to downstream clusters. This means managed clusters can run in private networks and behind
NATs. The only requirement is the cluster agent needs to be able to communicate with the
Kubernetes API of the cluster running the Fleet manager. The one exception to this is the
[manager initiated](./manager-initiated.md) cluster registration flow, which is an optional
pattern and not required.
The cluster agents are not assumed to have an "always on" connection. They will resume operation as
soon as they can connect. Future enhancements will probably add the ability to schedule when
the agent checks in; as it stands right now, agents will always attempt to connect.
## Security
The Fleet manager dynamically creates service accounts, manages their RBAC, and then gives the
tokens to the downstream clusters. Clusters are registered using cluster registration tokens, which can optionally expire.
The cluster registration token is used only during the registration process to generate a credential specific
to that cluster. After the cluster credential is established, the cluster "forgets" the cluster registration
token.
The service account given to each cluster only has privileges to list `BundleDeployment` resources in the namespace created
specifically for that cluster. It can also update the `status` subresource of `BundleDeployment` and the `status`
subresource of its `Cluster` resource.
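As a rough sketch only (the real RBAC objects and their names are generated dynamically by the Fleet manager and will differ), a namespaced Role granting that level of access could look like this:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: fleet-agent-access           # hypothetical name
  namespace: cluster-fleet-example   # hypothetical per-cluster namespace
rules:
- apiGroups: ["fleet.cattle.io"]
  resources: ["bundledeployments"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["fleet.cattle.io"]
  resources: ["bundledeployments/status"]
  verbs: ["update", "patch"]
```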

View File

@ -0,0 +1,267 @@
# Generating Diffs for Modified GitRepos
Continuous Delivery in Rancher is powered by Fleet. When a user adds a GitRepo CR, Continuous Delivery creates the associated Fleet bundles.
You can access these bundles by navigating to the Cluster Explorer (Dashboard UI), and selecting the `Bundles` section.
The bundled charts may have some objects that are amended at runtime, for example in a ValidatingWebhookConfiguration the `caBundle` is empty and the CA cert is injected by the cluster.
This leads to the status of the bundle and the associated GitRepo being reported as "Modified".
![](/img/ModifiedGitRepo.png)
Associated Bundle
![](/img/ModifiedBundle.png)
Fleet bundles support the ability to specify a custom [jsonPointer patch](http://jsonpatch.com/).
With the patch, users can instruct Fleet to ignore object modifications.
In this example, we are trying to deploy opa-gatekeeper to our clusters using Continuous Delivery.
The opa-gatekeeper bundle associated with the opa GitRepo is in a modified state.
Each path in the GitRepo CR has an associated Bundle CR. The user can view the Bundles and the associated diff needed in the Bundle status.
In our case, the differences detected are as follows:
```yaml
summary:
  desiredReady: 1
  modified: 1
  nonReadyResources:
  - bundleState: Modified
    modifiedStatus:
    - apiVersion: admissionregistration.k8s.io/v1
      kind: ValidatingWebhookConfiguration
      name: gatekeeper-validating-webhook-configuration
      patch: '{"$setElementOrder/webhooks":[{"name":"validation.gatekeeper.sh"},{"name":"check-ignore-label.gatekeeper.sh"}],"webhooks":[{"clientConfig":{"caBundle":"Cg=="},"name":"validation.gatekeeper.sh","rules":[{"apiGroups":["*"],"apiVersions":["*"],"operations":["CREATE","UPDATE"],"resources":["*"]}]},{"clientConfig":{"caBundle":"Cg=="},"name":"check-ignore-label.gatekeeper.sh","rules":[{"apiGroups":[""],"apiVersions":["*"],"operations":["CREATE","UPDATE"],"resources":["namespaces"]}]}]}'
    - apiVersion: apps/v1
      kind: Deployment
      name: gatekeeper-audit
      namespace: cattle-gatekeeper-system
      patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"manager"}],"containers":[{"name":"manager","resources":{"limits":{"cpu":"1000m"}}}],"tolerations":[]}}}}'
    - apiVersion: apps/v1
      kind: Deployment
      name: gatekeeper-controller-manager
      namespace: cattle-gatekeeper-system
      patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"manager"}],"containers":[{"name":"manager","resources":{"limits":{"cpu":"1000m"}}}],"tolerations":[]}}}}'
```
Based on this summary, there are three objects which need to be patched.
We will look at these one at a time.
### 1. ValidatingWebhookConfiguration:
The gatekeeper-validating-webhook-configuration validating webhook has two ValidatingWebhooks in its spec.
In cases where more than one element in a field requires a patch, the patch will refer to these as `$setElementOrder/ELEMENTNAME`.
From this information, we can see that the two ValidatingWebhooks in question are:
```
"$setElementOrder/webhooks": [
{
"name": "validation.gatekeeper.sh"
},
{
"name": "check-ignore-label.gatekeeper.sh"
}
],
```
Within each ValidatingWebhook, the fields that need to be ignored are as follows:
```
{
"clientConfig": {
"caBundle": "Cg=="
},
"name": "validation.gatekeeper.sh",
"rules": [
{
"apiGroups": [
"*"
],
"apiVersions": [
"*"
],
"operations": [
"CREATE",
"UPDATE"
],
"resources": [
"*"
]
}
]
},
```
and
```
{
"clientConfig": {
"caBundle": "Cg=="
},
"name": "check-ignore-label.gatekeeper.sh",
"rules": [
{
"apiGroups": [
""
],
"apiVersions": [
"*"
],
"operations": [
"CREATE",
"UPDATE"
],
"resources": [
"namespaces"
]
}
]
}
```
In summary, we need to ignore the fields `rules` and `clientConfig.caBundle` in our patch specification.
The field `webhooks` in the ValidatingWebhookConfiguration spec is an array, so we need to address the elements by their index values.
![](/img/WebhookConfigurationSpec.png)
Based on this information, our diff patch would look as follows:
```yaml
- apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  name: gatekeeper-validating-webhook-configuration
  operations:
  - {"op": "remove", "path":"/webhooks/0/clientConfig/caBundle"}
  - {"op": "remove", "path":"/webhooks/0/rules"}
  - {"op": "remove", "path":"/webhooks/1/clientConfig/caBundle"}
  - {"op": "remove", "path":"/webhooks/1/rules"}
```
### 2. Deployment gatekeeper-controller-manager:
The gatekeeper-controller-manager deployment is modified since there are cpu limits and tolerations applied (which are not in the actual bundle).
```
{
"spec": {
"template": {
"spec": {
"$setElementOrder/containers": [
{
"name": "manager"
}
],
"containers": [
{
"name": "manager",
"resources": {
"limits": {
"cpu": "1000m"
}
}
}
],
"tolerations": []
}
}
}
}
```
In this case, there is only one container in the deployment's container spec, and that container has CPU limits and tolerations added.
Based on this information, our diff patch would look as follows:
```yaml
- apiVersion: apps/v1
  kind: Deployment
  name: gatekeeper-controller-manager
  namespace: cattle-gatekeeper-system
  operations:
  - {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
  - {"op": "remove", "path": "/spec/template/spec/tolerations"}
```
### 3. Deployment gatekeeper-audit:
The gatekeeper-audit deployment is modified similarly to the gatekeeper-controller-manager, with additional CPU limits and tolerations applied.
```
{
"spec": {
"template": {
"spec": {
"$setElementOrder/containers": [
{
"name": "manager"
}
],
"containers": [
{
"name": "manager",
"resources": {
"limits": {
"cpu": "1000m"
}
}
}
],
"tolerations": []
}
}
}
}
```
Similar to gatekeeper-controller-manager, there is only one container in the deployment's container spec, and it has CPU limits and tolerations added.
Based on this information, our diff patch would look as follows:
```yaml
- apiVersion: apps/v1
  kind: Deployment
  name: gatekeeper-audit
  namespace: cattle-gatekeeper-system
  operations:
  - {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
  - {"op": "remove", "path": "/spec/template/spec/tolerations"}
```
### Combining It All Together
We can now combine all these patches as follows:
```yaml
diff:
  comparePatches:
  - apiVersion: apps/v1
    kind: Deployment
    name: gatekeeper-audit
    namespace: cattle-gatekeeper-system
    operations:
    - {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
    - {"op": "remove", "path": "/spec/template/spec/tolerations"}
  - apiVersion: apps/v1
    kind: Deployment
    name: gatekeeper-controller-manager
    namespace: cattle-gatekeeper-system
    operations:
    - {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
    - {"op": "remove", "path": "/spec/template/spec/tolerations"}
  - apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    name: gatekeeper-validating-webhook-configuration
    operations:
    - {"op": "remove", "path":"/webhooks/0/clientConfig/caBundle"}
    - {"op": "remove", "path":"/webhooks/0/rules"}
    - {"op": "remove", "path":"/webhooks/1/clientConfig/caBundle"}
    - {"op": "remove", "path":"/webhooks/1/rules"}
```
We can now add these patches to the bundle directly for testing, and also commit them to the `fleet.yaml` in your GitRepo.
Once they are added, the GitRepo should deploy and reach "Active" status.
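As a sketch of where this lands, the `diff` block sits at the top level of the `fleet.yaml` for the gatekeeper path (the `defaultNamespace` and `helm` settings shown here are illustrative and depend on how your bundle is actually defined):
```yaml
defaultNamespace: cattle-gatekeeper-system
helm:
  releaseName: opa-gatekeeper   # hypothetical release name
diff:
  comparePatches:
  - apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    name: gatekeeper-validating-webhook-configuration
    operations:
    - {"op": "remove", "path": "/webhooks/0/clientConfig/caBundle"}
    - {"op": "remove", "path": "/webhooks/0/rules"}
    - {"op": "remove", "path": "/webhooks/1/clientConfig/caBundle"}
    - {"op": "remove", "path": "/webhooks/1/rules"}
```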

View File

@ -0,0 +1,37 @@
# Cluster and Bundle state
Clusters and Bundles have different states in each phase of applying Bundles.
## Bundles
**Ready**: Bundles have been deployed and all resources are ready.
**NotReady**: Bundles have been deployed and some resources are not ready.
**WaitApplied**: Bundles have been synced from the Fleet controller to the downstream cluster, but are waiting to be deployed.
**ErrApplied**: Bundles have been synced from the Fleet controller to the downstream cluster, but there were some errors when deploying the Bundle.
**OutOfSync**: Bundles have been synced from the Fleet controller, but the downstream agent hasn't synced the change yet.
**Pending**: Bundles are being processed by the Fleet controller.
**Modified**: Bundles have been deployed and all resources are ready, but there are some changes that were not made from the Git Repository.
## Clusters
**WaitCheckIn**: Waiting for the agent to report registration information and cluster status back.
**NotReady**: There are bundles in this cluster that are in NotReady state.
**WaitApplied**: There are bundles in this cluster that are in WaitApplied state.
**ErrApplied**: There are bundles in this cluster that are in ErrApplied state.
**OutOfSync**: There are bundles in this cluster that are in OutOfSync state.
**Pending**: There are bundles in this cluster that are in Pending state.
**Modified**: There are bundles in this cluster that are in Modified state.
**Ready**: Bundles in this cluster have been deployed and all resources are ready.
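One way to inspect these states, assuming your `GitRepo` resources live in `fleet-local` and `kubectl` points at the Fleet manager:
```shell
# GitRepo and Bundle states are reported on the resources themselves
kubectl -n fleet-local get gitrepos,bundles
# Cluster state, including ready bundle counts
kubectl -n fleet-local get clusters.fleet.cattle.io
# Full status details for a closer look
kubectl -n fleet-local get bundles -o yaml
```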

View File

@ -0,0 +1,22 @@
# Cluster Groups
Clusters in a namespace can be put into a cluster group. A cluster group is essentially a named selector;
the selector is the only parameter a cluster group has.
Once you reach a certain scale, cluster groups become a more reasonable way to manage your clusters.
They provide an aggregated status of the deployments and a simpler way to manage targets.
A cluster group is created by creating a `ClusterGroup` resource like the one below:
```yaml
kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: production-group
  namespace: clusters
spec:
  # This is the standard metav1.LabelSelector format to match clusters by labels
  selector:
    matchLabels:
      env: prod
```

View File

@ -0,0 +1,25 @@
# Overview
There are two styles of registering clusters, referred to as **agent initiated** and
**manager initiated** registration. Typically one would go with agent initiated
registration, but there are specific use cases in which manager initiated is the
better workflow.
## Agent Initiated Registration
Agent initiated refers to a pattern in which the downstream cluster installs an agent with a
[cluster registration token](./cluster-tokens.md) and optionally a client ID. The cluster
agent will then make an API request to the Fleet manager and initiate the registration process. Using
this process, the manager will never make an outbound API request to the downstream clusters and will thus
never need to have direct network access. The downstream cluster only needs to make outbound HTTPS
calls to the manager.
## Manager Initiated Registration
Manager initiated registration is a process in which you register an existing Kubernetes cluster
with the Fleet manager and the Fleet manager will make an API call to the downstream cluster to
deploy the agent. This style can place additional network access requirements because the Fleet
manager must be able to communicate with the downstream cluster API server for the registration process.
After the cluster is registered there is no further need for the manager to contact the downstream
cluster API. This style is more compatible if you wish to manage the creation of all your Kubernetes
clusters through GitOps using something like [cluster-api](https://github.com/kubernetes-sigs/cluster-api)
or [Rancher](https://github.com/rancher/rancher).

View File

@ -0,0 +1,65 @@
# Cluster Registration Tokens
:::info
__Not needed for Manager initiated registration__:
For manager initiated registrations the token is managed by the Fleet manager and does
not need to be manually created and obtained.
:::
For an agent initiated registration the downstream cluster must have a cluster registration token.
Cluster registration tokens are used to establish a new identity for a cluster. Internally
cluster registration tokens are managed by creating Kubernetes service accounts that have the
permissions to create `ClusterRegistrationRequests` within a specific namespace. Once the
cluster is registered, a new `ServiceAccount` is created for that cluster and used as
the unique identity of the cluster. The agent is designed to forget the cluster registration
token after registration. While the agent will not maintain a reference to the cluster registration
token after a successful registration, please note that other system bootstrap scripts usually do.
Since the cluster registration token is forgotten, if you need to re-register a cluster you must
give the cluster a new registration token.
## Token TTL
Cluster registration tokens can be reused by any cluster in a namespace. The tokens can be given a TTL
such that they expire after a specific time.
## Create a new Token
The `ClusterRegistrationToken` is a namespaced type and should be created in the same namespace
in which you will create `GitRepo` and `ClusterGroup` resources. For in-depth details on how namespaces
are used in Fleet, refer to the documentation on [namespaces](./namespaces.md). Create a new
token with the YAML below.
```yaml
kind: ClusterRegistrationToken
apiVersion: "fleet.cattle.io/v1alpha1"
metadata:
  name: new-token
  namespace: clusters
spec:
  # A duration string for how long this token is valid for. A value <= 0 or null means infinite time.
  ttl: 240h
```
After the `ClusterRegistrationToken` is created, Fleet will create a corresponding `Secret` with the same name.
As the `Secret` creation is performed asynchronously, you will need to wait until it's available before using it.
One way to do so is via the following one-liner:
```shell
while ! kubectl --namespace=clusters get secret new-token; do sleep 5; done
```
## Obtaining Token Value (Agent values.yaml)
The token value contains YAML content for a `values.yaml` file that is expected to be passed to `helm install`
to install the Fleet agent on a downstream cluster.
This value is contained in the `values` field of the `Secret` mentioned above. To obtain the YAML content for the
above example, one can run the following one-liner:
```shell
kubectl --namespace clusters get secret new-token -o 'jsonpath={.data.values}' | base64 --decode > values.yaml
```
Once the `values.yaml` is ready it can be used repeatedly by clusters to register until the TTL expires.

View File

@ -0,0 +1,51 @@
# Core Concepts
Fleet is fundamentally a set of Kubernetes custom resource definitions (CRDs) and controllers
to manage GitOps for a single Kubernetes cluster or a large-scale deployment of Kubernetes clusters.
:::info
For more on the naming conventions of CRDs, click [here](./troubleshooting.md#naming-conventions-for-crds).
:::
Below are some of the concepts of Fleet that will be useful throughout this documentation:
* **Fleet Manager**: The centralized component that orchestrates the deployments of Kubernetes assets
from git. In a multi-cluster setup, this will typically be a dedicated Kubernetes cluster. In a
single cluster setup, the Fleet manager will be running on the same cluster you are managing with GitOps.
* **Fleet controller**: The controller(s) running on the Fleet manager orchestrating GitOps. In practice,
the terms Fleet manager and Fleet controller are used fairly interchangeably.
* **Single Cluster Style**: This is a style of installing Fleet in which the manager and downstream cluster are the
same cluster. This is a very simple pattern to quickly get up and running with GitOps.
* **Multi Cluster Style**: This is a style of running Fleet in which you have a central manager that manages a large
number of downstream clusters.
* **Fleet agent**: Every managed downstream cluster will run an agent that communicates back to the Fleet manager.
This agent is just another set of Kubernetes controllers running in the downstream cluster.
* **GitRepo**: Git repositories that are watched by Fleet are represented by the type `GitRepo`.
>**Example installation order via `GitRepo` custom resources when using Fleet for the configuration management of downstream clusters:**
>
> 1. Install [Calico](https://github.com/projectcalico/calico) CRDs and controllers.
> 2. Set one or multiple cluster-level global network policies.
> 3. Install [GateKeeper](https://github.com/open-policy-agent/gatekeeper). Note that **cluster labels** and **overlays** are critical features in Fleet as they determine which clusters will get each part of the bundle.
> 4. Set up and configure ingress and system daemons.
* **Bundle**: An internal unit used for the orchestration of resources from git.
When a `GitRepo` is scanned it will produce one or more bundles. Bundles are a collection of
resources that get deployed to a cluster. `Bundle` is the fundamental deployment unit used in Fleet. The
contents of a `Bundle` may be Kubernetes manifests, Kustomize configuration, or Helm charts.
Regardless of the source, the contents are dynamically rendered into a Helm chart by the agent
and installed into the downstream cluster as a Helm release.
- To see the **lifecycle of a bundle**, click [here](./examples.md#lifecycle-of-a-fleet-bundle).
* **BundleDeployment**: When a `Bundle` is deployed to a cluster, the deployed instance of that `Bundle` is called a `BundleDeployment`.
A `BundleDeployment` represents the state of that `Bundle` on a specific cluster with its cluster specific
customizations. The Fleet agent is only aware of `BundleDeployment` resources that are created for
the cluster the agent is managing.
- For an example of how to deploy Kubernetes manifests across clusters using Fleet customization, click [here](./examples.md#deploy-kubernetes-manifests-across-clusters-with-customization).
* **Downstream Cluster**: Clusters to which Fleet deploys manifests are referred to as downstream clusters. In the single cluster use case, the Fleet manager Kubernetes cluster is both the manager and downstream cluster at the same time.
* **Cluster Registration Token**: Tokens used by agents to register a new cluster.

View File

@ -0,0 +1,75 @@
# Examples
### Lifecycle of a Fleet Bundle
To demonstrate the lifecycle of a Fleet bundle, we will use [multi-cluster/helm](https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm) as a case study.
1. User will create a [GitRepo](./gitrepo-add.md#create-gitrepo-instance) that points to the multi-cluster/helm repository.
2. The `gitjob-controller` will sync changes from the GitRepo and detect changes via polling or [webhook events](./webhook.md). With every commit change, the `gitjob-controller` will create a job that clones the git repository, reads content from the repo such as `fleet.yaml` and other manifests, and creates the Fleet [bundle](./cluster-bundles-state.md#bundles).
>**Note:** The job pod with the image name `rancher/tekton-utils` will be under the same namespace as the GitRepo.
3. The `fleet-controller` then syncs changes from the bundle. According to the targets, the `fleet-controller` will create `BundleDeployment` resources, which are a combination of a bundle and a target cluster.
4. The `fleet-agent` will then pull the `BundleDeployment` from the Fleet controlplane. The agent deploys bundle manifests as a [Helm chart](https://helm.sh/docs/intro/install/) from the `BundleDeployment` into the downstream clusters.
5. The `fleet-agent` will continue to monitor the application bundle and report statuses back in the following order: BundleDeployment > Bundle > GitRepo > Cluster (see the commands below).
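To follow the objects through this lifecycle, one option (assuming the GitRepo was created in `fleet-local` and `kubectl` points at the Fleet manager) is:
```shell
# The GitRepo and the Bundles generated from it
kubectl -n fleet-local get gitrepos,bundles
# The per-cluster BundleDeployments live in each cluster's own namespace
kubectl get bundledeployments -A
```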
### Deploy Kubernetes Manifests Across Clusters with Customization
[Fleet in Rancher](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/) allows users to manage clusters easily as if they were one cluster. Users can deploy bundles, which can consist of deployment manifests or any other Kubernetes resources, across clusters using grouping configuration.
To demonstrate how to deploy Kubernetes manifests across different clusters using Fleet, we will use [multi-cluster/helm/fleet.yaml](https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm/fleet.yaml) as a case study.
**Situation:** User has three clusters with three different labels: `env=dev`, `env=test`, and `env=prod`. User wants to deploy a frontend application with a backend database across these clusters.
**Expected behavior:**
- After deploying to the `dev` cluster, database replication is not enabled.
- After deploying to the `test` cluster, database replication is enabled.
- After deploying to the `prod` cluster, database replication is enabled and Load balancer services are exposed.
**Advantage of Fleet:**
Instead of deploying the app on each cluster, Fleet allows you to deploy across all clusters following these steps:
1. Deploy a GitRepo pointing at `https://github.com/rancher/fleet-examples.git` and specify the path `multi-cluster/helm`.
2. Under `multi-cluster/helm`, a Helm chart will deploy the frontend app service and backend database service.
3. The following rule will be defined in `fleet.yaml`:
```
targetCustomizations:
- name: dev
  helm:
    values:
      replication: false
  clusterSelector:
    matchLabels:
      env: dev
- name: test
  helm:
    values:
      replicas: 3
  clusterSelector:
    matchLabels:
      env: test
- name: prod
  helm:
    values:
      serviceType: LoadBalancer
      replicas: 3
  clusterSelector:
    matchLabels:
      env: prod
```
**Result:**
Fleet will deploy the Helm chart with your customized `values.yaml` to the different clusters.
>**Note:** Configuration management is not limited to deployments but can be expanded to general configuration management. Fleet is able to apply configuration management through customization among any set of clusters automatically.
### Additional Examples
Examples using raw Kubernetes YAML, Helm charts, Kustomize, and combinations
of the three are in the [Fleet Examples repo](https://github.com/rancher/fleet-examples/).

View File

@ -0,0 +1,202 @@
# Adding a GitRepo
## Proper namespace
Git repos are added to the Fleet manager using the `GitRepo` custom resource type. The `GitRepo` type is namespaced. By default, Rancher will create two Fleet workspaces: **fleet-default** and **fleet-local**.
- `fleet-default` will contain all the downstream clusters that are already registered through Rancher.
- `fleet-local` will contain the local cluster by default.
Users can create new workspaces and move clusters across workspaces. An example of a special case might be including the local cluster in the `GitRepo` payload for config maps and secrets (no active deployments or payloads).
:::warning
While it's possible to move clusters out of either workspace, we recommend that you keep the local cluster in `fleet-local`.
:::
If you are using Fleet in a [single cluster](./concepts.md) style, the namespace will always be **fleet-local**. Check [here](https://fleet.rancher.io/namespaces/#fleet-local) for more on the `fleet-local` namespace.
For a [multi-cluster](./concepts.md) style, please ensure you use the correct repo that will map to the right target clusters.
## Create GitRepo instance
Git repositories are registered by creating a `GitRepo` resource following the YAML sample below. Refer
to the inline comments for the meaning of each field.
```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  # Any name can be used here
  name: my-repo
  # For single cluster use fleet-local, otherwise use the namespace of
  # your choosing
  namespace: fleet-local
spec:
  # This can be a HTTPS or git URL. If you are using a git URL then
  # clientSecretName will probably need to be set to supply a credential.
  # repo is the only required parameter for a repo to be monitored.
  #
  repo: https://github.com/rancher/fleet-examples
  # Enforce all resources go to this target namespace. If a cluster scoped
  # resource is found the deployment will fail.
  #
  # targetNamespace: app1
  # Any branch can be watched, this field is optional. If not specified the
  # branch is assumed to be master
  #
  # branch: master
  # A specific commit or tag can also be watched.
  #
  # revision: v0.3.0
  # For a private registry you must supply a clientSecretName. A default
  # secret can be set at the namespace level using the GitRepoRestriction
  # type. Secrets must be of the type "kubernetes.io/ssh-auth" or
  # "kubernetes.io/basic-auth". The secret is assumed to be in the
  # same namespace as the GitRepo
  #
  # clientSecretName: my-ssh-key
  #
  # If fleet.yaml contains a private Helm repo that requires authentication,
  # provide the credentials in a K8s secret and specify them here. Details are provided
  # in the fleet.yaml documentation.
  #
  # helmSecretName: my-helm-secret
  #
  # To add an additional ca-bundle for self-signed certs, caBundle can be
  # filled with base64 encoded pem data. For example:
  # `cat /path/to/ca.pem | base64 -w 0`
  #
  # caBundle: my-ca-bundle
  #
  # Disable SSL verification for git repo
  #
  # insecureSkipTLSVerify: true
  #
  # A git repo can read multiple paths in a repo at once.
  # The below field is expected to be an array of paths and
  # supports path globbing (ex: some/*/path)
  #
  # Example:
  # paths:
  # - single-path
  # - multiple-paths/*
  paths:
  - simple
  # PollingInterval configures how often fleet checks the git repo. The default
  # is 15 seconds.
  # Setting this to zero does not disable polling. It results in a 15s
  # interval, too.
  #
  # pollingInterval: 15
  # Paused causes changes in Git to not be propagated down to the clusters but
  # instead mark resources as OutOfSync
  #
  # paused: false
  # Increment this number to force a redeployment of contents from Git
  #
  # forceSyncGeneration: 0
  # The service account that will be used to perform this deployment.
  # This is the name of the service account that exists in the
  # downstream cluster in the cattle-fleet-system namespace. It is assumed
  # this service account already exists, so it should be created beforehand,
  # most likely coming from another git repo registered with
  # the Fleet manager.
  #
  # serviceAccount: moreSecureAccountThanClusterAdmin
  # Target clusters to deploy to if running Fleet in a multi-cluster
  # style. Refer to the "Mapping to Downstream Clusters" docs for
  # more information.
  #
  # targets: ...
```
## Adding private repository
Fleet supports both HTTP and SSH authentication for private repositories. To use either, you have to create a secret in the same namespace as the GitRepo.
For example, to generate a private SSH key:
```text
ssh-keygen -t rsa -b 4096 -m pem -C "user@email.com"
```
Note: The private key format has to be `EC PRIVATE KEY`, `RSA PRIVATE KEY` or `PRIVATE KEY` and should not contain a passphrase.
Put your private key into a secret, using the namespace the GitRepo is in:
```text
kubectl create secret generic ssh-key -n fleet-default --from-file=ssh-privatekey=/file/to/private/key --type=kubernetes.io/ssh-auth
```
:::caution
Private key with passphrase is not supported.
:::
:::caution
The key has to be in PEM format.
:::
Fleet supports putting `known_hosts` into the SSH secret. Here is an example of how to add it:
Fetch the public key hash (using GitHub as an example):
```text
ssh-keyscan -H github.com
```
And add it to the secret:
```text
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: <private-key>
  known_hosts: |-
    |1|YJr1VZoi6dM0oE+zkM0do3Z04TQ=|7MclCn1fLROZG+BgR4m1r8TLwWc= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
```
:::warning
If you don't add it, any server's public key will be trusted and added (`ssh -o stricthostkeychecking=accept-new` will be used).
:::
:::info
If you are using the openssh format for the private key and you are creating it in the UI, make sure a carriage return is appended at the end of the private key.
:::
### Using HTTP Auth
Create a secret containing username and password. You can replace the password with a personal access token if necessary. Also see [HTTP secrets in Github](./troubleshooting#http-secrets-in-github).
```text
kubectl create secret generic basic-auth-secret -n fleet-default --type=kubernetes.io/basic-auth --from-literal=username=$user --from-literal=password=$pat
```
Just like with SSH, reference the secret in your GitRepo resource via `clientSecretName`.
```yaml
spec:
  repo: https://github.com/fleetrepoci/gitjob-private.git
  branch: main
  clientSecretName: basic-auth-secret
```
## Troubleshooting
See Fleet Troubleshooting section [here](./troubleshooting.md).

View File

@ -0,0 +1,314 @@
# Expected Repo Structure
**The git repository has no explicitly required structure.** It is important
to realize that the scanned resources will be saved as a resource in Kubernetes, so
you want to make sure the directories you are scanning in git do not contain
arbitrarily large resources. Right now there is a limitation that the resources
deployed must **gzip to less than 1MB**.
## How repos are scanned
Multiple paths can be defined for a `GitRepo` and each path is scanned independently.
Internally each scanned path will become a [bundle](./concepts.md) that Fleet will manage,
deploy, and monitor independently.
The following files are looked for to determine how the resources will be deployed.
| File | Location | Meaning |
|------|----------|---------|
| **Chart.yaml** | / relative to `path` or custom path from `fleet.yaml` | The resources will be deployed as a Helm chart. Refer to the `fleet.yaml` for more options. |
| **kustomization.yaml** | / relative to `path` or custom path from `fleet.yaml` | The resources will be deployed using Kustomize. Refer to the `fleet.yaml` for more options. |
| **fleet.yaml** | Any subpath | If any fleet.yaml is found a new [bundle](./concepts.md) will be defined. This allows mixing charts, kustomize, and raw YAML in the same repo |
| **\*.yaml** | Any subpath | If a `Chart.yaml` or `kustomization.yaml` is not found then any `.yaml` or `.yml` file will be assumed to be a Kubernetes resource and will be deployed. |
| **overlays/{name}** | / relative to `path` | When deploying using raw YAML (not Kustomize or Helm) `overlays` is a special directory for customizations. |
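For instance, a hypothetical repository layout mixing these mechanisms (all paths and names here are purely illustrative) might look like this:
```shell
# GitRepo paths: chart/, kustomize/, manifests/
chart/Chart.yaml                                # deployed as a Helm chart
chart/fleet.yaml                                # optional customizations for this bundle
kustomize/kustomization.yaml                    # deployed using Kustomize
manifests/deployment.yaml                       # raw YAML, deployed as-is
manifests/overlays/prod/deployment_patch.yaml   # per-target overlay for the raw YAML
```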
## `fleet.yaml`
The `fleet.yaml` is an optional file that can be included in the git repository to change the behavior of how
the resources are deployed and customized. The `fleet.yaml` is always at the root relative to the `path` of the `GitRepo`,
and if a subdirectory containing a `fleet.yaml` is found, a new [bundle](./concepts.md) is defined that will then be
configured differently from the parent bundle.
:::caution
__Helm chart dependencies__:
It is up to the user to fulfill the dependency list for the Helm charts. As such, you must manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` prior to install. See the [Fleet docs](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/#helm-chart-dependencies) in Rancher for more information.
:::
### Reference
:::info
__How changes are applied to `values.yaml`__:
- Note that the most recently applied changes to the `values.yaml` will override any previously existing values.
- When changes are applied to the `values.yaml` from multiple sources at the same time, the values will update in the following order: `helmValues` -> `helm.valuesFiles` -> `helm.valuesFrom`.
:::
```yaml
# The default namespace to be applied to resources. This field is not used to
# enforce or lock down the deployment to a specific namespace, but instead
# provide the default value of the namespace field if one is not specified
# in the manifests.
# Default: default
defaultNamespace: default
# All resources will be assigned to this namespace and if any cluster scoped
# resource exists the deployment will fail.
# Default: ""
namespace: default
kustomize:
  # Use a custom folder for kustomize resources. This folder must contain
  # a kustomization.yaml file.
  dir: ./kustomize
helm:
  # Use a custom location for the Helm chart. This can refer to any go-getter URL or
  # OCI registry based helm chart URL e.g. "oci://ghcr.io/fleetrepoci/guestbook".
  # This allows one to download charts from most any location. Also know that
  # go-getter URL supports adding a digest to validate the download. If repo
  # is set below this field is the name of the chart to lookup
  chart: ./chart
  # A https URL to a Helm repo to download the chart from. It's typically easier
  # to just use `chart` field and refer to a tgz file. If repo is used the
  # value of `chart` will be used as the chart name to lookup in the Helm repository.
  repo: https://charts.rancher.io
  # A custom release name to deploy the chart as. If not specified a release name
  # will be generated.
  releaseName: my-release
  # The version of the chart or semver constraint of the chart to find. If a constraint
  # is specified it is evaluated each time git changes.
  # The version also determines which chart to download from OCI registries.
  version: 0.1.0
  # Any values that should be placed in the `values.yaml` and passed to helm during
  # install.
  values:
    any-custom: value
    # All labels on Rancher clusters are available using global.fleet.clusterLabels.LABELNAME
    # These can now be accessed directly as variables
    variableName: global.fleet.clusterLabels.LABELNAME
  # Path to any values files that need to be passed to helm during install
  valuesFiles:
    - values1.yaml
    - values2.yaml
  # Allow to use values files from configmaps or secrets
  valuesFrom:
  - configMapKeyRef:
      name: configmap-values
      # default to namespace of bundle
      namespace: default
      key: values.yaml
    secretKeyRef:
      name: secret-values
      namespace: default
      key: values.yaml
  # Override immutable resources. This could be dangerous.
  force: false
  # Set the Helm --atomic flag when upgrading
  atomic: false
# A paused bundle will not update downstream clusters but instead mark the bundle
# as OutOfSync. One can then manually confirm that a bundle should be deployed to
# the downstream clusters.
# Default: false
paused: false
rolloutStrategy:
  # A number or percentage of clusters that can be unavailable during an update
  # of a bundle. This follows the same basic approach as a deployment rollout
  # strategy. Once the number of clusters meets unavailable state update will be
  # paused. Default value is 100% which doesn't take effect on update.
  # default: 100%
  maxUnavailable: 15%
  # A number or percentage of cluster partitions that can be unavailable during
  # an update of a bundle.
  # default: 0
  maxUnavailablePartitions: 20%
  # A number or percentage of how to automatically partition clusters if no
  # specific partitioning strategy is configured.
  # default: 25%
  autoPartitionSize: 10%
  # A list of definitions of partitions. If any target clusters do not match
  # the configuration they are added to partitions at the end following the
  # autoPartitionSize.
  partitions:
    # A user-friendly name given to the partition, used for display (optional).
    # default: ""
  - name: canary
    # A number or percentage of clusters that can be unavailable in this
    # partition before this partition is treated as done.
    # default: 10%
    maxUnavailable: 10%
    # Selector matching cluster labels to include in this partition
    clusterSelector:
      matchLabels:
        env: prod
    # A cluster group name to include in this partition
    clusterGroup: agroup
    # Selector matching cluster group labels to include in this partition
    clusterGroupSelector: agroup
# Target customizations are used to determine how resources should be modified per target.
# Targets are evaluated in order and the first one to match a cluster is used for that cluster.
targetCustomizations:
  # The name of the target. If not specified a default name of the format "target000"
  # will be used. This value is mostly for display.
- name: prod
  # Custom namespace value overriding the value at the root
  namespace: newvalue
  # Custom defaultNamespace value overriding the value at the root
  defaultNamespace: newdefaultvalue
  # Custom kustomize options overriding the options at the root
  kustomize: {}
  # Custom Helm options override the options at the root
  helm: {}
  # If using raw YAML these are names that map to overlays/{name} that will be used
  # to replace or patch a resource. If you wish to customize the file ./subdir/resource.yaml
  # then a file ./overlays/myoverlay/subdir/resource.yaml will replace the base file.
  # A file named ./overlays/myoverlay/subdir/resource_patch.yaml will patch the base file.
  # A patch can be in JSON Patch or JSON Merge format or a strategic merge patch for builtin
  # Kubernetes types. Refer to "Raw YAML Resource Customization" below for more information.
  yaml:
    overlays:
    - custom2
    - custom3
  # A selector used to match clusters. The structure is the standard
  # metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
  # clusterSelector will be used only to further refine the selection after
  # clusterGroupSelector and clusterGroup is evaluated.
  clusterSelector:
    matchLabels:
      env: prod
  # A selector used to match a specific cluster by name.
  clusterName: dev-cluster
  # A selector used to match cluster groups.
  clusterGroupSelector:
    matchLabels:
      region: us-east
  # A specific clusterGroup by name that will be selected
  clusterGroup: group1
# dependsOn allows you to configure dependencies to other bundles. The current bundle
# will only be deployed, after all dependencies are deployed and in a Ready state.
dependsOn:
  # Format: <GITREPO-NAME>-<BUNDLE_PATH> with all path separators replaced by "-"
  # Example: GitRepo name "one", Bundle path "/multi-cluster/hello-world" => "one-multi-cluster-hello-world"
- name: one-multi-cluster-hello-world
```
:::info
For a private Helm repo, users can reference a secret with the following keys:
1. `username` and `password` for basic http auth if the Helm HTTP repo is behind basic auth.
2. `cacerts` for custom CA bundle if the Helm repo is using a custom CA.
3. `ssh-privatekey` for the ssh private key if the repo is using the ssh protocol. A private key with a passphrase is not supported currently.
For example, to add a secret in kubectl, run
`kubectl create secret -n $namespace generic helm --from-literal=username=foo --from-literal=password=bar --from-file=cacerts=/path/to/cacerts --from-file=ssh-privatekey=/path/to/privatekey.pem`
After the secret is created, specify it in `gitRepo.spec.helmSecretName`. Make sure the secret is created in the same namespace as the GitRepo.
:::
### Using ValuesFrom
These examples showcase the style and format for using `valuesFrom`.
Example [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-values
  namespace: default
data:
  values.yaml: |-
    replication: true
    replicas: 2
    serviceType: NodePort
```
Example [Secret](https://kubernetes.io/docs/concepts/configuration/secret/):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-values
  namespace: default
stringData:
  values.yaml: |-
    replication: true
    replicas: 2
    serviceType: NodePort
```
## Per Cluster Customization
The `GitRepo` defines which clusters a git repository should be deployed to and the `fleet.yaml` in the repository
determines how the resources are customized per target.
All clusters and cluster groups in the same namespace as the `GitRepo` will be evaluated against all targets of that
`GitRepo`. The targets list is evaluated one by one and if there is a match the resource will be deployed to the cluster.
If no match is made against the target list on the `GitRepo` then the resources will not be deployed to that cluster.
Once a target cluster is matched the `fleet.yaml` from the git repository is then consulted for customizations. The
`targetCustomizations` in the `fleet.yaml` will be evaluated one by one and the first match will define how the
resource is to be configured. If no match is made the resources will be deployed with no additional customizations.
There are three approaches to matching clusters for both `GitRepo` `targets` and `fleet.yaml` `targetCustomizations`.
One can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so
the final match is evaluated as "clusterSelector && clusterGroupSelector && clusterGroup". If any of the three have the
default value it is dropped from the criteria. The default value is either null or "". It is important to realize
that the value `{}` for a selector means "match everything."
```yaml
# Match everything
clusterSelector: {}
# Selector ignored
clusterSelector: null
```
## Raw YAML Resource Customization
When using Kustomize or Helm, the `kustomization.yaml` or the `helm.values` will control how the resources are
customized per target cluster. If you are using raw YAML then the following simple mechanism is built-in and can
be used. The `overlays/` folder in the git repo is treated specially as a folder containing folders that
can be selected to overlay on top per target cluster. The resource overlay content
uses a file name based approach. This is different from kustomize which uses a resource based approach. In kustomize
the resource Group, Kind, Version, Name, and Namespace identify resources and are then merged or patched. For Fleet
the overlay resources will override or patch content with a matching file name.
```shell
# Base files
deployment.yaml
svc.yaml
# Overlay files
# The following file will be added
overlays/custom/configmap.yaml
# The following file will replace svc.yaml
overlays/custom/svc.yaml
# The following file will patch deployment.yaml
overlays/custom/deployment_patch.yaml
```
A file named `foo` will replace a file called `foo` from the base resources or a previous overlay. In order to patch
the contents of a file, the convention of adding `_patch.` (notice the trailing period) to the filename is used. The string `_patch.`
will be replaced with `.` from the file name and that will be used as the target. For example `deployment_patch.yaml`
will target `deployment.yaml`. The patch will be applied using JSON Merge, Strategic Merge Patch, or JSON Patch.
Which strategy is used is based on the file content. Even though JSON strategies are used, the files can be written
using YAML syntax.
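For example, a hypothetical `overlays/custom/deployment_patch.yaml` that only overrides the replica count of the base `deployment.yaml` could be a partial manifest that is merged over the base file:
```yaml
# Only the fields listed here are merged into deployment.yaml; everything else
# in the base file is left untouched. The deployment name is illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
```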
## Cluster and Bundle state
See [Cluster and Bundle state](./cluster-bundles-state.md).

View File

@ -0,0 +1,79 @@
# Mapping to Downstream Clusters
:::info
__Multi-cluster Only__:
This approach only applies if you are running Fleet in a multi-cluster style
:::
When deploying `GitRepos` to downstream clusters the clusters must be mapped to a target.
## Defining targets
The deployment targets of a `GitRepo` are defined using the `spec.targets` field to
match clusters or cluster groups. The YAML specification is as below.
```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: myrepo
  namespace: clusters
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple
  # Targets are evaluated in order and the first one to match is used. If
  # no targets match then the evaluated cluster will not be deployed to.
  targets:
    # The name of target. This value is largely for display and logging.
    # If not specified a default name of the format "target000" will be used
  - name: prod
    # A selector used to match clusters. The structure is the standard
    # metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
    # clusterSelector will be used only to further refine the selection after
    # clusterGroupSelector and clusterGroup is evaluated.
    clusterSelector:
      matchLabels:
        env: prod
    # A selector used to match cluster groups.
    clusterGroupSelector:
      matchLabels:
        region: us-east
    # A specific clusterGroup by name that will be selected
    clusterGroup: group1
```
## Target Matching
All clusters and cluster groups in the same namespace as the `GitRepo` will be evaluated against all targets.
If any of the targets match the cluster then the `GitRepo` will be deployed to the downstream cluster. If
no match is made, then the `GitRepo` will not be deployed to that cluster.
There are three approaches to matching clusters:
one can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so
the final match is evaluated as "clusterSelector && clusterGroupSelector && clusterGroup". If any of the three has its
default value it is dropped from the criteria. The default value is either null or "". It is important to realize
that the value `{}` for a selector means "match everything."
```yaml
# Match everything
clusterSelector: {}
# Selector ignored
clusterSelector: null
```
## Default target
If no target is set for the `GitRepo` then the default targets value is applied. The default targets value is as below.
```yaml
targets:
- name: default
clusterGroup: default
```
This means that if you wish to set up a default location for GitRepos that have no explicit targets, just create a cluster group called `default`
and add clusters to it.
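As an illustrative sketch (the namespace and label used here are assumptions), creating such a `default` cluster group could look like this:
```yaml
kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: default
  namespace: clusters
spec:
  # Any cluster in this namespace carrying this label joins the default group
  selector:
    matchLabels:
      default-group: "true"
```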

View File

@ -0,0 +1,115 @@
# Image scan
Image scanning in Fleet allows you to scan your image repository, fetch the desired image, and update your git repository,
without the need to manually update your manifests.
:::caution
This is considered an experimental feature.
:::
Go to `fleet.yaml` and add the following section.
```yaml
imageScans:
# specify the policy to retrieve images, can be semver or alphabetical order
- policy:
# if range is specified, it will take the latest image according to semver order in the range
# for more details on how to use semver, see https://github.com/Masterminds/semver
semver:
range: "*"
# can use ascending or descending order
alphabetical:
order: asc
# specify images to scan
image: "your.registry.com/repo/image"
# Specify the tag name, it has to be unique in the same bundle
tagName: test-scan
# specify secret to pull image if in private registry
secretRef:
name: dockerhub-secret
# Specify the scan interval
interval: 5m
```
:::info
You can create multiple image scans in fleet.yaml.
:::
Go to your manifest files and update the field that you want to replace. For example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-slave
spec:
selector:
matchLabels:
app: redis
role: slave
tier: backend
replicas: 2
template:
metadata:
labels:
app: redis
role: slave
tier: backend
spec:
containers:
- name: slave
image: <image>:<tag> # {"$imagescan": "test-scan"}
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
```
:::note
There are multiple forms of tagName you can reference. For example:
- `{"$imagescan": "test-scan"}`: use the full image name (foo/bar:tag)
- `{"$imagescan": "test-scan:name"}`: use only the image name without the tag (foo/bar)
- `{"$imagescan": "test-scan:tag"}`: use only the image tag
- `{"$imagescan": "test-scan:digest"}`: use the full image name with its digest (foo/bar:tag@sha256...)
:::
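For example, a hedged sketch of a Helm values file in the same repo where the image name and tag are tracked separately (the value keys are assumptions specific to your chart):
```yaml
# values.yaml (illustrative): both fields are driven by the "test-scan"
# image scan defined in fleet.yaml above
image:
  repository: your.registry.com/repo/image # {"$imagescan": "test-scan:name"}
  tag: v1.0.0 # {"$imagescan": "test-scan:tag"}
```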
Create a GitRepo that includes your fleet.yaml
```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: my-repo
namespace: fleet-local
spec:
# change this to be your own repo
repo: https://github.com/rancher/fleet-examples
  # define how often to scan all the images and decide whether to apply the change
imageScanInterval: 5m
  # the user must provide a secret that has write access to the git repository
clientSecretName: secret
# specify the commit pattern
imageScanCommit:
authorName: foo
authorEmail: foo@bar.com
messageTemplate: "update image"
```
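The `clientSecretName` above must reference a secret with write access to the git repository. A minimal sketch using a basic-auth secret (the name `secret` matches the example above; the credentials are placeholders):
```shell
kubectl create secret generic secret -n fleet-local \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=my-git-user \
  --from-literal=password=token-with-write-access
```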
Try pushing a new image tag, for example, `<image>:<new-tag>`. After a while there should be a new commit pushed to your git repository that changes the tag in deployment.yaml.
Once the change lands in the git repository, Fleet will pick it up and deploy it to your cluster.

View File

@ -0,0 +1,13 @@
# Overview
![](/img/arch.png)
### What is Fleet?
- **Cluster engine**: Fleet is a container management and deployment engine designed to offer users more control on the local cluster and constant monitoring through **GitOps**. Fleet focuses not only on the ability to scale, but it also gives users a high degree of control and visibility to monitor exactly what is installed on the cluster.
- **Deployment management**: Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy all resources in the cluster. As a result, users have a high degree of control, consistency, and auditability.
### Configuration Management
Fleet is fundamentally a set of Kubernetes [custom resource definitions (CRDs)](https://fleet.rancher.io/concepts/) and controllers that manage GitOps for a single Kubernetes cluster or a large scale deployment of Kubernetes clusters. It is a distributed initialization system that makes it easy to customize applications and manage HA clusters from a single point.

View File

@ -0,0 +1,9 @@
# Installation
The installation is broken up into two different use cases: [Single](./single-cluster-install.md) and
[Multi-Cluster](./multi-cluster-install.md) install. The single cluster install is for when you wish to use GitOps to manage a single cluster,
in which case you do not need a centralized manager cluster. In the multi-cluster use case
you will set up a centralized manager cluster to which you can register clusters.
If you are just learning Fleet, the single cluster install is the recommended starting
point; you can move from a single cluster to a multi-cluster setup down the line.

View File

@ -0,0 +1,46 @@
# Manager Initiated
Refer to the [overview page](./cluster-overview.md#manager-initiated-registration) for background information on the manager initiated registration style.
## Kubeconfig Secret
The manager initiated registration flow is accomplished by creating a
`Cluster` resource in the Fleet Manager that refers to a Kubernetes
`Secret` containing a valid kubeconfig file in the data field called `value`.
The format of this secret is intended to match the [format](https://cluster-api.sigs.k8s.io/developer/architecture/controllers/cluster.html#secrets)
of the kubeconfig
secret used in [cluster-api](https://github.com/kubernetes-sigs/cluster-api).
This means you can use `cluster-api` to create a cluster that is dynamically
registered with Fleet.
## Example
### Kubeconfig Secret
```yaml
kind: Secret
apiVersion: v1
metadata:
name: my-cluster-kubeconfig
namespace: clusters
data:
value: YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIHNlcnZlcjogaHR0cHM6Ly9leGFtcGxlLmNvbTo2NDQzCiAgbmFtZTogY2x1c3Rlcgpjb250ZXh0czoKLSBjb250ZXh0OgogICAgY2x1c3RlcjogY2x1c3RlcgogICAgdXNlcjogdXNlcgogIG5hbWU6IGRlZmF1bHQKY3VycmVudC1jb250ZXh0OiBkZWZhdWx0CmtpbmQ6IENvbmZpZwpwcmVmZXJlbmNlczoge30KdXNlcnM6Ci0gbmFtZTogdXNlcgogIHVzZXI6CiAgICB0b2tlbjogc29tZXRoaW5nCg==
```
### Cluster
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
name: my-cluster
namespace: clusters
labels:
demo: "true"
env: dev
spec:
kubeConfigSecret: my-cluster-kubeconfig
```

View File

@ -0,0 +1,162 @@
# Multi-cluster Install
![](/img/arch.png)
**Note:** Downstream clusters in Rancher are automatically registered in Fleet. Users can access Fleet under `Continuous Delivery` on Rancher.
**Warning:** The multi-cluster install described below is **only** covered in standalone Fleet, which is untested by Rancher QA.
In the below use case, you will setup a centralized Fleet manager. The centralized Fleet manager is a
Kubernetes cluster running the Fleet controllers. After installing the Fleet manager, you will then
need to register remote downstream clusters with the Fleet manager.
## Prerequisites
### Helm 3
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and is
fairly straightforward. To install the Helm 3 CLI follow the
[official install instructions](https://helm.sh/docs/intro/install/). The TL;DR is:
macOS
```
brew install helm
```
Windows
```
choco install kubernetes-helm
```
### Kubernetes
The Fleet manager is a controller running on a Kubernetes cluster, so an existing cluster is required. All
downstream clusters that will be managed will need to communicate with this central Kubernetes cluster. This
means the Kubernetes API server URL must be accessible to the downstream clusters. Any community
supported version of Kubernetes will work; in practice this means 1.15 or greater.
## API Server URL and CA certificate
In order for your Fleet management installation to work, it is important
that the correct API server URL and CA certificate are configured. The Fleet agents
will communicate with the Kubernetes API server URL. This means the Kubernetes
API server must be accessible to the downstream clusters. You will also need
to obtain the CA certificate of the API server. The easiest way to obtain this information
is typically from your kubeconfig file (`${HOME}/.kube/config`). The `server`,
`certificate-authority-data`, or `certificate-authority` fields will have these values.
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTi...
server: https://example.com:6443
```
Please note that the `certificate-authority-data` field is base64 encoded and will need to be
decoded before you save it into a file. This can be done by saving the base64 encoded contents to
a file and then running
```shell
base64 -d encoded-file > ca.pem
```
If you have `jq` and `base64` available, then this one-liner will pull all CA certificates from your
`KUBECONFIG` and place them in a file named `ca.pem`.
```shell
kubectl config view -o json --raw | jq -r '.clusters[].cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
If you have a multi-cluster setup, you can use this command:
```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
## Install
In the following example it is assumed that the API server URL from the `KUBECONFIG` is `https://example.com:6443`
and that the CA certificate is in the file `ca.pem`. If your API server certificate is signed by a well-known CA you can
omit the `apiServerCA` parameter below or just create an empty `ca.pem` file (i.e. `touch ca.pem`).
Run the following commands
Setup the environment with your specific values.
```shell
API_SERVER_URL="https://example.com:6443"
API_SERVER_CA="ca.pem"
```
If you have a multi-cluster setup, you can use this command:
```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
API_SERVER_URL=$(kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["server"]')
# Leave empty if your API server is signed by a well known CA
API_SERVER_CA="ca.pem"
```
First validate the server URL is correct.
```shell
curl -fLk ${API_SERVER_URL}/version
```
The output of this command should be JSON with the version of the Kubernetes server or a `401 Unauthorized` error.
If you do not get either of these results then please ensure you have the correct URL. The API server port is typically
6443 for Kubernetes.
Next validate that the CA certificate is proper by running the below command. If your API server is signed by a
well known CA then omit the `--cacert ${API_SERVER_CA}` part of the command.
```shell
curl -fL --cacert ${API_SERVER_CA} ${API_SERVER_URL}/version
```
If you get a valid JSON response or a `401 Unauthorized` then it worked. The Unauthorized error is
only because the curl command is not setting proper credentials, but this validates that the TLS
connection works and the `ca.pem` is correct for this URL. If you get an `SSL certificate problem` error then
the `ca.pem` is not correct. The contents of the `${API_SERVER_CA}` file should look similar to the below:
```
-----BEGIN CERTIFICATE-----
MIIBVjCB/qADAgECAgEAMAoGCCqGSM49BAMCMCMxITAfBgNVBAMMGGszcy1zZXJ2
ZXItY2FAMTU5ODM5MDQ0NzAeFw0yMDA4MjUyMTIwNDdaFw0zMDA4MjMyMTIwNDda
MCMxITAfBgNVBAMMGGszcy1zZXJ2ZXItY2FAMTU5ODM5MDQ0NzBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABDXlQNkXnwUPdbSgGz5Rk6U9ldGFjF6y1YyF36cNGk4E
0lMgNcVVD9gKuUSXEJk8tzHz3ra/+yTwSL5xQeLHBl+jIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0cAMEQCIFMtZ5gGDoDs
ciRyve+T4xbRNVHES39tjjup/LuN4tAgAiAteeB3jgpTMpZyZcOOHl9gpZ8PgEcN
KDs/pb3fnMTtpA==
-----END CERTIFICATE-----
```
Once you have validated the API server URL and API server CA parameters, install the following two
Helm charts.
First install the Fleet CustomResourceDefinitions.
```shell
helm -n cattle-fleet-system install --create-namespace --wait fleet-crd https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-crd-0.4.0.tgz
```
Second install the Fleet controllers.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
--set apiServerURL="${API_SERVER_URL}" \
--set-file apiServerCA="${API_SERVER_CA}" \
fleet https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-0.4.0.tgz
```
Fleet should be ready to use. You can check the status of the Fleet controller pods by running the below commands.
```shell
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```
NAME READY STATUS RESTARTS AGE
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
```
At this point the Fleet manager should be ready. You can now [register clusters](./cluster-overview.md) and [git repos](./gitrepo-add.md) with
the Fleet manager.

View File

@ -0,0 +1,108 @@
# Namespaces
All types in the Fleet manager are namespaced. The namespaces of the manager types do not correspond to the namespaces
of the deployed resources in the downstream cluster. Understanding how namespaces are used in the Fleet manager is
important to understanding the security model and how one can use Fleet in a multi-tenant fashion.
## GitRepos, Bundles, Clusters, ClusterGroups
The primary types are all scoped to a namespace. All selectors for `GitRepo` targets will be evaluated against
the `Clusters` and `ClusterGroups` in the same namespace. This means that if you give `create` or `update` privileges
for the `GitRepo` type in a namespace, that end user can modify the selector to match any cluster in that namespace.
In practice, this means that if you want two teams to self-manage their own `GitRepo` registrations but they should
not be able to target each other's clusters, they should be in different namespaces.
## Namespace Creation Behavior in Bundles
When deploying a Fleet bundle, the specified namespace will automatically be created if it does not already exist.
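For example, given a minimal `fleet.yaml` sketch like the following (the namespace name is an arbitrary assumption), the `my-team-apps` namespace will be created on the target cluster if it does not already exist:
```yaml
# fleet.yaml
defaultNamespace: my-team-apps
```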
## Special Namespaces
### fleet-local
The **fleet-local** namespace is a special namespace used for the single cluster use case or to bootstrap
the configuration of the Fleet manager.
When fleet is installed the `fleet-local` namespace is created along with one `Cluster` called `local` and one
`ClusterGroup` called `default`. If no targets are specified on a `GitRepo`, it is by default targeted to the
`ClusterGroup` named `default`. This means that all `GitRepos` created in `fleet-local` will
automatically target the `local` `Cluster`. The `local` `Cluster` refers to the cluster the Fleet manager is running
on.
**Note:** If you would like to migrate your cluster from `fleet-local` to `default`, please see this [documentation](./troubleshooting.md#migrate-the-local-cluster-to-the-fleet-default-cluster).
### cattle-fleet-system
The Fleet controller and Fleet agent run in this namespace. All service accounts referenced by `GitRepos` are expected
to live in this namespace in the downstream cluster.
### cattle-fleet-clusters-system
This namespace holds secrets for the cluster registration process. It should contain no other resources,
especially no other secrets.
### Cluster namespaces
For every cluster that is registered, a namespace is created by the Fleet manager for that cluster.
These namespaces are named in the form `cluster-${namespace}-${cluster}-${random}`. The purpose of this
namespace is that all `BundleDeployments` for that cluster are put into this namespace and
then the downstream cluster is given access to watch and update `BundleDeployments` in that namespace only.
## Cross namespace deployments
It is possible to create a GitRepo that will deploy across namespaces. The primary purpose of this is so that a
central privileged team can manage common configuration for many clusters that are managed by different teams. The way
this is accomplished is by creating a `BundleNamespaceMapping` resource in a cluster.
If you are creating a `BundleNamespaceMapping` resource it is best to do it in a namespace that only contains `GitRepos`
and no `Clusters`. It can get confusing if you have `Clusters` in the same namespace, as the cross-namespace `GitRepos` will still
always be evaluated against the current namespace. So if you have clusters in the same namespace you may wish to make them
canary clusters.
A `BundleNamespaceMapping` has only two fields, which are shown below.
```yaml
kind: BundleNamespaceMapping
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: not-important
namespace: typically-unique
# Bundles to match by label. The labels are defined in the fleet.yaml
# labels field or from the GitRepo metadata.labels field
bundleSelector:
matchLabels:
foo: bar
# Namespaces to match by label
namespaceSelector:
matchLabels:
foo: bar
```
If the `BundleNamespaceMapping`'s `bundleSelector` field matches a `Bundle`'s labels, then that `Bundle`'s target criteria will
be evaluated against all clusters in all namespaces that match `namespaceSelector`. One can specify labels for the created
bundles from git by putting labels in the `fleet.yaml` file or on the `metadata.labels` field of the `GitRepo`.
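As a hedged sketch, a `fleet.yaml` in the mapped git repository would then carry the matching label (the label is just the illustrative value from the example above):
```yaml
# fleet.yaml
labels:
  foo: bar
```
The namespaces containing the target clusters would need the same `foo: bar` label to satisfy the `namespaceSelector`.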
## Restricting GitRepos
A namespace can contain multiple `GitRepoRestriction` resources. All `GitRepos`
created in that namespace will be checked against the list of restrictions.
If a `GitRepo` violates one of the constraints its `BundleDeployment` will be
in an error state and won't be deployed.
This can also be used to set the defaults for GitRepo's `serviceAccount` and `clientSecretName` fields.
```
kind: GitRepoRestriction
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: restriction
namespace: typically-unique
spec:
allowedClientSecretNames: []
allowedRepoPatterns: []
allowedServiceAccounts: []
defaultClientSecretName: ""
defaultServiceAccount: ""
```
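For illustration, a hypothetical restriction that only allows repositories from a single GitHub organization and pins the service account could look like this (all names and patterns are assumptions):
```yaml
kind: GitRepoRestriction
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: team-a-restriction
  namespace: team-a
spec:
  # Only GitRepos pointing at this organization are allowed
  allowedRepoPatterns:
    - "https://github.com/my-org/.*"
  # Every GitRepo in this namespace must use this service account
  allowedServiceAccounts:
    - team-a-deployer
  defaultServiceAccount: team-a-deployer
```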

View File

@ -0,0 +1,64 @@
# Quick Start
Who needs documentation, let's just run this thing!
## Install
Get helm if you don't have it. Helm 3 is just a CLI and won't do bad insecure
things to your cluster.
```
brew install helm
```
Install the Fleet Helm charts (there are two because we separate out CRDs for ultimate flexibility).
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
fleet-crd https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-crd-v0.4.0.tgz
helm -n cattle-fleet-system install --create-namespace --wait \
fleet https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-v0.4.0.tgz
```
## Add a Git Repo to watch
Change `spec.repo` to your git repo of choice. Kubernetes manifest files that should
be deployed should be in `/manifests` in your repo.
```bash
cat > example.yaml << "EOF"
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
name: sample
# This namespace is special and auto-wired to deploy to the local cluster
namespace: fleet-local
spec:
  # Everything from this repo will be run in this cluster. You trust me, right?
repo: "https://github.com/rancher/fleet-examples"
paths:
- simple
EOF
kubectl apply -f example.yaml
```
## Get Status
Get the status of what Fleet is doing:
```shell
kubectl -n fleet-local get fleet
```
You should see something like this get created in your cluster.
```
kubectl get deploy frontend
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 3/3 3 3 116m
```
Enjoy and read the [docs](https://rancher.github.io/fleet).

View File

@ -0,0 +1,62 @@
# Single Cluster Install
![](/img/single-cluster.png)
In this use case you have only one cluster. The cluster will run both the Fleet
manager and the Fleet agent. The cluster will communicate with Git server to
deploy resources to this local cluster. This is the simplest setup and very
useful for dev/test and small scale setups. This use case is supported as a valid
use case for production.
## Prerequisites
### Helm 3
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and is
fairly straightforward. To install the Helm 3 CLI follow the
[official install instructions](https://helm.sh/docs/intro/install/). The TL;DR is:
macOS
```
brew install helm
```
Windows
```
choco install kubernetes-helm
```
### Kubernetes
Fleet is a controller running on a Kubernetes cluster, so an existing cluster is required. For the
single cluster use case you will install Fleet to the cluster which you intend to manage with GitOps.
Any community supported version of Kubernetes will work; in practice this means 1.15 or greater.
## Install
Install the following two Helm charts.
First install the Fleet CustomResourceDefinitions.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
fleet-crd https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-crd-0.4.0.tgz
```
Second install the Fleet controllers.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
fleet https://github.com/rancher/fleet/releases/download/v0.4.0/fleet-0.4.0.tgz
```
Fleet should now be ready to use for the single cluster use case. You can check the status of the Fleet controller pods by
running the below commands.
```shell
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```
NAME READY STATUS RESTARTS AGE
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
```
You can now [register some git repos](./gitrepo-add.md) in the `fleet-local` namespace to start deploying Kubernetes resources.

View File

@ -0,0 +1,226 @@
# Troubleshooting
This section contains commands and tips to troubleshoot Fleet.
## **How Do I...**
### Fetch the log from `fleet-controller`?
In the local management cluster where the `fleet-controller` is deployed, run the following command with your specific `fleet-controller` pod name filled in:
```
$ kubectl logs -l app=fleet-controller -n cattle-fleet-system
```
### Fetch the log from the `fleet-agent`?
Go to each downstream cluster and run the following command for the local cluster with your specific `fleet-agent` pod name filled in:
```
# Downstream cluster
$ kubectl logs -l app=fleet-agent -n cattle-fleet-system
# Local cluster
$ kubectl logs -l app=fleet-agent -n cattle-local-fleet-system
```
### Fetch detailed error logs from `GitRepos` and `Bundles`?
Normally, errors should appear in the Rancher UI. However, if there is not enough information displayed about the error there, you can research further by trying one or more of the following as needed:
- For more information about the bundle, click on `bundle`, and the YAML mode will be enabled.
- For more information about the GitRepo, click on `GitRepo`, then click on `View Yaml` in the upper right of the screen. After viewing the YAML, check `status.conditions`; a detailed error message should be displayed here.
- Check the `fleet-controller` for syncing errors.
- Check the `fleet-agent` log in the downstream cluster if you encounter issues when deploying the bundle.
### Check a chart rendering error in `Kustomize`?
Check the [`fleet-controller` logs](./troubleshooting.md#fetch-the-log-from-fleet-controller) and the [`fleet-agent` logs](./troubleshooting.md#fetch-the-log-from-the-fleet-agent).
### Check errors about watching or checking out the `GitRepo`, or about the downloaded Helm repo in `fleet.yaml`?
Check the `gitjob-controller` logs using the following command with your specific `gitjob` pod name filled in:
```
$ kubectl logs -f $gitjob-pod-name -n cattle-fleet-system
```
Note that there are two containers inside the pod: the `step-git-source` container that clones the git repo, and the `fleet` container that applies bundles based on the git repo.
The pods will usually have images named `rancher/tekton-utils` with the `gitRepo` name as a prefix. Check the logs for these Kubernetes job pods in the local management cluster as follows, filling in your specific `gitRepoName` pod name and namespace:
```
$ kubectl logs -f $gitRepoName-pod-name -n namespace
```
### Check the status of the `fleet-controller`?
You can check the status of the `fleet-controller` pods by running the commands below:
```bash
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```bash
NAME READY STATUS RESTARTS AGE
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
```
### Migrate the local cluster to the Fleet default cluster?
For users who want to deploy to the local cluster as well, they may move the cluster from `fleet-local` to `fleet-default` in the Rancher UI as follows:
- To get to Fleet in Rancher, click ☰ > Continuous Delivery.
- Under the **Clusters** menu, select the **local** cluster by checking the box to the left.
- Select **Assign to** from the tabs above the cluster.
- Select **`fleet-default`** from the **Assign Cluster To** dropdown.
**Result**: The cluster will be migrated to `fleet-default`.
### Enable debug logging for `fleet-controller` and `fleet-agent`?
The ability to enable debug logging is available in Rancher v2.6.3 (Fleet v0.3.8) and later.
- Go to the **Dashboard**, then click on the **local cluster** in the left navigation menu
- Select **Apps & Marketplace**, then **Installed Apps** from the dropdown
- From there, you will upgrade the Fleet chart with the value `debug=true`. You can also set `debugLevel=5` if desired.
## **Additional Solutions for Other Fleet Issues**
### Naming conventions for CRDs
1. For CRD terms like `clusters` and `gitrepos`, you must reference the full CRD name. For example, the cluster CRD's complete name is `cluster.fleet.cattle.io`, and the gitrepo CRD's complete name is `gitrepo.fleet.cattle.io`. See the example commands after this list.
1. `Bundles`, which are created from the `GitRepo`, follow the pattern `$gitrepoName-$path` in the same workspace/namespace where the `GitRepo` was created. Note that `$path` is the path directory in the git repository that contains the `bundle` (`fleet.yaml`).
1. `BundleDeployments`, which are created from the `bundle`, follow the pattern `$bundleName-$clusterName` in the namespace `clusters-$workspace-$cluster-$generateHash`. Note that `$clusterName` is the cluster to which the bundle will be deployed.
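For example, the full CRD names can be used directly with `kubectl` (the `fleet-default` workspace namespace is an assumption; adjust it to your workspace):
```shell
kubectl -n fleet-default get clusters.fleet.cattle.io
kubectl -n fleet-default get gitrepos.fleet.cattle.io
kubectl -n fleet-default get bundles.fleet.cattle.io
```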
### HTTP secrets in Github
When testing Fleet with private git repositories, you will notice that HTTP secrets are no longer supported in Github. To work around this issue, follow these steps:
1. Create a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) in Github.
1. In Rancher, create an HTTP [secret](https://rancher.com/docs/rancher/v2.6/en/k8s-in-rancher/secrets/) with your Github username.
1. Use your token as the secret. A `kubectl` sketch for creating such a secret follows this list.
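If you prefer the CLI over the Rancher UI, a minimal sketch for creating such a secret could look like this (secret name, namespace, and credentials are placeholders):
```shell
kubectl create secret generic github-http-auth -n fleet-default \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=my-github-user \
  --from-literal=password=my-personal-access-token
```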
### Fleet fails with bad response code: 403
If your GitJob returns the error below, the problem may be that Fleet cannot access the Helm repo you specified in your [`fleet.yaml`](./gitrepo-structure.md):
```
time="2021-11-04T09:21:24Z" level=fatal msg="bad response code: 403"
```
Perform the following steps to assess:
- Check that your repo is accessible from your dev machine, and that you can download the Helm chart successfully
- Check that your credentials for the git repo are valid
### Helm chart repo: certificate signed by unknown authority
If your GitJob returns the error below, you may have added the wrong certificate chain:
```
time="2021-11-11T05:55:08Z" level=fatal msg="Get \"https://helm.intra/virtual-helm/index.yaml\": x509: certificate signed by unknown authority"
```
Please verify your certificate with the following command:
```bash
context=playground-local
kubectl get secret -n fleet-default helm-repo -o jsonpath="{['data']['cacerts']}" --context $context | base64 -d | openssl x509 -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
7a:1e:df:79:5f:b0:e0:be:49:de:11:5e:d9:9c:a9:71
Signature Algorithm: sha512WithRSAEncryption
Issuer: C = CH, O = MY COMPANY, CN = NOP Root CA G3
...
```
### Fleet deployment stuck in modified state
When you deploy bundles with Fleet, some of the components may be modified at runtime, and this causes the bundle to be marked as "modified" in the Fleet environment.
To ignore the modified flag for the differences between the Helm install generated by `fleet.yaml` and the resource in your cluster, add a `diff.comparePatches` to the `fleet.yaml` for your Deployment, as shown in this example:
```yaml
defaultNamespace: <namespace name>
helm:
releaseName: <release name>
repo: <repo name>
chart: <chart name>
diff:
comparePatches:
- apiVersion: apps/v1
kind: Deployment
operations:
- {"op":"remove", "path":"/spec/template/spec/hostNetwork"}
- {"op":"remove", "path":"/spec/template/spec/nodeSelector"}
jsonPointers: # jsonPointers allows to ignore diffs at certain json path
- "/spec/template/spec/priorityClassName"
- "/spec/template/spec/tolerations"
```
To determine which operations should be removed, observe the logs from `fleet-agent` on the target cluster. You should see entries similar to the following:
```text
level=error msg="bundle monitoring-monitoring: deployment.apps monitoring/monitoring-monitoring-kube-state-metrics modified {\"spec\":{\"template\":{\"spec\":{\"hostNetwork\":false}}}}"
```
Based on the above log, you can add the following entry to remove the operation:
```json
{"op":"remove", "path":"/spec/template/spec/hostNetwork"}
```
### `GitRepo` or `Bundle` stuck in modified state
**Modified** means that there is a mismatch between the actual state and the desired state, the source of truth, which lives in the git repository.
1. Check the [bundle diffs documentation](./bundle-diffs.md) for more information.
1. You can also force update the `gitrepo` to perform a manual resync. Select **GitRepo** on the left navigation bar, then select **Force Update**.
### Bundle has a Horizontal Pod Autoscaler (HPA) in modified state
For bundles with an HPA, the expected state is `Modified`, as the bundle contains fields that differ from the state of the Bundle at deployment - usually `ReplicaSet`.
You must define a patch in the `fleet.yaml` to ignore this field according to [`GitRepo` or `Bundle` stuck in modified state](#gitrepo-or-bundle-stuck-in-modified-state).
Here is an example of such a patch for the deployment `nginx` in namespace `default`:
```yaml
diff:
comparePatches:
- apiVersion: apps/v1
kind: Deployment
name: nginx
namespace: default
operations:
- {"op": "remove", "path": "/spec/replicas"}
```
### What if the cluster is unavailable, or is in a `WaitCheckIn` state?
You will need to re-import and restart the registration process: select **Cluster** on the left navigation bar, then select **Force Update**.
:::caution
__WaitCheckIn status for Rancher v2.5__:
The cluster will show in `WaitCheckIn` status because the `fleet-controller` is attempting to communicate with Fleet using the Rancher service IP. However, Fleet must communicate directly with Rancher via the Kubernetes service DNS using service discovery, not through the proxy. For more, see the [Rancher docs](https://rancher.com/docs/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/#install-rancher).
:::
### GitRepo complains with `gzip: invalid header`
When you see an error like the one below ...
```sh
Error opening a gzip reader for /tmp/getter154967024/archive: gzip: invalid header
```
... the content of the helm chart is incorrect. Manually download the chart to your local machine and check the content.

View File

@ -0,0 +1,10 @@
# Uninstall
Fleet is packaged as two Helm charts so uninstall is accomplished by
uninstalling the appropriate Helm charts. To uninstall Fleet run the following
two commands:
```shell
helm -n cattle-fleet-system uninstall fleet
helm -n cattle-fleet-system uninstall fleet-crd
```

View File

@ -0,0 +1,70 @@
# Webhook
By default, Fleet utilizes polling (default: 15 seconds) to pull from a Git repo. However, this can be configured to utilize a webhook instead. Fleet currently supports GitHub,
GitLab, Bitbucket, Bitbucket Server and Gogs.
### 1. Configure the webhook service. Fleet uses a gitjob service to handle webhook requests. Create an ingress that points to the gitjob service.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webhook-ingress
namespace: cattle-fleet-system
spec:
rules:
- host: your.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: gitjob
port:
number: 80
```
:::info
You can configure [TLS](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls) on ingress.
:::
### 2. Go to your webhook provider and configure the webhook callback url. Here is a Github example.
![](/img/webhook.png)
Configuring a secret is optional. This is used to validate the webhook payload as the payload should not be trusted by default.
If your webhook server is publicly accessible to the Internet, then it is recommended to configure the secret. If you do configure the
secret, follow step 3.
:::note
Only `application/json` is supported due to a limitation of the webhook library.
:::
:::caution
If you configure the webhook, the polling interval will be automatically adjusted to 1 hour.
:::
### 3. (Optional) Configure webhook secret. The secret is for validating webhook payload. Make sure to put it in a k8s secret called `gitjob-webhook` in `cattle-fleet-system`.
| Provider | K8s Secret Key |
|-----------------| ---------------------------------|
| GitHub | `github` |
| GitLab | `gitlab` |
| BitBucket | `bitbucket` |
| BitBucketServer | `bitbucket-server` |
| Gogs | `gogs` |
For example, to create a secret containing a GitHub secret to validate the webhook payload, run:
```shell
kubectl create secret generic gitjob-webhook -n cattle-fleet-system --from-literal=github=webhooksecretvalue
```
### 4. Go to your git provider and test the connection. You should get a successful HTTP response code.

View File

@ -0,0 +1,13 @@
# Advanced Users
Note that using Fleet outside of Rancher is highly discouraged for any users who do not need to perform advanced actions. However, there are some advanced use cases that may need to be performed outside of Rancher, also known as Standalone Fleet, or Fleet without Rancher. This section will highlight such use cases.
The following are examples of advanced use cases:
- Nested GitRepo CRs
>Managing Fleet within Fleet (nested GitRepo usage) is not currently supported. We will update the documentation if support becomes available.
- [Single cluster installation](./single-cluster-install.md)
- [Multi-cluster installation](./multi-cluster-install.md)
Please refer to the [installation](./installation.md) and the [uninstall](./uninstall.md) documentation for additional information.

View File

@ -0,0 +1,171 @@
# Agent Initiated
Refer to the [overview page](./cluster-overview.md#agent-initiated-registration) for background information on the agent initiated registration style.
## Cluster Registration Token and Client ID
A downstream cluster is registered using the **cluster registration token** and optionally a **client ID** or **cluster labels**.
The **cluster registration token** is a credential that will authorize the downstream cluster agent to be
able to initiate the registration process. This is required. Refer to the [cluster registration token page](./cluster-tokens.md) for more information
on how to create tokens and obtain the values. The cluster registration token is manifested as a `values.yaml` file that will
be passed to the `helm install` process.
There are two styles of registering an agent. You can have the cluster for this agent dynamically created, in which
case you will probably want to specify **cluster labels** upon registration. Or you can have the agent register to a predefined
cluster in the Fleet manager, in which case you will need a **client ID**. The former approach is typically the easiest.
## Install agent for a new Cluster
The Fleet agent is installed as a Helm chart. The following explains how to determine and set its parameters.
First, follow the [cluster registration token page](./cluster-tokens.md) to obtain the `values.yaml` which contains
the registration token to authenticate against the Fleet cluster.
Second, you can optionally define labels that will be assigned to the newly created cluster upon registration. After
registration is completed, an agent cannot change the labels of the cluster. To add cluster labels, add
`--set-string labels.KEY=VALUE` to the below Helm command. For example, to add the labels `foo=bar` and `bar=baz` you would
add `--set-string labels.foo=bar --set-string labels.bar=baz` to the command line.
```shell
# Leave blank if you do not want any labels
CLUSTER_LABELS="--set-string labels.example=true --set-string labels.env=dev"
```
Third, set variables with the Fleet cluster's API Server URL and CA, for the downstream cluster to use for connecting.
```shell
API_SERVER_URL=https://...
API_SERVER_CA=...
```
The value of `API_SERVER_CA` can be obtained from a `.kube/config` file with valid data to connect to the upstream cluster
(under the `certificate-authority-data` key). Alternatively, it can be obtained from within the upstream cluster itself,
by looking up the default ServiceAccount secret name (typically prefixed with `default-token-`, in the default namespace),
under the `ca.crt` key.
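One hedged way to extract it (this assumes the upstream cluster still creates a token secret for the `default` ServiceAccount, which newer Kubernetes versions no longer do by default):
```shell
# Look up the default ServiceAccount's token secret and decode its CA certificate
SECRET_NAME=$(kubectl -n default get serviceaccount default -o jsonpath='{.secrets[0].name}')
kubectl -n default get secret "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}' | base64 -d
```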
:::caution
__Use proper namespace and release name__:
For the agent chart the namespace must be `cattle-fleet-system` and the release name `fleet-agent`
:::
:::warning
__Ensure you are installing to the right cluster__:
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
to change which cluster Helm is installing to.
:::
Finally, install the agent using Helm.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
${CLUSTER_LABELS} \
--values values.yaml \
--set apiServerCA=${API_SERVER_CA} \
--set apiServerURL=${API_SERVER_URL} \
fleet-agent https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-agent-0.5.0-rc2.tgz
```
The agent should now be deployed. You can check the status of the Fleet agent pods by running the below commands.
```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n cattle-fleet-system logs -l app=fleet-agent
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
```
Additionally you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed to the Fleet
manager to run this command.
```shell
kubectl -n clusters get clusters.fleet.cattle.io
```
```
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
cluster-ab13e54400f1 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
```
## Install agent for a predefined Cluster
Client IDs are for the purpose of predefining clusters in the Fleet manager with existing labels and repos targeted to them.
A client ID is not required and is just one approach to managing clusters.
The **client ID** is a unique string that will identify the cluster.
This string is user generated and opaque to the Fleet manager and agent. It is assumed to be sufficiently unique. For security reasons one should not be able to easily guess this value,
as otherwise one cluster could impersonate another. The client ID is optional; if not specified, the UID field of the `kube-system` namespace
resource will be used as the client ID. Upon registration, if the client ID is found on a `Cluster` resource in the Fleet manager, the agent will be associated
with that `Cluster`. If no `Cluster` resource is found with that client ID, a new `Cluster` resource will be created with the specified
client ID.
The Fleet agent is installed as a Helm chart. The only parameters to the Helm chart installation should be the cluster registration token, which
is represented by the `values.yaml` file, and the client ID. The client ID is optional.
First, create a `Cluster` in the Fleet Manager with the random client ID you have chosen.
```yaml
kind: Cluster
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: my-cluster
namespace: clusters
spec:
clientID: "really-random"
```
Second, follow the [cluster registration token page](./cluster-tokens.md) to obtain the `values.yaml` file to be used.
Third, setup your environment to use the client ID.
```shell
CLUSTER_CLIENT_ID="really-random"
```
:::note
__Use proper namespace and release name__:
For the agent chart the namespace must be `cattle-fleet-system` and the release name `fleet-agent`
:::
:::note
__Ensure you are installing to the right cluster__:
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
to change which cluster Helm is installing to.
:::
Finally, install the agent using Helm.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
--set clientID="${CLUSTER_CLIENT_ID}" \
--values values.yaml \
fleet-agent https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-agent-v0.5.0-rc2.tgz
```
The agent should now be deployed. You can check the status of the Fleet agent pods by running the below commands.
```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n cattle-fleet-system logs -l app=fleet-agent
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
```
Additionally you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed to the Fleet
manager to run this command.
```shell
kubectl -n clusters get clusters.fleet.cattle.io
```
```
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
my-cluster 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
```

View File

@ -0,0 +1,40 @@
# Architecture
![](/img/arch.png)
Fleet has two primary components. The Fleet manager and the cluster agents. These
components work in a two-stage pull model. The Fleet manager will pull from git and the
cluster agents will pull from the Fleet manager.
## Fleet Manager
The Fleet manager is a set of Kubernetes controllers running in any standard Kubernetes
cluster. The only API exposed by the Fleet manager is the Kubernetes API; there is no
custom API for the Fleet controller.
## Cluster Agents
One cluster agent runs in each cluster and is responsible for talking to the Fleet manager.
The only communication from cluster to Fleet manager is by this agent and all communication
goes from the managed cluster to the Fleet manager. The fleet manager does not initiate
connections to downstream clusters. This means managed clusters can run in private networks and behind
NATs. The only requirement is the cluster agent needs to be able to communicate with the
Kubernetes API of the cluster running the Fleet manager. The one exception to this is if you use
the [manager initiated](./manager-initiated.md) cluster registration flow, which is an optional
pattern and not required.
The cluster agents are not assumed to have an "always on" connection. They will resume operation as
soon as they can connect. Future enhancements will probably add the ability to schedule times when
the agent checks in; as it stands right now they will always attempt to connect.
## Security
The Fleet manager dynamically creates service accounts, manages their RBAC and then gives the
tokens to the downstream clusters. Clusters are registered using cluster registration tokens, which can optionally expire.
The cluster registration token is used only during the registration process to generate a credential specific
to that cluster. After the cluster credential is established the cluster "forgets" the cluster registration
token.
The service accounts given to the clusters only have privileges to list `BundleDeployments` in the namespace created
specifically for that cluster. They can also update the `status` subresource of `BundleDeployment` and the `status`
subresource of their `Cluster` resource.

View File

@ -0,0 +1,267 @@
# Generating Diffs for Modified GitRepos
Continuous Delivery in Rancher is powered by Fleet. When a user adds a GitRepo CR, Continuous Delivery creates the associated Fleet bundles.
You can access these bundles by navigating to the Cluster Explorer (Dashboard UI), and selecting the `Bundles` section.
The bundled charts may have some objects that are amended at runtime, for example in ValidatingWebhookConfiguration the `caBundle` is empty and the CA cert is injected by the cluster.
This leads to the status of the bundle and the associated GitRepo being reported as "Modified".
![](/img/ModifiedGitRepo.png)
Associated Bundle
![](/img/ModifiedBundle.png)
Fleet bundles support the ability to specify a custom [jsonPointer patch](http://jsonpatch.com/).
With the patch, users can instruct Fleet to ignore object modifications.
In this example, we are trying to deploy opa-gatekeeper using Continuous Delivery to our clusters.
The opa-gatekeeper bundle associated with the opa GitRepo is in a modified state.
Each path in the GitRepo CR has an associated Bundle CR. The user can view the Bundles and the associated diff needed in the Bundle status.
In our case the differences detected are as follows:
```yaml
summary:
desiredReady: 1
modified: 1
nonReadyResources:
- bundleState: Modified
modifiedStatus:
- apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
name: gatekeeper-validating-webhook-configuration
patch: '{"$setElementOrder/webhooks":[{"name":"validation.gatekeeper.sh"},{"name":"check-ignore-label.gatekeeper.sh"}],"webhooks":[{"clientConfig":{"caBundle":"Cg=="},"name":"validation.gatekeeper.sh","rules":[{"apiGroups":["*"],"apiVersions":["*"],"operations":["CREATE","UPDATE"],"resources":["*"]}]},{"clientConfig":{"caBundle":"Cg=="},"name":"check-ignore-label.gatekeeper.sh","rules":[{"apiGroups":[""],"apiVersions":["*"],"operations":["CREATE","UPDATE"],"resources":["namespaces"]}]}]}'
- apiVersion: apps/v1
kind: Deployment
name: gatekeeper-audit
namespace: cattle-gatekeeper-system
patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"manager"}],"containers":[{"name":"manager","resources":{"limits":{"cpu":"1000m"}}}],"tolerations":[]}}}}'
- apiVersion: apps/v1
kind: Deployment
name: gatekeeper-controller-manager
namespace: cattle-gatekeeper-system
patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"manager"}],"containers":[{"name":"manager","resources":{"limits":{"cpu":"1000m"}}}],"tolerations":[]}}}}'
```
Based on this summary, there are three objects which need to be patched.
We will look at these one at a time.
### 1. ValidatingWebhookConfiguration:
The gatekeeper-validating-webhook-configuration validating webhook has two ValidatingWebhooks in its spec.
In cases where more than one element in the field requires a patch, the patch will refer to these as `$setElementOrder/ELEMENTNAME`.
From this information, we can see the two ValidatingWebhooks in question are:
```
"$setElementOrder/webhooks": [
{
"name": "validation.gatekeeper.sh"
},
{
"name": "check-ignore-label.gatekeeper.sh"
}
],
```
Within each ValidatingWebhook, the fields that need to be ignored are as follows:
```
{
"clientConfig": {
"caBundle": "Cg=="
},
"name": "validation.gatekeeper.sh",
"rules": [
{
"apiGroups": [
"*"
],
"apiVersions": [
"*"
],
"operations": [
"CREATE",
"UPDATE"
],
"resources": [
"*"
]
}
]
},
```
and
```
{
"clientConfig": {
"caBundle": "Cg=="
},
"name": "check-ignore-label.gatekeeper.sh",
"rules": [
{
"apiGroups": [
""
],
"apiVersions": [
"*"
],
"operations": [
"CREATE",
"UPDATE"
],
"resources": [
"namespaces"
]
}
]
}
```
In summary, we need to ignore the fields `rules` and `clientConfig.caBundle` in our patch specification.
The `webhooks` field in the ValidatingWebhookConfiguration spec is an array, so we need to address the elements by their index values.
![](/img/WebhookConfigurationSpec.png)
Based on this information, our diff patch would look as follows:
```yaml
- apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
name: gatekeeper-validating-webhook-configuration
operations:
- {"op": "remove", "path":"/webhooks/0/clientConfig/caBundle"}
- {"op": "remove", "path":"/webhooks/0/rules"}
- {"op": "remove", "path":"/webhooks/1/clientConfig/caBundle"}
- {"op": "remove", "path":"/webhooks/1/rules"}
```
### 2. Deployment gatekeeper-controller-manager:
The gatekeeper-controller-manager deployment is modified since there are CPU limits and tolerations applied (which are not in the actual bundle).
```
{
"spec": {
"template": {
"spec": {
"$setElementOrder/containers": [
{
"name": "manager"
}
],
"containers": [
{
"name": "manager",
"resources": {
"limits": {
"cpu": "1000m"
}
}
}
],
"tolerations": []
}
}
}
}
```
In this case, there is only one container in the deployment's container spec, and that container has CPU limits and tolerations added.
Based on this information, our diff patch would look as follows:
```yaml
- apiVersion: apps/v1
kind: Deployment
name: gatekeeper-controller-manager
namespace: cattle-gatekeeper-system
operations:
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
```
### 3. Deployment gatekeeper-audit:
The gatekeeper-audit deployment is modified similarly to the gatekeeper-controller-manager, with additional CPU limits and tolerations applied.
```
{
"spec": {
"template": {
"spec": {
"$setElementOrder/containers": [
{
"name": "manager"
}
],
"containers": [
{
"name": "manager",
"resources": {
"limits": {
"cpu": "1000m"
}
}
}
],
"tolerations": []
}
}
}
}
```
Similar to gatekeeper-controller-manager, there is only one container in the deployment's container spec, and it has CPU limits and tolerations added.
Based on this information, our diff patch would look as follows:
```yaml
- apiVersion: apps/v1
kind: Deployment
name: gatekeeper-audit
namespace: cattle-gatekeeper-system
operations:
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
```
### Combining It All Together
We can now combine all these patches as follows:
```yaml
diff:
comparePatches:
- apiVersion: apps/v1
kind: Deployment
name: gatekeeper-audit
namespace: cattle-gatekeeper-system
operations:
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
- apiVersion: apps/v1
kind: Deployment
name: gatekeeper-controller-manager
namespace: cattle-gatekeeper-system
operations:
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
- apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
name: gatekeeper-validating-webhook-configuration
operations:
- {"op": "remove", "path":"/webhooks/0/clientConfig/caBundle"}
- {"op": "remove", "path":"/webhooks/0/rules"}
- {"op": "remove", "path":"/webhooks/1/clientConfig/caBundle"}
- {"op": "remove", "path":"/webhooks/1/rules"}
```
We can now add these to the bundle directly for testing and also commit the same patches to the `fleet.yaml` in your GitRepo.
Once these are added, the GitRepo should deploy and be in "Active" status.

View File

@ -0,0 +1,37 @@
# Cluster and Bundle state
Clusters and Bundles have different states in each phase of applying Bundles.
## Bundles
**Ready**: Bundles have been deployed and all resources are ready.
**NotReady**: Bundles have been deployed and some resources are not ready.
**WaitApplied**: Bundles have been synced from the Fleet controller to the downstream cluster, but are waiting to be deployed.
**ErrApplied**: Bundles have been synced from the Fleet controller to the downstream cluster, but there were some errors when deploying the Bundle.
**OutOfSync**: Bundles have been synced from the Fleet controller, but the downstream agent hasn't synced the change yet.
**Pending**: Bundles are being processed by Fleet controller.
**Modified**: Bundles have been deployed and all resources are ready, but there are some changes that were not made from the Git Repository.
## Clusters
**WaitCheckIn**: Waiting for agent to report registration information and cluster status back.
**NotReady**: There are bundles in this cluster that are in NotReady state.
**WaitApplied**: There are bundles in this cluster that are in WaitApplied state.
**ErrApplied**: There are bundles in this cluster that are in ErrApplied state.
**OutOfSync**: There are bundles in this cluster that are in OutOfSync state.
**Pending**: There are bundles in this cluster that are in Pending state.
**Modified**: There are bundles in this cluster that are in Modified state.
**Ready**: Bundles in this cluster have been deployed and all resources are ready.
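These states show up in the status columns of the corresponding resources. For example, a hedged way to see the per-cluster state of every bundle from the Fleet manager cluster is:
```shell
kubectl get bundledeployments.fleet.cattle.io -A
```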

View File

@ -0,0 +1,22 @@
# Cluster Groups
Clusters in a namespace can be put into a cluster group. A cluster group is essentially a named selector;
the only parameter for a cluster group is that selector.
When you get to a certain scale, cluster groups become a more reasonable way to manage your clusters.
Cluster groups serve the purpose of giving an aggregated
status of the deployments and also a simpler way to manage targets.
A cluster group is created by creating a `ClusterGroup` resource like the one below:
```yaml
kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: production-group
namespace: clusters
spec:
# This is the standard metav1.LabelSelector format to match clusters by labels
selector:
matchLabels:
env: prod
```
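A cluster joins the group simply by carrying a matching label; as a hedged example (the cluster name and namespace are assumptions):
```shell
kubectl -n clusters label clusters.fleet.cattle.io my-cluster env=prod
```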

View File

@ -0,0 +1,25 @@
# Overview
There are two styles of registering clusters. These styles will be referred
to as **agent initiated** and **manager initiated** registration. Typically one would
go with the agent initiated registration but there are specific use cases in which
manager initiated is a better workflow.
## Agent Initiated Registration
Agent initiated refers to a pattern in which the downstream cluster installs an agent with a
[cluster registration token](./cluster-tokens.md) and optionally a client ID. The cluster
agent will then make an API request to the Fleet manager and initiate the registration process. Using
this process the manager will never make an outbound API request to the downstream clusters and will thus
never need to have direct network access. The downstream cluster only needs to make outbound HTTPS
calls to the manager.
## Manager Initiated Registration
Manager initiated registration is a process in which you register an existing Kubernetes cluster
with the Fleet manager and the Fleet manager will make an API call to the downstream cluster to
deploy the agent. This style can place additional network access requirements because the Fleet
manager must be able to communicate with the downstream cluster API server for the registration process.
After the cluster is registered there is no further need for the manager to contact the downstream
cluster API. This style is more compatible if you wish to manage the creation of all your Kubernetes
clusters through GitOps using something like [cluster-api](https://github.com/kubernetes-sigs/cluster-api)
or [Rancher](https://github.com/rancher/rancher).

View File

@ -0,0 +1,65 @@
# Cluster Registration Tokens
:::info
__Not needed for Manager initiated registration__:
For manager initiated registrations the token is managed by the Fleet manager and does
not need to be manually created and obtained.
:::
For an agent initiated registration the downstream cluster must have a cluster registration token.
Cluster registration tokens are used to establish a new identity for a cluster. Internally
cluster registration tokens are managed by creating Kubernetes service accounts that have the
permissions to create `ClusterRegistrationRequests` within a specific namespace. Once the
cluster is registered a new `ServiceAccount` is created for that cluster that is used as
the unique identity of the cluster. The agent is designed to forget the cluster registration
token after registration. While the agent will not maintain a reference to the cluster registration
token after a successful registration, please note that other system bootstrap scripts usually do.
Since the cluster registration token is forgotten, if you need to re-register a cluster you must
give the cluster a new registration token.
## Token TTL
Cluster registration tokens can be reused by any cluster in a namespace. The tokens can be given a TTL
so that they expire after a specific time.
## Create a new Token
The `ClusterRegistrationToken` is a namespaced type and should be created in the same namespace
in which you will create `GitRepo` and `ClusterGroup` resources. For in-depth details on how namespaces
are used in Fleet refer to the documentation on [namespaces](./namespaces.md). Create a new
token with the below YAML.
```yaml
kind: ClusterRegistrationToken
apiVersion: "fleet.cattle.io/v1alpha1"
metadata:
name: new-token
namespace: clusters
spec:
# A duration string for how long this token is valid for. A value <= 0 or null means infinite time.
ttl: 240h
```
After the `ClusterRegistrationToken` is created, Fleet will create a corresponding `Secret` with the same name.
As the `Secret` creation is performed asynchronously, you will need to wait until it's available before using it.
One way to do so is via the following one-liner:
```shell
while ! kubectl --namespace=clusters get secret new-token; do sleep 5; done
```
## Obtaining Token Value (Agent values.yaml)
The token value contains YAML content for a `values.yaml` file that is expected to be passed to `helm install`
to install the Fleet agent on a downstream cluster.
This value is contained in the `values` field of the `Secret` mentioned above. To obtain the YAML content for the
above example one can run the following one-liner:
```shell
kubectl --namespace clusters get secret new-token -o 'jsonpath={.data.values}' | base64 --decode > values.yaml
```
Once the `values.yaml` is ready it can be used repeatedly by clusters to register until the TTL expires.
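As a sketch of how this file is then consumed, the Fleet agent can be installed on the downstream cluster with Helm; the chart URL, version, and namespace below are assumptions and should be matched to your Fleet manager installation:

```shell
# Run on the downstream cluster, using the values.yaml obtained above
helm -n cattle-fleet-system install --create-namespace --wait \
  --values values.yaml \
  fleet-agent https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-agent-0.5.0-rc2.tgz
```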

View File

@ -0,0 +1,51 @@
# Core Concepts
Fleet is fundamentally a set of Kubernetes custom resource definitions (CRDs) and controllers
to manage GitOps for a single Kubernetes cluster or a large-scale deployment of Kubernetes clusters.
:::info
For more on the naming conventions of CRDs, click [here](./troubleshooting.md#naming-conventions-for-crds).
:::
Below are some of the concepts of Fleet that will be useful throughout this documentation:
* **Fleet Manager**: The centralized component that orchestrates the deployments of Kubernetes assets
from git. In a multi-cluster setup, this will typically be a dedicated Kubernetes cluster. In a
single cluster setup, the Fleet manager will be running on the same cluster you are managing with GitOps.
* **Fleet controller**: The controller(s) running on the Fleet manager orchestrating GitOps. In practice,
the Fleet manager and Fleet controllers are used fairly interchangeably.
* **Single Cluster Style**: This is a style of installing Fleet in which the manager and downstream cluster are the
same cluster. This is a very simple pattern to quickly get up and running with GitOps.
* **Multi Cluster Style**: This is a style of running Fleet in which you have a central manager that manages a large
number of downstream clusters.
* **Fleet agent**: Every managed downstream cluster will run an agent that communicates back to the Fleet manager.
This agent is just another set of Kubernetes controllers running in the downstream cluster.
* **GitRepo**: Git repositories that are watched by Fleet are represented by the type `GitRepo`.
>**Example installation order via `GitRepo` custom resources when using Fleet for the configuration management of downstream clusters:**
>
> 1. Install [Calico](https://github.com/projectcalico/calico) CRDs and controllers.
> 2. Set one or multiple cluster-level global network policies.
> 3. Install [GateKeeper](https://github.com/open-policy-agent/gatekeeper). Note that **cluster labels** and **overlays** are critical features in Fleet as they determine which clusters will get each part of the bundle.
> 4. Set up and configure ingress and system daemons.
* **Bundle**: An internal unit used for the orchestration of resources from git.
When a `GitRepo` is scanned it will produce one or more bundles. Bundles are a collection of
resources that get deployed to a cluster. `Bundle` is the fundamental deployment unit used in Fleet. The
contents of a `Bundle` may be Kubernetes manifests, Kustomize configuration, or Helm charts.
Regardless of the source the contents are dynamically rendered into a Helm chart by the agent
and installed into the downstream cluster as a helm release.
- To see the **lifecycle of a bundle**, click [here](./examples.md#lifecycle-of-a-fleet-bundle).
* **BundleDeployment**: When a `Bundle` is deployed to a cluster an instance of a `Bundle` is called a `BundleDeployment`.
A `BundleDeployment` represents the state of that `Bundle` on a specific cluster with its cluster specific
customizations. The Fleet agent is only aware of `BundleDeployment` resources that are created for
the cluster the agent is managing.
- For an example of how to deploy Kubernetes manifests across clusters using Fleet customization, click [here](./examples.md#deploy-kubernetes-manifests-across-clusters-with-customization).
* **Downstream Cluster**: Clusters to which Fleet deploys manifests are referred to as downstream clusters. In the single cluster use case, the Fleet manager Kubernetes cluster is both the manager and downstream cluster at the same time.
* **Cluster Registration Token**: Tokens used by agents to register a new cluster.

View File

@ -0,0 +1,75 @@
# Examples
### Lifecycle of a Fleet Bundle
To demonstrate the lifecycle of a Fleet bundle, we will use [multi-cluster/helm](https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm) as a case study.
1. User will create a [GitRepo](./gitrepo-add.md#create-gitrepo-instance) that points to the multi-cluster/helm repository.
2. The `gitjob-controller` will sync changes from the GitRepo and detect changes via polling or a [webhook event](./webhook.md). With every commit change, the `gitjob-controller` will create a job that clones the git repository, reads content from the repo such as `fleet.yaml` and other manifests, and creates the Fleet [bundle](./cluster-bundles-state.md#bundles).
>**Note:** The job pod with the image name `rancher/tekton-utils` will be under the same namespace as the GitRepo.
3. The `fleet-controller` then syncs changes from the bundle. According to the targets, the `fleet-controller` will create `BundleDeployment` resources, which are a combination of a bundle and a target cluster.
4. The `fleet-agent` will then pull the `BundleDeployment` from the Fleet controlplane. The agent deploys bundle manifests as a [Helm chart](https://helm.sh/docs/intro/install/) from the `BundleDeployment` into the downstream clusters.
5. The `fleet-agent` will continue to monitor the application bundle and report statuses back in the following order: bundledeployment > bundle > GitRepo > cluster. You can inspect these resources with the commands shown below.
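To observe this lifecycle on the Fleet manager, the involved resources can be listed directly. The namespace below is an example; use the namespace in which the `GitRepo` was created.

```shell
# GitRepos and the bundles generated from them live next to the GitRepo
kubectl -n fleet-default get gitrepos,bundles

# BundleDeployments are created in the per-cluster namespaces managed by Fleet
kubectl get bundledeployments -A
```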
### Deploy Kubernetes Manifests Across Clusters with Customization
[Fleet in Rancher](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/) allows users to manage clusters easily as if they were one cluster. Users can deploy bundles, which can be comprised of deployment manifests or any other Kubernetes resource, across clusters using grouping configuration.
To demonstrate how to deploy Kubernetes manifests across different clusters using Fleet, we will use [multi-cluster/helm/fleet.yaml](https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm/fleet.yaml) as a case study.
**Situation:** User has three clusters with three different labels: `env=dev`, `env=test`, and `env=prod`. User wants to deploy a frontend application with a backend database across these clusters.
**Expected behavior:**
- After deploying to the `dev` cluster, database replication is not enabled.
- After deploying to the `test` cluster, database replication is enabled.
- After deploying to the `prod` cluster, database replication is enabled and Load balancer services are exposed.
**Advantage of Fleet:**
Instead of deploying the app on each cluster, Fleet allows you to deploy across all clusters following these steps:
1. Deploy a `GitRepo` pointing at `https://github.com/rancher/fleet-examples.git` and specify the path `multi-cluster/helm` (see the example `GitRepo` below).
2. Under `multi-cluster/helm`, a Helm chart will deploy the frontend app service and backend database service.
3. The following rule will be defined in `fleet.yaml`:
```yaml
targetCustomizations:
- name: dev
helm:
values:
replication: false
clusterSelector:
matchLabels:
env: dev
- name: test
helm:
values:
replicas: 3
clusterSelector:
matchLabels:
env: test
- name: prod
helm:
values:
serviceType: LoadBalancer
replicas: 3
clusterSelector:
matchLabels:
env: prod
```
**Result:**
Fleet will deploy the Helm chart with your customized `values.yaml` to the different clusters.
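For reference, a `GitRepo` matching this scenario could look like the sketch below. The `clusters` namespace and the label-based target are assumptions; adjust them to where your clusters are registered (the per-cluster customization itself happens in `fleet.yaml` via `targetCustomizations`):

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-multi-cluster
  namespace: clusters
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - multi-cluster/helm
  targets:
  # Deploy to every cluster carrying one of the three env labels
  - name: all-envs
    clusterSelector:
      matchExpressions:
      - key: env
        operator: In
        values: [dev, test, prod]
```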
>**Note:** Configuration management is not limited to deployments but can be expanded to general configuration management. Fleet is able to apply configuration management through customization among any set of clusters automatically.
### Additional Examples
Examples using raw Kubernetes YAML, Helm charts, Kustomize, and combinations
of the three are in the [Fleet Examples repo](https://github.com/rancher/fleet-examples/).

View File

@ -0,0 +1,202 @@
# Adding a GitRepo
## Proper namespace
Git repos are added to the Fleet manager using the `GitRepo` custom resource type. The `GitRepo` type is namespaced. By default, Rancher will create two Fleet workspaces: **fleet-default** and **fleet-local**.
- `fleet-default` will contain all the downstream clusters that are already registered through Rancher.
- `fleet-local` will contain the local cluster by default.
Users can create new workspaces and move clusters across workspaces. An example of a special case might be including the local cluster in the `GitRepo` payload for config maps and secrets (no active deployments or payloads).
:::warning
While it's possible to move clusters out of either workspace, we recommend that you keep the local cluster in `fleet-local`.
:::
If you are using Fleet in a [single cluster](./concepts.md) style, the namespace will always be **fleet-local**. Check [here](https://fleet.rancher.io/namespaces/#fleet-local) for more on the `fleet-local` namespace.
For a [multi-cluster](./concepts.md) style, please ensure you use the correct repo that will map to the right target clusters.
## Create GitRepo instance
Git repositories are registered by creating a `GitRepo` resource following the YAML sample below. Refer
to the inline comments for the meaning of each field.
```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
# Any name can be used here
name: my-repo
# For single cluster use fleet-local, otherwise use the namespace of
# your choosing
namespace: fleet-local
spec:
# This can be a HTTPS or git URL. If you are using a git URL then
# clientSecretName will probably need to be set to supply a credential.
# repo is the only required parameter for a repo to be monitored.
#
repo: https://github.com/rancher/fleet-examples
# Enforce all resources go to this target namespace. If a cluster scoped
# resource is found the deployment will fail.
#
# targetNamespace: app1
# Any branch can be watched, this field is optional. If not specified the
# branch is assumed to be master
#
# branch: master
# A specific commit or tag can also be watched.
#
# revision: v0.3.0
# For a private registry you must supply a clientSecretName. A default
# secret can be set at the namespace level using the GitRepoRestriction
# type. Secrets must be of the type "kubernetes.io/ssh-auth" or
# "kubernetes.io/basic-auth". The secret is assumed to be in the
# same namespace as the GitRepo
#
# clientSecretName: my-ssh-key
#
# If fleet.yaml contains a private Helm repo that requires authentication,
# provide the credentials in a K8s secret and specify them here. Details are provided
# in the fleet.yaml documentation.
#
# helmSecretName: my-helm-secret
#
# To add additional ca-bundle for self-signed certs, caBundle can be
# filled with base64 encoded pem data. For example:
# `cat /path/to/ca.pem | base64 -w 0`
#
# caBundle: my-ca-bundle
#
# Disable SSL verification for git repo
#
# insecureSkipTLSVerify: true
#
# A git repo can read multiple paths in a repo at once.
# The below field is expected to be an array of paths and
# supports path globbing (ex: some/*/path)
#
# Example:
# paths:
# - single-path
# - multiple-paths/*
paths:
- simple
# PollingInterval configures how often fleet checks the git repo. The default
# is 15 seconds.
# Setting this to zero does not disable polling. It results in a 15s
# interval, too.
#
# pollingInterval: 15
# Paused causes changes in Git to not be propagated down to the clusters but
# instead mark resources as OutOfSync
#
# paused: false
# Increment this number to force a redeployment of contents from Git
#
# forceSyncGeneration: 0
# The service account that will be used to perform this deployment.
# This is the name of the service account that exists in the
# downstream cluster in the cattle-fleet-system namespace. It is assumed
# this service account already exists, so it should be created beforehand,
# most likely coming from another git repo registered with
# the Fleet manager.
#
# serviceAccount: moreSecureAccountThanClusterAdmin
# Target clusters to deploy to if running Fleet in a multi-cluster
# style. Refer to the "Mapping to Downstream Clusters" docs for
# more information.
#
# targets: ...
```
## Adding private repository
Fleet supports both HTTP auth and SSH auth keys for private repositories. To use them you have to create a secret in the same namespace as the GitRepo.
For example, to generate a private SSH key:
```text
ssh-keygen -t rsa -b 4096 -m pem -C "user@email.com"
```
Note: The private key format has to be `EC PRIVATE KEY`, `RSA PRIVATE KEY` or `PRIVATE KEY` and it must not contain a passphrase.
Put your private key into a secret, using the namespace the GitRepo is in:
```text
kubectl create secret generic ssh-key -n fleet-default --from-file=ssh-privatekey=/file/to/private/key --type=kubernetes.io/ssh-auth
```
:::caution
Private key with passphrase is not supported.
:::
:::caution
The key has to be in PEM format.
:::
Fleet supports putting `known_hosts` into the SSH secret. Here is an example of how to add it:
Fetch the public key hash (taking GitHub as an example):
```text
ssh-keyscan -H github.com
```
And add it into the secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: ssh-key
type: kubernetes.io/ssh-auth
stringData:
ssh-privatekey: <private-key>
known_hosts: |-
|1|YJr1VZoi6dM0oE+zkM0do3Z04TQ=|7MclCn1fLROZG+BgR4m1r8TLwWc= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
```
:::warning
If you don't add it, any server's public key will be trusted and added (`ssh -o stricthostkeychecking=accept-new` will be used).
:::
:::info
If you are using the openssh format for the private key and you are creating it in the UI, make sure a carriage return is appended at the end of the private key.
:::
### Using HTTP Auth
Create a secret containing username and password. You can replace the password with a personal access token if necessary. Also see [HTTP secrets in Github](./troubleshooting#http-secrets-in-github).
kubectl create secret generic basic-auth-secret -n fleet-default --type=kubernetes.io/basic-auth --from-literal=username=$user --from-literal=password=$pat
Just like with SSH, reference the secret in your GitRepo resource via `clientSecretName`.
spec:
repo: https://github.com/fleetrepoci/gitjob-private.git
branch: main
clientSecretName: basic-auth-secret
## Troubleshooting
See Fleet Troubleshooting section [here](./troubleshooting.md).

View File

@ -0,0 +1,314 @@
# Expected Repo Structure
**The git repository has no explicitly required structure.** It is important
to realize that the scanned resources will be saved as resources in Kubernetes, so
you want to make sure the directories you are scanning in git do not contain
arbitrarily large resources. Right now there is a limitation that the resources
deployed must **gzip to less than 1MB**.
## How repos are scanned
Multiple paths can be defined for a `GitRepo` and each path is scanned independently.
Internally each scanned path will become a [bundle](./concepts.md) that Fleet will manage,
deploy, and monitor independently.
The following files are looked for to determine how the resources will be deployed.
| File | Location | Meaning |
|------|----------|---------|
| **Chart.yaml** | / relative to `path` or custom path from `fleet.yaml` | The resources will be deployed as a Helm chart. Refer to the `fleet.yaml` for more options. |
| **kustomization.yaml** | / relative to `path` or custom path from `fleet.yaml` | The resources will be deployed using Kustomize. Refer to the `fleet.yaml` for more options. |
| **fleet.yaml** | Any subpath | If any `fleet.yaml` is found a new [bundle](./concepts.md) will be defined. This allows mixing charts, Kustomize, and raw YAML in the same repo. |
| **\*.yaml** | Any subpath | If a `Chart.yaml` or `kustomization.yaml` is not found then any `.yaml` or `.yml` file will be assumed to be a Kubernetes resource and will be deployed. |
| **overlays/{name}** | / relative to `path` | When deploying using raw YAML (not Kustomize or Helm) `overlays` is a special directory for customizations. |
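As an illustration (a hypothetical layout, not a required one), a single repository scanned with `paths: [helm-app, kustomize-app, raw-app]` would produce three bundles, each deployed with a different mechanism:

```shell
helm-app/
  fleet.yaml            # optional deployment options for this bundle
  Chart.yaml            # deployed as a Helm chart
  templates/
kustomize-app/
  kustomization.yaml    # deployed using Kustomize
  deployment.yaml
raw-app/
  deployment.yaml       # plain YAML, deployed as-is
  overlays/
    prod/
      deployment_patch.yaml   # per-target customization for raw YAML
```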
## `fleet.yaml`
The `fleet.yaml` is an optional file that can be included in the git repository to change the behavior of how
the resources are deployed and customized. The `fleet.yaml` is always at the root relative to the `path` of the `GitRepo`
and if a subdirectory is found with a `fleet.yaml` a new [bundle](./concepts.md) is defined that will then be
configured differently from the parent bundle.
:::caution
__Helm chart dependencies__:
It is up to the user to fulfill the dependency list for the Helm charts. As such, you must manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` prior to install. See the [Fleet docs](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/#helm-chart-dependencies) in Rancher for more information.
:::
### Reference
:::info
__How changes are applied to `values.yaml`__:
- Note that the most recently applied changes to the `values.yaml` will override any previously existing values.
- When changes are applied to the `values.yaml` from multiple sources at the same time, the values will update in the following order: `helmValues` -> `helm.valuesFiles` -> `helm.valuesFrom`.
:::
```yaml
# The default namespace to be applied to resources. This field is not used to
# enforce or lock down the deployment to a specific namespace, but instead
# provide the default value of the namespace field if one is not specified
# in the manifests.
# Default: default
defaultNamespace: default
# All resources will be assigned to this namespace and if any cluster scoped
# resource exists the deployment will fail.
# Default: ""
namespace: default
kustomize:
# Use a custom folder for kustomize resources. This folder must contain
# a kustomization.yaml file.
dir: ./kustomize
helm:
# Use a custom location for the Helm chart. This can refer to any go-getter URL or
# OCI registry based helm chart URL e.g. "oci://ghcr.io/fleetrepoci/guestbook".
# This allows one to download charts from almost any location. Also note that
# a go-getter URL supports adding a digest to validate the download. If `repo`
# is set below, this field is the name of the chart to look up.
chart: ./chart
# An HTTPS URL of a Helm repo to download the chart from. It's typically easier
# to just use the `chart` field and refer to a tgz file. If `repo` is used, the
# value of `chart` will be used as the chart name to look up in the Helm repository.
repo: https://charts.rancher.io
# A custom release name to deploy the chart as. If not specified a release name
# will be generated.
releaseName: my-release
# The version of the chart or semver constraint of the chart to find. If a constraint
# is specified it is evaluated each time git changes.
# The version also determines which chart to download from OCI registries.
version: 0.1.0
# Any values that should be placed in the `values.yaml` and passed to helm during
# install.
values:
any-custom: value
# All labels on Rancher clusters are available using global.fleet.clusterLabels.LABELNAME
# These can now be accessed directly as variables
variableName: global.fleet.clusterLabels.LABELNAME
# Path to any values files that need to be passed to helm during install
valuesFiles:
- values1.yaml
- values2.yaml
# Allow to use values files from configmaps or secrets
valuesFrom:
- configMapKeyRef:
name: configmap-values
# default to namespace of bundle
namespace: default
key: values.yaml
secretKeyRef:
name: secret-values
namespace: default
key: values.yaml
# Override immutable resources. This could be dangerous.
force: false
# Set the Helm --atomic flag when upgrading
atomic: false
# A paused bundle will not update downstream clusters but instead mark the bundle
# as OutOfSync. One can then manually confirm that a bundle should be deployed to
# the downstream clusters.
# Default: false
paused: false
rolloutStrategy:
# A number or percentage of clusters that can be unavailable during an update
# of a bundle. This follows the same basic approach as a deployment rollout
# strategy. Once the number of unavailable clusters reaches this limit, the update will be
# paused. The default value is 100%, which means the limit does not take effect during an update.
# default: 100%
maxUnavailable: 15%
# A number or percentage of cluster partitions that can be unavailable during
# an update of a bundle.
# default: 0
maxUnavailablePartitions: 20%
# A number or percentage used to automatically partition clusters if no
# specific partitioning strategy is configured.
# default: 25%
autoPartitionSize: 10%
# A list of definitions of partitions. If any target clusters do not match
# the configuration they are added to partitions at the end following the
# autoPartitionSize.
partitions:
# A user-friendly name given to the partition, used for display (optional).
# default: ""
- name: canary
# A number or percentage of clusters that can be unavailable in this
# partition before this partition is treated as done.
# default: 10%
maxUnavailable: 10%
# Selector matching cluster labels to include in this partition
clusterSelector:
matchLabels:
env: prod
# A cluster group name to include in this partition
clusterGroup: agroup
# Selector matching cluster group labels to include in this partition
clusterGroupSelector: agroup
# Target customization are used to determine how resources should be modified per target
# Targets are evaluated in order and the first one to match a cluster is used for that cluster.
targetCustomizations:
# The name of target. If not specified a default name of the format "target000"
# will be used. This value is mostly for display
- name: prod
# Custom namespace value overriding the value at the root
namespace: newvalue
# Custom defaultNamespace value overriding the value at the root
defaultNamespace: newdefaultvalue
# Custom kustomize options overriding the options at the root
kustomize: {}
# Custom Helm options override the options at the root
helm: {}
# If using raw YAML these are names that map to overlays/{name} that will be used
# to replace or patch a resource. If you wish to customize the file ./subdir/resource.yaml
# then a file ./overlays/myoverlay/subdir/resource.yaml will replace the base file.
# A file named ./overlays/myoverlay/subdir/resource_patch.yaml will patch the base file.
# A patch can be in JSON Patch or JSON Merge format, or a strategic merge patch for built-in
# Kubernetes types. Refer to "Raw YAML Resource Customization" below for more information.
yaml:
overlays:
- custom2
- custom3
# A selector used to match clusters. The structure is the standard
# metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
# clusterSelector will be used only to further refine the selection after
# clusterGroupSelector and clusterGroup are evaluated.
clusterSelector:
matchLabels:
env: prod
# A selector used to match a specific cluster by name.
clusterName: dev-cluster
# A selector used to match cluster groups.
clusterGroupSelector:
matchLabels:
region: us-east
# A specific clusterGroup by name that will be selected
clusterGroup: group1
# dependsOn allows you to configure dependencies to other bundles. The current bundle
# will only be deployed, after all dependencies are deployed and in a Ready state.
dependsOn:
# Format: <GITREPO-NAME>-<BUNDLE_PATH> with all path separators replaced by "-"
# Example: GitRepo name "one", Bundle path "/multi-cluster/hello-world" => "one-multi-cluster-hello-world"
- name: one-multi-cluster-hello-world
```
:::info
For a private Helm repo, users can reference a secret with the following keys:
1. `username` and `password` for basic http auth if the Helm HTTP repo is behind basic auth.
2. `cacerts` for custom CA bundle if the Helm repo is using a custom CA.
3. `ssh-privatekey` for the SSH private key if the repo is using the SSH protocol. Private keys with a passphrase are not currently supported.
For example, to add a secret in kubectl, run
`kubectl create secret -n $namespace generic helm --from-literal=username=foo --from-literal=password=bar --from-file=cacerts=/path/to/cacerts --from-file=ssh-privatekey=/path/to/privatekey.pem`
After the secret is created, reference it in `gitRepo.spec.helmSecretName`. Make sure the secret is created in the same namespace as the GitRepo.
:::
### Using ValuesFrom
These examples showcase the style and format for using `valuesFrom`.
Example [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: configmap-values
namespace: default
data:
values.yaml: |-
replication: true
replicas: 2
serviceType: NodePort
```
Example [Secret](https://kubernetes.io/docs/concepts/configuration/secret/):
```yaml
apiVersion: v1
kind: Secret
metadata:
name: secret-values
namespace: default
stringData:
values.yaml: |-
replication: true
replicas: 2
serviceType: NodePort
```
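A `helm` section in `fleet.yaml` that consumes these two objects could then look like the following sketch (the chart path is a placeholder):

```yaml
helm:
  chart: ./chart
  # Reference the ConfigMap and Secret defined above
  valuesFrom:
  - configMapKeyRef:
      name: configmap-values
      namespace: default
      key: values.yaml
  - secretKeyRef:
      name: secret-values
      namespace: default
      key: values.yaml
```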
## Per Cluster Customization
The `GitRepo` defines which clusters a git repository should be deployed to and the `fleet.yaml` in the repository
determines how the resources are customized per target.
All clusters and cluster groups in the same namespace as the `GitRepo` will be evaluated against all targets of that
`GitRepo`. The targets list is evaluated one by one and if there is a match the resource will be deployed to the cluster.
If no match is made against the target list on the `GitRepo` then the resources will not be deployed to that cluster.
Once a target cluster is matched the `fleet.yaml` from the git repository is then consulted for customizations. The
`targetCustomizations` in the `fleet.yaml` will be evaluated one by one and the first match will define how the
resource is to be configured. If no match is made the resources will be deployed with no additional customizations.
There are three approaches to matching clusters for both `GitRepo` `targets` and `fleet.yaml` `targetCustomizations`.
One can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so
the final match is evaluated as "clusterSelector && clusterGroupSelector && clusterGroup". If any of the three have the
default value it is dropped from the criteria. The default value is either null or "". It is important to realize
that the value `{}` for a selector means "match everything."
```yaml
# Match everything
clusterSelector: {}
# Selector ignored
clusterSelector: null
```
## Raw YAML Resource Customization
When using Kustomize or Helm the `kustomization.yaml` or the `helm.values` will control how the resources are
customized per target cluster. If you are using raw YAML then the following simple mechanism is built in and can
be used. The `overlays/` folder in the git repo is treated specially, as a folder containing folders that
can be selected to overlay on top per target cluster. The resource overlay content
uses a file-name-based approach. This is different from Kustomize, which uses a resource-based approach. In Kustomize
the resource Group, Kind, Version, Name, and Namespace identify resources which are then merged or patched. For Fleet
the overlay resources will override or patch content with a matching file name.
```shell
# Base files
deployment.yaml
svc.yaml
# Overlay files
# The following file will be added
overlays/custom/configmap.yaml
# The following file will replace svc.yaml
overlays/custom/svc.yaml
# The following file will patch deployment.yaml
overlays/custom/deployment_patch.yaml
```
A file named `foo` will replace a file called `foo` from the base resources or a previous overlay. In order to patch
the contents of a file, the convention of adding `_patch.` (notice the trailing period) to the filename is used. The string `_patch.`
will be replaced with `.` from the file name and that will be used as the target. For example `deployment_patch.yaml`
will target `deployment.yaml`. The patch will be applied using JSON Merge, Strategic Merge Patch, or JSON Patch.
Which strategy is used is based on the file content. Even though JSON strategies are used, the files can be written
using YAML syntax.
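For instance, a hypothetical `overlays/custom/deployment_patch.yaml` that only changes the replica count of the base `deployment.yaml` could contain nothing but the fields to merge:

```yaml
# overlays/custom/deployment_patch.yaml
# JSON Merge / strategic merge style patch applied on top of deployment.yaml
spec:
  replicas: 3
```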
## Cluster and Bundle state
See [Cluster and Bundle state](./cluster-bundles-state.md).

View File

@ -0,0 +1,79 @@
# Mapping to Downstream Clusters
:::info
__Multi-cluster Only__:
This approach only applies if you are running Fleet in a multi-cluster style
:::
When deploying `GitRepos` to downstream clusters the clusters must be mapped to a target.
## Defining targets
The deployment targets of a `GitRepo` are defined using the `spec.targets` field to
match clusters or cluster groups. The YAML specification is as below.
```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: myrepo
namespace: clusters
spec:
repo: https://github.com/rancher/fleet-examples
paths:
- simple
# Targets are evaluated in order and the first one to match is used. If
# no targets match then the evaluated cluster will not be deployed to.
targets:
# The name of target. This value is largely for display and logging.
# If not specified a default name of the format "target000" will be used
- name: prod
# A selector used to match clusters. The structure is the standard
# metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
# clusterSelector will be used only to further refine the selection after
# clusterGroupSelector and clusterGroup are evaluated.
clusterSelector:
matchLabels:
env: prod
# A selector used to match cluster groups.
clusterGroupSelector:
matchLabels:
region: us-east
# A specific clusterGroup by name that will be selected
clusterGroup: group1
```
## Target Matching
All clusters and cluster groups in the same namespace as the `GitRepo` will be evaluated against all targets.
If any of the targets match the cluster then the `GitRepo` will be deployed to the downstream cluster. If
no match is made, then the `GitRepo` will not be deployed to that cluster.
There are three approaches to matching clusters.
One can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so
the final match is evaluated as "clusterSelector && clusterGroupSelector && clusterGroup". If any of the three have the
default value it is dropped from the criteria. The default value is either null or "". It is important to realize
that the value `{}` for a selector means "match everything."
```yaml
# Match everything
clusterSelector: {}
# Selector ignored
clusterSelector: null
```
## Default target
If no target is set for the `GitRepo` then the default targets value is applied. The default targets value is as below.
```yaml
targets:
- name: default
clusterGroup: default
```
This means that if you wish to set up a default location that non-configured GitRepos will go to, just create a cluster group called `default`
and add clusters to it.
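A minimal `default` group might look like the sketch below; the selector label is illustrative, so use whatever labels your clusters actually carry:

```yaml
kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: default
  namespace: clusters
spec:
  selector:
    matchLabels:
      default-target: "true"
```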

View File

@ -0,0 +1,115 @@
# Image scan
Image scan in Fleet allows you to scan your image repository, fetch the desired image, and update your git repository,
without the need to manually update your manifests.
:::caution
This is considered an experimental feature.
:::
Go to `fleet.yaml` and add the following section.
```yaml
imageScans:
# specify the policy to retrieve images, can be semver or alphabetical order
- policy:
# if range is specified, it will take the latest image according to semver order in the range
# for more details on how to use semver, see https://github.com/Masterminds/semver
semver:
range: "*"
# can use ascending or descending order
alphabetical:
order: asc
# specify images to scan
image: "your.registry.com/repo/image"
# Specify the tag name, it has to be unique in the same bundle
tagName: test-scan
# specify secret to pull image if in private registry
secretRef:
name: dockerhub-secret
# Specify the scan interval
interval: 5m
```
:::info
You can create multiple image scans in fleet.yaml.
:::
Go to your manifest files and update the field that you want to replace. For example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-slave
spec:
selector:
matchLabels:
app: redis
role: slave
tier: backend
replicas: 2
template:
metadata:
labels:
app: redis
role: slave
tier: backend
spec:
containers:
- name: slave
image: <image>:<tag> # {"$imagescan": "test-scan"}
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
```
:::note
There are multiple forms of `tagName` you can reference. For example:
- `{"$imagescan": "test-scan"}`: Use the full image name (foo/bar:tag)
- `{"$imagescan": "test-scan:name"}`: Only use the image name without the tag (foo/bar)
- `{"$imagescan": "test-scan:tag"}`: Only use the image tag
- `{"$imagescan": "test-scan:digest"}`: Use the full image name with digest (foo/bar:tag@sha256...)
:::
Create a GitRepo that includes your fleet.yaml
```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: my-repo
namespace: fleet-local
spec:
# change this to be your own repo
repo: https://github.com/rancher/fleet-examples
# defines how often Fleet syncs all the images and decides whether to apply changes
imageScanInterval: 5m
# the user must provide a secret that has write access to the git repository
clientSecretName: secret
# specify the commit pattern
imageScanCommit:
authorName: foo
authorEmail: foo@bar.com
messageTemplate: "update image"
```
Try pushing a new image tag, for example `<image>:<new-tag>`. Wait for a while and there should be a new commit pushed to your git repository that changes the tag in deployment.yaml.
Once the change is made in the git repository, Fleet will pick up the change and deploy it into your cluster.

View File

@ -0,0 +1,13 @@
# Overview
![](/img/arch.png)
### What is Fleet?
- **Cluster engine**: Fleet is a container management and deployment engine designed to offer users more control on the local cluster and constant monitoring through **GitOps**. Fleet focuses not only on the ability to scale, but it also gives users a high degree of control and visibility to monitor exactly what is installed on the cluster.
- **Deployment management**: Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy all resources in the cluster. As a result, users have a high degree of control, consistency, and auditability.
### Configuration Management
Fleet is fundamentally a set of Kubernetes [custom resource definitions (CRDs)](https://fleet.rancher.io/concepts/) and controllers that manage GitOps for a single Kubernetes cluster or a large scale deployment of Kubernetes clusters. It is a distributed initialization system that makes it easy to customize applications and manage HA clusters from a single point.

View File

@ -0,0 +1,9 @@
# Installation
The installation is broken up into two different use cases: [Single](./single-cluster-install.md) and
[Multi-Cluster](./multi-cluster-install.md) install. The single cluster install is for when you wish to use GitOps to manage a single cluster,
in which case you do not need a centralized manager cluster. In the multi-cluster use case
you will set up a centralized manager cluster to which you can register clusters.
If you are just learning Fleet, the single cluster install is the recommended starting
point, after which you can move from a single cluster to a multi-cluster setup down the line.

View File

@ -0,0 +1,46 @@
# Manager Initiated
Refer to the [overview page](./cluster-overview.md#manager-initiated-registration) for background information on the manager initiated registration style.
## Kubeconfig Secret
The manager initiated registration flow is accomplished by creating a
`Cluster` resource in the Fleet Manager that refers to a Kubernetes
`Secret` containing a valid kubeconfig file in the data field called `value`.
The format of this secret is intended to match the [format](https://cluster-api.sigs.k8s.io/developer/architecture/controllers/cluster.html#secrets)
of the kubeconfig
secret used in [cluster-api](https://github.com/kubernetes-sigs/cluster-api).
This means you can use `cluster-api` to create a cluster that is dynamically
registered with Fleet.
## Example
### Kubeconfig Secret
```yaml
kind: Secret
apiVersion: v1
metadata:
name: my-cluster-kubeconfig
namespace: clusters
data:
value: YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIHNlcnZlcjogaHR0cHM6Ly9leGFtcGxlLmNvbTo2NDQzCiAgbmFtZTogY2x1c3Rlcgpjb250ZXh0czoKLSBjb250ZXh0OgogICAgY2x1c3RlcjogY2x1c3RlcgogICAgdXNlcjogdXNlcgogIG5hbWU6IGRlZmF1bHQKY3VycmVudC1jb250ZXh0OiBkZWZhdWx0CmtpbmQ6IENvbmZpZwpwcmVmZXJlbmNlczoge30KdXNlcnM6Ci0gbmFtZTogdXNlcgogIHVzZXI6CiAgICB0b2tlbjogc29tZXRoaW5nCg==
```
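Rather than hand-encoding the kubeconfig, the same secret can be created directly from a kubeconfig file (the path below is a placeholder):

```shell
# Create the kubeconfig secret expected by the Cluster resource
kubectl -n clusters create secret generic my-cluster-kubeconfig \
  --from-file=value=/path/to/downstream-kubeconfig
```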
### Cluster
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
name: my-cluster
namespace: clusters
labels:
demo: "true"
env: dev
spec:
kubeConfigSecret: my-cluster-kubeconfig
```

View File

@ -0,0 +1,162 @@
# Multi-cluster Install
![](/img/arch.png)
**Note:** Downstream clusters in Rancher are automatically registered in Fleet. Users can access Fleet under `Continuous Delivery` on Rancher.
**Warning:** The multi-cluster install described below is **only** covered in standalone Fleet, which is untested by Rancher QA.
In the below use case, you will set up a centralized Fleet manager. The centralized Fleet manager is a
Kubernetes cluster running the Fleet controllers. After installing the Fleet manager, you will then
need to register remote downstream clusters with the Fleet manager.
## Prerequisites
### Helm 3
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and is
fairly straightforward to use. To install the Helm 3 CLI follow the
[official install instructions](https://helm.sh/docs/intro/install/). The TL;DR is
macOS
```
brew install helm
```
Windows
```
choco install kubernetes-helm
```
### Kubernetes
The Fleet manager is a controller running on a Kubernetes cluster, so an existing cluster is required. All
downstream clusters that will be managed will need to communicate with this central Kubernetes cluster. This
means the Kubernetes API server URL must be accessible to the downstream clusters. Any community-supported
version of Kubernetes will work; in practice this means 1.15 or greater.
## API Server URL and CA certificate
In order for your Fleet manager installation to work properly, it is important
that the correct API server URL and CA certificate are configured. The Fleet agents
will communicate with the Kubernetes API server URL. This means the Kubernetes
API server must be accessible to the downstream clusters. You will also need
to obtain the CA certificate of the API server. The easiest way to obtain this information
is typically from your kubeconfig file (`${HOME}/.kube/config`). The `server`,
`certificate-authority-data`, or `certificate-authority` fields will have these values.
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTi...
server: https://example.com:6443
```
Please note that the `certificate-authority-data` field is base64 encoded and will need to be
decoded before you save it into a file. This can be done by saving the base64 encoded contents to
a file and then running
```shell
base64 -d encoded-file > ca.pem
```
If you have `jq` and `base64` available then this one-liner will pull all CA certificates from your
`KUBECONFIG` and place them in a file named `ca.pem`.
```shell
kubectl config view -o json --raw | jq -r '.clusters[].cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
If you have a multi-cluster setup, you can use this command:
```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
## Install
In the following example it is assumed that the API server URL from the `KUBECONFIG` is `https://example.com:6443`
and that the CA certificate is in the file `ca.pem`. If your API server certificate is signed by a well-known CA you can
omit the `apiServerCA` parameter below or just create an empty `ca.pem` file (i.e. `touch ca.pem`).
Run the following commands.
Set up the environment with your specific values.
```shell
API_SERVER_URL="https://example.com:6443"
API_SERVER_CA="ca.pem"
```
If you have a multi-cluster setup, you can use this command:
```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
API_SERVER_URL=$(kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["server"]')
# Leave empty if your API server is signed by a well known CA
API_SERVER_CA="ca.pem"
```
First validate the server URL is correct.
```shell
curl -fLk ${API_SERVER_URL}/version
```
The output of this command should be JSON with the version of the Kubernetes server or a `401 Unauthorized` error.
If you do not get either of these results then please ensure you have the correct URL. The API server port is typically
6443 for Kubernetes.
Next validate that the CA certificate is proper by running the below command. If your API server is signed by a
well known CA then omit the `--cacert ${API_SERVER_CA}` part of the command.
```shell
curl -fL --cacert ${API_SERVER_CA} ${API_SERVER_URL}/version
```
If you get a valid JSON response or a `401 Unauthorized` then it worked. The Unauthorized error is
only because the curl command is not setting proper credentials, but this validates that the TLS
connection works and the `ca.pem` is correct for this URL. If you get an `SSL certificate problem` then
the `ca.pem` is not correct. The contents of the `${API_SERVER_CA}` file should look similar to the below:
```
-----BEGIN CERTIFICATE-----
MIIBVjCB/qADAgECAgEAMAoGCCqGSM49BAMCMCMxITAfBgNVBAMMGGszcy1zZXJ2
ZXItY2FAMTU5ODM5MDQ0NzAeFw0yMDA4MjUyMTIwNDdaFw0zMDA4MjMyMTIwNDda
MCMxITAfBgNVBAMMGGszcy1zZXJ2ZXItY2FAMTU5ODM5MDQ0NzBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABDXlQNkXnwUPdbSgGz5Rk6U9ldGFjF6y1YyF36cNGk4E
0lMgNcVVD9gKuUSXEJk8tzHz3ra/+yTwSL5xQeLHBl+jIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0cAMEQCIFMtZ5gGDoDs
ciRyve+T4xbRNVHES39tjjup/LuN4tAgAiAteeB3jgpTMpZyZcOOHl9gpZ8PgEcN
KDs/pb3fnMTtpA==
-----END CERTIFICATE-----
```
Once you have validated the API server URL and API server CA parameters, install the following two
Helm charts.
First install the Fleet CustomResourceDefinitions.
```shell
helm -n cattle-fleet-system install --create-namespace --wait fleet-crd https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-crd-0.5.0-rc2.tgz
```
Second install the Fleet controllers.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
--set apiServerURL="${API_SERVER_URL}" \
--set-file apiServerCA="${API_SERVER_CA}" \
fleet https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-0.5.0-rc2.tgz
```
Fleet should be ready to use. You can check the status of the Fleet controller pods by running the below commands.
```shell
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```
NAME READY STATUS RESTARTS AGE
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
```
At this point the Fleet manager should be ready. You can now [register clusters](./cluster-overview.md) and [git repos](./gitrepo-add.md) with
the Fleet manager.

View File

@ -0,0 +1,108 @@
# Namespaces
All types in the Fleet manager are namespaced. The namespaces of the manager types do not correspond to the namespaces
of the deployed resources in the downstream cluster. Understanding how namespaces are used in the Fleet manager is
important to understanding the security model and how one can use Fleet in a multi-tenant fashion.
## GitRepos, Bundles, Clusters, ClusterGroups
The primary types are all scoped to a namespace. All selectors for `GitRepo` targets will be evaluated against
the `Clusters` and `ClusterGroups` in the same namespace. This means that if you give `create` or `update` privileges
to the `GitRepo` type in a namespace, that end user can modify the selector to match any cluster in that namespace.
In practice this means that if you want two teams to self-manage their own `GitRepo` registrations, but they should
not be able to target each other's clusters, they should be in different namespaces.
## Namespace Creation Behavior in Bundles
When deploying a Fleet bundle, the specified namespace will automatically be created if it does not already exist.
## Special Namespaces
### fleet-local
The **fleet-local** namespace is a special namespace used for the single cluster use case or to bootstrap
the configuration of the Fleet manager.
When fleet is installed the `fleet-local` namespace is created along with one `Cluster` called `local` and one
`ClusterGroup` called `default`. If no targets are specified on a `GitRepo`, it is by default targeted to the
`ClusterGroup` named `default`. This means that all `GitRepos` created in `fleet-local` will
automatically target the `local` `Cluster`. The `local` `Cluster` refers to the cluster the Fleet manager is running
on.
**Note:** If you would like to migrate your cluster from `fleet-local` to `default`, please see this [documentation](./troubleshooting.md#migrate-the-local-cluster-to-the-fleet-default-cluster).
### cattle-fleet-system
The Fleet controller and Fleet agent run in this namespace. All service accounts referenced by `GitRepos` are expected
to live in this namespace in the downstream cluster.
### cattle-fleet-clusters-system
This namespace holds secrets for the cluster registration process. It should not contain any other resources,
especially other secrets.
### Cluster namespaces
For every cluster that is registered a namespace is created by the Fleet manager for that cluster.
These namespaces are named in the form `cluster-${namespace}-${cluster}-${random}`. The purpose of this
namespace is that all `BundleDeployments` for that cluster are put into this namespace and
then the downstream cluster is given access to watch and update `BundleDeployments` in that namespace only.
## Cross namespace deployments
It is possible to create a GitRepo that will deploy across namespaces. The primary purpose of this is so that a
central privileged team can manage common configuration for many clusters that are managed by different teams. The way
this is accomplished is by creating a `BundleNamespaceMapping` resource in a cluster.
If you are creating a `BundleNamespaceMapping` resource it is best to do it in a namespace that only contains `GitRepos`
and no `Clusters`. It seems to get confusing if you have Clusters in the same repo as the cross namespace `GitRepos` will still
always be evaluated against the current namespace. So if you have clusters in the same namespace you may wish to make them
canary clusters.
A `BundleNamespaceMapping` has only two fields, which are shown below:
```yaml
kind: BundleNamespaceMapping
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: not-important
namespace: typically-unique
# Bundles to match by label. The labels are defined in the fleet.yaml
# labels field or from the GitRepo metadata.labels field
bundleSelector:
matchLabels:
foo: bar
# Namespaces to match by label
namespaceSelector:
matchLabels:
foo: bar
```
If the `BundleNamespaceMapping`'s `bundleSelector` field matches a `Bundle`'s labels then that `Bundle`'s target criteria will
be evaluated against all clusters in all namespaces that match the `namespaceSelector`. One can specify labels for the created
bundles from git by putting labels in the `fleet.yaml` file or on the `metadata.labels` field on the `GitRepo`.
## Restricting GitRepos
A namespace can contain multiple `GitRepoRestriction` resources. All `GitRepos`
created in that namespace will be checked against the list of restrictions.
If a `GitRepo` violates one of the constraints its `BundleDeployment` will be
in an error state and won't be deployed.
This can also be used to set the defaults for GitRepo's `serviceAccount` and `clientSecretName` fields.
```yaml
kind: GitRepoRestriction
apiVersion: fleet.cattle.io/v1alpha1
metadata:
name: restriction
namespace: typically-unique
spec:
allowedClientSecretNames: []
allowedRepoPatterns: []
allowedServiceAccounts: []
defaultClientSecretName: ""
defaultServiceAccount: ""
```

View File

@ -0,0 +1,64 @@
# Quick Start
Who needs documentation, let's just run this thing!
## Install
Get helm if you don't have it. Helm 3 is just a CLI and won't do bad insecure
things to your cluster.
```
brew install helm
```
Install the Fleet Helm charts (there are two because we separate out the CRDs for ultimate flexibility).
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
fleet-crd https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-crd-v0.5.0-rc2.tgz
helm -n cattle-fleet-system install --create-namespace --wait \
fleet https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-v0.5.0-rc2.tgz
```
## Add a Git Repo to watch
Change `spec.repo` to your git repo of choice. Kubernetes manifest files that should
be deployed should be in `/manifests` in your repo.
```bash
cat > example.yaml << "EOF"
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
name: sample
# This namespace is special and auto-wired to deploy to the local cluster
namespace: fleet-local
spec:
# Everything from this repo will be run in this cluster. You trust me, right?
repo: "https://github.com/rancher/fleet-examples"
paths:
- simple
EOF
kubectl apply -f example.yaml
```
## Get Status
Get the status of what Fleet is doing:
```shell
kubectl -n fleet-local get fleet
```
You should see something like this get created in your cluster.
```
kubectl get deploy frontend
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 3/3 3 3 116m
```
Enjoy and read the [docs](https://rancher.github.io/fleet).

View File

@ -0,0 +1,62 @@
# Single Cluster Install
![](/img/single-cluster.png)
In this use case you have only one cluster. The cluster will run both the Fleet
manager and the Fleet agent. The cluster will communicate with the Git server to
deploy resources to this local cluster. This is the simplest setup and very
useful for dev/test and small scale setups. This use case is supported as a valid
use case for production.
## Prerequisites
### Helm 3
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and is
fairly straightforward to use. To install the Helm 3 CLI follow the
[official install instructions](https://helm.sh/docs/intro/install/). The TL;DR is
macOS
```
brew install helm
```
Windows
```
choco install kubernetes-helm
```
### Kubernetes
Fleet is a controller running on a Kubernetes cluster so an existing cluster is required. For the
single cluster use case you will install Fleet to the cluster which you intend to manage with GitOps.
Any community-supported version of Kubernetes will work; in practice this means 1.15 or greater.
## Install
Install the following two Helm charts.
First install the Fleet CustomResourceDefinitions.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
fleet-crd https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-crd-0.5.0-rc2.tgz
```
Second install the Fleet controllers.
```shell
helm -n cattle-fleet-system install --create-namespace --wait \
fleet https://github.com/rancher/fleet/releases/download/v0.5.0-rc2/fleet-0.5.0-rc2.tgz
```
Fleet should now be ready to use for the single cluster use case. You can check the status of the Fleet controller pods by
running the below commands.
```shell
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```
NAME READY STATUS RESTARTS AGE
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
```
You can now [register some git repos](./gitrepo-add.md) in the `fleet-local` namespace to start deploying Kubernetes resources.

View File

@ -0,0 +1,226 @@
# Troubleshooting
This section contains commands and tips to troubleshoot Fleet.
## **How Do I...**
### Fetch the log from `fleet-controller`?
In the local management cluster where the `fleet-controller` is deployed, run the following command:
```
$ kubectl logs -l app=fleet-controller -n cattle-fleet-system
```
### Fetch the log from the `fleet-agent`?
Run the corresponding command below in each downstream cluster, or in the local cluster:
```
# Downstream cluster
$ kubectl logs -l app=fleet-agent -n cattle-fleet-system
# Local cluster
$ kubectl logs -l app=fleet-agent -n cattle-local-fleet-system
```
### Fetch detailed error logs from `GitRepos` and `Bundles`?
Normally, errors should appear in the Rancher UI. However, if there is not enough information displayed about the error there, you can research further by trying one or more of the following as needed:
- For more information about the bundle, click on `bundle`, and the YAML mode will be enabled.
- For more information about the GitRepo, click on `GitRepo`, then click on `View Yaml` in the upper right of the screen. After viewing the YAML, check `status.conditions`; a detailed error message should be displayed here.
- Check the `fleet-controller` logs for syncing errors.
- Check the `fleet-agent` log in the downstream cluster if you encounter issues when deploying the bundle.
### Check a chart rendering error in `Kustomize`?
Check the [`fleet-controller` logs](./troubleshooting.md#fetch-the-log-from-fleet-controller) and the [`fleet-agent` logs](./troubleshooting.md#fetch-the-log-from-the-fleet-agent).
### Check errors about watching or checking out the `GitRepo`, or about the downloaded Helm repo in `fleet.yaml`?
Check the `gitjob-controller` logs using the following command with your specific `gitjob` pod name filled in:
```
$ kubectl logs -f $gitjob-pod-name -n cattle-fleet-system
```
Note that there are two containers inside the pod: the `step-git-source` container that clones the git repo, and the `fleet` container that applies bundles based on the git repo.
The pods will usually have images named `rancher/tekton-utils` with the `gitRepo` name as a prefix. Check the logs for these Kubernetes job pods in the local management cluster as follows, filling in your specific `gitRepoName` pod name and namespace:
```
$ kubectl logs -f $gitRepoName-pod-name -n namespace
```
### Check the status of the `fleet-controller`?
You can check the status of the `fleet-controller` pods by running the commands below:
```bash
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```
```bash
NAME READY STATUS RESTARTS AGE
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
```
### Migrate the local cluster to the Fleet default cluster?
Users who want to deploy to the local cluster as well may move the cluster from `fleet-local` to `fleet-default` in the Rancher UI as follows:
- To get to Fleet in Rancher, click ☰ > Continuous Delivery.
- Under the **Clusters** menu, select the **local** cluster by checking the box to the left.
- Select **Assign to** from the tabs above the cluster.
- Select **`fleet-default`** from the **Assign Cluster To** dropdown.
**Result**: The cluster will be migrated to `fleet-default`.
### Enable debug logging for `fleet-controller` and `fleet-agent`?
The ability to enable debug logging was added in Rancher v2.6.3 (Fleet v0.3.8).
- Go to the **Dashboard**, then click on the **local cluster** in the left navigation menu
- Select **Apps & Marketplace**, then **Installed Apps** from the dropdown
- From there, you will upgrade the Fleet chart with the value `debug=true`. You can also set `debugLevel=5` if desired.
## **Additional Solutions for Other Fleet Issues**
### Naming conventions for CRDs
1. For CRD terms like `clusters` and `gitrepos`, you must reference the full CRD name. For example, the cluster CRD's complete name is `cluster.fleet.cattle.io`, and the gitrepo CRD's complete name is `gitrepo.fleet.cattle.io`.
1. `Bundles`, which are created from the `GitRepo`, follow the pattern `$gitrepoName-$path` in the same workspace/namespace where the `GitRepo` was created. Note that `$path` is the path directory in the git repository that contains the `bundle` (`fleet.yaml`).
1. `BundleDeployments`, which are created from the `bundle`, follow the pattern `$bundleName-$clusterName` in the namespace `clusters-$workspace-$cluster-$generateHash`. Note that `$clusterName` is the cluster to which the bundle will be deployed (see the example after this list).
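To make these naming patterns concrete, consider a hypothetical `GitRepo` named `myrepo` in the `fleet-default` workspace with the path `staging/app`: the resulting bundle name would look roughly like `myrepo-staging-app`. You can list the generated objects as follows:

```bash
# Bundles live in the same workspace/namespace as the GitRepo
kubectl get bundles -n fleet-default

# BundleDeployments live in the per-cluster namespaces (clusters-$workspace-$cluster-$generateHash)
kubectl get bundledeployments -A
```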
### HTTP secrets in GitHub
When testing Fleet with private git repositories, you will notice that HTTP secrets are no longer supported in GitHub. To work around this issue, follow these steps:
1. Create a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) in GitHub.
1. In Rancher, create an HTTP [secret](https://rancher.com/docs/rancher/v2.6/en/k8s-in-rancher/secrets/) with your GitHub username.
1. Use your token as the secret (a CLI alternative is sketched below).
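If you prefer to create the secret with `kubectl` instead of the Rancher UI, a sketch (secret name, namespace and username are placeholders; reference the secret from the `GitRepo` via `clientSecretName`):

```bash
# Assumes the token is exported as GITHUB_TOKEN in your shell
kubectl create secret generic github-auth -n fleet-default \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=my-github-user \
  --from-literal=password="$GITHUB_TOKEN"
```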
### Fleet fails with bad response code: 403
If your GitJob returns the error below, the problem may be that Fleet cannot access the Helm repo you specified in your [`fleet.yaml`](./gitrepo-structure.md):
```
time="2021-11-04T09:21:24Z" level=fatal msg="bad response code: 403"
```
Perform the following steps to assess:
- Check that your repo is accessible from your dev machine and that you can download the Helm chart successfully, as sketched after this list.
- Check that your credentials for the git repo are valid.
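For example, to confirm from your workstation that the chart can actually be downloaded (repository URL, chart name and version are placeholders):

```bash
helm repo add myrepo https://charts.example.com/
helm repo update
helm pull myrepo/mychart --version 1.2.3
```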
### Helm chart repo: certificate signed by unknown authority
If your GitJob returns the error below, you may have added the wrong certificate chain:
```
time="2021-11-11T05:55:08Z" level=fatal msg="Get \"https://helm.intra/virtual-helm/index.yaml\": x509: certificate signed by unknown authority"
```
Please verify your certificate with the following command:
```bash
context=playground-local
kubectl get secret -n fleet-default helm-repo -o jsonpath="{['data']['cacerts']}" --context $context | base64 -d | openssl x509 -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
7a:1e:df:79:5f:b0:e0:be:49:de:11:5e:d9:9c:a9:71
Signature Algorithm: sha512WithRSAEncryption
Issuer: C = CH, O = MY COMPANY, CN = NOP Root CA G3
...
```
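If the certificate shown is not the CA that signed your Helm repository's certificate, recreate the secret with the correct chain. A sketch reusing the names from the example above (`helm-repo` in `fleet-default`, with the CA in `ca.pem`):

```bash
kubectl delete secret helm-repo -n fleet-default
kubectl create secret generic helm-repo -n fleet-default --from-file=cacerts=./ca.pem
```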
### Fleet deployment stuck in modified state
When you deploy bundles with Fleet, some of the components are modified at runtime, and this causes Fleet to report a "modified" state.
To ignore the modified flag for the differences between the Helm install generated by `fleet.yaml` and the resource in your cluster, add a `diff.comparePatches` to the `fleet.yaml` for your Deployment, as shown in this example:
```yaml
defaultNamespace: <namespace name>
helm:
  releaseName: <release name>
  repo: <repo name>
  chart: <chart name>
diff:
  comparePatches:
  - apiVersion: apps/v1
    kind: Deployment
    operations:
    - {"op":"remove", "path":"/spec/template/spec/hostNetwork"}
    - {"op":"remove", "path":"/spec/template/spec/nodeSelector"}
    jsonPointers: # jsonPointers allows ignoring diffs at certain JSON paths
    - "/spec/template/spec/priorityClassName"
    - "/spec/template/spec/tolerations"
```
To determine which operations should be removed, observe the logs from `fleet-agent` on the target cluster. You should see entries similar to the following:
```text
level=error msg="bundle monitoring-monitoring: deployment.apps monitoring/monitoring-monitoring-kube-state-metrics modified {\"spec\":{\"template\":{\"spec\":{\"hostNetwork\":false}}}}"
```
Based on the above log, you can add the following operation to exclude that field from the comparison:
```json
{"op":"remove", "path":"/spec/template/spec/hostNetwork"}
```
### `GitRepo` or `Bundle` stuck in modified state
**Modified** means that there is a mismatch between the actual state and the desired state (the source of truth), which lives in the git repository.
1. Check the [bundle diffs documentation](./bundle-diffs.md) for more information.
1. You can also force-update the `GitRepo` to perform a manual resync. Select **GitRepo** on the left navigation bar, then select **Force Update**. A CLI sketch follows this list.
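If you are working from the CLI, an assumed equivalent of **Force Update** is to bump the `spec.forceSyncGeneration` counter on the `GitRepo` (field availability may vary by Fleet version; the name and namespace are placeholders):

```bash
kubectl patch gitrepo my-gitrepo -n fleet-default --type=merge \
  -p '{"spec":{"forceSyncGeneration": 1}}'
```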
### Bundle has a Horizontal Pod Autoscaler (HPA) in modified state
For bundles with an HPA, `Modified` is the expected state, as the HPA changes fields after deployment so that they no longer match the bundle's desired state, usually the replica count.
You must define a patch in the `fleet.yaml` to ignore this field, as described in [`GitRepo` or `Bundle` stuck in modified state](#gitrepo-or-bundle-stuck-in-modified-state).
Here is an example of such a patch for the deployment `nginx` in namespace `default`:
```yaml
diff:
comparePatches:
- apiVersion: apps/v1
kind: Deployment
name: nginx
namespace: default
operations:
- {"op": "remove", "path": "/spec/replicas"}
```
### What if the cluster is unavailable, or is in a `WaitCheckIn` state?
You will need to re-import the cluster and restart the registration process: select **Cluster** on the left navigation bar, then select **Force Update**.
:::caution
__WaitCheckIn status for Rancher v2.5__:
The cluster will show in `WaitCheckIn` status because the `fleet-controller` is attempting to communicate with Fleet using the Rancher service IP. However, Fleet must communicate directly with Rancher via the Kubernetes service DNS using service discovery, not through the proxy. For more, see the [Rancher docs](https://rancher.com/docs/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/#install-rancher).
:::
### GitRepo complains with `gzip: invalid header`
When you see an error like the one below ...
```sh
Error opening a gzip reader for /tmp/getter154967024/archive: gzip: invalid header
```
... the content of the Helm chart is invalid. Manually download the chart to your local machine and check its content, for example as sketched below.
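A quick way to check, assuming the chart URL from your `fleet.yaml` (URL and file names are placeholders):

```bash
curl -fL -o mychart.tgz https://charts.example.com/mychart-1.2.3.tgz
file mychart.tgz      # should report gzip compressed data
tar -tzf mychart.tgz  # should list the chart contents
```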

View File

@ -0,0 +1,10 @@
# Uninstall
Fleet is packaged as two Helm charts so uninstall is accomplished by
uninstalling the appropriate Helm charts. To uninstall Fleet run the following
two commands:
```shell
helm -n cattle-fleet-system uninstall fleet
helm -n cattle-fleet-system uninstall fleet-crd
```

View File

@ -0,0 +1,70 @@
# Webhook
By default, Fleet uses polling (every 15 seconds by default) to pull from a Git repo. However, it can be configured to use a webhook instead. Fleet currently supports GitHub, GitLab, Bitbucket, Bitbucket Server and Gogs.
### 1. Configure the webhook service. Fleet uses a gitjob service to handle webhook requests. Create an ingress that points to the gitjob service.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webhook-ingress
namespace: cattle-fleet-system
spec:
rules:
- host: your.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: gitjob
port:
number: 80
```
:::info
You can configure [TLS](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls) on the ingress; a minimal sketch follows this note.
:::
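For example, a minimal TLS variant of the ingress above; the secret name `gitjob-tls` is an assumption and must contain a certificate valid for your host:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webhook-ingress
  namespace: cattle-fleet-system
spec:
  tls:
    - hosts:
        - your.domain.com
      secretName: gitjob-tls # assumed TLS secret in cattle-fleet-system
  rules:
    - host: your.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitjob
                port:
                  number: 80
```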
### 2. Go to your webhook provider and configure the webhook callback URL. Here is a GitHub example.
![](/img/webhook.png)
Configuring a secret is optional. It is used to validate the webhook payload, as the payload should not be trusted by default.
If your webhook server is publicly accessible on the Internet, configuring the secret is recommended. If you do configure the
secret, follow step 3.
:::note
Only `application/json` is supported, due to a limitation of the webhook library.
:::
:::caution
If you configure a webhook, the polling interval is automatically adjusted to 1 hour.
:::
### 3. (Optional) Configure the webhook secret. The secret is used to validate the webhook payload. Make sure to put it in a Kubernetes secret called `gitjob-webhook` in the `cattle-fleet-system` namespace.
| Provider | K8s Secret Key |
|-----------------| ---------------------------------|
| GitHub | `github` |
| GitLab | `gitlab` |
| BitBucket | `bitbucket` |
| BitBucketServer | `bitbucket-server` |
| Gogs | `gogs` |
For example, to create a secret containing a GitHub secret to validate the webhook payload, run:
```shell
kubectl create secret generic gitjob-webhook -n cattle-fleet-system --from-literal=github=webhooksecretvalue
```
### 4. Go to your git provider and test the connection. You should get an HTTP response code.
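You can also check from your workstation that the callback URL is reachable at all (the URL is a placeholder; the exact code returned for a plain GET depends on the provider and the webhook library):

```bash
curl -sS -o /dev/null -w "%{http_code}\n" https://your.domain.com/
```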

View File

@ -0,0 +1,110 @@
{
"docs": [
"index",
"quickstart",
"concepts",
"architecture",
"examples",
{
"type": "category",
"label": "Operator Guide",
"items": [
{
"Managing Clusters": {
"Registering": [
{
"type": "doc",
"id": "cluster-overview"
},
{
"type": "doc",
"id": "cluster-tokens"
},
{
"type": "doc",
"id": "agent-initiated"
},
{
"type": "doc",
"id": "manager-initiated"
}
]
},
"Cluster Groups": [
{
"type": "doc",
"id": "cluster-group"
}
]
},
"namespaces"
]
},
{
"type": "category",
"label": "User Guide",
"items": [
{
"Managing Git Repos": [
{
"type": "doc",
"id": "gitrepo-add"
},
{
"type": "doc",
"id": "gitrepo-structure"
},
{
"type": "doc",
"id": "gitrepo-targets"
},
{
"type": "doc",
"id": "bundle-diffs"
},
{
"type": "doc",
"id": "webhook"
},
{
"type": "doc",
"id": "imagescan"
},
{
"type": "doc",
"id": "cluster-bundles-state"
}
]
}
]
},
"troubleshooting",
{
"type": "category",
"label": "Advanced Users",
"items": [
"advanced-users",
{
"Installation": [
{
"type": "doc",
"id": "installation"
},
{
"type": "doc",
"id": "single-cluster-install"
},
{
"type": "doc",
"id": "multi-cluster-install"
},
{
"type": "doc",
"id": "uninstall"
}
]
}
]
}
]
}

View File

@ -0,0 +1,110 @@
{
"docs": [
"index",
"quickstart",
"concepts",
"architecture",
"examples",
{
"type": "category",
"label": "Operator Guide",
"items": [
{
"Managing Clusters": {
"Registering": [
{
"type": "doc",
"id": "cluster-overview"
},
{
"type": "doc",
"id": "cluster-tokens"
},
{
"type": "doc",
"id": "agent-initiated"
},
{
"type": "doc",
"id": "manager-initiated"
}
]
},
"Cluster Groups": [
{
"type": "doc",
"id": "cluster-group"
}
]
},
"namespaces"
]
},
{
"type": "category",
"label": "User Guide",
"items": [
{
"Managing Git Repos": [
{
"type": "doc",
"id": "gitrepo-add"
},
{
"type": "doc",
"id": "gitrepo-structure"
},
{
"type": "doc",
"id": "gitrepo-targets"
},
{
"type": "doc",
"id": "bundle-diffs"
},
{
"type": "doc",
"id": "webhook"
},
{
"type": "doc",
"id": "imagescan"
},
{
"type": "doc",
"id": "cluster-bundles-state"
}
]
}
]
},
"troubleshooting",
{
"type": "category",
"label": "Advanced Users",
"items": [
"advanced-users",
{
"Installation": [
{
"type": "doc",
"id": "installation"
},
{
"type": "doc",
"id": "single-cluster-install"
},
{
"type": "doc",
"id": "multi-cluster-install"
},
{
"type": "doc",
"id": "uninstall"
}
]
}
]
}
]
}

4
versions.json Normal file
View File

@ -0,0 +1,4 @@
[
"0.5",
"0.4"
]