chore: Set latest docusaurus version
Made with ❤️️ by updatecli
This commit is contained in: parent 18e3e3e868, commit 94f9b5cd68

@@ -0,0 +1,44 @@
# Architecture
|
||||
|
||||
Fleet has two primary components. The Fleet manager and the cluster agents. These
|
||||
components work in a two-stage pull model. The Fleet manager will pull from git and the
|
||||
cluster agents will pull from the Fleet manager.
|
||||
|
||||
## Fleet Manager
|
||||
|
||||
The Fleet manager is a set of Kubernetes controllers running in any standard Kubernetes
|
||||
cluster. The only API exposed by the Fleet manager is the Kubernetes API; there is no
custom API for the Fleet controller.
|
||||
|
||||
## Cluster Agents
|
||||
|
||||
One cluster agent runs in each cluster and is responsible for talking to the Fleet manager.
|
||||
The only communication from a cluster to the Fleet manager happens through this agent, and all communication
goes from the managed cluster to the Fleet manager. The Fleet manager does not initiate
connections to downstream clusters. This means managed clusters can run in private networks and behind
NATs. The only requirement is that the cluster agent can reach the
Kubernetes API of the cluster running the Fleet manager. The one exception is the optional
[manager initiated](./cluster-registration.md#manager-initiated) cluster registration flow.
|
||||
|
||||
The cluster agents are not assumed to have an "always on" connection. They will resume operation as
|
||||
soon as they can connect. Future enhancements may add the ability to schedule when
the agent checks in; as it stands, agents always attempt to stay connected.
|
||||
|
||||
## Security
|
||||
|
||||
The Fleet manager dynamically creates service accounts, manages their RBAC and then gives the
|
||||
tokens to the downstream clusters. Clusters are registered using cluster registration tokens, which can optionally expire.
|
||||
The cluster registration token is used only during the registration process to generate a credential specific
|
||||
to that cluster. After the cluster credential is established the cluster "forgets" the cluster registration
|
||||
token.
|
||||
|
||||
The service accounts given to the clusters only have privileges to list `BundleDeployment` in the namespace created
|
||||
specifically for that cluster. They can also update the `status` subresource of `BundleDeployment` and the `status`
subresource of their `Cluster` resource.
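As a sketch of how this can be verified from the upstream cluster, `kubectl auth can-i` can impersonate the per-cluster service account; the namespace and service account names below are placeholders for the ones Fleet generates, not literal values.

```bash
# Placeholder names: substitute the cluster namespace and registration service account Fleet created.
CLUSTER_NS=cluster-fleet-default-my-cluster-abcd1234
AGENT_SA=system:serviceaccount:$CLUSTER_NS:request-abcde

kubectl auth can-i list bundledeployments.fleet.cattle.io -n "$CLUSTER_NS" --as "$AGENT_SA"                        # expected: yes
kubectl auth can-i update bundledeployments.fleet.cattle.io -n "$CLUSTER_NS" --as "$AGENT_SA" --subresource=status # expected: yes
kubectl auth can-i create bundledeployments.fleet.cattle.io -n "$CLUSTER_NS" --as "$AGENT_SA"                      # expected: no
```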
|
||||
|
||||
## Component Overview
|
||||
|
||||
An overview of the components and how they interact on a high level.
|
||||
|
||||


@@ -0,0 +1,96 @@
# Create a Bundle Resource
|
||||
|
||||
Bundles are automatically created by Fleet when a `GitRepo` is created. In most cases `Bundles` should not be created
|
||||
manually by the user. If you want to deploy resources from a git repository use a
|
||||
[GitRepo](https://fleet.rancher.io/gitrepo-add) instead.
|
||||
|
||||
If you want to deploy resources without a git repository follow this guide to create a `Bundle`.
|
||||
|
||||
When creating a `GitRepo`, Fleet will fetch the resources from the git repository and add them to a Bundle.
When creating a `Bundle` directly, resources need to be specified explicitly in the `Bundle` spec.
Resources can be compressed with gzip. See [here](https://github.com/rancher/rancher/blob/v2.7.3/pkg/controllers/provisioningv2/managedchart/managedchart.go#L149-L153) for
an example of how Rancher uses compression in Go code.
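A minimal sketch of what that compression amounts to, assuming the `base64+gz` encoding that `fleet apply --compress` emits; verify the field names against a bundle generated by the CLI before relying on them:

```bash
# Gzip and base64-encode a manifest, then emit a resource entry for the Bundle spec.
CONTENT=$(gzip -c nginx.yaml | base64 | tr -d '\n')

cat <<EOF
resources:
- name: nginx.yaml
  encoding: base64+gz
  content: $CONTENT
EOF
```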
|
||||
|
||||
If you would like to deploy to downstream clusters, you need to define targets. Targets work similarly to targets in `GitRepo`.
|
||||
See [Mapping to Downstream Clusters](https://fleet.rancher.io/gitrepo-targets#defining-targets).
|
||||
|
||||
The following example creates an nginx `Deployment` in the local cluster:
|
||||
|
||||
```yaml
|
||||
kind: Bundle
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
# Any name can be used here
|
||||
name: my-bundle
|
||||
# For single cluster use fleet-local, otherwise use the namespace of
|
||||
# your choosing
|
||||
namespace: fleet-local
|
||||
spec:
|
||||
resources:
|
||||
# List of all resources that will be deployed
|
||||
- content: |
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.14.2
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: nginx.yaml
|
||||
targets:
|
||||
- clusterName: local
|
||||
|
||||
```
|
||||
|
||||
## Limitations
|
||||
|
||||
Helm options related to downloading the helm chart will be ignored. The helm chart is downloaded by the fleet-cli, which creates the bundles. The bundle has to contain all the resources from the chart. Therefore the bundle will ignore:
|
||||
|
||||
* `spec.helm.repo`
|
||||
* `spec.helm.charts`
|
||||
|
||||
You can't use a `fleet.yaml` in the resources; it is only used by the fleet-cli to create bundles.
|
||||
|
||||
The `spec.targetRestrictions` field is not useful, as it is an allow list for targets specified in `spec.targets`. It is not needed, since `targets` are explicitly given in a bundle and an empty `targetRestrictions` defaults to allow.
|
||||
|
||||
## Convert a Helm Chart into a Bundle
|
||||
|
||||
You can use the Fleet CLI to convert a Helm chart into a bundle.
|
||||
|
||||
For example, you can download and convert the "external secrets" operator chart like this:
|
||||
```
|
||||
cat > targets.yaml <<EOF
|
||||
targets:
|
||||
- clusterSelector: {}
|
||||
EOF
|
||||
|
||||
mkdir app
|
||||
cat > app/fleet.yaml <<EOF
|
||||
defaultNamespace: external-secrets
|
||||
helm:
|
||||
repo: https://charts.external-secrets.io
|
||||
chart: external-secrets
|
||||
EOF
|
||||
|
||||
fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - external-secrets app > eso-bundle.yaml
|
||||
|
||||
kubectl apply -f eso-bundle.yaml
|
||||
```
|
||||
|
||||
Make sure the cluster selector in `targets.yaml` matches all clusters you want to deploy to.
|
||||
|
||||
The blog post on [Fleet: Multi-Cluster Deployment with the Help of External Secrets](https://www.suse.com/c/rancher_blog/fleet-multi-cluster-deployment-with-the-help-of-external-secrets/) has more information.

@@ -0,0 +1,276 @@
# Generating Diffs to Ignore Modified GitRepos
|
||||
|
||||
|
||||
Continuous Delivery in Rancher is powered by Fleet. When a user adds a GitRepo CR, Continuous Delivery creates the associated Fleet bundles.
|
||||
|
||||
You can access these bundles by navigating to the Cluster Explorer (Dashboard UI), and selecting the `Bundles` section.
|
||||
|
||||
The bundled charts may have some objects that are amended at runtime; for example, in a ValidatingWebhookConfiguration the `caBundle` is initially empty and the CA cert is injected by the cluster.
|
||||
|
||||
This causes the status of the bundle and the associated GitRepo to be reported as "Modified".
|
||||
|
||||

|
||||
|
||||
Associated Bundle
|
||||

|
||||
|
||||
Fleet bundles support specifying a custom [JSON patch](http://jsonpatch.com/).

With such a patch, users can instruct Fleet to ignore object modifications.
|
||||
|
||||
## Simple Example
|
||||
|
||||
In this simple example, we create a Service and a ConfigMap and apply a bundle diff to them:
|
||||
|
||||
https://github.com/rancher/fleet-test-data/tree/master/bundle-diffs
|
||||
|
||||
|
||||
## Gatekeeper Example
|
||||
|
||||
In this example, we deploy opa-gatekeeper to our clusters using Continuous Delivery.
|
||||
|
||||
The opa-gatekeeper bundle associated with the opa GitRepo is in a Modified state.
|
||||
|
||||
Each path in the GitRepo CR has an associated Bundle CR. Users can view the Bundles and the diff needed in the Bundle status.
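A hedged sketch for inspecting that status from the CLI (the bundle name below is an example; list the bundles first to find the right one):

```bash
# List bundles and look at the modifiedStatus recorded for a modified one
kubectl -n fleet-default get bundles
kubectl -n fleet-default get bundle opa-opa-gatekeeper -o yaml   # inspect status.summary.nonReadyResources
```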
|
||||
|
||||
In our case the differences detected are as follows:
|
||||
|
||||
```yaml
|
||||
summary:
|
||||
desiredReady: 1
|
||||
modified: 1
|
||||
nonReadyResources:
|
||||
- bundleState: Modified
|
||||
modifiedStatus:
|
||||
- apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingWebhookConfiguration
|
||||
name: gatekeeper-validating-webhook-configuration
|
||||
patch: '{"$setElementOrder/webhooks":[{"name":"validation.gatekeeper.sh"},{"name":"check-ignore-label.gatekeeper.sh"}],"webhooks":[{"clientConfig":{"caBundle":"Cg=="},"name":"validation.gatekeeper.sh","rules":[{"apiGroups":["*"],"apiVersions":["*"],"operations":["CREATE","UPDATE"],"resources":["*"]}]},{"clientConfig":{"caBundle":"Cg=="},"name":"check-ignore-label.gatekeeper.sh","rules":[{"apiGroups":[""],"apiVersions":["*"],"operations":["CREATE","UPDATE"],"resources":["namespaces"]}]}]}'
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: gatekeeper-audit
|
||||
namespace: cattle-gatekeeper-system
|
||||
patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"manager"}],"containers":[{"name":"manager","resources":{"limits":{"cpu":"1000m"}}}],"tolerations":[]}}}}'
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: gatekeeper-controller-manager
|
||||
namespace: cattle-gatekeeper-system
|
||||
patch: '{"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"manager"}],"containers":[{"name":"manager","resources":{"limits":{"cpu":"1000m"}}}],"tolerations":[]}}}}'
|
||||
```
|
||||
|
||||
Based on this summary, there are three objects which need to be patched.
|
||||
|
||||
We will look at these one at a time.
|
||||
|
||||
### 1. ValidatingWebhookConfiguration:
|
||||
The gatekeeper-validating-webhook-configuration validating webhook has two ValidatingWebhooks in its spec.
|
||||
|
||||
In cases where more than one element in a field requires a patch, the patch refers to these as `$setElementOrder/ELEMENTNAME`.
|
||||
|
||||
From this information, we can see the two ValidatingWebhooks in question are:
|
||||
|
||||
```
|
||||
"$setElementOrder/webhooks": [
|
||||
{
|
||||
"name": "validation.gatekeeper.sh"
|
||||
},
|
||||
{
|
||||
"name": "check-ignore-label.gatekeeper.sh"
|
||||
}
|
||||
],
|
||||
```
|
||||
|
||||
Within each ValidatingWebhook, the fields that need to be ignored are as follows:
|
||||
|
||||
```
|
||||
{
|
||||
"clientConfig": {
|
||||
"caBundle": "Cg=="
|
||||
},
|
||||
"name": "validation.gatekeeper.sh",
|
||||
"rules": [
|
||||
{
|
||||
"apiGroups": [
|
||||
"*"
|
||||
],
|
||||
"apiVersions": [
|
||||
"*"
|
||||
],
|
||||
"operations": [
|
||||
"CREATE",
|
||||
"UPDATE"
|
||||
],
|
||||
"resources": [
|
||||
"*"
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
```
|
||||
|
||||
and
|
||||
|
||||
```
|
||||
{
|
||||
"clientConfig": {
|
||||
"caBundle": "Cg=="
|
||||
},
|
||||
"name": "check-ignore-label.gatekeeper.sh",
|
||||
"rules": [
|
||||
{
|
||||
"apiGroups": [
|
||||
""
|
||||
],
|
||||
"apiVersions": [
|
||||
"*"
|
||||
],
|
||||
"operations": [
|
||||
"CREATE",
|
||||
"UPDATE"
|
||||
],
|
||||
"resources": [
|
||||
"namespaces"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
In summary, we need to ignore the fields `rules` and `clientConfig.caBundle` in our patch specification.
|
||||
|
||||
The `webhooks` field in the ValidatingWebhookConfiguration spec is an array, so we need to address the elements by their index values.
|
||||
|
||||

|
||||
|
||||
Based on this information, our diff patch would look as follows:
|
||||
|
||||
```yaml
|
||||
- apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingWebhookConfiguration
|
||||
name: gatekeeper-validating-webhook-configuration
|
||||
operations:
|
||||
- {"op": "remove", "path":"/webhooks/0/clientConfig/caBundle"}
|
||||
- {"op": "remove", "path":"/webhooks/0/rules"}
|
||||
- {"op": "remove", "path":"/webhooks/1/clientConfig/caBundle"}
|
||||
- {"op": "remove", "path":"/webhooks/1/rules"}
|
||||
```
|
||||
|
||||
### 2. Deployment gatekeeper-controller-manager:
|
||||
The gatekeeper-controller-manager deployment is modified because CPU limits and tolerations are applied at runtime (which are not in the actual bundle).
|
||||
|
||||
```
|
||||
{
|
||||
"spec": {
|
||||
"template": {
|
||||
"spec": {
|
||||
"$setElementOrder/containers": [
|
||||
{
|
||||
"name": "manager"
|
||||
}
|
||||
],
|
||||
"containers": [
|
||||
{
|
||||
"name": "manager",
|
||||
"resources": {
|
||||
"limits": {
|
||||
"cpu": "1000m"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"tolerations": []
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
In this case, there is only one container in the deployment's container spec, and that container has CPU limits and tolerations added.
|
||||
|
||||
Based on this information, our diff patch would look as follows:
|
||||
```yaml
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: gatekeeper-controller-manager
|
||||
namespace: cattle-gatekeeper-system
|
||||
operations:
|
||||
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
|
||||
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
|
||||
```
|
||||
|
||||
### 3. Deployment gatekeeper-audit:
|
||||
The gatekeeper-audit deployment is modified similarly to gatekeeper-controller-manager, with CPU limits and tolerations applied.
|
||||
|
||||
```
|
||||
{
|
||||
"spec": {
|
||||
"template": {
|
||||
"spec": {
|
||||
"$setElementOrder/containers": [
|
||||
{
|
||||
"name": "manager"
|
||||
}
|
||||
],
|
||||
"containers": [
|
||||
{
|
||||
"name": "manager",
|
||||
"resources": {
|
||||
"limits": {
|
||||
"cpu": "1000m"
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"tolerations": []
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Similar to gatekeeper-controller-manager, there is only one container in the deployment's container spec, and that container has CPU limits and tolerations added.
|
||||
|
||||
Based on this information, our diff patch would look as follows:
|
||||
```yaml
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: gatekeeper-audit
|
||||
namespace: cattle-gatekeeper-system
|
||||
operations:
|
||||
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
|
||||
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
|
||||
```
|
||||
|
||||
### Combining It All Together
|
||||
We can now combine all these patches as follows:
|
||||
|
||||
```yaml
|
||||
diff:
|
||||
comparePatches:
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: gatekeeper-audit
|
||||
namespace: cattle-gatekeeper-system
|
||||
operations:
|
||||
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
|
||||
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: gatekeeper-controller-manager
|
||||
namespace: cattle-gatekeeper-system
|
||||
operations:
|
||||
- {"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits/cpu"}
|
||||
- {"op": "remove", "path": "/spec/template/spec/tolerations"}
|
||||
- apiVersion: admissionregistration.k8s.io/v1
|
||||
kind: ValidatingWebhookConfiguration
|
||||
name: gatekeeper-validating-webhook-configuration
|
||||
operations:
|
||||
- {"op": "remove", "path":"/webhooks/0/clientConfig/caBundle"}
|
||||
- {"op": "remove", "path":"/webhooks/0/rules"}
|
||||
- {"op": "remove", "path":"/webhooks/1/clientConfig/caBundle"}
|
||||
- {"op": "remove", "path":"/webhooks/1/rules"}
|
||||
```
|
||||
|
||||
We can now add these patches to the bundle directly for testing, and also commit them to the `fleet.yaml` in your GitRepo.
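For a quick test before committing, one approach (a sketch; the bundle name is an example, and the exact location of the diff options should be checked against your bundle's spec) is to edit the live bundle and add the `diff.comparePatches` section there:

```bash
# Find the bundle created for the gatekeeper path, then add the diff section under its spec
kubectl -n fleet-default get bundles
kubectl -n fleet-default edit bundle opa-opa-gatekeeper
```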
|
||||
|
||||
Once these are added, the GitRepo should deploy and be in "Active" status.

@@ -0,0 +1,5 @@
---
|
||||
title: "Next 🚧"
|
||||
---
|
||||
|
||||
We are still working on the next release.

@@ -0,0 +1,5 @@
---
|
||||
title: "Next 🚧"
|
||||
---
|
||||
|
||||
We are still working on the next release.

@@ -0,0 +1,33 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-agent"
|
||||
---
|
||||
## fleet-agent
|
||||
|
||||
|
||||
|
||||
```
|
||||
fleet-agent [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--agent-scope string An identifier used to scope the agent bundleID names, typically the same as namespace
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
-h, --help help for fleet-agent
|
||||
--kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
|
||||
--namespace string system namespace is the namespace, the agent runs in, e.g. cattle-fleet-system
|
||||
--zap-devel Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
|
||||
--zap-encoder encoder Zap log encoding (one of 'json' or 'console')
|
||||
--zap-log-level level Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
|
||||
--zap-stacktrace-level level Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
|
||||
--zap-time-encoding time-encoding Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-agent clusterstatus](./fleet-agent_clusterstatus) - Continuously report resource status to the upstream cluster
|
||||
* [fleet-agent register](./fleet-agent_register) - Register agent with an upstream cluster

@@ -0,0 +1,27 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-agent clusterstatus"
|
||||
---
|
||||
## fleet-agent clusterstatus
|
||||
|
||||
Continuously report resource status to the upstream cluster
|
||||
|
||||
```
|
||||
fleet-agent clusterstatus [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--checkin-interval string How often to post cluster status
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
-h, --help help for clusterstatus
|
||||
--kubeconfig string kubeconfig file for agent's cluster
|
||||
--namespace string system namespace is the namespace, the agent runs in, e.g. cattle-fleet-system
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-agent](./fleet-agent) -

@@ -0,0 +1,26 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-agent register"
|
||||
---
|
||||
## fleet-agent register
|
||||
|
||||
Register agent with an upstream cluster
|
||||
|
||||
```
|
||||
fleet-agent register [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
-h, --help help for register
|
||||
--kubeconfig string kubeconfig file for agent's cluster
|
||||
--namespace string system namespace is the namespace, the agent runs in, e.g. cattle-fleet-system
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-agent](./fleet-agent) -

@@ -0,0 +1,26 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet"
|
||||
---
|
||||
## fleet
|
||||
|
||||
|
||||
|
||||
```
|
||||
fleet [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for fleet
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet apply](./fleet_apply) - Create bundles from directories, and output them or apply them on a cluster
|
||||
* [fleet cleanup](./fleet_cleanup) - Clean up outdated cluster registrations
|
||||
* [fleet deploy](./fleet_deploy) - Deploy a bundledeployment/content resource to a cluster, by creating a Helm release. This will not deploy the bundledeployment/content resources directly to the cluster.
|
||||
* [fleet gitcloner](./fleet_gitcloner) - Clones a git repository
|
||||
* [fleet target](./fleet_target) - Print available targets for a bundle

@@ -0,0 +1,48 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet apply"
|
||||
---
|
||||
## fleet apply
|
||||
|
||||
Create bundles from directories, and output them or apply them on a cluster
|
||||
|
||||
```
|
||||
fleet apply [flags] BUNDLE_NAME PATH...
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-b, --bundle-file string Location of the raw Bundle resource yaml
|
||||
--cacerts-file string Path of custom cacerts for helm repo
|
||||
--commit string Commit to assign to the bundle
|
||||
-c, --compress Force all resources to be compress
|
||||
--context string kubeconfig context for authentication
|
||||
--correct-drift Rollback any change made from outside of Fleet
|
||||
--correct-drift-force Use --force when correcting drift. Resources can be deleted and recreated
|
||||
--correct-drift-keep-fail-history Keep helm history for failed rollbacks
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
-f, --file string Location of the fleet.yaml
|
||||
--helm-credentials-by-path-file string Path of file containing helm credentials for paths
|
||||
--helm-repo-url-regex string Helm credentials will be used if the helm repo matches this regex. Credentials will always be used if this is empty or not provided
|
||||
-h, --help help for apply
|
||||
--keep-resources Keep resources created after the GitRepo or Bundle is deleted
|
||||
-k, --kubeconfig string kubeconfig for authentication
|
||||
-l, --label strings Labels to apply to created bundles
|
||||
-n, --namespace string namespace (default "fleet-local")
|
||||
-o, --output string Output contents to file or - for stdout
|
||||
--password-file string Path of file containing basic auth password for helm repo
|
||||
--paused Create bundles in a paused state
|
||||
-a, --service-account string Service account to assign to bundle created
|
||||
--ssh-privatekey-file string Path of ssh-private-key for helm repo
|
||||
--sync-generation int Generation number used to force sync the deployment
|
||||
--target-namespace string Ensure this bundle goes to this target namespace
|
||||
--targets-file string Addition source of targets and restrictions to be append
|
||||
--username string Basic auth username for helm repo
|
||||
```
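A hedged usage sketch (bundle name, namespace and path are examples): build a bundle from a local directory and write it to a file instead of applying it.

```bash
fleet apply --compress -n fleet-local -o - my-bundle ./manifests > my-bundle.yaml
```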
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet](./fleet) -

@@ -0,0 +1,30 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet cleanup"
|
||||
---
|
||||
## fleet cleanup
|
||||
|
||||
Clean up outdated cluster registrations
|
||||
|
||||
```
|
||||
fleet cleanup [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--context string kubeconfig context for authentication
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
--factor string Factor to increase delay between deletes (default: 1.1)
|
||||
-h, --help help for cleanup
|
||||
-k, --kubeconfig string kubeconfig for authentication
|
||||
--max string Maximum delay between deletes (default: 5s)
|
||||
--min string Minimum delay between deletes (default: 10ms)
|
||||
-n, --namespace string namespace (default "fleet-local")
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet](./fleet) -

@@ -0,0 +1,32 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet deploy"
|
||||
---
|
||||
## fleet deploy
|
||||
|
||||
Deploy a bundledeployment/content resource to a cluster, by creating a Helm release. This will not deploy the bundledeployment/content resources directly to the cluster.
|
||||
|
||||
```
|
||||
fleet deploy [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-a, --agent-namespace string Set the agent namespace, normally cattle-fleet-system. If set, fleet agent will garbage collect the helm release, i.e. delete it if the bundledeployment is missing.
|
||||
-d, --dry-run Print the resources that would be deployed, but do not actually deploy them
|
||||
-h, --help help for deploy
|
||||
-i, --input-file string Location of the YAML file containing the content and the bundledeployment resource
|
||||
--kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
|
||||
-n, --namespace string Set the default namespace. Deploy helm chart into this namespace.
|
||||
--zap-devel Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
|
||||
--zap-encoder encoder Zap log encoding (one of 'json' or 'console')
|
||||
--zap-log-level level Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
|
||||
--zap-stacktrace-level level Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
|
||||
--zap-time-encoding time-encoding Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet](./fleet) -

@@ -0,0 +1,30 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet gitcloner"
|
||||
---
|
||||
## fleet gitcloner
|
||||
|
||||
Clones a git repository
|
||||
|
||||
```
|
||||
fleet gitcloner [REPO] [PATH] [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-b, --branch string git branch
|
||||
--ca-bundle-file string CA bundle file
|
||||
-h, --help help for gitcloner
|
||||
--insecure-skip-tls do not verify tls certificates
|
||||
--known-hosts-file string known hosts file
|
||||
--password-file string password file for basic auth
|
||||
--revision string git revision
|
||||
--ssh-private-key-file string ssh private key file path
|
||||
-u, --username string user name for basic auth
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet](./fleet) -

@@ -0,0 +1,31 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet target"
|
||||
---
|
||||
## fleet target
|
||||
|
||||
Print available targets for a bundle
|
||||
|
||||
```
|
||||
fleet target [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-b, --bundle-file string Location of the Bundle resource yaml
|
||||
-l, --dump-input-list Dump the live resources, which impact targeting, like clusters, as YAML
|
||||
-h, --help help for target
|
||||
--kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
|
||||
-n, --namespace string Override the namespace of the bundle. Targeting searches this namespace for clusters.
|
||||
--zap-devel Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
|
||||
--zap-encoder encoder Zap log encoding (one of 'json' or 'console')
|
||||
--zap-log-level level Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
|
||||
--zap-stacktrace-level level Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
|
||||
--zap-time-encoding time-encoding Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet](./fleet) -

@@ -0,0 +1,30 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet test"
|
||||
---
|
||||
## fleet test
|
||||
|
||||
Match a bundle to a target and render the output (deprecated)
|
||||
|
||||
```
|
||||
fleet test [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-b, --bundle-file string Location of the raw Bundle resource yaml
|
||||
-f, --file string Location of the fleet.yaml
|
||||
-g, --group string Cluster group to match against
|
||||
-L, --group-label strings Cluster group labels to match against
|
||||
-h, --help help for test
|
||||
-l, --label strings Cluster labels to match against
|
||||
-N, --name string Cluster name to match against
|
||||
-q, --quiet Just print the match and don't print the resources
|
||||
-t, --target string Explicit target to match
|
||||
```
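A hedged usage sketch (path and labels are examples): check which target a `fleet.yaml` would match for a cluster with the given labels and render the resulting resources.

```bash
fleet test -f ./app/fleet.yaml -l env=prod -l region=us-east
```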
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet](./fleet) -

@@ -0,0 +1,36 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-manager"
|
||||
---
|
||||
## fleet-manager
|
||||
|
||||
|
||||
|
||||
```
|
||||
fleet-manager [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
--disable-gitops disable gitops components
|
||||
--disable-metrics disable metrics
|
||||
-h, --help help for fleet-manager
|
||||
--kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
|
||||
--namespace string namespace to watch (default "cattle-fleet-system")
|
||||
--shard-id string only manage resources labeled with a specific shard ID
|
||||
--zap-devel Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
|
||||
--zap-encoder encoder Zap log encoding (one of 'json' or 'console')
|
||||
--zap-log-level level Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
|
||||
--zap-stacktrace-level level Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
|
||||
--zap-time-encoding time-encoding Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-manager agentmanagement](./fleet-manager_agentmanagement) -
|
||||
* [fleet-manager cleanup](./fleet-manager_cleanup) -
|
||||
* [fleet-manager gitjob](./fleet-manager_gitjob) -

@@ -0,0 +1,32 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-manager agentmanagement"
|
||||
---
|
||||
## fleet-manager agentmanagement
|
||||
|
||||
|
||||
|
||||
```
|
||||
fleet-manager agentmanagement [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--disable-bootstrap disable local cluster components
|
||||
-h, --help help for agentmanagement
|
||||
--kubeconfig string kubeconfig file
|
||||
--namespace string namespace to watch
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-manager](./fleet-manager) -

@@ -0,0 +1,31 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-manager cleanup"
|
||||
---
|
||||
## fleet-manager cleanup
|
||||
|
||||
|
||||
|
||||
```
|
||||
fleet-manager cleanup [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
-h, --help help for cleanup
|
||||
--kubeconfig string kubeconfig file
|
||||
--namespace string namespace to watch
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-manager](./fleet-manager) -

@@ -0,0 +1,38 @@
---
|
||||
title: ""
|
||||
sidebar_label: "fleet-manager gitjob"
|
||||
---
|
||||
## fleet-manager gitjob
|
||||
|
||||
|
||||
|
||||
```
|
||||
fleet-manager gitjob [flags]
|
||||
```
|
||||
|
||||
### Options
|
||||
|
||||
```
|
||||
--debug Turn on debug logging
|
||||
--debug-level int If debugging is enabled, set klog -v=X
|
||||
--gitjob-image string The gitjob image that will be used in the generated job. (default "rancher/fleet:dev")
|
||||
-h, --help help for gitjob
|
||||
--kubeconfig string Kubeconfig file
|
||||
--leader-elect Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
|
||||
--listen string The port the webhook listens. (default ":8080")
|
||||
--metrics-bind-address string The address the metric endpoint binds to. (default ":8081")
|
||||
--namespace string namespace to watch (default "cattle-fleet-system")
|
||||
```
|
||||
|
||||
### Options inherited from parent commands
|
||||
|
||||
```
|
||||
--disable-gitops disable gitops components
|
||||
--disable-metrics disable metrics
|
||||
--shard-id string only manage resources labeled with a specific shard ID
|
||||
```
|
||||
|
||||
### SEE ALSO
|
||||
|
||||
* [fleet-manager](./fleet-manager) -

@@ -0,0 +1,37 @@
# Cluster and Bundle State
|
||||
|
||||
Clusters and Bundles have different states in each phase of applying Bundles.
|
||||
|
||||
## Bundles
|
||||
|
||||
**Ready**: Bundles have been deployed and all resources are ready.
|
||||
|
||||
**NotReady**: Bundles have been deployed and some resources are not ready.
|
||||
|
||||
**WaitApplied**: Bundles have been synced from the Fleet controller to the downstream cluster, but are waiting to be deployed.

**ErrApplied**: Bundles have been synced from the Fleet controller to the downstream cluster, but there were some errors when deploying the Bundle.

**OutOfSync**: Bundles have been synced from the Fleet controller, but the downstream agent hasn't synced the change yet.
|
||||
|
||||
**Pending**: Bundles are being processed by Fleet controller.
|
||||
|
||||
**Modified**: Bundles have been deployed and all resources are ready, but some changes are present that did not come from the Git repository.
|
||||
|
||||
## Clusters
|
||||
|
||||
**WaitCheckIn**: Waiting for agent to report registration information and cluster status back.
|
||||
|
||||
**NotReady**: There are bundles in this cluster that are in NotReady state.
|
||||
|
||||
**WaitApplied**: There are bundles in this cluster that are in WaitApplied state.
|
||||
|
||||
**ErrApplied**: There are bundles in this cluster that are in ErrApplied state.
|
||||
|
||||
**OutOfSync**: There are bundles in this cluster that are in OutOfSync state.
|
||||
|
||||
**Pending**: There are bundles in this cluster that are in Pending state.
|
||||
|
||||
**Modified**: There are bundles in this cluster that are in Modified state.
|
||||
|
||||
**Ready**: Bundles in this cluster have been deployed and all resources are ready.
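A hedged sketch of where these states surface (the namespace is an example):

```bash
# Bundle and cluster states show up in the status columns of the respective resources
kubectl -n fleet-local get bundles
kubectl -n fleet-local get clusters.fleet.cattle.io
```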

@@ -0,0 +1,22 @@
# Create Cluster Groups
|
||||
|
||||
Clusters in a namespace can be put into a cluster group. A cluster group is essentially a named selector;
its only parameter is that selector.
At a certain scale, cluster groups become a more convenient way to manage your clusters:
they provide an aggregated
status of the deployments and a simpler way to manage targets.
|
||||
|
||||
A cluster group is created by creating a `ClusterGroup` resource like the one below:
|
||||
|
||||
```yaml
|
||||
kind: ClusterGroup
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: production-group
|
||||
namespace: clusters
|
||||
spec:
|
||||
# This is the standard metav1.LabelSelector format to match clusters by labels
|
||||
selector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
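For this group to match anything, clusters need the corresponding label; a hedged example (cluster name and namespace are placeholders):

```bash
kubectl -n clusters label clusters.fleet.cattle.io my-cluster env=prod
```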

@@ -0,0 +1,347 @@
import {versions} from '@site/src/fleetVersions';
|
||||
import CodeBlock from '@theme/CodeBlock';
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Register Downstream Clusters
|
||||
|
||||
## Overview
|
||||
|
||||
There are two styles of registering clusters. These styles will be referred
|
||||
to as **agent-initiated** and **manager-initiated** registration. Typically one would
|
||||
go with the agent-initiated registration but there are specific use cases in which
|
||||
manager-initiated is a better workflow.
|
||||
|
||||
### Agent-Initiated Registration
|
||||
|
||||
Agent-initiated refers to a pattern in which the downstream cluster installs an agent with a
|
||||
[cluster registration token](#create-cluster-registration-tokens) and optionally a client ID. The cluster
|
||||
agent will then make an API request to the Fleet manager and initiate the registration process. With
this process the manager never makes an outbound API request to the downstream clusters and thus
never needs direct network access. The downstream cluster only needs to make outbound HTTPS
|
||||
calls to the manager.
|
||||
|
||||
### Manager-Initiated Registration
|
||||
|
||||
Manager-initiated registration is a process in which you register an existing Kubernetes cluster
|
||||
with the Fleet manager and the Fleet manager will make an API call to the downstream cluster to
|
||||
deploy the agent. This style can place additional network access requirements because the Fleet
|
||||
manager must be able to communicate with the downstream cluster API server for the registration process.
|
||||
After the cluster is registered there is no further need for the manager to contact the downstream
|
||||
cluster API. This style is more compatible if you wish to manage the creation of all your Kubernetes
|
||||
clusters through GitOps using something like [cluster-api](https://github.com/kubernetes-sigs/cluster-api)
|
||||
or [Rancher](https://github.com/rancher/rancher).
|
||||
|
||||
## Agent Initiated
|
||||
|
||||
A downstream cluster is registered by installing an agent via helm and using the **cluster registration token** and optionally a **client ID** or **cluster labels**.
|
||||
|
||||
:::info
|
||||
It's not necessary to configure the fleet manager for [multi cluster](./installation.md#configuration-for-multi-cluster), as the downstream agent we install via Helm will connect to the Kubernetes API of the upstream cluster directly.
|
||||
|
||||
Agent-initiated registration is normally not used with Rancher.
|
||||
:::
|
||||
|
||||
### Cluster Registration Token and Client ID
|
||||
|
||||
The **cluster registration token** is a credential that authorizes the downstream cluster agent to
initiate the registration process. It is required.
|
||||
The [cluster registration token](./architecture.md#security) is manifested as a `values.yaml` file that will be passed to the `helm install` process.
|
||||
Alternatively one can pass the token directly to the helm install command via `--set token="$token"`.
|
||||
|
||||
There are two styles of registering an agent. You can have the cluster for this agent dynamically created, in which
|
||||
case you will probably want to specify **cluster labels** upon registration. Or you can have the agent register to a predefined
|
||||
cluster in the Fleet manager, in which case you will need a **client ID**. The former approach is typically the easiest.
|
||||
|
||||
### Install Agent For a New Cluster
|
||||
|
||||
The Fleet agent is installed as a Helm chart. The following explains how to determine and set its parameters.
|
||||
|
||||
First, follow the [cluster registration token instructions](#create-cluster-registration-tokens) to obtain the `values.yaml` which contains
|
||||
the registration token to authenticate against the Fleet cluster.
|
||||
|
||||
Second, you can optionally define labels that will be assigned to the newly created cluster upon registration. After
registration is completed, an agent cannot change the labels of the cluster. To add cluster labels, add
`--set-string labels.KEY=VALUE` to the Helm command below. For example, to add the labels `foo=bar` and `bar=baz`, you would
add `--set-string labels.foo=bar --set-string labels.bar=baz` to the command line.
|
||||
|
||||
```shell
|
||||
# Leave blank if you do not want any labels
|
||||
CLUSTER_LABELS="--set-string labels.example=true --set-string labels.env=dev"
|
||||
```
|
||||
|
||||
Third, set variables with the Fleet cluster's API Server URL and CA, for the downstream cluster to use for connecting.
|
||||
|
||||
```shell
|
||||
API_SERVER_URL=https://...
|
||||
API_SERVER_CA_DATA=...
|
||||
```
|
||||
|
||||
The value in `API_SERVER_CA_DATA` can be obtained from a `.kube/config` file with valid data to connect to the upstream cluster
|
||||
(under the `certificate-authority-data` key). Alternatively it can be obtained from within the upstream cluster itself,
|
||||
by looking up the default ServiceAccount secret name (typically prefixed with `default-token-`, in the default namespace),
|
||||
under the `ca.crt` key.
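A hedged sketch for pulling both values out of the current kubeconfig context (this assumes the kubeconfig embeds the CA data rather than referencing a file on disk):

```bash
# API server URL of the current context's cluster
API_SERVER_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

# Base64-encoded CA data; --raw is needed so the data is not redacted
API_SERVER_CA_DATA=$(kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
```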
|
||||
|
||||
|
||||
:::caution
|
||||
|
||||
__Use proper namespace and release name__:
|
||||
For the agent chart the namespace must be `cattle-fleet-system` and the release name `fleet-agent`
|
||||
|
||||
:::
|
||||
|
||||
:::warning Kubectl Context
|
||||
|
||||
__Ensure you are installing to the right cluster__:
|
||||
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
|
||||
to change which cluster Helm is installing to.
|
||||
|
||||
:::
|
||||
|
||||
:::caution Fleet in Rancher
|
||||
Rancher has separate helm charts for Fleet and uses a different repository.
|
||||
:::
|
||||
|
||||
Add Fleet's Helm repo.
|
||||
<CodeBlock language="bash">
|
||||
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
|
||||
</CodeBlock>
|
||||
|
||||
Finally, install the agent using Helm.
|
||||
<Tabs>
|
||||
<TabItem value="helm" label="Install" default>
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait \\
|
||||
$CLUSTER_LABELS \\
|
||||
--values values.yaml \\
|
||||
--set apiServerCA="$API_SERVER_CA_DATA" \\
|
||||
--set apiServerURL="$API_SERVER_URL" \\
|
||||
fleet-agent fleet/fleet-agent`}
|
||||
</CodeBlock>
|
||||
</TabItem>
|
||||
<TabItem value="validate" label="Validate">
|
||||
You can check the status of the fleet-agent pods by running the commands below.
|
||||
|
||||
```shell
|
||||
# Ensure kubectl is pointing to the right cluster
|
||||
kubectl -n cattle-fleet-system logs -l app=fleet-agent
|
||||
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
The agent should now be deployed.
|
||||
|
||||
Additionally you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
|
||||
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed to the Fleet
|
||||
manager to run this command.
|
||||
|
||||
```shell
|
||||
kubectl -n clusters get clusters.fleet.cattle.io
|
||||
```
|
||||
```
|
||||
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
|
||||
cluster-ab13e54400f1 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
|
||||
```
|
||||
|
||||
### Install Agent For a Predefined Cluster
|
||||
|
||||
Client IDs are for the purpose of predefining clusters in the Fleet manager with existing labels and repos targeted to them.
|
||||
A client ID is not required and is just one approach to managing clusters.
|
||||
The **client ID** is a unique string that will identify the cluster.
|
||||
This string is user generated and opaque to the Fleet manager and agent. It is assumed to be sufficiently unique. For security reasons one should not be able to easily guess this value
|
||||
as then one cluster could impersonate another. The client ID is optional and if not specified the UID field of the `kube-system` namespace
|
||||
resource will be used as the client ID. Upon registration if the client ID is found on a `Cluster` resource in the Fleet manager it will associate
|
||||
the agent with that `Cluster`. If no `Cluster` resource is found with that client ID a new `Cluster` resource will be created with the specific
|
||||
client ID.
|
||||
|
||||
The Fleet agent is installed as a Helm chart. The only parameters needed for the installation are the cluster registration token, which
is represented by the `values.yaml` file, and the optional client ID.
|
||||
|
||||
|
||||
First, create a `Cluster` in the Fleet Manager with the random client ID you have chosen.
|
||||
|
||||
```yaml
|
||||
kind: Cluster
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: my-cluster
|
||||
namespace: clusters
|
||||
spec:
|
||||
clientID: "really-random"
|
||||
```
|
||||
|
||||
Second, follow the [cluster registration token instructions](#create-cluster-registration-tokens) to obtain the `values.yaml` file to be used.
|
||||
|
||||
Third, set up your environment to use the client ID.
|
||||
|
||||
```shell
|
||||
CLUSTER_CLIENT_ID="really-random"
|
||||
```
|
||||
|
||||
:::note
|
||||
|
||||
__Use proper namespace and release name__:
|
||||
For the agent chart the namespace must be `cattle-fleet-system` and the release name `fleet-agent`
|
||||
|
||||
:::
|
||||
|
||||
:::note
|
||||
|
||||
__Ensure you are installing to the right cluster__:
|
||||
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
|
||||
to change which cluster Helm is installing to.
|
||||
|
||||
:::
|
||||
|
||||
Add Fleet's Helm repo.
|
||||
<CodeBlock language="bash">
|
||||
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
|
||||
</CodeBlock>
|
||||
|
||||
Finally, install the agent using Helm.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="helm2" label="Install" default>
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait \\
|
||||
--set clientID="$CLUSTER_CLIENT_ID" \\
|
||||
--values values.yaml \\
|
||||
fleet-agent fleet/fleet-agent`}
|
||||
</CodeBlock>
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="validate2" label="Validate">
|
||||
You can check the status of the fleet-agent pods by running the commands below.
|
||||
|
||||
```shell
|
||||
# Ensure kubectl is pointing to the right cluster
|
||||
kubectl -n cattle-fleet-system logs -l app=fleet-agent
|
||||
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
The agent should now be deployed.
|
||||
|
||||
Additionally you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
|
||||
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed to the Fleet
|
||||
manager to run this command.
|
||||
|
||||
```shell
|
||||
kubectl -n clusters get clusters.fleet.cattle.io
|
||||
```
|
||||
```
|
||||
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
|
||||
my-cluster 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
|
||||
```
|
||||
|
||||
### Create Cluster Registration Tokens
|
||||
|
||||
:::info
|
||||
|
||||
__Not needed for Manager-initiated registration__:
|
||||
For manager-initiated registrations the token is managed by the Fleet manager and does
|
||||
not need to be manually created and obtained.
|
||||
|
||||
:::
|
||||
|
||||
For an agent-initiated registration the downstream cluster must have a [cluster registration token](./architecture.md#security).
|
||||
Cluster registration tokens are used to establish a new identity for a cluster. Internally
|
||||
cluster registration tokens are managed by creating Kubernetes service accounts that have the
|
||||
permissions to create `ClusterRegistrationRequests` within a specific namespace. Once the
|
||||
cluster is registered a new `ServiceAccount` is created for that cluster that is used as
|
||||
the unique identity of the cluster. The agent is designed to forget the cluster registration
|
||||
token after registration. While the agent will not maintain a reference to the cluster registration
|
||||
token after a successful registration please note that usually other system bootstrap scripts do.
|
||||
|
||||
Since the cluster registration token is forgotten, if you need to re-register a cluster you must
|
||||
give the cluster a new registration token.
|
||||
|
||||
#### Token TTL
|
||||
|
||||
Cluster registration tokens can be reused by any cluster in a namespace. The tokens can be given a TTL
|
||||
such that they expire after a specific time.
|
||||
|
||||
#### Create a new Token
|
||||
|
||||
The `ClusterRegistrationToken` is a namespaced type and should be created in the same namespace
|
||||
in which you will create `GitRepo` and `ClusterGroup` resources. For in depth details on how namespaces
|
||||
are used in Fleet refer to the documentation on [namespaces](./namespaces.md). Create a new
|
||||
token with the below YAML.
|
||||
|
||||
```yaml
|
||||
kind: ClusterRegistrationToken
|
||||
apiVersion: "fleet.cattle.io/v1alpha1"
|
||||
metadata:
|
||||
name: new-token
|
||||
namespace: clusters
|
||||
spec:
|
||||
# A duration string for how long this token is valid for. A value <= 0 or null means infinite time.
|
||||
ttl: 240h
|
||||
```
|
||||
|
||||
After the `ClusterRegistrationToken` is created, Fleet will create a corresponding `Secret` with the same name.
|
||||
As the `Secret` creation is performed asynchronously, you will need to wait until it's available before using it.
|
||||
|
||||
One way to do so is via the following one-liner:
|
||||
```shell
|
||||
while ! kubectl --namespace=clusters get secret new-token; do sleep 5; done
|
||||
```
|
||||
|
||||
#### Obtaining Token Value (Agent values.yaml)
|
||||
|
||||
The token value contains YAML content for a `values.yaml` file that is expected to be passed to `helm install`
|
||||
to install the Fleet agent on a downstream cluster.
|
||||
|
||||
That value is contained in the `values` field of the `Secret` mentioned above. To obtain the YAML content for the
|
||||
above example one can run the following one-liner:
|
||||
```shell
|
||||
kubectl --namespace clusters get secret new-token -o 'jsonpath={.data.values}' | base64 --decode > values.yaml
|
||||
```
|
||||
|
||||
Once the `values.yaml` is ready it can be used repeatedly by clusters to register until the TTL expires.
|
||||
|
||||
## Manager Initiated
|
||||
|
||||
The manager-initiated registration flow is accomplished by creating a
|
||||
`Cluster` resource in the Fleet Manager that refers to a Kubernetes
|
||||
`Secret` containing a valid kubeconfig file in the data field called `value`.
|
||||
|
||||
|
||||
:::info
|
||||
If you are using Fleet standalone *without Rancher*, it must be installed as described in [installation details](./installation.md#configuration-for-multi-cluster).
|
||||
|
||||
The manager-initiated registration is used when you add a cluster from the Rancher dashboard.
|
||||
:::
|
||||
|
||||
### Create Kubeconfig Secret
|
||||
|
||||
The format of this secret is intended to match the [format](https://cluster-api.sigs.k8s.io/developer/architecture/controllers/cluster.html#secrets) of the kubeconfig
|
||||
secret used in [cluster-api](https://github.com/kubernetes-sigs/cluster-api).
|
||||
This means you can use `cluster-api` to create a cluster that is dynamically registered with Fleet.
|
||||
|
||||
```yaml title="Kubeconfig Secret Example"
|
||||
kind: Secret
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: my-cluster-kubeconfig
|
||||
namespace: clusters
|
||||
data:
|
||||
value: YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIHNlcnZlcjogaHR0cHM6Ly9leGFtcGxlLmNvbTo2NDQzCiAgbmFtZTogY2x1c3Rlcgpjb250ZXh0czoKLSBjb250ZXh0OgogICAgY2x1c3RlcjogY2x1c3RlcgogICAgdXNlcjogdXNlcgogIG5hbWU6IGRlZmF1bHQKY3VycmVudC1jb250ZXh0OiBkZWZhdWx0CmtpbmQ6IENvbmZpZwpwcmVmZXJlbmNlczoge30KdXNlcnM6Ci0gbmFtZTogdXNlcgogIHVzZXI6CiAgICB0b2tlbjogc29tZXRoaW5nCg==
|
||||
```
|
||||
|
||||
### Create Cluster Resource
|
||||
|
||||
The cluster resource needs to reference the kubeconfig secret.
|
||||
|
||||
```yaml title="Cluster Resource Example"
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
kind: Cluster
|
||||
metadata:
|
||||
name: my-cluster
|
||||
namespace: clusters
|
||||
labels:
|
||||
demo: "true"
|
||||
env: dev
|
||||
spec:
|
||||
kubeConfigSecret: my-cluster-kubeconfig
|
||||
```

@@ -0,0 +1,51 @@
# Core Concepts
|
||||
|
||||
Fleet is fundamentally a set of Kubernetes custom resource definitions (CRDs) and controllers
|
||||
to manage GitOps for a single Kubernetes cluster or a large-scale deployment of Kubernetes clusters.
|
||||
|
||||
:::info
|
||||
|
||||
For more on the naming conventions of CRDs, click [here](./troubleshooting.md#naming-conventions-for-crds).
|
||||
|
||||
:::
|
||||
|
||||
Below are some of the concepts of Fleet that will be useful throughout this documentation:
|
||||
|
||||
* **Fleet Manager**: The centralized component that orchestrates the deployments of Kubernetes assets
|
||||
from git. In a multi-cluster setup, this will typically be a dedicated Kubernetes cluster. In a
|
||||
single cluster setup, the Fleet manager will be running on the same cluster you are managing with GitOps.
|
||||
* **Fleet controller**: The controller(s) running on the Fleet manager orchestrating GitOps. In practice,
|
||||
the Fleet manager and Fleet controllers are used fairly interchangeably.
|
||||
* **Single Cluster Style**: This is a style of installing Fleet in which the manager and downstream cluster are the
|
||||
same cluster. This is a very simple pattern to quickly get up and running with GitOps.
|
||||
* **Multi Cluster Style**: This is a style of running Fleet in which you have a central manager that manages a large
|
||||
number of downstream clusters.
|
||||
* **Fleet agent**: Every managed downstream cluster will run an agent that communicates back to the Fleet manager.
|
||||
This agent is just another set of Kubernetes controllers running in the downstream cluster.
|
||||
* **GitRepo**: Git repositories that are watched by Fleet are represented by the type `GitRepo`.
|
||||
|
||||
>**Example installation order via `GitRepo` custom resources when using Fleet for the configuration management of downstream clusters:**
|
||||
>
|
||||
> 1. Install [Calico](https://github.com/projectcalico/calico) CRDs and controllers.
|
||||
> 2. Set one or multiple cluster-level global network policies.
|
||||
> 3. Install [GateKeeper](https://github.com/open-policy-agent/gatekeeper). Note that **cluster labels** and **overlays** are critical features in Fleet as they determine which clusters will get each part of the bundle.
|
||||
> 4. Set up and configure ingress and system daemons.
|
||||
|
||||
* **Bundle**: An internal unit used for the orchestration of resources from git.
|
||||
When a `GitRepo` is scanned it will produce one or more bundles. Bundles are a collection of
|
||||
resources that get deployed to a cluster. `Bundle` is the fundamental deployment unit used in Fleet. The
|
||||
contents of a `Bundle` may be Kubernetes manifests, Kustomize configuration, or Helm charts.
|
||||
Regardless of the source the contents are dynamically rendered into a Helm chart by the agent
|
||||
and installed into the downstream cluster as a helm release.
|
||||
|
||||
- To see the **life cycle of a bundle**, click [here](./ref-bundle-stages.md).
|
||||
|
||||
* **BundleDeployment**: When a `Bundle` is deployed to a cluster an instance of a `Bundle` is called a `BundleDeployment`.
|
||||
A `BundleDeployment` represents the state of that `Bundle` on a specific cluster with its cluster specific
|
||||
customizations. The Fleet agent is only aware of `BundleDeployment` resources that are created for
|
||||
the cluster the agent is managing.
|
||||
|
||||
- For an example of how to deploy Kubernetes manifests across clusters using Fleet customization, click [here](./gitrepo-targets.md#customization-per-cluster).
|
||||
|
||||
* **Downstream Cluster**: Clusters to which Fleet deploys manifests are referred to as downstream clusters. In the single cluster use case, the Fleet manager Kubernetes cluster is both the manager and downstream cluster at the same time.
|
||||
* **Cluster Registration Token**: Tokens used by agents to register a new cluster.

@@ -0,0 +1,188 @@
# Create a GitRepo Resource
|
||||
|
||||
## Create GitRepo Instance
|
||||
|
||||
Git repositories are registered by creating a `GitRepo` resource in Kubernetes. Refer
|
||||
to the [creating a deployment tutorial](./tut-deployment.md) for examples.
|
||||
|
||||
[Git Repository Contents](./gitrepo-content.md) has detail about the content of the Git repository.
|
||||
|
||||
The available fields of the GitRepo custom resource are documented in the [GitRepo resource reference](./ref-gitrepo.md)
|
||||
|
||||
### Proper Namespace
|
||||
|
||||
Git repos are added to the Fleet manager using the `GitRepo` custom resource type. The `GitRepo` type is namespaced. By default, Rancher will create two Fleet workspaces: **fleet-default** and **fleet-local**.
|
||||
|
||||
- `fleet-default` will contain all the downstream clusters that are already registered through Rancher.
|
||||
- `fleet-local` will contain the local cluster by default.
|
||||
|
||||
If you are using Fleet in a [single cluster](./concepts.md) style, the namespace will always be **fleet-local**. Check [here](https://fleet.rancher.io/namespaces/#fleet-local) for more on the `fleet-local` namespace.
|
||||
|
||||
For a [multi-cluster](./concepts.md) style, please ensure you use the correct repo that will map to the right target clusters.
|
||||
|
||||
## Override Workload's Namespace
|
||||
|
||||
The `targetNamespace` field will override any namespace in the bundle. If the deployment contains cluster scoped resources, it will fail.
|
||||
|
||||
It takes precedence over all other namespace definitions:
|
||||
|
||||
`gitRepo.targetNamespace > fleet.yaml namespace > namespace in workload's manifest > fleet.yaml defaultNamespace`
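For illustration, a `GitRepo` like the following sketch (repository, namespace and target namespace are placeholders) forces every namespaced resource from the bundle into `app-team-a`:

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: example
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
    - simple
  # overrides fleet.yaml namespace/defaultNamespace and any namespace in the manifests
  targetNamespace: app-team-a
```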
|
||||
|
||||
|
||||
Workload namespace definitions can be restricted with `allowedTargetNamespaces` in the `GitRepoRestriction` resource.
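As a sketch, an admin could pin deployments from this namespace to `app-team-a` with a `GitRepoRestriction` like the following (names are illustrative; see the multi-user setup documentation for a complete example):

```yaml
kind: GitRepoRestriction
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: restriction
  # same namespace as the GitRepo resources it restricts
  namespace: fleet-default
allowedTargetNamespaces:
  - app-team-a
```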
|
||||
|
||||
## Adding Private Git Repository
|
||||
|
||||
Fleet supports both HTTP and SSH authentication for private repositories. To use either, create a secret in the same namespace as the `GitRepo`.
|
||||
|
||||
For example, to generate a private SSH key:
|
||||
|
||||
```text
|
||||
ssh-keygen -t rsa -b 4096 -m pem -C "user@email.com"
|
||||
```
|
||||
|
||||
Note: The private key format has to be `EC PRIVATE KEY`, `RSA PRIVATE KEY` or `PRIVATE KEY`, and it should not contain a passphrase.
|
||||
|
||||
Put your private key into a secret, using the namespace the `GitRepo` is in:
|
||||
|
||||
```text
|
||||
kubectl create secret generic ssh-key -n fleet-default --from-file=ssh-privatekey=/file/to/private/key --type=kubernetes.io/ssh-auth
|
||||
```
|
||||
|
||||
Now the `clientSecretName` must be specified in the repo definition:
|
||||
|
||||
```yaml
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
kind: GitRepo
|
||||
metadata:
|
||||
name: sample-ssh
|
||||
# This namespace is special and auto-wired to deploy to the local cluster
|
||||
namespace: fleet-local
|
||||
spec:
|
||||
# Everything from this repo will be run in this cluster. You trust me right?
|
||||
repo: "git@github.com:rancher/fleet-examples"
|
||||
# or
|
||||
# repo: "ssh://git@github.com/rancher/fleet-examples"
|
||||
clientSecretName: ssh-key
|
||||
paths:
|
||||
- simple
|
||||
```
|
||||
|
||||
:::caution
|
||||
|
||||
Private key with passphrase is not supported.
|
||||
|
||||
:::
|
||||
|
||||
:::caution
|
||||
|
||||
The key has to be in PEM format.
|
||||
|
||||
:::
|
||||
|
||||
### Known hosts
|
||||
|
||||
:::warning
|
||||
|
||||
If you don't add one or more public keys to the secret, any server's public key will be trusted and added (the equivalent of `ssh -o StrictHostKeyChecking=accept-new`).
|
||||
|
||||
:::
|
||||
|
||||
Fleet supports adding `known_hosts` entries to the SSH secret. Here is an example of how to add them:
|
||||
|
||||
Fetch the public key hash (using GitHub as an example):
|
||||
|
||||
```text
|
||||
ssh-keyscan -H github.com
|
||||
```
|
||||
|
||||
And add it to the secret:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: ssh-key
|
||||
type: kubernetes.io/ssh-auth
|
||||
stringData:
|
||||
ssh-privatekey: <private-key>
|
||||
known_hosts: |-
|
||||
|1|YJr1VZoi6dM0oE+zkM0do3Z04TQ=|7MclCn1fLROZG+BgR4m1r8TLwWc= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==
|
||||
```
|
||||
|
||||
### Using HTTP Auth
|
||||
|
||||
Create a secret containing the username and password. You can replace the password with a personal access token if necessary. Also see [HTTP secrets in GitHub](./troubleshooting#http-secrets-in-github).
|
||||
|
||||
kubectl create secret generic basic-auth-secret -n fleet-default --type=kubernetes.io/basic-auth --from-literal=username=$user --from-literal=password=$pat
|
||||
|
||||
Just like with SSH, reference the secret in your GitRepo resource via `clientSecretName`.
|
||||
|
||||
spec:
|
||||
repo: https://github.com/fleetrepoci/gitjob-private.git
|
||||
branch: main
|
||||
clientSecretName: basic-auth-secret
|
||||
|
||||
## Using Private Helm Repositories
|
||||
|
||||
:::warning
|
||||
The credentials will be used unconditionally for all Helm repositories referenced by the gitrepo resource.
|
||||
Make sure you don't leak credentials by mixing public and private repositories. Use [different helm credentials for each path](#use-different-helm-credentials-for-each-path),
|
||||
or split them into different gitrepos, or use `helmRepoURLRegex` to limit the scope of credentials to certain servers.
|
||||
:::
|
||||
|
||||
For a private Helm repo, users can reference a secret with the following keys:
|
||||
|
||||
1. `username` and `password` for basic http auth if the Helm HTTP repo is behind basic auth.
|
||||
|
||||
2. `cacerts` for custom CA bundle if the Helm repo is using a custom CA.
|
||||
|
||||
3. `ssh-privatekey` for the SSH private key if the repo is using the SSH protocol. Private keys with a passphrase are currently not supported.
|
||||
|
||||
For example, to add such a secret with kubectl, run:
|
||||
|
||||
`kubectl create secret -n $namespace generic helm --from-literal=username=foo --from-literal=password=bar --from-file=cacerts=/path/to/cacerts --from-file=ssh-privatekey=/path/to/privatekey.pem`
|
||||
|
||||
After the secret is created, reference it in `gitRepo.spec.helmSecretName`. Make sure the secret is created in the same namespace as the `GitRepo`.
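For illustration, the reference might look like the following sketch (the repository URL is a placeholder; `helm` is the secret created above):

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-repo-example
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  # secret from the kubectl command above, in the same namespace as this GitRepo
  helmSecretName: helm
  # optionally limit the credentials to matching servers, as mentioned in the warning above
  # helmRepoURLRegex: 'https://private.example.com/.*'
```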
|
||||
|
||||
### Use different helm credentials for each path
|
||||
|
||||
:::info
|
||||
`gitRepo.spec.helmSecretName` will be ignored if `gitRepo.spec.helmSecretNameForPaths` is provided
|
||||
:::
|
||||
|
||||
Create a file `secrets-path.yaml` that contains credentials for each path defined in a `GitRepo`. Credentials will not be used
|
||||
for paths that are not present in this file.
|
||||
The path is the actual path to the bundle (ie to a folder containing a `fleet.yaml` file) within the git repository, which might have more segments than the entry under `paths:`.
|
||||
|
||||
Example:
|
||||
|
||||
```yaml
|
||||
path-one: # path path-one must exist in the repository
|
||||
username: user
|
||||
password: pass
|
||||
path-two: # path path-two must exist in the repository
|
||||
username: user2
|
||||
password: pass2
|
||||
caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCiAgICBNSUlEblRDQ0FvV2dBd0lCQWdJVUNwMHB2SVJTb2c0eHJKN2Q1SUI2ME1ka0k1WXdEUVlKS29aSWh2Y05BUUVMCiAgICBCUUF3WGpFTE1Ba0dBMVVFQmhNQ1FWVXhFekFSQmdOVkJBZ01DbE52YldVdFUzUmhkR1V4SVRBZkJnTlZCQW9NCiAgICBHRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MFpERVhNQlVHQTFVRUF3d09jbUZ1WTJobGNpNXRlUzV2CiAgICBjbWN3SGhjTk1qTXdOREkzTVRVd056VXpXaGNOTWpnd05ESTFNVFV3TnpVeldqQmVNUXN3Q1FZRFZRUUdFd0pCCiAgICBWVEVUTUJFR0ExVUVDQXdLVTI5dFpTMVRkR0YwWlRFaE1COEdBMVVFQ2d3WVNXNTBaWEp1WlhRZ1YybGtaMmwwCiAgICBjeUJRZEhrZ1RIUmtNUmN3RlFZRFZRUUREQTV5WVc1amFHVnlMbTE1TG05eVp6Q0NBU0l3RFFZSktvWklodmNOCiAgICBBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTXBvZE5TMDB6NDc1dnVSc2ZZcTFRYTFHQVl3QU92anV4MERKTHY5CiAgICBrZFhwT091dGdjMU8yWUdqNUlCVGQzVmpISmFJYUg3SDR2Rm84RlBaMG9zcU9YaFg3eUM4STdBS3ZhOEE5VmVmCiAgICBJVXp6Vlo1cCs1elNxRjdtZTlOaUNiL0pVSkZLT0ZsTkF4cjZCcXhoMEIyN1VZTlpjaUIvL1V0L0I2eHJuVE55CiAgICBoRzJiNzk4bjg4bFZqY3EzbEE0djFyM3VzWGYxVG5aS2t2UEN4ZnFHYk5OdTlpTjdFZnZHOWoyekdHcWJvcDRYCiAgICBXY3VSa3N3QkgxZlRNS0ZrbGcrR1VsZkZPMGFzL3phalVOdmdweTlpdVBMZUtqZTVWcDBiMlBLd09qUENpV2d4CiAgICBabDJlVDlNRnJjV0F3NTg3emE5NDBlT1Era2pkdmVvUE5sU2k3eVJMMW96YlRka0NBd0VBQWFOVE1GRXdIUVlECiAgICBWUjBPQkJZRUZEQkNkYjE4M1hsU0tWYzBxNmJSTCt0dVNTV3lNQjhHQTFVZEl3UVlNQmFBRkRCQ2RiMTgzWGxTCiAgICBLVmMwcTZiUkwrdHVTU1d5TUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCCiAgICBBQ1BCVERkZ0dCVDVDRVoxd1pnQmhKdm9GZTk2MUJqVCtMU2RxSlpsSmNRZnlnS0hyNks5ZmZaY1ZlWlBoMVU0CiAgICB3czBuWGNOZiszZGJlTjl4dVBiY0VqUWlQaFJCcnRzalE1T1JiVHdYWEdBdzlYbDZYTkl6YjN4ZDF6RWFzQXZPCiAgICBJMjM2ZHZXQ1A0dWoycWZqR0FkQjJnaXU2b2xHK01CWHlneUZKMElzRENraldLZysyWEdmU3lyci9KZU1vZlFBCiAgICB1VU9wcFVGdERYd0lrUW1VTGNVVUxWcTdtUVNQb0lzVkNNM2hKNVQzczdUSWtHUDZVcGVSSjgzdU9LbURYMkRHCiAgICBwVWVQVHBuVWVLOVMzUEVKTi9XcmJSSVd3WU1OR29qdDRKWitaK1N6VE1aVkh0SlBzaGpjL1hYOWZNU1ZXQmlzCiAgICBQRW5MU256MDQ4OGFUQm5SUFlnVXFsdz0KICAgIC0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=
|
||||
sshPrivateKey: ICAgIC0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQogICAgTUlJRFF6Q0NBaXNDRkgxTm5YUWI5SlV6anNBR3FSc3RCYncwRlFpak1BMEdDU3FHU0liM0RRRUJDd1VBTUY0eAogICAgQ3pBSkJnTlZCQVlUQWtGVk1STXdFUVlEVlFRSURBcFRiMjFsTFZOMFlYUmxNU0V3SHdZRFZRUUtEQmhKYm5SbAogICAgY201bGRDQlhhV1JuYVhSeklGQjBlU0JNZEdReEZ6QVZCZ05WQkFNTURuSmhibU5vWlhJdWJYa3ViM0puTUI0WAogICAgRFRJek1EUXlOekUxTVRBMU5Gb1hEVEkwTURReU5qRTFNVEExTkZvd1hqRUxNQWtHQTFVRUJoTUNRVlV4RXpBUgogICAgQmdOVkJBZ01DbE52YldVdFUzUmhkR1V4SVRBZkJnTlZCQW9NR0VsdWRHVnlibVYwSUZkcFpHZHBkSE1nVUhSNQogICAgSUV4MFpERVhNQlVHQTFVRUF3d09jbUZ1WTJobGNpNXRlUzV2Y21jd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQogICAgQTRJQkR3QXdnZ0VLQW9JQkFRRGd6UUJJTW8xQVFHNnFtYmozbFlYUTFnZjhYcURTbjdyM2lGcVZZZldDVWZOSwogICAgaGZwampTRGpOMmRWWEV2UXA3R0t3akFHUElFbXR5RmxyUW5rUGtnTGFSaU9jSDdNN0p2c3ZIa0Ewd0g0dzJ2QgogICAgUEp6aVlINWh2MUE2WS9NcFM5bVkvQUVxVm80TUJkdnNZQzc3MFpCbzVBMitIUEtMd1YzMVZyYlhhTytWeUJtNAogICAgSmJhZHlNUk40N3BKRWdPMjJaYVRXL3Y3S1dKdjNydGJTMlZVSkNlU0piWlpsN09ocHhLRTVocStmK0RWaU1mcQogICAgTWx4ODNEV2pVSlVkV3lqVUZYVlk0bEdVaUtrRWVtSlVuSlVyY1ErOXE1SzVaWmhyRjhoRXhKRjhiZTZjemVzeAogICAga1VWN3dKb1RjWkd2bUhYSk1FNmtrQXh4Mmh3bU8wSFcyQWdDdTJZekFnTUJBQUV3RFFZSktvWklodmNOQVFFTAogICAgQlFBRGdnRUJBS1BpTWdXc1dCTnJvRkY2aWpYL2xMM3FxaWc4TjlkR1VPWDIyRVJDU1RTekNONjM0ZTFkZUhsdQogICAgbTc5OU11Q3hvWSsyZWluNlV1cFMvTEV6cnpvU2dDVWllQzQrT3ZralF5eGJpTFR6bW1OWEFnd09TM3RvTHRGWAogICAgbytmWWpSMU9xcHVPS29kMkhiYjliczRWcXdaNHEvMlVKbXE2Q01pYjZKZUE2VFJvK2Rkc0pUM2dDOFhWL1Z1MAogICAgNnkwdjJxdTM0bm1MYjFxOHFTS1RwZXYyQmwzQUJGY3NyS0JvNHFieUM2bnBTbnpZenNYcS90SlFLclplNE4vMgogICAgUXIzd1dxQ0pDVWUrMWVsT3A2b0JVcXNWSnc3aHk3YzRLc1Fna09ERDJkc2NuNEF1NGJhWlY2QmpySm1USVY0aQogICAgeXJ1dk9oZ2lINklGUVdDWmVQM2s0MU5obWRzRTNHQT0KICAgIC0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
|
||||
```
|
||||
|
||||
Create the secret
|
||||
```
|
||||
kubectl create secret generic path-auth-secret -n fleet-default --from-file=secrets-path.yaml
|
||||
```
|
||||
|
||||
In the previous example credentials for username `user` will be used for the path `path-one` and credentials for username
|
||||
`user2` will be used for the path `path-two`.
|
||||
|
||||
`caBundle` and `sshPrivateKey` must be base64 encoded.
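The `GitRepo` then references the secret via `helmSecretNameForPaths` instead of `helmSecretName`. A sketch, with a placeholder repository URL:

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: per-path-credentials
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  # takes precedence over helmSecretName if both are set
  helmSecretNameForPaths: path-auth-secret
```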
|
||||
|
||||
|
||||
:::note
|
||||
If you are using ["rancher-backups"](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher) and want this secret to be included in the backup, please add the label `resources.cattle.io/backup: true` to the secret. In that case, make sure to encrypt the backup to protect sensitive credentials.
|
||||
|
||||
|
||||
# Troubleshooting
|
||||
|
||||
See Fleet Troubleshooting section [here](./troubleshooting.md).
|
||||
|
|
@ -0,0 +1,229 @@
|
|||
# Git Repository Contents
|
||||
|
||||
Fleet will create bundles from a git repository. This happens either explicitly by specifying paths, or when a `fleet.yaml` is found.
|
||||
|
||||
Each bundle is created from paths in a GitRepo and modified further by reading the discovered `fleet.yaml` file.
|
||||
Bundle lifecycles are tracked between releases by the helm releaseName field added to each bundle. If the releaseName is not
|
||||
specified within fleet.yaml it is generated from `GitRepo.name + path`. Long names are truncated and a `-<hash>` prefix is added.
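If you need a predictable release name, `fleet.yaml` can set it explicitly instead of relying on the generated value. A minimal sketch (the release name is illustrative):

```yaml
# fleet.yaml (sketch)
helm:
  # used instead of the generated "GitRepo.name + path" value
  releaseName: my-app
```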
|
||||
|
||||
**The git repository has no explicitly required structure.** It is important
|
||||
to realize the scanned resources will be saved as a resource in Kubernetes so
|
||||
you want to make sure the directories you are scanning in git do not contain
|
||||
arbitrarily large resources. Right now there is a limitation that the resources
|
||||
deployed must **gzip to less than 1MB**.
|
||||
|
||||
## How repos are scanned
|
||||
|
||||
Multiple paths can be defined for a `GitRepo` and each path is scanned independently.
|
||||
Internally each scanned path will become a [bundle](./concepts.md) that Fleet will manage,
|
||||
deploy, and monitor independently.
|
||||
|
||||
The following files are looked for to determine how the resources will be deployed.
|
||||
|
||||
| File | Location | Meaning |
|
||||
|------|----------|---------|
|
||||
| **Chart.yaml**:| / relative to `path` or custom path from `fleet.yaml` | The resources will be deployed as a Helm chart. Refer to the `fleet.yaml` for more options. |
|
||||
| **kustomization.yaml**:| / relative to `path` or custom path from `fleet.yaml` | The resources will be deployed using Kustomize. Refer to the `fleet.yaml` for more options. |
|
||||
| **fleet.yaml** | Any subpath | If any fleet.yaml is found a new [bundle](./concepts.md) will be defined. This allows mixing charts, kustomize, and raw YAML in the same repo |
|
||||
| **\*.yaml** | Any subpath | If a `Chart.yaml` or `kustomization.yaml` is not found then any `.yaml` or `.yml` file will be assumed to be a Kubernetes resource and will be deployed. |
|
||||
| **overlays/{name}** | / relative to `path` | When deploying using raw YAML (not Kustomize or Helm) `overlays` is a special directory for customizations. |
|
||||
|
||||
### Excluding files and directories from bundles
|
||||
|
||||
Fleet supports file and directory exclusion by means of `.fleetignore` files, in a similar fashion to how `.gitignore`
|
||||
files behave in git repositories:
|
||||
* Glob syntax is used to match files or directories, using Golang's
|
||||
[`filepath.Match`](https://pkg.go.dev/path/filepath#Match)
|
||||
* Empty lines are skipped, and can therefore be used to improve readability
|
||||
* Characters like white spaces and `#` can be escaped with a backslash
|
||||
* Trailing spaces are ignored, unless escaped
|
||||
* Comments, ie lines starting with unescaped `#`, are skipped
|
||||
* A given line can match a file or a directory, even if no separator is provided: eg. `subdir/*` and `subdir` are both
|
||||
valid `.fleetignore` lines, and `subdir` matches both files and directories called `subdir`
|
||||
* A match may be found for a file or directory at any level below the directory where a `.fleetignore` lives, ie
|
||||
`foo.yaml` will match `./foo.yaml` as well as `./path/to/foo.yaml`
|
||||
* Multiple `.fleetignore` files are supported. For instance, in the following directory structure, only
|
||||
`root/something.yaml`, `bar/something2.yaml` and `foo/something.yaml` will end up in a bundle:
|
||||
```
|
||||
root/
|
||||
├── .fleetignore # contains `ignore-always.yaml`
|
||||
├── something.yaml
|
||||
├── bar
|
||||
│ ├── .fleetignore # contains `something.yaml`
|
||||
│ ├── ignore-always.yaml
|
||||
│ ├── something2.yaml
|
||||
│ └── something.yaml
|
||||
└── foo
|
||||
├── ignore-always.yaml
|
||||
└── something.yaml
|
||||
```
|
||||
|
||||
This currently comes with a few limitations, the following not being supported:
|
||||
* Double asterisks (`**`)
|
||||
* Explicit inclusions with `!`
|
||||
|
||||
## `fleet.yaml`
|
||||
|
||||
The `fleet.yaml` is an optional file that can be included in the git repository to change the behavior of how
|
||||
the resources are deployed and customized. The `fleet.yaml` is always at the root relative to the `path` of the `GitRepo`
|
||||
and if a subdirectory is found with a `fleet.yaml` a new [bundle](./concepts.md) is defined that will then be
|
||||
configured differently from the parent bundle.
|
||||
|
||||
:::caution
|
||||
|
||||
__Helm chart dependencies__:
|
||||
It is up to the user to fulfill the dependency list for the Helm charts. As such, you must manually run `helm dependencies update $chart` OR run `helm dependencies build $chart` prior to install. See the [Fleet docs](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/#helm-chart-dependencies) in Rancher for more information.
|
||||
|
||||
:::
|
||||
|
||||
The available fields are documented in the [fleet.yaml reference](./ref-fleet-yaml.md)
|
||||
|
||||
For a private Helm repo, users can reference a secret from the git repo resource.
|
||||
See [Using Private Helm Repositories](./gitrepo-add.md#using-private-helm-repositories) for more information.
|
||||
|
||||
## Using Helm Values
|
||||
|
||||
__How changes are applied to `values.yaml`__:
|
||||
|
||||
- Note that the most recently applied changes to the `values.yaml` will override any previously existing values.
|
||||
|
||||
- When changes are applied to the `values.yaml` from multiple sources at the same time, the values will update in the following order: `helm.values` -> `helm.valuesFiles` -> `helm.valuesFrom`. That means `valuesFrom` will take precedence over both `valuesFiles` and `values`, as shown in the sketch below.
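A sketch of a `fleet.yaml` combining all three mechanisms (chart and file names are illustrative); on conflicting keys, `valuesFrom` wins, then `valuesFiles`, then `values`:

```yaml
# fleet.yaml (sketch)
helm:
  chart: simple-chart
  values:
    replicas: 1                 # lowest precedence
  valuesFiles:
    - values-prod.yaml          # hypothetical file stored next to fleet.yaml
  valuesFrom:                   # highest precedence
    - configMapKeyRef:
        name: configmap-values
        namespace: default
        key: values.yaml
```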
|
||||
|
||||
### Using ValuesFrom
|
||||
|
||||
These examples showcase the style and format for using `valuesFrom`. ConfigMaps and Secrets should be created in *downstream clusters*.
|
||||
|
||||
Example [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/):
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: configmap-values
|
||||
namespace: default
|
||||
data:
|
||||
values.yaml: |-
|
||||
replication: true
|
||||
replicas: 2
|
||||
serviceType: NodePort
|
||||
```
|
||||
|
||||
Example [Secret](https://kubernetes.io/docs/concepts/configuration/secret/):
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: secret-values
|
||||
namespace: default
|
||||
stringData:
|
||||
values.yaml: |-
|
||||
replication: true
|
||||
replicas: 3
|
||||
serviceType: NodePort
|
||||
```
|
||||
|
||||
A secret like this can be created from a YAML file `secretdata.yaml` by running the following kubectl command: `kubectl create secret generic secret-values --from-file=values.yaml=secretdata.yaml`
|
||||
|
||||
The resources can then be referenced from a `fleet.yaml`:
|
||||
|
||||
```yaml
|
||||
helm:
|
||||
chart: simple-chart
|
||||
valuesFrom:
|
||||
- secretKeyRef:
|
||||
name: secret-values
|
||||
namespace: default
|
||||
key: values.yaml
|
||||
- configMapKeyRef:
|
||||
name: configmap-values
|
||||
namespace: default
|
||||
key: values.yaml
|
||||
values:
|
||||
replicas: "4"
|
||||
```
|
||||
|
||||
## Per Cluster Customization
|
||||
|
||||
The `GitRepo` defines which clusters a git repository should be deployed to and the `fleet.yaml` in the repository
|
||||
determines how the resources are customized per target.
|
||||
|
||||
All clusters and cluster groups in the same namespace as the `GitRepo` will be evaluated against all targets of that
|
||||
`GitRepo`. The targets list is evaluated one by one and if there is a match the resource will be deployed to the cluster.
|
||||
If no match is made against the target list on the `GitRepo` then the resources will not be deployed to that cluster.
|
||||
Once a target cluster is matched the `fleet.yaml` from the git repository is then consulted for customizations. The
|
||||
`targetCustomizations` in the `fleet.yaml` will be evaluated one by one and the first match will define how the
|
||||
resource is to be configured. If no match is made the resources will be deployed with no additional customizations.
|
||||
|
||||
There are three approaches to matching clusters for both `GitRepo` `targets` and `fleet.yaml` `targetCustomizations`.
|
||||
One can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so
|
||||
the final match is evaluated as "clusterSelector && clusterGroupSelector && clusterGroup". If any of the three have the
|
||||
default value it is dropped from the criteria. The default value is either null or "". It is important to realize
|
||||
that the value `{}` for a selector means "match everything."
|
||||
|
||||
```yaml
|
||||
targetCustomizations:
|
||||
- name: all
|
||||
# Match everything
|
||||
clusterSelector: {}
|
||||
- name: none
|
||||
# Selector ignored
|
||||
clusterSelector: null
|
||||
```
|
||||
|
||||
When matching a cluster by name, make sure to use the name of the
|
||||
`clusters.fleet.cattle.io` resource. The Rancher UI also has a provisioning and
|
||||
a management cluster resource. Since the management cluster resource is not
|
||||
namespaced, its name is different and contains a random suffix.
|
||||
|
||||
```yaml
|
||||
targetCustomizations:
|
||||
- name: prod
|
||||
clusterName: fleetname
|
||||
```
|
||||
|
||||
See [Mapping to Downstream Clusters](gitrepo-targets#customization-per-cluster) for more information and a list of supported customizations.
|
||||
|
||||
## Raw YAML Resource Customization
|
||||
|
||||
When using Kustomize or Helm the `kustomization.yaml` or the `helm.values` will control how the resources are
|
||||
customized per target cluster. If you are using raw YAML then the following simple mechanism is built-in and can
|
||||
be used. The `overlays/` folder in the git repo is treated specially as a folder containing folders that
|
||||
can be selected to overlay on top per target cluster. The resource overlay content
|
||||
uses a file name based approach. This is different from kustomize which uses a resource based approach. In kustomize
|
||||
the resource Group, Kind, Version, Name, and Namespace identify resources and are then merged or patched. For Fleet
|
||||
the overlay resources will override or patch content with a matching file name.
|
||||
|
||||
```shell
|
||||
# Base files
|
||||
deployment.yaml
|
||||
svc.yaml
|
||||
|
||||
# Overlay files
|
||||
|
||||
# The following file will be added
|
||||
overlays/custom/configmap.yaml
|
||||
# The following file will replace svc.yaml
|
||||
overlays/custom/svc.yaml
|
||||
# The following file will patch deployment.yaml
|
||||
overlays/custom/deployment_patch.yaml
|
||||
```
|
||||
|
||||
A file named `foo` will replace a file called `foo` from the base resources or a previous overlay. In order to patch
|
||||
the contents of a file the convention of adding `_patch.` (notice the trailing period) to the filename is used. The string `_patch.`
|
||||
will be replaced with `.` in the file name and the result will be used as the target. For example, `deployment_patch.yaml`
|
||||
will target `deployment.yaml`. The patch will be applied using JSON Merge, Strategic Merge Patch, or JSON Patch.
|
||||
Which strategy is used is based on the file content. Even though JSON strategies are used, the files can be written
|
||||
using YAML syntax.
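As a sketch, a patch that only bumps the replica count could look like this; the deployment name is illustrative and the remaining fields of the base `deployment.yaml` are left untouched:

```yaml
# overlays/custom/deployment_patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend      # assumed to match the name in the base deployment.yaml
spec:
  replicas: 3         # only this field is changed by the merge patch
```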
|
||||
|
||||
## Cluster and Bundle State
|
||||
|
||||
See [Cluster and Bundle state](./cluster-bundles-state.md).
|
||||
|
||||
## Nested GitRepo CRs
|
||||
|
||||
Nested `GitRepo` CRs (defining a `GitRepo` that points to a repository containing one or more `GitRepo` resources) are supported.
|
||||
You can use this feature to take advantage of `GitOps` in your `GitRepo` resources or, for example, to split complex scenarios into more than one `GitRepo` resource.
|
||||
When Fleet finds a `GitRepo` in a `Bundle`, it will simply deploy it like any other resource.
|
||||
|
||||
See [this example](https://github.com/rancher/fleet-examples/tree/master/single-cluster/multi-gitrepo).
|
||||
|
|
@ -0,0 +1,192 @@
|
|||
# Mapping to Downstream Clusters
|
||||
|
||||
[Fleet in Rancher](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/) allows users to manage clusters easily as if they were one cluster. Users can deploy bundles, which can be comprised of deployment manifests or any other Kubernetes resource, across clusters using grouping configuration.
|
||||
|
||||
:::info
|
||||
|
||||
__Multi-cluster Only__:
|
||||
This approach only applies if you are running Fleet in a multi-cluster style.
|
||||
If no targets are specified, i.e. when using a single cluster, the bundles target the default cluster group.
|
||||
|
||||
:::
|
||||
|
||||
When deploying `GitRepos` to downstream clusters the clusters must be mapped to a target.
|
||||
|
||||
## Defining Targets
|
||||
|
||||
The deployment targets of a `GitRepo` are defined using the `spec.targets` field to
|
||||
match clusters or cluster groups. The YAML specification is as below.
|
||||
|
||||
```yaml
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: myrepo
|
||||
namespace: clusters
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- simple
|
||||
|
||||
# Targets are evaluated in order and the first one to match is used. If
|
||||
# no targets match then the evaluated cluster will not be deployed to.
|
||||
targets:
|
||||
# The name of target. This value is largely for display and logging.
|
||||
# If not specified a default name of the format "target000" will be used
|
||||
- name: prod
|
||||
# A selector used to match clusters. The structure is the standard
|
||||
# metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
|
||||
# clusterSelector will be used only to further refine the selection after
|
||||
# clusterGroupSelector and clusterGroup is evaluated.
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
# A selector used to match cluster groups.
|
||||
clusterGroupSelector:
|
||||
matchLabels:
|
||||
region: us-east
|
||||
# A specific clusterGroup by name that will be selected
|
||||
clusterGroup: group1
|
||||
# A specific cluster by name that will be selected
|
||||
clusterName: cluster1
|
||||
```
|
||||
|
||||
## Target Matching
|
||||
|
||||
All clusters and cluster groups in the same namespace as the `GitRepo` will be evaluated against all targets.
|
||||
If any of the targets match the cluster then the `GitRepo` will be deployed to the downstream cluster. If
|
||||
no match is made, then the `GitRepo` will not be deployed to that cluster.
|
||||
|
||||
There are three approaches to matching clusters.
|
||||
One can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so
|
||||
the final match is evaluated as "clusterSelector && clusterGroupSelector && clusterGroup". If any of the three have the
|
||||
default value it is dropped from the criteria. The default value is either null or "". It is important to realize
|
||||
that the value `{}` for a selector means "match everything."
|
||||
|
||||
```yaml
|
||||
targets:
|
||||
# Match everything
|
||||
- clusterSelector: {}
|
||||
# Selector ignored
|
||||
- clusterSelector: null
|
||||
```
|
||||
|
||||
You can also match clusters by name:
|
||||
|
||||
```yaml
|
||||
targets:
|
||||
- clusterName: fleetname
|
||||
```
|
||||
When using Fleet in Rancher, make sure to use the name of the `clusters.fleet.cattle.io` resource.
|
||||
|
||||
## Default Target
|
||||
|
||||
If no target is set for the `GitRepo` then the default targets value is applied. The default targets value is as below.
|
||||
|
||||
```yaml
|
||||
targets:
|
||||
- name: default
|
||||
clusterGroup: default
|
||||
```
|
||||
|
||||
This means that if you wish to set up a default location for unconfigured GitRepos, just create a cluster group called `default`
|
||||
and add clusters to it.
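A minimal sketch of such a cluster group, assuming your clusters carry an illustrative `env: default` label and live in the same namespace as your GitRepos:

```yaml
kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: default
  # the namespace that holds your Cluster and GitRepo resources
  namespace: clusters
spec:
  selector:
    matchLabels:
      env: default    # illustrative label set on the Cluster resources
```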
|
||||
|
||||
## Customization per Cluster
|
||||
|
||||
:::info
|
||||
|
||||
The `targets:` in the `GitRepo` resource select clusters to deploy on. The `targetCustomizations:` in `fleet.yaml` override Helm values only and do not change targeting.
|
||||
|
||||
:::
|
||||
|
||||
To demonstrate how to deploy Kubernetes manifests across different clusters with customization using Fleet, we will use [multi-cluster/helm/fleet.yaml](https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm/fleet.yaml).
|
||||
|
||||
**Situation:** User has three clusters with three different labels: `env=dev`, `env=test`, and `env=prod`. User wants to deploy a frontend application with a backend database across these clusters.
|
||||
|
||||
**Expected behavior:**
|
||||
|
||||
- After deploying to the `dev` cluster, database replication is not enabled.
|
||||
- After deploying to the `test` cluster, database replication is enabled.
|
||||
- After deploying to the `prod` cluster, database replication is enabled and Load balancer services are exposed.
|
||||
|
||||
**Advantage of Fleet:**
|
||||
|
||||
Instead of deploying the app on each cluster, Fleet allows you to deploy across all clusters following these steps:
|
||||
|
||||
1. Deploy gitRepo `https://github.com/rancher/fleet-examples.git` and specify the path `multi-cluster/helm`.
|
||||
2. Under `multi-cluster/helm`, a Helm chart will deploy the frontend app service and backend database service.
|
||||
3. The following rule will be defined in `fleet.yaml`:
|
||||
|
||||
```yaml
|
||||
targetCustomizations:
|
||||
- name: dev
|
||||
helm:
|
||||
values:
|
||||
replication: false
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
helm:
|
||||
values:
|
||||
replicas: 3
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
helm:
|
||||
values:
|
||||
serviceType: LoadBalancer
|
||||
replicas: 3
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
|
||||
|
||||
**Result:**
|
||||
|
||||
Fleet will deploy the Helm chart with your customized `values.yaml` to the different clusters.
|
||||
|
||||
>**Note:** Configuration management is not limited to deployments but can be expanded to general configuration management. Fleet is able to apply configuration management through customization among any set of clusters automatically.
|
||||
|
||||
### Supported Customizations
|
||||
|
||||
* [DefaultNamespace](/ref-crds#bundledeploymentoptions)
|
||||
* [ForceSyncGeneration](/ref-crds#bundledeploymentoptions)
|
||||
* [KeepResources](/ref-crds#bundledeploymentoptions)
|
||||
* [ServiceAccount](/ref-crds#bundledeploymentoptions)
|
||||
* [TargetNamespace](/ref-crds#bundledeploymentoptions)
|
||||
* [Helm.Atomic](/ref-crds#helmoptions)
|
||||
* [Helm.Chart](/ref-crds#helmoptions)
|
||||
* [Helm.DisablePreProcess](/ref-crds#helmoptions)
|
||||
* [Helm.Force](/ref-crds#helmoptions)
|
||||
* [Helm.ReleaseName](/ref-crds#helmoptions)
|
||||
* [Helm.Repo](/ref-crds#helmoptions)
|
||||
* [Helm.TakeOwnership](/ref-crds#helmoptions)
|
||||
* [Helm.TimeoutSeconds](/ref-crds#helmoptions)
|
||||
* [Helm.ValuesFrom](/ref-crds#helmoptions)
|
||||
* [Helm.Values](/ref-crds#helmoptions)
|
||||
* [Helm.Version](/ref-crds#helmoptions)
|
||||
|
||||
:::warning important information
|
||||
Overriding the version of a Helm chart via target customizations will lead to bundles containing _all_ versions of the chart, i.e. the
|
||||
default one and the custom one(s), to accommodate all clusters. This in turn means that Fleet will
|
||||
deploy larger bundles.
|
||||
|
||||
As Fleet stores bundles via etcd, this may cause issues on some clusters where resultant bundle sizes may exceed
|
||||
etcd's configured maximum blob size. See [this issue](https://github.com/rancher/fleet/issues/1650) for more details.
|
||||
:::
|
||||
|
||||
* [Helm.WaitForJobs](/ref-crds#helmoptions)
|
||||
* [Kustomize.Dir](/ref-crds#kustomizeoptions)
|
||||
* [YAML.Overlays](/ref-crds#yamloptions)
|
||||
* [Diff.ComparePatches](/ref-crds#diffoptions)
|
||||
|
||||
|
||||
## Additional Examples
|
||||
|
||||
Examples using raw Kubernetes YAML, Helm charts, Kustomize, and combinations
|
||||
of the three are in the [Fleet Examples repo](https://github.com/rancher/fleet-examples/).
|
||||
|
|
@ -0,0 +1,122 @@
|
|||
# Using Image Scan to Update Container Image References
|
||||
|
||||
Image scanning in Fleet allows you to scan your image registry, fetch the desired image, and update your git repository,
|
||||
without the need to manually update your manifests.
|
||||
|
||||
:::caution
|
||||
|
||||
This feature is considered experimental.
|
||||
|
||||
:::
|
||||
|
||||
Go to `fleet.yaml` and add the following section.
|
||||
|
||||
```yaml
|
||||
imageScans:
|
||||
# specify the policy to retrieve images, can be semver or alphabetical order
|
||||
- policy:
|
||||
# if range is specified, it will take the latest image according to semver order in the range
|
||||
# for more details on how to use semver, see https://github.com/Masterminds/semver
|
||||
semver:
|
||||
range: "*"
|
||||
# can use ascending or descending order
|
||||
alphabetical:
|
||||
order: asc
|
||||
|
||||
# specify images to scan
|
||||
image: "your.registry.com/repo/image"
|
||||
|
||||
# Specify the tag name, it has to be unique in the same bundle
|
||||
tagName: test-scan
|
||||
|
||||
# specify secret to pull image if in private registry
|
||||
secretRef:
|
||||
name: dockerhub-secret
|
||||
|
||||
# Specify the scan interval
|
||||
interval: 5m
|
||||
```
|
||||
|
||||
:::info
|
||||
|
||||
You can create multiple image scans in fleet.yaml.
|
||||
|
||||
:::
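For example, two scans in one `fleet.yaml` might look like the following sketch (registry paths and tag names are placeholders); each entry needs its own unique `tagName`:

```yaml
# fleet.yaml (sketch)
imageScans:
  - image: "your.registry.com/repo/frontend"
    tagName: frontend-scan
    policy:
      semver:
        range: "*"
    interval: 5m
  - image: "your.registry.com/repo/backend"
    tagName: backend-scan
    policy:
      alphabetical:
        order: asc
    interval: 5m
```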
|
||||
|
||||
:::note
|
||||
|
||||
Semver will ignore pre-release versions (for example, 0.0.1-10) unless a pre-release version is explicitly used in the range definition.
|
||||
For example, the "*" range will ignore pre-releases while ">= 0.0.1-10" will take them into account.
|
||||
|
||||
:::
|
||||
|
||||
Go to your manifest files and update the field that you want to replace. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: redis-slave
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: redis
|
||||
role: slave
|
||||
tier: backend
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: redis
|
||||
role: slave
|
||||
tier: backend
|
||||
spec:
|
||||
containers:
|
||||
- name: slave
|
||||
image: <image>:<tag> # {"$imagescan": "test-scan"}
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
ports:
|
||||
- containerPort: 6379
|
||||
```
|
||||
|
||||
:::note
|
||||
|
||||
There are multiple forms of `tagName` reference you can use. For example:
|
||||
|
||||
`{"$imagescan": "test-scan"}`: Use full image name(foo/bar:tag)
|
||||
|
||||
`{"$imagescan": "test-scan:name"}`: Only use image name without tag(foo/bar)
|
||||
|
||||
`{"$imagescan": "test-scan:tag"}`: Only use image tag
|
||||
|
||||
`{"$imagescan": "test-scan:digest"}`: Use full image name with digest(foo/bar:tag@sha256...)
|
||||
|
||||
:::
|
||||
|
||||
Create a GitRepo that includes your fleet.yaml
|
||||
|
||||
```yaml
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: my-repo
|
||||
namespace: fleet-local
|
||||
spec:
|
||||
# change this to be your own repo
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
# define how often Fleet syncs all the images and decides whether to apply changes
|
||||
imageScanInterval: 5m
|
||||
# the user must provide a secret that has write access to the git repository
|
||||
clientSecretName: secret
|
||||
# specify the commit pattern
|
||||
imageScanCommit:
|
||||
authorName: foo
|
||||
authorEmail: foo@bar.com
|
||||
messageTemplate: "update image"
|
||||
```
|
||||
|
||||
Try pushing a new image tag, for example, `<image>:<new-tag>`. After a while there should be a new commit pushed to your git repository that changes the tag in deployment.yaml.
|
||||
Once the change is in the git repository, Fleet will pick it up and deploy it to your cluster.
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
# Overview
|
||||
|
||||

|
||||
|
||||
### What is Fleet?
|
||||
|
||||
- **Cluster engine**: Fleet is a container management and deployment engine designed to offer users more control on the local cluster and constant monitoring through **GitOps**. Fleet focuses not only on the ability to scale, but it also gives users a high degree of control and visibility to monitor exactly what is installed on the cluster.
|
||||
|
||||
- **Deployment management**: Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, Kustomize, or any combination of the three. Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to deploy all resources in the cluster. As a result, users can enjoy a high degree of control, consistency, and auditability of their clusters.
|
||||
|
||||
### Configuration Management
|
||||
|
||||
Fleet is fundamentally a set of Kubernetes [custom resource definitions (CRDs)](https://fleet.rancher.io/concepts/) and controllers that manage GitOps for a single Kubernetes cluster or a large scale deployment of Kubernetes clusters. It is a distributed initialization system that makes it easy to customize applications and manage HA clusters from a single point.
|
||||
|
|
@ -0,0 +1,314 @@
|
|||
import {versions} from '@site/src/fleetVersions';
|
||||
import CodeBlock from '@theme/CodeBlock';
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Installation Details
|
||||
|
||||
The installation is broken up into two different use cases: single and multi-cluster.
|
||||
The single cluster install is for when you wish to use GitOps to manage a single cluster,
|
||||
in which case you do not need a centralized manager cluster. In the multi-cluster use case
|
||||
you will setup a centralized manager cluster to which you can register clusters.
|
||||
|
||||
If you are just learning Fleet, the single cluster install is the recommended starting
|
||||
point, after which you can move from a single cluster to a multi-cluster setup down the line.
|
||||
|
||||

|
||||
|
||||
Single-cluster is the default installation. The same cluster will run both the Fleet
|
||||
manager and the Fleet agent. The cluster will communicate with the Git server to
|
||||
deploy resources to this local cluster. This is the simplest setup and very
|
||||
useful for dev/test and small scale setups. This setup is also supported as a valid
|
||||
use case for production.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="helm" label="Helm 3" default>
|
||||
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server side component, and is
|
||||
fairly straightforward. To install the Helm 3 CLI, follow the <a href="https://helm.sh/docs/intro/install">official install instructions</a>.
|
||||
</TabItem>
|
||||
<TabItem value="kubernetes" label="Kubernetes" default>
|
||||
Fleet is a controller running on a Kubernetes cluster so an existing cluster is required. For the
|
||||
single cluster use case you will install Fleet to the cluster which you intend to manage with GitOps.
|
||||
Any Kubernetes community-supported version of Kubernetes will work; in practice this means {versions.next.kubernetes} or greater.
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Default Install
|
||||
|
||||
Install the following two Helm charts.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="install" label="Install" default>
|
||||
|
||||
:::caution Fleet in Rancher
|
||||
Rancher has separate helm charts for Fleet and uses a different repository.
|
||||
:::
|
||||
|
||||
First add Fleet's Helm repository.
|
||||
<CodeBlock language="bash">
|
||||
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
|
||||
</CodeBlock>
|
||||
|
||||
Second install the Fleet CustomResourceDefinitions.
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait fleet-crd \\
|
||||
fleet/fleet-crd`}
|
||||
</CodeBlock>
|
||||
|
||||
Third install the Fleet controllers.
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait fleet \\
|
||||
fleet/fleet`}
|
||||
</CodeBlock>
|
||||
</TabItem>
|
||||
<TabItem value="verify" label="Verify">
|
||||
|
||||
Fleet should now be ready to use for a single cluster. You can check the status of the Fleet controller pods by
|
||||
running the below commands.
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-fleet-system logs -l app=fleet-controller
|
||||
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
You can now [register some git repos](./gitrepo-add.md) in the `fleet-local` namespace to start deploying Kubernetes resources.
|
||||
|
||||
## Multi-controller install: sharding
|
||||
|
||||
### Deployment
|
||||
|
||||
From 0.10 onwards, Fleet supports static sharding. The Fleet controller chart can be installed with `--set
|
||||
shards={<comma-separated shard IDs>}`, which will result in:
|
||||
* as many Fleet controller deployments as specified unique shard IDs,
|
||||
* plus the usual unsharded Fleet controller pod. That latter pod will be the only one containing agent management and
|
||||
cleanup containers.
|
||||
|
||||
For instance:
|
||||
```bash
|
||||
$ helm -n cattle-fleet-system install --create-namespace --wait --set shards="{foo,bar,baz}" \
|
||||
fleet fleet/fleet
|
||||
|
||||
$ kubectl -n cattle-fleet-system get pods -l app=fleet-controller
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
fleet-controller-78c74fdb85-b6q64 3/3 Running 0 77s
|
||||
fleet-controller-shard-bar-777d888865-w2dks 1/1 Running 0 77s
|
||||
fleet-controller-shard-baz-6595bd9cb9-27whg 1/1 Running 0 77s
|
||||
fleet-controller-shard-foo-85d49b446f-pzxkw 1/1 Running 0 77s
|
||||
|
||||
$ kubectl -n cattle-fleet-system get pods -l app=fleet-controller \
|
||||
-o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.fleet\.cattle\.io/shard-id}{"\n"}{end}'
|
||||
fleet-controller-78c74fdb85-b6q64
|
||||
fleet-controller-shard-bar-777d888865-w2dks bar
|
||||
fleet-controller-shard-baz-6595bd9cb9-27whg baz
|
||||
fleet-controller-shard-foo-85d49b446f-pzxkw foo
|
||||
```
|
||||
|
||||
### How it works
|
||||
|
||||
With sharding in place, each Fleet controller will process resources bearing its own shard ID. This also holds for the
|
||||
unsharded controller, which has no set shard ID and will therefore process all unsharded resources.
|
||||
|
||||
To deploy a GitRepo for a specific shard, simply add label `fleet.cattle.io/shard-ref` with your desired shard ID as a
|
||||
value.
|
||||
Here is an example:
|
||||
```bash
|
||||
$ kubectl apply -n fleet-local -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: sharding-test
|
||||
labels:
|
||||
fleet.cattle.io/shard-ref: foo
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- single-cluster/helm
|
||||
EOF
|
||||
```
|
||||
|
||||
A GitRepo with a label ID for which a Fleet controller is deployed (eg. `foo` in the above example) will then be
|
||||
processed by that controller.
|
||||
|
||||
On the other hand, a GitRepo with an unknown label ID (eg. `boo` in the above example) will _not_ be processed by any
|
||||
Fleet controller, hence no resources other than the GitRepo itself will be created.
|
||||
|
||||
Removing or adding supported shard IDs currently requires redeploying Fleet with a new set of shard IDs.
|
||||
|
||||
## Configuration for Multi-Cluster
|
||||
|
||||
:::caution
|
||||
Downstream clusters in Rancher are automatically registered in Fleet. Users can access Fleet under `Continuous Delivery` on Rancher.
|
||||
|
||||
The multi-cluster install described below is **only** covered in standalone Fleet, which is untested by Rancher QA.
|
||||
:::
|
||||
|
||||
|
||||
:::info
|
||||
The setup is the same as for a single cluster.
|
||||
After installing the Fleet manager, you will then need to register remote downstream clusters with the Fleet manager.
|
||||
|
||||
However, to allow for [manager-initiated registration](./cluster-registration.md#manager-initiated) of downstream clusters, a few extra settings are required. Without the API server URL and the CA, only [agent-initiated registration](./cluster-registration.md#agent-initiated) of downstream clusters is possible.
|
||||
:::
|
||||
|
||||
### API Server URL and CA certificate
|
||||
|
||||
In order for your Fleet management installation to work properly, it is important that
|
||||
the correct API server URL and CA certificates are configured. The Fleet agents
|
||||
will communicate with the Kubernetes API server URL. This means the Kubernetes
|
||||
API server must be accessible to the downstream clusters. You will also need
|
||||
to obtain the CA certificate of the API server. The easiest way to obtain this information
|
||||
is typically from your kubeconfig file (`$HOME/.kube/config`). The `server`,
|
||||
`certificate-authority-data`, or `certificate-authority` fields will have these values.
|
||||
|
||||
```yaml title="$HOME/.kube/config"
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: LS0tLS1CRUdJTi...
|
||||
server: https://example.com:6443
|
||||
```
|
||||
|
||||
#### Extract CA certificate
|
||||
|
||||
Please note that the `certificate-authority-data` field is base64 encoded and will need to be
|
||||
decoded before you save it into a file. This can be done by saving the base64 encoded contents to
|
||||
a file and then running
|
||||
|
||||
```shell
|
||||
base64 -d encoded-file > ca.pem
|
||||
```
|
||||
|
||||
Next, retrieve the CA certificate from your kubeconfig.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="extractca" label="Extract First">
|
||||
If you have `jq` and `base64` available, then this one-liner will pull all CA certificates from your
|
||||
`KUBECONFIG` and place them in a file named `ca.pem`.
|
||||
|
||||
```shell
|
||||
kubectl config view -o json --raw | jq -r '.clusters[].cluster["certificate-authority-data"]' | base64 -d > ca.pem
|
||||
```
|
||||
</TabItem>
|
||||
<TabItem value="extractcas" label="Multiple Entries">
|
||||
Or, if you have a multi-cluster setup, you can use this command:
|
||||
|
||||
```shell
|
||||
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
|
||||
kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["certificate-authority-data"]' | base64 -d > ca.pem
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
|
||||
#### Extract API Server
|
||||
|
||||
If you have a multi-cluster setup, you can use this command:
|
||||
|
||||
```shell
|
||||
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
|
||||
API_SERVER_URL=$(kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["server"]')
|
||||
# Leave empty if your API server is signed by a well known CA
|
||||
API_SERVER_CA="ca.pem"
|
||||
```
|
||||
|
||||
#### Validate
|
||||
|
||||
First validate the server URL is correct.
|
||||
|
||||
```shell
|
||||
curl -fLk "$API_SERVER_URL/version"
|
||||
```
|
||||
|
||||
The output of this command should be JSON with the version of the Kubernetes server or a `401 Unauthorized` error.
|
||||
If you do not get either of these results, then please ensure you have the correct URL. The API server port is typically
|
||||
6443 for Kubernetes.
|
||||
|
||||
Next validate that the CA certificate is proper by running the below command. If your API server is signed by a
|
||||
well known CA then omit the `--cacert "$API_SERVER_CA"` part of the command.
|
||||
|
||||
```shell
|
||||
curl -fL --cacert "$API_SERVER_CA" "$API_SERVER_URL/version"
|
||||
```
|
||||
|
||||
If you get a valid JSON response or a `401 Unauthorized`, then it worked. The Unauthorized error is
|
||||
only because the curl command is not setting proper credentials, but this validates that the TLS
|
||||
connection works and the `ca.pem` is correct for this URL. If you get an `SSL certificate problem` error, then
|
||||
the `ca.pem` is not correct. The contents of the `$API_SERVER_CA` file should look similar to the below:
|
||||
|
||||
```pem title="ca.pem"
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIBVjCB/qADAgECAgEAMAoGCCqGSM49BAMCMCMxITAfBgNVBAMMGGszcy1zZXJ2
|
||||
ZXItY2FAMTU5ODM5MDQ0NzAeFw0yMDA4MjUyMTIwNDdaFw0zMDA4MjMyMTIwNDda
|
||||
MCMxITAfBgNVBAMMGGszcy1zZXJ2ZXItY2FAMTU5ODM5MDQ0NzBZMBMGByqGSM49
|
||||
AgEGCCqGSM49AwEHA0IABDXlQNkXnwUPdbSgGz5Rk6U9ldGFjF6y1YyF36cNGk4E
|
||||
0lMgNcVVD9gKuUSXEJk8tzHz3ra/+yTwSL5xQeLHBl+jIzAhMA4GA1UdDwEB/wQE
|
||||
AwICpDAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0cAMEQCIFMtZ5gGDoDs
|
||||
ciRyve+T4xbRNVHES39tjjup/LuN4tAgAiAteeB3jgpTMpZyZcOOHl9gpZ8PgEcN
|
||||
KDs/pb3fnMTtpA==
|
||||
-----END CERTIFICATE-----
|
||||
```
|
||||
|
||||
### Install for Multi-Cluster
|
||||
|
||||
In the following example it is assumed that the API server URL from the `KUBECONFIG` is `https://example.com:6443`
|
||||
and that the CA certificate is in the file `ca.pem`. If your API server certificate is signed by a well-known CA you can
|
||||
omit the `apiServerCA` parameter below or just create an empty `ca.pem` file (ie `touch ca.pem`).
|
||||
|
||||
Set up the environment with your specific values, e.g.:
|
||||
|
||||
```shell
|
||||
API_SERVER_URL="https://example.com:6443"
|
||||
API_SERVER_CA="ca.pem"
|
||||
```
|
||||
|
||||
Once you have validated the API server URL and API server CA parameters, install the following two
|
||||
Helm charts.
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="install2" label="Install" default>
|
||||
First add Fleet's Helm repository.
|
||||
<CodeBlock language="bash">
|
||||
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
|
||||
</CodeBlock>
|
||||
|
||||
Second install the Fleet CustomResourceDefinitions.
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait \\
|
||||
fleet-crd`} {versions.next.fleetCRD}
|
||||
</CodeBlock>
|
||||
|
||||
Third install the Fleet controllers.
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait \\
|
||||
--set apiServerURL="$API_SERVER_URL" \\
|
||||
--set-file apiServerCA="$API_SERVER_CA" \\
|
||||
fleet`} {versions.next.fleet}
|
||||
</CodeBlock>
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="verifiy2" label="Verify">
|
||||
Fleet should be ready to use. You can check the status of the Fleet controller pods by running the below commands.
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-fleet-system logs -l app=fleet-controller
|
||||
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
At this point the Fleet manager should be ready. You can now [register clusters](./cluster-registration.md) and [git repos](./gitrepo-add.md#create-gitrepo-instance) with
|
||||
the Fleet manager.
|
||||
|
|
@ -0,0 +1,200 @@
|
|||
# Setup Multi User
|
||||
|
||||
Fleet uses Kubernetes RBAC where possible.
|
||||
|
||||
One addition on top of RBAC is the [`GitRepoRestriction`](./namespaces.md#restricting-gitrepos) resource, which can be used to control GitRepo resources in a namespace.
|
||||
|
||||
A multi-user fleet setup looks like this:
|
||||
|
||||
* tenants don't share namespaces, each tenant has one or more namespaces on the
|
||||
upstream cluster, where they can create GitRepo resources
|
||||
* tenants can't deploy cluster wide resources and are limited to a set of
|
||||
namespaces on downstream clusters
|
||||
* clusters are in a separate namespace
|
||||
|
||||

|
||||
|
||||
:::warning important information
|
||||
|
||||
The isolation of tenants is not complete and relies on Kubernetes RBAC to be
|
||||
set up correctly. Without manual setup by an operator, tenants can still
|
||||
deploy cluster wide resources. Even with the available Fleet restrictions,
|
||||
users are only restricted to namespaces, but namespaces don't provide much
|
||||
isolation on their own. E.g. they can still consume as many resources as they
|
||||
like.
|
||||
|
||||
However, the existing Fleet restrictions allow users to share clusters, and
|
||||
deploy resources without conflicts.
|
||||
|
||||
:::
|
||||
|
||||
## Example Fleet Standalone
|
||||
|
||||
This would create a user 'fleetuser', who can only manage GitRepo resources in the 'project1' namespace.
|
||||
|
||||
kubectl create serviceaccount fleetuser
|
||||
kubectl create namespace project1
|
||||
kubectl create -n project1 role fleetuser --verb=get --verb=list --verb=create --verb=delete --resource=gitrepos.fleet.cattle.io
|
||||
kubectl create -n project1 rolebinding fleetuser --serviceaccount=default:fleetuser --role=fleetuser
|
||||
|
||||
If we want to give access to multiple namespaces, we can use a single cluster role with two role bindings:
|
||||
|
||||
kubectl create clusterrole fleetuser --verb=get --verb=list --verb=create --verb=delete --resource=gitrepos.fleet.cattle.io
|
||||
kubectl create -n project1 rolebinding fleetuser --serviceaccount=default:fleetuser --clusterrole=fleetuser
|
||||
kubectl create -n project2 rolebinding fleetuser --serviceaccount=default:fleetuser --clusterrole=fleetuser
|
||||
|
||||
This makes sure tenants can't interfere with GitRepo resources from other tenants, since they don't have access to each other's namespaces.
|
||||
|
||||
## Example Fleet in Rancher
|
||||
|
||||
When a new fleet workspace is created, a corresponding namespace with an identical name is automatically generated within the Rancher local cluster.
|
||||
For a user to see and deploy fleet resources in a specific workspace, they need at least the following permissions:
|
||||
- list/get the `fleetworkspace` cluster-wide resource in the local cluster
|
||||
- Permissions to create fleet resources (such as `bundles`, `gitrepos`, ...) in the backing namespace for the workspace in the local cluster.
|
||||
|
||||
Let's grant permissions to deploy fleet resources in the `project1` and `project2` fleet workspaces:
|
||||
|
||||
- To create the `project1` and `project2` fleet workspaces, you can either do it in the [Rancher UI](https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui) or use the following YAML resources:
|
||||
|
||||
```yaml
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: FleetWorkspace
|
||||
metadata:
|
||||
name: project1
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: FleetWorkspace
|
||||
metadata:
|
||||
name: project2
|
||||
```
|
||||
|
||||
- Create a `GlobalRole` that grants permission to deploy fleet resources in the `project1` and `project2` fleet workspaces:
|
||||
|
||||
```yaml
|
||||
apiVersion: management.cattle.io/v3
|
||||
kind: GlobalRole
|
||||
metadata:
|
||||
name: fleet-projects1and2
|
||||
namespacedRules:
|
||||
project1:
|
||||
- apiGroups:
|
||||
- fleet.cattle.io
|
||||
resources:
|
||||
- gitrepos
|
||||
- bundles
|
||||
- clusterregistrationtokens
|
||||
- gitreporestrictions
|
||||
- clusters
|
||||
- clustergroups
|
||||
verbs:
|
||||
- '*'
|
||||
project2:
|
||||
- apiGroups:
|
||||
- fleet.cattle.io
|
||||
resources:
|
||||
- gitrepos
|
||||
- bundles
|
||||
- clusterregistrationtokens
|
||||
- gitreporestrictions
|
||||
- clusters
|
||||
- clustergroups
|
||||
verbs:
|
||||
- '*'
|
||||
rules:
|
||||
- apiGroups:
|
||||
- management.cattle.io
|
||||
resourceNames:
|
||||
- project1
|
||||
- project2
|
||||
resources:
|
||||
- fleetworkspaces
|
||||
verbs:
|
||||
- '*'
|
||||
```
|
||||
|
||||
Assign the `GlobalRole` to users or groups; more info can be found in the [Rancher docs](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions#configuring-global-permissions-for-individual-users).
|
||||
|
||||
The user now has access to the `Continuous Delivery` tab in Rancher and can deploy resources to both the `project1` and `project2` workspaces.
|
||||
|
||||
## Allow Access to Clusters
|
||||
|
||||
This assumes all GitRepos created by 'fleetuser' have the `team: one` label. Different labels could be used to select different cluster namespaces.
|
||||
|
||||
As an admin, create a [`BundleNamespaceMapping`](./namespaces.md#cross-namespace-deployments) in each of the user's namespaces.
|
||||
|
||||
kind: BundleNamespaceMapping
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: mapping
|
||||
namespace: project1
|
||||
|
||||
# Bundles to match by label.
|
||||
# The labels are defined in the fleet.yaml # labels field or from the
|
||||
# GitRepo metadata.labels field
|
||||
bundleSelector:
|
||||
matchLabels:
|
||||
team: one
|
||||
# or target one repo
|
||||
#fleet.cattle.io/repo-name: simpleapp
|
||||
|
||||
# Namespaces, containing clusters, to match by label
|
||||
namespaceSelector:
|
||||
matchLabels:
|
||||
kubernetes.io/metadata.name: fleet-default
|
||||
# the label is on the namespace
|
||||
#workspace: prod
|
||||
|
||||
The [`target` section](./gitrepo-targets.md) in the GitRepo resource can be used to deploy only to a subset of the matched clusters.
|
||||
|
||||
## Restricting Access to Downstream Clusters
|
||||
|
||||
Admins can further restrict tenants by creating a `GitRepoRestriction` in each of their namespaces.
|
||||
|
||||
kind: GitRepoRestriction
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: restriction
|
||||
namespace: project1
|
||||
|
||||
allowedTargetNamespaces:
|
||||
- project1simpleapp
|
||||
|
||||
This denies the creation of cluster-wide resources, which could interfere with other tenants, and limits deployments to the 'project1simpleapp' namespace.
|
||||
|
||||
## An Example GitRepo Resource
|
||||
|
||||
A GitRepo resource created by a tenant without admin access could look like this:
|
||||
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: simpleapp
|
||||
namespace: project1
|
||||
labels:
|
||||
team: one
|
||||
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- bundle-diffs
|
||||
|
||||
targetNamespace: project1simpleapp
|
||||
|
||||
# do not match the upstream/local cluster, won't work
|
||||
targets:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
This includes the `team: one` label and the required `targetNamespace`.
|
||||
|
||||
Together with the previous `BundleNamespaceMapping`, it would target all clusters with an `env: dev` label in the 'fleet-default' namespace.
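For example, a downstream cluster can be labelled so that it is matched by this target; the cluster name `my-downstream-cluster` is a placeholder:

```bash
kubectl label clusters.fleet.cattle.io -n fleet-default my-downstream-cluster env=dev
```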
|
||||
|
||||
:::note
|
||||
|
||||
`BundleNamespaceMappings` do not work with local clusters, so make sure not to target them.
|
||||
|
||||
:::
|
||||
|
|
@ -0,0 +1,132 @@
|
|||
# Namespaces
|
||||
|
||||
All types in the Fleet manager are namespaced. The namespaces of the manager types do not correspond to the namespaces
|
||||
of the deployed resources in the downstream cluster. Understanding how namespaces are used in the Fleet manager is
|
||||
important to understand the security model and how one can use Fleet in a multi-tenant fashion.
|
||||
|
||||
## GitRepos, Bundles, Clusters, ClusterGroups
|
||||
|
||||
The primary types are all scoped to a namespace. All selectors for `GitRepo` targets will be evaluated against
|
||||
the `Clusters` and `ClusterGroups` in the same namespace. This means that if you give `create` or `update` privileges
|
||||
to a `GitRepo` type in a namespace, that end user can modify the selector to match any cluster in that namespace.
|
||||
In practice, this means that if two teams should self-manage their own `GitRepo` registrations without being able to
target each other's clusters, they should be placed in different namespaces.
|
||||
|
||||
### GitRepo Namespace
|
||||
|
||||
Git repos are added to the Fleet manager using the `GitRepo` custom resource type. The `GitRepo` type is namespaced. By default, Rancher will create two Fleet workspaces: **fleet-default** and **fleet-local**.
|
||||
|
||||
- `fleet-default` will contain all the downstream clusters that are already registered through Rancher.
|
||||
- `fleet-local` will contain the local cluster by default.
|
||||
|
||||
If you are using Fleet in a [single cluster](./concepts.md) style, the namespace will always be **fleet-local**. Check [here](https://fleet.rancher.io/namespaces/#fleet-local) for more on the `fleet-local` namespace.
|
||||
|
||||
For a [multi-cluster](./concepts.md) style, please ensure you use the correct repo that will map to the right target clusters.
|
||||
|
||||
|
||||
## Namespace Creation Behavior in Bundles
|
||||
|
||||
When deploying a Fleet bundle, the specified namespace will automatically be created if it does not already exist.
|
||||
|
||||
## Special Namespaces
|
||||
|
||||
An overview of the [namespaces](./namespaces.md) used by fleet and their resources.
|
||||
|
||||

|
||||
|
||||
### fleet-local (local workspace, cluster registration namespace)
|
||||
|
||||
The **fleet-local** namespace is a special namespace used for the single cluster use case or to bootstrap
|
||||
the configuration of the Fleet manager.
|
||||
|
||||
When Fleet is installed, the `fleet-local` namespace is created along with one `Cluster` called `local` and one
|
||||
`ClusterGroup` called `default`. If no targets are specified on a `GitRepo`, it is by default targeted to the
|
||||
`ClusterGroup` named `default`. This means that all `GitRepos` created in `fleet-local` will
|
||||
automatically target the `local` `Cluster`. The `local` `Cluster` refers to the cluster the Fleet manager is running
|
||||
on.
|
||||
|
||||
The cluster registration namespace contains the cluster and the clusterregistration resources, as well as any gitrepos and bundles.
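As a quick check, the contents of the `fleet-local` namespace can be listed with `kubectl`, for example:

```bash
kubectl get clusters.fleet.cattle.io,clustergroups.fleet.cattle.io -n fleet-local
kubectl get gitrepos.fleet.cattle.io,bundles.fleet.cattle.io -n fleet-local
```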
|
||||
|
||||
### cattle-fleet-system (system namespace)
|
||||
|
||||
The Fleet controller and Fleet agent run in this namespace. All service accounts referenced by `GitRepos` are expected
|
||||
to live in this namespace in the downstream cluster.
|
||||
|
||||
### cattle-fleet-clusters-system (system registration namespace)
|
||||
|
||||
This namespace holds secrets for the cluster registration process. It should not contain any other resources,
especially not other secrets.
|
||||
|
||||
### Cluster Namespaces
|
||||
|
||||
For every cluster that is registered a namespace is created by the Fleet manager for that cluster.
|
||||
These namespaces are named in the form `cluster-${namespace}-${cluster}-${random}`. The purpose of this
|
||||
namespace is that all `BundleDeployments` for that cluster are put into this namespace and
|
||||
then the downstream cluster is given access to watch and update `BundleDeployments` in that namespace only.
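For example, these namespaces and their `BundleDeployments` can be inspected on the cluster running the Fleet manager; the namespace name below only illustrates the naming scheme:

```bash
# List the namespaces Fleet created for registered clusters
kubectl get namespaces -o name | grep '^namespace/cluster-'

# Inspect the BundleDeployments inside one of them
kubectl get bundledeployments -n cluster-fleet-default-my-cluster-a1b2c3d4
```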
|
||||
|
||||
## Cross Namespace Deployments
|
||||
|
||||
It is possible to create a GitRepo that will deploy across namespaces. The primary purpose of this is so that a
|
||||
central privileged team can manage common configuration for many clusters that are managed by different teams. The way
|
||||
this is accomplished is by creating a `BundleNamespaceMapping` resource in a cluster.
|
||||
|
||||
If you are creating a `BundleNamespaceMapping` resource, it is best to do it in a namespace that only contains `GitRepos`
and no `Clusters`. Having `Clusters` in the same namespace can get confusing, because cross-namespace `GitRepos` will still
always be evaluated against the current namespace. So if you have clusters in the same namespace, you may wish to make them
canary clusters.
|
||||
|
||||
A `BundleNamespaceMapping` has only two fields, which are shown below:
|
||||
|
||||
```yaml
|
||||
kind: BundleNamespaceMapping
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: not-important
|
||||
namespace: typically-unique
|
||||
|
||||
# Bundles to match by label. The labels are defined in the fleet.yaml
|
||||
# labels field or from the GitRepo metadata.labels field
|
||||
bundleSelector:
|
||||
matchLabels:
|
||||
foo: bar
|
||||
|
||||
# Namespaces to match by label
|
||||
namespaceSelector:
|
||||
matchLabels:
|
||||
foo: bar
|
||||
```
|
||||
|
||||
If the `BundleNamespaceMapping`'s `bundleSelector` field matches a `Bundle`'s labels, then that `Bundle`'s target criteria will
be evaluated against all clusters in all namespaces that match `namespaceSelector`. One can specify labels for the created
bundles from git by putting labels in the `fleet.yaml` file or on the `metadata.labels` field of the `GitRepo`.
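For example, a minimal `fleet.yaml` snippet that attaches the `foo: bar` label from the mapping above to the resulting bundle:

```yaml
# fleet.yaml
labels:
  foo: bar
```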
|
||||
|
||||
## Restricting GitRepos
|
||||
|
||||
A namespace can contain multiple `GitRepoRestriction` resources. All `GitRepos`
|
||||
created in that namespace will be checked against the list of restrictions.
|
||||
If a `GitRepo` violates one of the constraints, its `BundleDeployment` will be
|
||||
in an error state and won't be deployed.
|
||||
|
||||
This can also be used to set defaults for a GitRepo's `serviceAccount` and `clientSecretName` fields.
|
||||
|
||||
```yaml
|
||||
kind: GitRepoRestriction
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: restriction
|
||||
namespace: typically-unique
|
||||
allowedClientSecretNames: []
|
||||
allowedRepoPatterns: []
|
||||
allowedServiceAccounts: []
|
||||
allowedTargetNamespaces: []
|
||||
defaultClientSecretName: ""
|
||||
defaultServiceAccount: ""
|
||||
```
|
||||
|
||||
### Allowed Target Namespaces
|
||||
|
||||
This can be used to limit a deployment to a set of namespaces on a downstream cluster.
|
||||
If an `allowedTargetNamespaces` restriction is present, all `GitRepos` must
|
||||
specify a `targetNamespace` and the specified namespace must be in the allow
|
||||
list.
|
||||
This also prevents the creation of cluster-wide resources.
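A minimal sketch of such a restriction; the namespace names `app1` and `app2` are placeholders:

```yaml
kind: GitRepoRestriction
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: restriction
  namespace: typically-unique
allowedTargetNamespaces:
  - app1
  - app2
```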
|
||||
|
|
@ -0,0 +1,87 @@
|
|||
import {versions} from '@site/src/fleetVersions';
|
||||
import CodeBlock from '@theme/CodeBlock';
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Quick Start
|
||||
|
||||

|
||||
|
||||
Who needs documentation, let's just run this thing!
|
||||
|
||||
## Install
|
||||
|
||||
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and its use is
fairly straightforward. To install the Helm 3 CLI, follow the <a href="https://helm.sh/docs/intro/install">official install instructions</a>.
|
||||
|
||||
|
||||
:::caution Fleet in Rancher
|
||||
Rancher has separate helm charts for Fleet and uses a different repository.
|
||||
:::
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="linux" label="Linux/Mac" default>
|
||||
<CodeBlock language="bash">
|
||||
{`brew install helm\n`}
|
||||
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
|
||||
</CodeBlock>
|
||||
</TabItem>
|
||||
<TabItem value="windows" label="Windows">
|
||||
<CodeBlock language="bash">
|
||||
{`choco install kubernetes-helm\n`}
|
||||
{`helm repo add fleet https://rancher.github.io/fleet-helm-charts/`}
|
||||
</CodeBlock>
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
Install the Fleet Helm charts (there are two because we separate out CRDs for ultimate flexibility).
|
||||
|
||||
<CodeBlock language="bash">
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait fleet-crd \\
|
||||
fleet/fleet-crd\n`}
|
||||
{`helm -n cattle-fleet-system install --create-namespace --wait fleet \\
|
||||
fleet/fleet`}
|
||||
</CodeBlock>
|
||||
|
||||
## Add a Git Repo to Watch
|
||||
|
||||
Change `spec.repo` to your git repo of choice. Kubernetes manifest files that should
|
||||
be deployed should be in `/manifests` in your repo.
|
||||
|
||||
```bash
|
||||
cat > example.yaml << "EOF"
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
kind: GitRepo
|
||||
metadata:
|
||||
name: sample
|
||||
# This namespace is special and auto-wired to deploy to the local cluster
|
||||
namespace: fleet-local
|
||||
spec:
|
||||
# Everything from this repo will be run in this cluster. You trust me right?
|
||||
repo: "https://github.com/rancher/fleet-examples"
|
||||
paths:
|
||||
- simple
|
||||
EOF
|
||||
|
||||
kubectl apply -f example.yaml
|
||||
```
|
||||
|
||||
## Get Status
|
||||
|
||||
Get the status of what Fleet is doing:
|
||||
|
||||
```shell
|
||||
kubectl -n fleet-local get fleet
|
||||
```
|
||||
|
||||
You should see something like this get created in your cluster.
|
||||
|
||||
```
|
||||
kubectl get deploy frontend
|
||||
```
|
||||
```
|
||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||
frontend 3/3 3 3 116m
|
||||
```
|
||||
|
||||
Enjoy and read the [docs](https://rancher.github.io/fleet).
|
||||
|
|
@ -0,0 +1,56 @@
|
|||
# Bundle Lifecycle
|
||||
|
||||
A bundle is an internal resource used for the orchestration of resources from git. When a GitRepo is scanned, it will produce one or more bundles.
|
||||
|
||||
To demonstrate the life cycle of a Fleet bundle, we will use [multi-cluster/helm](https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm) as a case study.
|
||||
|
||||
1. The user will create a [GitRepo](./gitrepo-add.md#create-gitrepo-instance) that points to the multi-cluster/helm repository.
|
||||
2. The `gitjob-controller` will sync changes from the GitRepo and detect changes from the polling or [webhook event](./webhook.md). With every commit change, the `gitjob-controller` will create a job that clones the git repository, reads content from the repo such as `fleet.yaml` and other manifests, and creates the Fleet [bundle](./cluster-bundles-state.md#bundles).
|
||||
|
||||
>**Note:** The job pod with the image name `rancher/tekton-utils` will be under the same namespace as the GitRepo.
|
||||
|
||||
3. The `fleet-controller` then syncs changes from the bundle. According to the targets, the `fleet-controller` will create `BundleDeployment` resources, which are a combination of a bundle and a target cluster.
|
||||
4. The `fleet-agent` will then pull the `BundleDeployment` from the Fleet control plane. The agent deploys bundle manifests as a [Helm chart](https://helm.sh/docs/intro/install/) from the `BundleDeployment` into the downstream clusters.
|
||||
5. The `fleet-agent` will continue to monitor the application bundle and report statuses back in the following order: `BundleDeployment` > `Bundle` > `GitRepo` > `Cluster` (see the commands below for inspecting each level).
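A sketch for inspecting each level on the management cluster; `fleet-local` is the single-cluster default namespace, replace it with your workspace:

```bash
kubectl get bundledeployments -A
kubectl get bundles -n fleet-local
kubectl get gitrepos -n fleet-local
kubectl get clusters.fleet.cattle.io -n fleet-local
```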
|
||||
|
||||
|
||||
This diagram shows the different rendering stages a bundle goes through until deployment.
|
||||
|
||||

|
||||
|
||||
## Examining the Bundle Lifecycle With the CLI
|
||||
|
||||
Several fleet CLI commands help with debugging bundles.
|
||||
|
||||
### fleet apply
|
||||
|
||||
[Apply](./cli/fleet-cli/fleet_apply.md) renders a folder with Kubernetes resources, such as a Helm chart, manifests, or kustomize folders, into a Fleet bundle resource.
|
||||
|
||||
```
|
||||
git clone https://github.com/rancher/fleet-test-data
|
||||
cd fleet-test-data
|
||||
fleet apply -n fleet-local -o bundle.yaml testbundle simple-chart/
|
||||
```
|
||||
|
||||
More information on how to create bundles with `fleet apply` can be found in the [section on bundles](https://fleet.rancher.io/bundle-add).
|
||||
|
||||
### fleet target
|
||||
|
||||
[Target](./cli/fleet-cli/fleet_target.md) reads a bundle from a file and works with a live cluster to print out the `BundleDeployment` and `Content` resources which the fleet-controller would create. It takes a namespace as an argument, so it can look in that namespace for e.g. cluster resources. It can also dump the data structure which is used during "targeting", so decisions taken regarding labels and cluster names can be checked.
|
||||
|
||||
### fleet deploy
|
||||
|
||||
[Deploy](./cli/fleet-cli/fleet_deploy.md) takes the output of `fleet target`, or a dumped bundledeployment/content resource, and deploys it to a cluster, just like the fleet-agent would. It supports a dry-run mode to print out the resources which would be created, instead of installing them with Helm. Since the command doesn't create the input resources, a running fleet-agent would likely garbage collect the deployment.
|
||||
|
||||
The deploy command can be used to bring bundles to air-gapped clusters.
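A sketch of that workflow, reusing the flags from the lifecycle example below: render the bundledeployment on a connected machine, copy the file, and deploy it in the air-gapped environment:

```bash
# On a connected machine
fleet target --bundle-file bundle.yaml --list-inputs > bd.yaml

# Copy bd.yaml to the air-gapped cluster, then:
fleet deploy --input-file bd.yaml
```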
|
||||
|
||||
### Lifecycle CLI Example
|
||||
|
||||
```
|
||||
git clone https://github.com/rancher/fleet-test-data
|
||||
cd fleet-test-data
|
||||
# for information about apply see https://fleet.rancher.io/bundle-add
|
||||
fleet apply -n fleet-local -o bundle.yaml testbundle simple-chart/
|
||||
fleet target --bundle-file bundle.yaml --list-inputs > bd.yaml
|
||||
fleet deploy --input-file bd.yaml --dry-run
|
||||
```
|
||||
|
|
@ -0,0 +1,94 @@
|
|||
# Bundle Resource
|
||||
|
||||
Bundles are automatically created by Fleet when a `GitRepo` is created.
|
||||
|
||||
The content of the resource corresponds to the [BundleSpec](./ref-crds#bundlespec).
|
||||
For more information on how to use the Bundle resource [Create a Bundle Resource](./bundle-add.md).
|
||||
|
||||
```yaml
|
||||
kind: Bundle
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
# Any name can be used here
|
||||
name: my-bundle
|
||||
# For single cluster use fleet-local, otherwise use the namespace of
|
||||
# your choosing
|
||||
namespace: fleet-local
|
||||
spec:
|
||||
# Namespace used for resources that do not specify a namespace.
|
||||
# This field is not used to enforce or lock down the deployment to a specific namespace.
|
||||
# defaultNamespace: test
|
||||
|
||||
# If present will assign all resource to this
|
||||
# namespace and if any cluster scoped resource exists the deployment will fail.
|
||||
# targetNamespace: app
|
||||
|
||||
# Kustomize options for the deployment, like the dir containing the kustomization.yaml file.
|
||||
# kustomize: ...
|
||||
|
||||
# Helm options for the deployment, like the chart name, repo and values.
|
||||
# helm: ...
|
||||
|
||||
# ServiceAccount which will be used to perform this deployment.
|
||||
# serviceAccount: sa
|
||||
|
||||
# ForceSyncGeneration is used to force a redeployment.
|
||||
# forceSyncGeneration: 0
|
||||
|
||||
# YAML options, if using raw YAML these are names that map to overlays/{name} that will be used to replace or patch a resource.
|
||||
# yaml: ...
|
||||
|
||||
# Diff can be used to ignore the modified state of objects which are amended at runtime.
|
||||
# A specific commit or tag can also be watched.
|
||||
#
|
||||
# diff: ...
|
||||
|
||||
# KeepResources can be used to keep the deployed resources when removing the bundle.
|
||||
# keepResources: false
|
||||
|
||||
# If set to true, will stop any BundleDeployments from being updated. It will be marked as out of sync.
|
||||
# paused: false
|
||||
|
||||
# Controls the rollout of bundles, by defining partitions, canaries and percentages for cluster availability.
|
||||
# rolloutStrategy: ...
|
||||
|
||||
# Contain the actual resources from the git repo which will be deployed.
|
||||
resources:
|
||||
- content: |
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx-deployment
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
replicas: 3
|
||||
selector:
|
||||
matchLabels:
|
||||
app: nginx
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: nginx:1.14.2
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: nginx.yaml
|
||||
|
||||
# Target clusters to deploy to if running Fleet in a multi-cluster
|
||||
# style. Refer to the "Mapping to Downstream Clusters" docs for
|
||||
# more information.
|
||||
#
|
||||
# targets: ...
|
||||
|
||||
# This field is used by Fleet internally, and it should not be modified manually.
|
||||
# Fleet will copy all targets into targetRestrictions when a Bundle is created for a GitRepo.
|
||||
# targetRestrictions: ...
|
||||
|
||||
# Refers to the bundles which must be ready before this bundle can be deployed.
|
||||
# dependsOn: ...
|
||||
|
||||
```
|
||||
|
|
@ -0,0 +1,73 @@
|
|||
# Configuration
|
||||
|
||||
A reference list of mostly internal configuration options.
|
||||
|
||||
## Helm Charts
|
||||
|
||||
The Helm charts accept at least the options shown with their defaults in `values.yaml`:
|
||||
|
||||
* https://github.com/rancher/fleet/blob/main/charts/fleet/values.yaml
|
||||
* https://github.com/rancher/fleet/blob/main/charts/fleet-crd/values.yaml
|
||||
* https://github.com/rancher/fleet/blob/main/charts/fleet-agent/values.yaml
|
||||
|
||||
## Environment Variables
|
||||
|
||||
The controllers can be started with these environment variables (see the example after this list):
|
||||
|
||||
* `CATTLE_DEV_MODE` - used to debug wrangler, not usable
|
||||
* `FLEET_CLUSTER_ENQUEUE_DELAY` - tune how often non-ready clusters are checked
|
||||
* `FLEET_CPU_PPROF_PERIOD` - used to turn on [performance profiling](https://github.com/rancher/fleet/blob/main/docs/performance.md)
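For example, an environment variable can be set on the controller deployment with `kubectl`; the duration format shown for `FLEET_CLUSTER_ENQUEUE_DELAY` is an assumption:

```bash
kubectl -n cattle-fleet-system set env deployment/fleet-controller \
  FLEET_CLUSTER_ENQUEUE_DELAY=120s
```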
|
||||
|
||||
## In-Cluster Configuration
|
||||
|
||||
These settings provide in-cluster configuration for the agent and the Fleet manager. Changing them can lead to full re-deployments.
|
||||
|
||||
The config [struct](https://github.com/rancher/fleet/blob/main/internal/config/config.go#L57) is used in both config maps, which can be inspected as shown after this list:
|
||||
|
||||
* cattle-fleet-system/fleet-agent
|
||||
* cattle-fleet-system/fleet-controller
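For example (the `fleet-agent` config map lives on the cluster where the agent runs):

```bash
kubectl -n cattle-fleet-system get configmap fleet-controller -o yaml
kubectl -n cattle-fleet-system get configmap fleet-agent -o yaml
```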
|
||||
|
||||
## Labels
|
||||
|
||||
Labels used by fleet:
|
||||
|
||||
* `fleet.cattle.io/agent=true` - NodeSelector label for agent's deployment affinity setting
|
||||
* `fleet.cattle.io/non-managed-agent` - managed agent bundle won't target Clusters with this label
|
||||
* `fleet.cattle.io/repo-name` - used on a Bundle to reference the git repo resource (see the example after this list)
|
||||
* `fleet.cattle.io/bundle-namespace` - used on BundleDeployment to reference the Bundle resource
|
||||
* `fleet.cattle.io/bundle-name` - used on BundleDeployment to reference the Bundle resource
|
||||
* `fleet.cattle.io/managed=true` - cluster namespaces with this label will be cleaned up. Other resources carrying this label will also be cleaned up. Used in Rancher to identify fleet namespaces.
|
||||
* `fleet.cattle.io/bootstrap-token` - unused
|
||||
* `fleet.cattle.io/shard-id=<shard-id>` - The shard ID of a fleet controller pod.
|
||||
* `fleet.cattle.io/shard-default=true` - true if this is the controller managing resources without a shard reference label.
|
||||
* `fleet.cattle.io/shard-ref=<shard-id>` - references the Shard ID assigned by
|
||||
Fleet to resources, inherited from a `GitRepo`, which determines which Fleet controller deployment will reconcile them.
|
||||
* If this label is not provided or has an empty value, then the unsharded Fleet controller will process the resource.
|
||||
* If this label has a value which does not match any shard ID for which a Fleet controller is deployed, then the
|
||||
resource will not be processed.
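For example, to list the bundles created from a specific git repo by label (`sample` is the GitRepo name from the quick start):

```bash
kubectl -n fleet-local get bundles -l fleet.cattle.io/repo-name=sample
```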
|
||||
|
||||
|
||||
## Annotations
|
||||
|
||||
Annotations used by fleet:
|
||||
|
||||
* `fleet.cattle.io/agent-namespace`
|
||||
* `fleet.cattle.io/bundle-id`
|
||||
* `fleet.cattle.io/cluster`, `fleet.cattle.io/cluster-namespace` - used on a cluster namespace to reference the cluster registration namespace and cluster name
|
||||
* `fleet.cattle.io/cluster-group`
|
||||
* `fleet.cattle.io/cluster-registration-namespace`
|
||||
* `fleet.cattle.io/cluster-registration`
|
||||
* `fleet.cattle.io/commit`
|
||||
* `fleet.cattle.io/managed` - appears unused
|
||||
* `fleet.cattle.io/service-account`
|
||||
|
||||
## Fleet Agent Configuration
|
||||
|
||||
Tolerations, affinity and resources can be customized for the Fleet agent. These fields can be provided when creating a
|
||||
[Cluster](https://fleet.rancher.io/ref-crds#clusterspec), see [Registering Downstream Cluster](https://fleet.rancher.io/cluster-registration) for more info on how to create
|
||||
Clusters. Default configuration will be used if these fields are not provided.
|
||||
|
||||
If you change the resource limits, make sure the limits allow the fleet-agent to work normally.
|
||||
|
||||
Keep in mind that if you downgrade Fleet to a version older than v0.7.0, Fleet will fall back to the built-in defaults.
Agents will redeploy if they had custom affinity. If the Fleet version number does not change, redeployment might not be immediate.
|
||||
|
|
@ -0,0 +1,420 @@
|
|||
# fleet.yaml
|
||||
|
||||
The `fleet.yaml` file adds options to a bundle. Any directory with a
|
||||
`fleet.yaml` is automatically turned into a bundle.
|
||||
|
||||
For more information on how to use the `fleet.yaml` to customize bundles see
|
||||
[Git Repository Contents](./gitrepo-content.md).
|
||||
|
||||
The content of the fleet.yaml corresponds to the struct at
|
||||
[pkg/bundlereader/read.go](https://github.com/rancher/fleet/blob/b501b7e7864d37e310dfcdb109c73e5aec4240bb/pkg/bundlereader/read.go#L132-L139),
|
||||
which contains the [BundleSpec](./ref-crds#bundlespec).
|
||||
|
||||
### Reference
|
||||
|
||||
```yaml title="fleet.yaml"
|
||||
# The default namespace to be applied to resources. This field is not used to
|
||||
# enforce or lock down the deployment to a specific namespace, but instead
|
||||
# provide the default value of the namespace field if one is not specified in
|
||||
# the manifests.
|
||||
#
|
||||
# Default: default
|
||||
defaultNamespace: default
|
||||
|
||||
# All resources will be assigned to this namespace and if any cluster scoped
|
||||
# resource exists the deployment will fail.
|
||||
#
|
||||
# Default: ""
|
||||
namespace: default
|
||||
|
||||
# namespaceLabels are labels that will be appended to the namespace created by
|
||||
# Fleet.
|
||||
namespaceLabels:
|
||||
key: value
|
||||
|
||||
# namespaceAnnotations are annotations that will be appended to the namespace
|
||||
# created by Fleet.
|
||||
namespaceAnnotations:
|
||||
key: value
|
||||
|
||||
# Optional map of labels, that are set at the bundle and can be used in a
|
||||
# dependsOn.selector
|
||||
labels:
|
||||
key: value
|
||||
|
||||
kustomize:
|
||||
# Use a custom folder for kustomize resources. This folder must contain a
|
||||
# kustomization.yaml file.
|
||||
dir: ./kustomize
|
||||
|
||||
helm:
|
||||
|
||||
# These options control how "fleet apply" downloads the chart
|
||||
#
|
||||
# Use a custom location for the Helm chart. This can refer to any go-getter
|
||||
# URL or OCI registry based helm chart URL e.g.
|
||||
# "oci://ghcr.io/fleetrepoci/guestbook". This allows one to download charts
|
||||
# from most any location. Also know that go-getter URL supports adding a
|
||||
# digest to validate the download. If repo is set below this field is the name
|
||||
# of the chart to lookup.
|
||||
#
|
||||
# It is possible to download the chart from a Git repository, e.g. by using
|
||||
# `git@github.com:rancher/fleet-examples//single-cluster/helm`. If a secret
|
||||
# for the SSH key was defined in the GitRepo via `helmSecretName`, it will be
|
||||
# injected into the chart URL.
|
||||
#
|
||||
# Git repositories can be downloaded via unauthenticated http, by using for
|
||||
# example:
|
||||
#
|
||||
# `git::http://github.com/rancher/fleet-examples/single-cluster/helm`.
|
||||
chart: ./chart
|
||||
|
||||
# A https URL to a Helm repo to download the chart from. It's typically easier
|
||||
# to just use `chart` field and refer to a tgz file. If repo is used the
|
||||
# value of `chart` will be used as the chart name to lookup in the Helm
|
||||
# repository.
|
||||
repo: https://charts.rancher.io
|
||||
|
||||
# The version of the chart or semver constraint of the chart to find. If a
|
||||
# constraint is specified it is evaluated each time git changes.
|
||||
#
|
||||
# The version also determines which chart to download from OCI registries.
|
||||
# Note: OCI registries don't support the '+' character, which is supported by
|
||||
# semver. When pushing a helm chart with a tag containing the '+' character
|
||||
# helm automatically replaces '+' to '_' before uploading it.
|
||||
#
|
||||
# You should use the version with the '+' in this file, as the '_' character
|
||||
# is not supported by semver and Fleet also replaces '+' to '_' when accessing
|
||||
# the OCI registry.
|
||||
version: 0.1.0
|
||||
|
||||
# By default fleet downloads any dependency found in a helm chart. Use
|
||||
# disableDependencyUpdate: true to disable this feature.
|
||||
disableDependencyUpdate: false
|
||||
|
||||
### These options only work for helm-type bundles.
|
||||
#
|
||||
# Any values that should be placed in the `values.yaml` and passed to helm
|
||||
# during install.
|
||||
values:
|
||||
|
||||
any-custom: value
|
||||
|
||||
# All labels on Rancher clusters are available using
|
||||
# global.fleet.clusterLabels.LABELNAME These can now be accessed directly as
|
||||
# variables The variable's value will be an empty string if the referenced
|
||||
# cluster label does not exist on the targeted cluster.
|
||||
variableName: global.fleet.clusterLabels.LABELNAME
|
||||
|
||||
# See Templating notes below for more information on templating.
|
||||
templatedLabel: "${ .ClusterLabels.LABELNAME }-foo"
|
||||
|
||||
valueFromEnv:
|
||||
"${ .ClusterLabels.ENV }": ${ .ClusterValues.someValue | upper | quote }
|
||||
|
||||
# Path to any values files that need to be passed to helm during install.
|
||||
valuesFiles:
|
||||
- values1.yaml
|
||||
- values2.yaml
|
||||
|
||||
# Allow to use values files from configmaps or secrets defined in the
|
||||
# downstream clusters.
|
||||
valuesFrom:
|
||||
- configMapKeyRef:
|
||||
name: configmap-values
|
||||
# default to namespace of bundle
|
||||
namespace: default
|
||||
key: values.yaml
|
||||
- secretKeyRef:
|
||||
name: secret-values
|
||||
namespace: default
|
||||
key: values.yaml
|
||||
|
||||
### These options control how fleet-agent deploys the bundle, they also apply
|
||||
### for kustomize- and manifest-style bundles.
|
||||
#
|
||||
# A custom release name to deploy the chart as. If not specified a release name
|
||||
# will be generated by combining the invoking GitRepo.name + GitRepo.path.
|
||||
releaseName: my-release
|
||||
#
|
||||
# Makes helm skip the check for its own annotations
|
||||
takeOwnership: false
|
||||
#
|
||||
# Override immutable resources. This could be dangerous.
|
||||
force: false
|
||||
#
|
||||
# Set the Helm --atomic flag when upgrading
|
||||
atomic: false
|
||||
#
|
||||
# Disable go template pre-processing on the fleet values
|
||||
disablePreProcess: false
|
||||
#
|
||||
# Disable DNS resolution in Helm's template functions
|
||||
disableDNS: false
|
||||
#
|
||||
# Skip evaluation of the values.schema.json file
|
||||
skipSchemaValidation: false
|
||||
#
|
||||
# If set and timeoutSeconds provided, will wait until all Jobs have been
|
||||
# completed before marking the GitRepo as ready. It will wait for as long as
|
||||
# timeoutSeconds.
|
||||
waitForJobs: true
|
||||
|
||||
# A paused bundle will not update downstream clusters but instead mark the bundle
|
||||
# as OutOfSync. One can then manually confirm that a bundle should be deployed to
|
||||
# the downstream clusters.
|
||||
#
|
||||
# Default: false
|
||||
paused: false
|
||||
|
||||
rolloutStrategy:
|
||||
|
||||
# A number or percentage of clusters that can be unavailable during an update
|
||||
# of a bundle. This follows the same basic approach as a deployment rollout
|
||||
# strategy. Once the number of clusters meets unavailable state update will be
|
||||
# paused. Default value is 100% which doesn't take effect on update.
|
||||
#
|
||||
# default: 100%
|
||||
maxUnavailable: 15%
|
||||
|
||||
# A number or percentage of cluster partitions that can be unavailable during
|
||||
# an update of a bundle.
|
||||
#
|
||||
# default: 0
|
||||
maxUnavailablePartitions: 20%
|
||||
|
||||
  # A number or percentage determining how to automatically partition clusters if no
  # specific partitioning strategy is configured.
|
||||
#
|
||||
# default: 25%
|
||||
autoPartitionSize: 10%
|
||||
|
||||
# A list of definitions of partitions. If any target clusters do not match
|
||||
# the configuration they are added to partitions at the end following the
|
||||
# autoPartitionSize.
|
||||
partitions:
|
||||
|
||||
  # A user-friendly name given to the partition, used for display (optional).
|
||||
# default: ""
|
||||
- name: canary
|
||||
|
||||
# A number or percentage of clusters that can be unavailable in this
|
||||
# partition before this partition is treated as done.
|
||||
# default: 10%
|
||||
maxUnavailable: 10%
|
||||
|
||||
# Selector matching cluster labels to include in this partition
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
|
||||
# A cluster group name to include in this partition
|
||||
clusterGroup: agroup
|
||||
|
||||
# Selector matching cluster group labels to include in this partition
|
||||
clusterGroupSelector:
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
|
||||
# Target customizations are used to determine how resources should be modified
# per target. Targets are evaluated in order and the first one to match a cluster
# is used for that cluster.
|
||||
targetCustomizations:
|
||||
|
||||
# The name of target. If not specified a default name of the format
|
||||
# "target000" will be used. This value is mostly for display
|
||||
- name: prod
|
||||
|
||||
# Custom namespace value overriding the value at the root.
|
||||
namespace: newvalue
|
||||
|
||||
# Custom defaultNamespace value overriding the value at the root.
|
||||
defaultNamespace: newdefaultvalue
|
||||
|
||||
# Custom kustomize options overriding the options at the root.
|
||||
kustomize: {}
|
||||
|
||||
# Custom Helm options override the options at the root.
|
||||
helm: {}
|
||||
|
||||
# If using raw YAML these are names that map to overlays/{name} that will be
|
||||
# used to replace or patch a resource. If you wish to customize the file
|
||||
# ./subdir/resource.yaml then a file
|
||||
# ./overlays/myoverlay/subdir/resource.yaml will replace the base file. A
|
||||
# file named ./overlays/myoverlay/subdir/resource_patch.yaml will patch the
|
||||
    # base file. A patch can be in JSON Patch, JSON Merge format, or a strategic
|
||||
# merge patch for builtin Kubernetes types. Refer to "Raw YAML Resource
|
||||
# Customization" below for more information.
|
||||
yaml:
|
||||
overlays:
|
||||
- custom2
|
||||
- custom3
|
||||
|
||||
# A selector used to match clusters. The structure is the standard
|
||||
# metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is
|
||||
# specified, clusterSelector will be used only to further refine the
|
||||
# selection after clusterGroupSelector and clusterGroup is evaluated.
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
|
||||
# A selector used to match a specific cluster by name. When using Fleet in
|
||||
# Rancher, make sure to put the name of the clusters.fleet.cattle.io
|
||||
# resource.
|
||||
clusterName: dev-cluster
|
||||
|
||||
# A selector used to match cluster groups.
|
||||
clusterGroupSelector:
|
||||
matchLabels:
|
||||
region: us-east
|
||||
|
||||
# A specific clusterGroup by name that will be selected.
|
||||
clusterGroup: group1
|
||||
|
||||
# Resources will not be deployed in the matched clusters if doNotDeploy is
|
||||
# true.
|
||||
doNotDeploy: false
|
||||
|
||||
# Drift correction removes any external change made to resources managed by
|
||||
# Fleet. It performs a helm rollback, which uses a three-way merge strategy
|
||||
# by default. It will try to update all resources by doing a PUT request if
|
||||
# force is enabled. Three-way strategic merge might fail when updating an
|
||||
# item inside of an array as it will try to add a new item instead of
|
||||
# replacing the existing one. This can be fixed by using force. Keep in
|
||||
# mind that resources might be recreated if force is enabled. Failed
|
||||
# rollback will be removed from the helm history unless keepFailHistory is
|
||||
# set to true.
|
||||
correctDrift:
|
||||
enabled: false
|
||||
force: false # Warning: it might recreate resources if set to true
|
||||
keepFailHistory: false
|
||||
|
||||
# dependsOn allows you to configure dependencies to other bundles. The current
|
||||
# bundle will only be deployed, after all dependencies are deployed and in a
|
||||
# Ready state.
|
||||
dependsOn:
|
||||
|
||||
# Format:
|
||||
# <GITREPO-NAME>-<BUNDLE_PATH> with all path separators replaced by "-"
|
||||
#
|
||||
# Example:
|
||||
#
|
||||
# GitRepo name "one", Bundle path "/multi-cluster/hello-world"
|
||||
# results in "one-multi-cluster-hello-world".
|
||||
#
|
||||
# Note:
|
||||
#
|
||||
# Bundle names are limited to 53 characters long. If longer they will be
|
||||
# shortened:
|
||||
#
|
||||
# opni-fleet-examples-fleets-opni-ui-plugin-operator-crd becomes
|
||||
# opni-fleet-examples-fleets-opni-ui-plugin-opera-021f7
|
||||
- name: one-multi-cluster-hello-world
|
||||
|
||||
# Select bundles to depend on based on their label.
|
||||
- selector:
|
||||
matchLabels:
|
||||
app: weak-monkey
|
||||
|
||||
# Ignore fields when monitoring a Bundle. This can be used when Fleet thinks
|
||||
# some conditions in Custom Resources makes the Bundle to be in an error state
|
||||
# when it shouldn't.
|
||||
ignore:
|
||||
|
||||
# Conditions to be ignored
|
||||
conditions:
|
||||
|
||||
# In this example a condition will be ignored if it contains
|
||||
# {"type": "Active", "status", "False"}
|
||||
- type: Active
|
||||
status: "False"
|
||||
|
||||
# Override targets defined in the GitRepo. The Bundle will not have any targets
|
||||
# from the GitRepo if overrideTargets is provided.
|
||||
overrideTargets:
|
||||
- clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
```
|
||||
|
||||
### Helm Options
|
||||
|
||||
#### How fleet-agent deploys the bundle
|
||||
|
||||
These options also apply to kustomize- and manifest-style bundles. They control
|
||||
how the fleet-agent deploys the bundle. All bundles are converted into Helm
|
||||
charts and deployed with the Helm SDK. These options are often similar to the
|
||||
Helm CLI options for install and update.
|
||||
|
||||
- releaseName
|
||||
- takeOwnership
|
||||
- force
|
||||
- atomic
|
||||
- disablePreProcess
|
||||
- disableDNS
|
||||
- skipSchemaValidation
|
||||
- waitForJobs
|
||||
|
||||
#### Helm Chart Download Options
|
||||
|
||||
These options are for Helm-style bundles; they specify how to download the
chart.
|
||||
|
||||
- chart
|
||||
- repo
|
||||
- version
|
||||
|
||||
The reference to the chart can be either:
|
||||
|
||||
- a local path in the cloned Git repository, specified by `chart`.
|
||||
- a [go-getter URL](https://github.com/hashicorp/go-getter?tab=readme-ov-file#url-format),
|
||||
specified by `chart`. This can be used to download a tarball
|
||||
of the chart. go-getter also allows downloading a chart from a Git repo.
|
||||
- a Helm repository, specified by `repo` and optionally `version`.
|
||||
- an OCI Helm repository, specified by `repo` and optionally `version`.
|
||||
|
||||
#### Helm Chart Value Options
|
||||
|
||||
Options for the downloaded Helm chart.
|
||||
|
||||
- values
|
||||
- valuesFiles
|
||||
- valuesFrom
|
||||
|
||||
### Templating
|
||||
|
||||
It is possible to specify the keys and values as go template strings for
|
||||
advanced templating needs. Most of the functions from the [sprig templating
|
||||
library](https://masterminds.github.io/sprig/) are available.
|
||||
|
||||
Note that if the functions output changes with every call, e.g. `uuidv4`, the
|
||||
bundle will get redeployed.
|
||||
|
||||
The template context has the following keys:
|
||||
|
||||
* `.ClusterValues` are retrieved from the target cluster's `spec.templateValues`
* `.ClusterLabels` and `.ClusterAnnotations` are the labels and annotations on
  the cluster resource (see the sketch after this list).
* `.ClusterName` is the name of the fleet cluster resource.
* `.ClusterNamespace` is the namespace in which the cluster resource exists.
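A minimal sketch of using these context values inside the `helm.values` section; it assumes the targeted clusters carry a `region` label:

```yaml
helm:
  values:
    # empty string if the label is missing on the targeted cluster
    region: "${ .ClusterLabels.region }"
    clusterName: "${ .ClusterName }"
```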
|
||||
|
||||
To access Labels or Annotations by their key name:
|
||||
|
||||
```
|
||||
${ get .ClusterLabels "management.cattle.io/cluster-display-name" }
|
||||
```
|
||||
|
||||
Note: The fleet.yaml must be valid YAML. Templating uses `${ }` as delimiters,
unlike Helm, which uses `{{ }}`. These fleet.yaml template delimiters can be
escaped using backticks, e.g.:
|
||||
|
||||
```
|
||||
foo-bar-${`${PWD}`}
|
||||
```
|
||||
|
||||
will result in the following text:
|
||||
|
||||
```
|
||||
foo-bar-${PWD}
|
||||
```
|
||||
|
|
@ -0,0 +1,134 @@
|
|||
# GitRepo Resource
|
||||
|
||||
The GitRepo resource describes git repositories, how to access them and where the bundles are located.
|
||||
|
||||
The content of the resource corresponds to the [GitRepoSpec](./ref-crds#gitrepospec).
|
||||
For more information on how to use GitRepo resource, e.g. how to watch private repositories, see [Create a GitRepo Resource](./gitrepo-add.md).
|
||||
|
||||
```yaml
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
# Any name can be used here
|
||||
name: my-repo
|
||||
# For single cluster use fleet-local, otherwise use the namespace of
|
||||
# your choosing
|
||||
namespace: fleet-local
|
||||
spec:
|
||||
# This can be a HTTPS or git URL. If you are using a git URL then
|
||||
# clientSecretName will probably need to be set to supply a credential.
|
||||
# repo is the only required parameter for a repo to be monitored.
|
||||
#
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
|
||||
# Enforce all resources go to this target namespace. If a cluster scoped
|
||||
# resource is found the deployment will fail.
|
||||
#
|
||||
# targetNamespace: app1
|
||||
|
||||
# Any branch can be watched, this field is optional. If not specified the
|
||||
# branch is assumed to be master
|
||||
#
|
||||
# branch: master
|
||||
|
||||
# A specific commit or tag can also be watched.
|
||||
#
|
||||
# revision: v0.3.0
|
||||
|
||||
# For a private git repository you must supply a clientSecretName. A default
|
||||
# secret can be set at the namespace level using the GitRepoRestriction
|
||||
# type. Secrets must be of the type "kubernetes.io/ssh-auth" or
|
||||
# "kubernetes.io/basic-auth". The secret is assumed to be in the
|
||||
# same namespace as the GitRepo
|
||||
#
|
||||
# clientSecretName: my-ssh-key
|
||||
|
||||
# If fleet.yaml contains a private Helm repo that requires authentication,
|
||||
# provide the credentials in a K8s secret and specify them here.
|
||||
# Danger: the credentials will be sent to all repositories referenced from
|
||||
# this gitrepo. See section below for more information.
|
||||
#
|
||||
# helmSecretName: my-helm-secret
|
||||
|
||||
# Helm credentials from helmSecretName will be used if the helm repository url matches this regular expression.
|
||||
# Credentials will always be used if it is empty or not provided
|
||||
#
|
||||
# helmRepoURLRegex: https://charts.rancher.io/*
|
||||
|
||||
# Contains the auth secret for private Helm repository for each path.
|
||||
  # See [Create a GitRepo Resource](./gitrepo-add#use-different-helm-credentials-for-each-path)
|
||||
#
|
||||
# helmSecretNameForPaths: multi-helm-secret
|
||||
|
||||
# To add additional ca-bundle for self-signed certs, caBundle can be
|
||||
# filled with base64 encoded pem data. For example:
|
||||
# `cat /path/to/ca.pem | base64 -w 0`
|
||||
#
|
||||
# caBundle: my-ca-bundle
|
||||
|
||||
# Disable SSL verification for git repo
|
||||
#
|
||||
# insecureSkipTLSVerify: true
|
||||
|
||||
# A git repo can read multiple paths in a repo at once.
|
||||
# The below field is expected to be an array of paths and
|
||||
# supports path globbing (ex: some/*/path)
|
||||
#
|
||||
# Example:
|
||||
# paths:
|
||||
# - single-path
|
||||
# - multiple-paths/*
|
||||
paths:
|
||||
- simple
|
||||
|
||||
# PollingInterval configures how often fleet checks the git repo. The default
|
||||
# is 15 seconds.
|
||||
# Setting this to zero does not disable polling. It results in a 15s
|
||||
# interval, too.
|
||||
  # As checking a git repo incurs a CPU cost, raising this value can help
  # lower the fleet-controller's CPU usage when tens of git repos or more are used.
|
||||
#
|
||||
# pollingInterval: 15s
|
||||
|
||||
# When disablePolling is set to true the git repo won't be checked periodically.
|
||||
# It will rely on webhooks only.
|
||||
# See [Using Webhooks Instead of Polling](https://fleet.rancher.io/webhook)
|
||||
# disablePolling: false
|
||||
|
||||
# Paused causes changes in Git to not be propagated down to the clusters but
|
||||
# instead mark resources as OutOfSync
|
||||
#
|
||||
# paused: false
|
||||
|
||||
# Increment this number to force a redeployment of contents from Git
|
||||
#
|
||||
# forceSyncGeneration: 0
|
||||
|
||||
# The service account that will be used to perform this deployment.
|
||||
# This is the name of the service account that exists in the
|
||||
# downstream cluster in the cattle-fleet-system namespace. It is assumed
|
||||
  # this service account already exists, so it should be created beforehand,
  # most likely coming from another git repo registered with
|
||||
# the Fleet manager.
|
||||
#
|
||||
# serviceAccount: moreSecureAccountThanClusterAdmin
|
||||
|
||||
# Target clusters to deploy to if running Fleet in a multi-cluster
|
||||
# style. Refer to the "Mapping to Downstream Clusters" docs for
|
||||
# more information.
|
||||
# If empty, the "default" cluster group is used.
|
||||
#
|
||||
# targets: ...
|
||||
|
||||
# Drift correction removes any external change made to resources managed by Fleet. It performs a helm rollback, which uses
|
||||
# a three-way merge strategy by default.
|
||||
# It will try to update all resources by doing a PUT request if force is enabled. Three-way strategic merge might fail when updating
|
||||
# an item inside of an array as it will try to add a new item instead of replacing the existing one. This can be fixed by using force.
|
||||
# Keep in mind that resources might be recreated if force is enabled.
|
||||
# Failed rollback will be removed from the helm history unless keepFailHistory is set to true.
|
||||
#
|
||||
# correctDrift:
|
||||
# enabled: false
|
||||
# force: false #Warning: it might recreate resources if set to true
|
||||
# keepFailHistory: false
|
||||
```
|
||||
|
|
@ -0,0 +1,113 @@
|
|||
# Cluster Registration Internals
|
||||
|
||||
## How does cluster registration work?
|
||||
|
||||
This text describes cluster registration with more technical details. The text ignores agent initiated registration, as it’s not commonly used.
|
||||
[Agent initiated registration](./cluster-registration.md#agent-initiated) is ["`ClusterRegistrationToken` first"](./cluster-registration.md#create-cluster-registration-tokens), which means pre-creating a cluster is optional.
|
||||
|
||||
See "[Register Downstream Clusters](./cluster-registration.md)" to learn how to register clusters.
|
||||
|
||||
### Cluster first
|
||||
|
||||
`fleet-controller` starts up and may "bootstrap" the local cluster resource. In Rancher creating the local cluster resource is handled by the fleetcluster controller instead, but otherwise the process is identical.
|
||||
|
||||
The process is identical for the local cluster or any downstream cluster. It starts by creating a cluster resource, which refers to a kubeconfig secret.
|
||||
|
||||
### Creating the Bootstrap Secret for the Downstream Cluster
|
||||
|
||||
In this step a `ClusterRegistrationToken` and an "import" service account are created based on a `Cluster` resource.
|
||||
|
||||
The Fleet controller creates a [`ClusterRegistrationToken`](https://fleet.rancher.io/architecture#security)
|
||||
and waits for it to be complete. The `ClusterRegistrationToken` triggers the creation of the "import" service account, which can create
|
||||
`ClusterRegistrations` and read any secret in the system registration namespace (eg "cattle-fleet-clusters-system"). The `import.go` controller will
|
||||
enqueue itself until the "import" service account exists, because that account is needed to create the `fleet-agent-bootstrap` secret.
|
||||
|
||||
|
||||
### Creating the Fleet Agent Deployment
|
||||
|
||||
The Fleet controller will now create the Fleet agent deployment and the bootstrap secret on the downstream cluster.
|
||||
|
||||
The bootstrap secret contains the API server URL and CA of the upstream cluster and is used to build a kubeconfig to access the upstream cluster. Both values are taken from the Fleet controller's config configmap. That configmap is part of the Helm chart.
|
||||
|
||||
|
||||
### Fleet Agent Starts Registration, Upgrades to Request Account
|
||||
|
||||
The agent uses the "import" account to upgrade to a request account.
|
||||
|
||||
Immediately the Fleet agent checks for a `fleet-agent-bootstrap` secret. If the bootstrap secret, which contains the "import" kubeconfig, is present, the agent starts registering.
|
||||
|
||||
The agent then creates the final `ClusterRegistration` resource in fleet-default on the management cluster, with a random number. The random number will be used for the registration secret's name.
|
||||
|
||||
The Fleet controller is triggered and tries to grant the `ClusterRegistration` request: it creates the agent's service account and the 'c-\*' registration secret containing the client's new kubeconfig. The registration secret name is `hash("clientID-clientRandom")`.
|
||||
|
||||
The new kubeconfig uses the "request" account. The "request" account can access the cluster status, `BundleDeployments` and `Contents`.
|
||||
|
||||
### Fleet Agent is Registered, Watches for `BundleDeployments`
|
||||
|
||||
At this point the agent is fully registered and will persist the "request" account into a `fleet-agent` secret.
|
||||
The API server URL and CA are copied from the bootstrap secret, which inherited these values from the Fleet controller's helm chart values.
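On a downstream cluster this can be verified, assuming the default agent namespace `cattle-fleet-system`:

```bash
kubectl -n cattle-fleet-system get secret fleet-agent -o yaml
```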
|
||||
|
||||
The bootstrap secret is deleted. When the agent restarts, it will not re-register, since the bootstrap secret is missing.
|
||||
|
||||
The agent starts watching its "[Cluster Namespace](https://fleet.rancher.io/namespaces#cluster-namespaces)" for `BundleDeployments`. At this point the agent is ready to deploy workloads.
|
||||
|
||||
### Notes
|
||||
|
||||
* The registration starts with the "import" account and pivots to the "request" account.
|
||||
* The fleet-default namespace has all the cluster registrations; the "import" account uses a separate namespace.
|
||||
* Once the agent is registered, `fleet-controller` will trigger on a cluster or namespace change. The `manageagent` controller will then create a bundle to adopt the existing agent deployment. The agent will update itself to the bundle and since the "generation" environment variable changes, it will restart.
|
||||
* If no bootstrap secret exists, the agent will not re-register.
|
||||
|
||||
|
||||
## Diagram
|
||||
|
||||
### Registration Process and Controllers
|
||||
|
||||
Detailed analysis of the registration process for clusters. This shows the interaction of controllers, resources and service accounts during the registration of a new downstream cluster or the local cluster.
|
||||
|
||||
It is important to note that there are multiple ways to start this:
|
||||
|
||||
* Creating a bootstrap config. Fleet does this for the local agent.
|
||||
* Creating a `Cluster` resource with a kubeconfig. Rancher does this for downstream clusters. See [manager-initiated registration](./cluster-registration.md#manager-initiated).
|
||||
* Creating a `ClusterRegistrationToken` resource, and optionally a `Cluster` resource for a pre-defined (`clientID`) cluster. See [agent-initiated registration](./cluster-registration.md#agent-initiated).
|
||||
|
||||

|
||||
|
||||
### Secrets during Agent Deployment
|
||||
|
||||
This diagram shows the resources created during registration and focuses on the k8s API server configuration.
|
||||
|
||||
The `import.go` controller triggers on Cluster creation/update events and deploys the agent.
|
||||
|
||||
**This image shows how the API server URL and CA propagates through the secrets during registration:**
|
||||
|
||||
The arrows in the diagram show how the API server values are copied from
|
||||
the Helm values to the cluster registration secret on the upstream
|
||||
cluster and finally downstream to the bootstrap secret of the agent.
|
||||
|
||||
There is one special case: if the agent is for the local/"bootstrap"
cluster, the server values also exist in the kubeconfig secret
referenced by the Cluster resource. In this case the kubeconfig secret
contains the upstream server URL and CA, next to the downstream
cluster's kubeconfig. If the settings are present in the kubeconfig
secret, they override the configured values.
|
||||
|
||||

|
||||
|
||||
## Fleet Cluster Registration in Rancher
|
||||
|
||||
Rancher installs the fleet helm chart. The API server URL and CA are [derived from Rancher's settings](https://github.com/rancher/rancher/blob/release/v2.9/pkg/controllers/dashboard/fleetcharts/controller.go#L111-L112).
|
||||
|
||||
Fleet will pass these values to a Fleet agent, so it can connect back to the Fleet controller.
|
||||
|
||||
### Import Cluster into Rancher
|
||||
|
||||
When the user runs `curl | kubectl apply`, the applied manifest includes the rancher agent deployment.
|
||||
|
||||
The deployment references a secret prefixed with `cattle-credentials-`, which contains the API URL and a token.
|
||||
|
||||
The Rancher agent starts up and reports downstream's kubeconfig to upstream.
|
||||
|
||||
Rancher then creates the fleet Cluster resource, which references a [kubeconfig secret](https://github.com/rancher/rancher/blob/871b6d9137246bd93733f01184ea435f40c5d56c/pkg/provisioningv2/kubeconfig/manager.go#L69).
|
||||
|
||||
👉 Fleet will use this kubeconfig to deploy the agent on the downstream cluster.
|
||||
|
|
@ -0,0 +1,36 @@
|
|||
# List of Deployed Resources
|
||||
|
||||
After installing Fleet in Rancher these resources are created in the upstream cluster.
|
||||
|
||||
| Type | Name | Namespace |
|
||||
| ----- | ----------- | --------- |
|
||||
| From Helm, initial setup: | | |
|
||||
| ClusterRole | fleet-controller | - |
|
||||
| ClusterRole | gitjob | - |
|
||||
| ClusterRoleBinding | fleet-controller | - |
|
||||
| ClusterRoleBinding | gitjob-binding | - |
|
||||
| ConfigMap | fleet-controller | cattle-fleet-system |
|
||||
| Deployment | fleet-controller | cattle-fleet-system |
|
||||
| Deployment | gitjob | cattle-fleet-system |
|
||||
| Role | fleet-controller | cattle-fleet-system |
|
||||
| Role | gitjob | cattle-fleet-system |
|
||||
| RoleBinding | fleet-controller | cattle-fleet-system |
|
||||
| RoleBinding | gitjob | cattle-fleet-system |
|
||||
| Service | gitjob | cattle-fleet-system |
|
||||
| ServiceAccount | fleet-controller | cattle-fleet-system |
|
||||
| ServiceAccount | gitjob | cattle-fleet-system |
|
||||
| Generated: | | |
|
||||
| clusters.fleet.cattle.io | local | fleet-local |
|
||||
| clusters.provisioning.cattle.io | local | fleet-local |
|
||||
| clusters.management.cattle.io | local | - |
|
||||
| ClusterGroup | default | fleet-local |
|
||||
| Bundle | fleet-agent-local | fleet-local |
|
||||
| For each registered cluster: | | |
|
||||
| clusters.provisioning.cattle.io | | by default fleet-default |
|
||||
| clusters.management.cattle.io | generated | - |
|
||||
| clusters.fleet.cattle.io | | fleet-default |
| Bundle | | fleet-default |
|
||||
| BundleDeployment | fleet-agent-local | cluster-fleet-local-local-ID |
|
||||
|
||||
|
||||
Also see [Namespaces](./namespaces.md).
|
||||
|
|
@ -0,0 +1,25 @@
|
|||
# Resources List
|
||||
|
||||
This document outlines the deployed resources, categorized under `Bundles` and `GitRepos`.
|
||||
|
||||
## Bundles
|
||||
|
||||
The deployed resources within bundles can be found in `status.ResourceKey`. This key represents the actual resources deployed via `bundleDeployments`.
|
||||
|
||||
## GitRepos
|
||||
|
||||
Similar to bundles, the deployed resources in `GitRepos` are listed in `status.Resources`. This list is also derived from `bundleDeployments`.
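For example, this list can be read directly with `kubectl`; `sample` is the GitRepo from the quick start, and the field uses the lowercase JSON form of `status.Resources`:

```bash
kubectl -n fleet-local get gitrepo sample -o jsonpath='{.status.resources}'
```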
|
||||
|
||||
# Resource Counts
|
||||
|
||||
## GitRepos
|
||||
|
||||
The `status.ResourceCounts` list for GitRepos is derived from `bundleDeployments`.
|
||||
|
||||
## Clusters
|
||||
|
||||
In Clusters, the `status.ResourceCounts` list is derived from GitRepos.
|
||||
|
||||
## ClusterGroups
|
||||
|
||||
In ClusterGroups, the `status.ResourceCounts` list is also derived from GitRepos.
|
||||
|
|
@ -0,0 +1,5 @@
|
|||
# Custom Resources During Deployment
|
||||
|
||||
This shows the resources, including the internal ones, involved in creating a deployment from a git repository.
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,252 @@
|
|||
# Troubleshooting
|
||||
|
||||
This section contains commands and tips to troubleshoot Fleet.
|
||||
|
||||
## **How Do I...**
|
||||
|
||||
|
||||
### Fetch the log from `fleet-controller`?
|
||||
|
||||
In the local management cluster where the `fleet-controller` is deployed, run the following command:
|
||||
|
||||
```
|
||||
$ kubectl logs -l app=fleet-controller -n cattle-fleet-system
|
||||
```
|
||||
|
||||
### Fetch the log from the `fleet-agent`?
|
||||
|
||||
Run the following command in each downstream cluster; for the local cluster, use the second command shown below:
|
||||
|
||||
```
|
||||
# Downstream cluster
|
||||
$ kubectl logs -l app=fleet-agent -n cattle-fleet-system
|
||||
# Local cluster
|
||||
$ kubectl logs -l app=fleet-agent -n cattle-local-fleet-system
|
||||
```
|
||||
|
||||
### Fetch detailed error logs from `GitRepos` and `Bundles`?
|
||||
|
||||
Normally, errors should appear in the Rancher UI. However, if there is not enough information displayed about the error there, you can research further by trying one or more of the following as needed:
|
||||
|
||||
- For more information about the bundle, click on the bundle name; the YAML view will be enabled.
|
||||
- For more information about the GitRepo, click on `GitRepo`, then click on `View Yaml` in the upper right of the screen. After viewing the YAML, check `status.conditions`; a detailed error message should be displayed here.
|
||||
- Check the `fleet-controller` logs for syncing errors.
|
||||
- Check the `fleet-agent` log in the downstream cluster if you encounter issues when deploying the bundle.
|
||||
|
||||
### Fetch detailed status from `GitRepos` and `Bundles`?
|
||||
|
||||
For debugging and bug reports, the raw JSON of the resources' status fields is most useful.
|
||||
This can be accessed in the Rancher UI, or through `kubectl`:
|
||||
|
||||
```
|
||||
kubectl get bundle -n fleet-local fleet-agent-local -o=jsonpath={.status}
|
||||
kubectl get gitrepo -n fleet-default gitrepo-name -o=jsonpath={.status}
|
||||
```
|
||||
|
||||
### Check a chart rendering error in `Kustomize`?
|
||||
|
||||
Check the [`fleet-controller` logs](./troubleshooting.md#fetch-the-log-from-fleet-controller) and the [`fleet-agent` logs](./troubleshooting.md#fetch-the-log-from-the-fleet-agent).
|
||||
|
||||
### Check errors about watching or checking out the `GitRepo`, or about the downloaded Helm repo in `fleet.yaml`?
|
||||
|
||||
Check the `gitjob-controller` logs using the following command with your specific `gitjob` pod name filled in:
|
||||
|
||||
```
|
||||
$ kubectl logs -f $gitjob-pod-name -n cattle-fleet-system
|
||||
```
|
||||
|
||||
Note that there are two containers inside the pod: the `step-git-source` container that clones the git repo, and the `fleet` container that applies bundles based on the git repo.
|
||||
|
||||
The pods will usually have images named `rancher/tekton-utils` with the `gitRepo` name as a prefix. Check the logs for these Kubernetes job pods in the local management cluster as follows, filling in your specific `gitRepoName` pod name and namespace:
|
||||
|
||||
```
|
||||
$ kubectl logs -f $gitRepoName-pod-name -n namespace
|
||||
```
|
||||
|
||||
### Check the status of the `fleet-controller`?
|
||||
|
||||
You can check the status of the `fleet-controller` pods by running the commands below:
|
||||
|
||||
```bash
|
||||
kubectl -n cattle-fleet-system logs -l app=fleet-controller
|
||||
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
|
||||
```
|
||||
|
||||
```bash
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
fleet-controller-64f49d756b-n57wq 1/1 Running 0 3m21s
|
||||
```
|
||||
|
||||
### Enable debug logging for `fleet-controller` and `fleet-agent`?
|
||||
|
||||
The ability to enable debug logging is available in Rancher v2.6.3 (Fleet v0.3.8) and later.
|
||||
|
||||
- Go to the **Dashboard**, then click on the **local cluster** in the left navigation menu
|
||||
- Select **Apps & Marketplace**, then **Installed Apps** from the dropdown
|
||||
- From there, upgrade the Fleet chart with the value `debug=true`. You can also set `debugLevel=5` if desired (see the Helm sketch below for a CLI equivalent).
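
As a CLI alternative, the same values can be set by upgrading the chart with Helm. A minimal sketch, assuming the Fleet chart repository was added as `fleet` (https://rancher.github.io/fleet-helm-charts/) and Fleet is installed as the release `fleet` in `cattle-fleet-system`:

```bash
# Enable debug logging on the Fleet controller; adjust the repo alias,
# release name and chart version to match your installation.
helm upgrade fleet fleet/fleet -n cattle-fleet-system \
  --reuse-values \
  --set debug=true \
  --set debugLevel=5
```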
|
||||
|
||||
## **Additional Solutions for Other Fleet Issues**
|
||||
|
||||
### Naming conventions for CRDs
|
||||
|
||||
1. For CRD terms like `clusters` and `gitrepos`, you must reference the full CRD name. For example, the cluster CRD's complete name is `cluster.fleet.cattle.io`, and the gitrepo CRD's complete name is `gitrepo.fleet.cattle.io`.
|
||||
|
||||
1. `Bundles`, which are created from the `GitRepo`, follow the pattern `$gitrepoName-$path` in the same workspace/namespace where the `GitRepo` was created. Note that `$path` is the path directory in the git repository that contains the `bundle` (`fleet.yaml`).
|
||||
|
||||
1. `BundleDeployments`, which are created from the `bundle`, follow the pattern `$bundleName-$clusterName` in the namespace `clusters-$workspace-$cluster-$generateHash`. Note that `$clusterName` is the cluster to which the bundle will be deployed.
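
To see these naming conventions in practice, the generated objects can be listed directly. A sketch, assuming the default `fleet-default` workspace:

```bash
# Bundles follow the $gitrepoName-$path pattern in the GitRepo's workspace.
kubectl get bundles.fleet.cattle.io -n fleet-default

# BundleDeployments live in the per-cluster namespaces (clusters-$workspace-$cluster-$generateHash).
kubectl get bundledeployments.fleet.cattle.io -A
```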
|
||||
|
||||
### HTTP secrets in Github
|
||||
|
||||
When testing Fleet with private git repositories, you will notice that HTTP secrets are no longer supported in GitHub. To work around this issue, follow these steps:
|
||||
|
||||
1. Create a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) in Github.
|
||||
1. In Rancher, create an HTTP [secret](https://rancher.com/docs/rancher/v2.6/en/k8s-in-rancher/secrets/) with your GitHub username.
|
||||
1. Use your token as the secret.
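
For reference, the equivalent secret can also be created with `kubectl` instead of the Rancher UI. A sketch with placeholder values; the secret is then referenced from the `GitRepo` (for example via `clientSecretName`):

```bash
# Basic-auth secret holding the GitHub username and the personal access token.
kubectl create secret generic github-auth -n fleet-default \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=<github-username> \
  --from-literal=password=<personal-access-token>
```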
|
||||
|
||||
### Fleet fails with bad response code: 403
|
||||
|
||||
If your GitJob returns the error below, the problem may be that Fleet cannot access the Helm repo you specified in your [`fleet.yaml`](./ref-fleet-yaml.md):
|
||||
|
||||
```
|
||||
time="2021-11-04T09:21:24Z" level=fatal msg="bad response code: 403"
|
||||
```
|
||||
|
||||
Perform the following steps to assess the issue:
|
||||
|
||||
- Check that your repo is accessible from your dev machine, and that you can download the Helm chart successfully
|
||||
- Check that your credentials for the git repo are valid
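
For example, a minimal check from a workstation could look like this (repository URL, chart name and version are placeholders):

```bash
# Verify the Helm repository is reachable and the chart can be downloaded.
helm repo add example-repo https://charts.example.com/
helm pull example-repo/example-chart --version 1.2.3
```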
|
||||
|
||||
### Helm chart repo: certificate signed by unknown authority
|
||||
|
||||
If your GitJob returns the error below, you may have added the wrong certificate chain:
|
||||
|
||||
```
|
||||
time="2021-11-11T05:55:08Z" level=fatal msg="Get \"https://helm.intra/virtual-helm/index.yaml\": x509: certificate signed by unknown authority"
|
||||
```
|
||||
|
||||
Please verify your certificate with the following command:
|
||||
|
||||
```bash
|
||||
context=playground-local
|
||||
kubectl get secret -n fleet-default helm-repo -o jsonpath="{['data']['cacerts']}" --context $context | base64 -d | openssl x509 -text -noout
|
||||
Certificate:
|
||||
Data:
|
||||
Version: 3 (0x2)
|
||||
Serial Number:
|
||||
7a:1e:df:79:5f:b0:e0:be:49:de:11:5e:d9:9c:a9:71
|
||||
Signature Algorithm: sha512WithRSAEncryption
|
||||
Issuer: C = CH, O = MY COMPANY, CN = NOP Root CA G3
|
||||
...
|
||||
|
||||
```
|
||||
### Fleet deployment stuck in modified state
|
||||
|
||||
When you deploy bundles with Fleet, some components may be modified after deployment, which causes Fleet to flag the deployment as "modified".
|
||||
|
||||
To ignore the modified flag for the differences between the Helm install generated by `fleet.yaml` and the resource in your cluster, add a `diff.comparePatches` to the `fleet.yaml` for your Deployment, as shown in this example:
|
||||
|
||||
|
||||
```yaml
|
||||
defaultNamespace: <namespace name>
|
||||
helm:
|
||||
releaseName: <release name>
|
||||
repo: <repo name>
|
||||
chart: <chart name>
|
||||
diff:
|
||||
comparePatches:
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
operations:
|
||||
- {"op":"remove", "path":"/spec/template/spec/hostNetwork"}
|
||||
- {"op":"remove", "path":"/spec/template/spec/nodeSelector"}
|
||||
jsonPointers: # jsonPointers allows to ignore diffs at certain json path
|
||||
- "/spec/template/spec/priorityClassName"
|
||||
- "/spec/template/spec/tolerations"
|
||||
```
|
||||
|
||||
To determine which operations should be removed, observe the logs from `fleet-agent` on the target cluster. You should see entries similar to the following:
|
||||
|
||||
```text
|
||||
level=error msg="bundle monitoring-monitoring: deployment.apps monitoring/monitoring-monitoring-kube-state-metrics modified {\"spec\":{\"template\":{\"spec\":{\"hostNetwork\":false}}}}"
|
||||
```
|
||||
|
||||
Based on the above log, you can add the following entry to remove the operation:
|
||||
|
||||
```json
|
||||
{"op":"remove", "path":"/spec/template/spec/hostNetwork"}
|
||||
```
|
||||
|
||||
### `GitRepo` or `Bundle` stuck in modified state
|
||||
|
||||
**Modified** means that there is a mismatch between the actual state and the desired state, the source of truth, which lives in the git repository.
|
||||
|
||||
1. Check the [bundle diffs documentation](./bundle-diffs.md) for more information.
|
||||
|
||||
1. You can also force update the `gitrepo` to perform a manual resync. Select **GitRepo** on the left navigation bar, then select **Force Update**.
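
The force update can also be triggered from the command line by bumping `spec.forceSyncGeneration` on the `GitRepo`. A sketch, assuming a `GitRepo` named `my-gitrepo` in the `fleet-default` workspace:

```bash
# Increment the value on every manual resync.
kubectl patch gitrepos.fleet.cattle.io my-gitrepo -n fleet-default \
  --type=json -p '[{"op":"add","path":"/spec/forceSyncGeneration","value":1}]'
```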
|
||||
|
||||
### Bundle has a Horizontal Pod Autoscaler (HPA) in modified state
|
||||
|
||||
For bundles with an HPA, the expected state is `Modified`, as the deployed resources contain fields that differ from the state of the Bundle at deployment time, usually the replica count.
|
||||
|
||||
You must define a patch in the `fleet.yaml` to ignore this field according to [`GitRepo` or `Bundle` stuck in modified state](#gitrepo-or-bundle-stuck-in-modified-state).
|
||||
|
||||
Here is an example of such a patch for the deployment `nginx` in namespace `default`:
|
||||
|
||||
```yaml
|
||||
diff:
|
||||
comparePatches:
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
name: nginx
|
||||
namespace: default
|
||||
operations:
|
||||
- {"op": "remove", "path": "/spec/replicas"}
|
||||
```
|
||||
|
||||
### What if the cluster is unavailable, or is in a `WaitCheckIn` state?
|
||||
|
||||
You will need to re-import and restart the registration process: Select **Cluster** on the left navigation bar, then select **Force Update**.
|
||||
|
||||
:::caution
|
||||
|
||||
__WaitCheckIn status for Rancher v2.5__:
|
||||
The cluster will show in `WaitCheckIn` status because the `fleet-controller` is attempting to communicate with Fleet using the Rancher service IP. However, Fleet must communicate directly with Rancher via the Kubernetes service DNS using service discovery, not through the proxy. For more, see the [Rancher docs](https://rancher.com/docs/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/#install-rancher).
|
||||
|
||||
:::
|
||||
|
||||
### GitRepo complains with `gzip: invalid header`
|
||||
|
||||
When you see an error like the one below ...
|
||||
|
||||
```sh
|
||||
Error opening a gzip reader for /tmp/getter154967024/archive: gzip: invalid header
|
||||
```
|
||||
|
||||
... the content of the Helm chart is incorrect. Manually download the chart to your local machine and check the content.
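
A quick way to check the chart content, with a placeholder URL:

```bash
# Download the chart archive and verify it really is a gzipped tarball.
curl -sSfL -o chart.tgz https://charts.example.com/example-chart-1.2.3.tgz
file chart.tgz      # should report "gzip compressed data"
tar -tzf chart.tgz  # should list the chart contents
```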
|
||||
|
||||
### Agent is no longer registered
|
||||
|
||||
You can force a redeployment of an agent for a given cluster by setting `redeployAgentGeneration`.
|
||||
|
||||
```sh
|
||||
kubectl patch clusters.fleet.cattle.io -n fleet-local local --type=json -p '[{"op": "add", "path": "/spec/redeployAgentGeneration", "value": -1}]'
|
||||
```
|
||||
|
||||
### Migrate the local cluster to the Fleet default cluster workspace?
|
||||
|
||||
Users can create new workspaces and move clusters across workspaces.
|
||||
It's currently not possible to move the local cluster from `fleet-local` to another workspace.
|
||||
|
||||
### Bundle failed to deploy: "resource already exists" Error
|
||||
|
||||
If your bundle encounters the following error message during deployment:
|
||||
|
||||
```sh
|
||||
not installed: rendered manifests contain a resource that already
|
||||
exists. Unable to continue with install: ClusterRole "grafana-clusterrole"
|
||||
in namespace "" exists and cannot be imported into the current release: invalid
|
||||
ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace"
|
||||
must equal "ns-2": current value is "ns-1"
|
||||
```
|
||||
|
||||
This error occurs because a Helm resource with the same `releaseName` already exists in the cluster. To resolve this issue, you need to change the `releaseName` of the resource you want to create to avoid the conflict.
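
To find out which release currently owns the resource, its Helm ownership annotations can be inspected; the resource name below is taken from the error message above:

```bash
# Show the owning release name and namespace recorded by Helm.
kubectl get clusterrole grafana-clusterrole -o yaml | grep 'meta.helm.sh'
```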
|
||||
|
|
@ -0,0 +1,467 @@
|
|||
import CodeBlock from '@theme/CodeBlock';
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
# Creating a Deployment
|
||||
|
||||
To deploy workloads onto downstream clusters, first create a Git repo, then create a GitRepo resource and apply it.
|
||||
|
||||
This tutorial uses the [fleet-examples](https://github.com/rancher/fleet-examples) repository.
|
||||
|
||||
:::note
|
||||
For more details on how to structure the repository and configure the deployment of each bundle see [GitRepo Contents](./gitrepo-content.md).
|
||||
For more details on the options that are available per Git repository see [Adding a GitRepo](./gitrepo-add.md).
|
||||
:::
|
||||
|
||||
## Single-Cluster Examples
|
||||
|
||||
All examples will deploy content to clusters with no per-cluster customizations. This is a good starting point to understand the basics of structuring Git repos for Fleet.
|
||||
|
||||
<Tabs groupId="examples">
|
||||
<TabItem value="helm" label="Helm" default>
|
||||
|
||||
An example using Helm. We are deploying the <a href="https://github.com/rancher/fleet-examples/tree/master/single-cluster/helm">helm example</a> to the local cluster.
|
||||
|
||||
The repository contains a helm chart and an optional `fleet.yaml` to configure the deployment:
|
||||
|
||||
```yaml title="fleet.yaml"
|
||||
namespace: fleet-helm-example
|
||||
|
||||
# Custom helm options
|
||||
helm:
|
||||
# The release name to use. If empty a generated release name will be used
|
||||
releaseName: guestbook
|
||||
|
||||
# The directory of the chart in the repo. Also any valid go-getter supported
|
||||
  # URL can be used there to specify where to download the chart from.
|
||||
  # If repo below is set, this value is the chart name in the repo
|
||||
chart: ""
|
||||
|
||||
  # An HTTPS URL to a valid Helm repository to download the chart from
|
||||
repo: ""
|
||||
|
||||
# Used if repo is set to look up the version of the chart
|
||||
version: ""
|
||||
|
||||
# Force recreate resource that can not be updated
|
||||
force: false
|
||||
|
||||
# How long for helm to wait for the release to be active. If the value
|
||||
  # is less than or equal to zero, we will not wait in Helm
|
||||
timeoutSeconds: 0
|
||||
|
||||
# Custom values that will be passed as values.yaml to the installation
|
||||
values:
|
||||
replicas: 2
|
||||
```
|
||||
|
||||
To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-local` namespace contains the local cluster resource. The local fleet-agent will create the deployment in the `fleet-helm-example` namespace.
|
||||
|
||||
```bash
|
||||
kubectl apply -n fleet-local -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- single-cluster/helm
|
||||
EOF
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="helm-multi-chart" label="Helm Multi Chart" default>
|
||||
|
||||
An <a href="https://github.com/rancher/fleet-examples/blob/master/single-cluster/helm-multi-chart">example deploying multiple charts</a> from a single repo. This is similar to the previous example, but will deploy three Helm charts from the subfolders, each configured by its own `fleet.yaml`.
|
||||
|
||||
```bash
|
||||
kubectl apply -n fleet-local -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- single-cluster/helm-multi-chart
|
||||
EOF
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="helm-kustomize" label="Helm & Kustomize" default>
|
||||
|
||||
An example using <a href="https://github.com/rancher/fleet-examples/blob/master/single-cluster/helm-kustomize">Kustomize to modify a third party Helm chart</a>.
|
||||
It will deploy the Kubernetes sample guestbook application, packaged as a Helm chart downloaded from a third-party source, and will modify the Helm chart using Kustomize. The app will be deployed into the fleet-helm-kustomize-example namespace.
|
||||
|
||||
```bash
|
||||
kubectl apply -n fleet-local -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- single-cluster/helm-kustomize
|
||||
EOF
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="kustomize" label="Kustomize" default>
|
||||
|
||||
An <a href="https://github.com/rancher/fleet-examples/blob/master/single-cluster/kustomize">example using Kustomize</a>.
|
||||
|
||||
Note that the `fleet.yaml` has a `kustomize:` key to specify the path to the required `kustomization.yaml`:
|
||||
|
||||
```yaml title="fleet.yaml"
|
||||
kustomize:
|
||||
# To use a kustomization.yaml different from the one in the root folder
|
||||
dir: ""
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -n fleet-local -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- single-cluster/kustomize
|
||||
EOF
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="manifests" label="Manifests" default>
|
||||
|
||||
An <a href="https://github.com/rancher/fleet-examples/tree/master/single-cluster/manifests">example using raw Kubernetes YAML</a>.
|
||||
|
||||
```bash
|
||||
kubectl apply -n fleet-local -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- single-cluster/manifests
|
||||
EOF
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Multi-Cluster Examples
|
||||
|
||||
The examples below will deploy a git repo to multiple clusters at once and configure the app differently for each target.
|
||||
|
||||
<Tabs groupId="examples">
|
||||
<TabItem value="helm" label="Helm" default>
|
||||
|
||||
|
||||
An example using Helm. We are deploying the <a href="https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm">helm example</a> and customizing it per target cluster.
|
||||
|
||||
The repository contains a helm chart and an optional `fleet.yaml` to configure the deployment. The `fleet.yaml` is used to configure different deployment options, depending on the cluster's labels:
|
||||
|
||||
```yaml title="fleet.yaml"
|
||||
namespace: fleet-mc-helm-example
|
||||
targetCustomizations:
|
||||
- name: dev
|
||||
helm:
|
||||
values:
|
||||
replication: false
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
helm:
|
||||
values:
|
||||
replicas: 3
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
helm:
|
||||
values:
|
||||
serviceType: LoadBalancer
|
||||
replicas: 3
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
|
||||
|
||||
To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the `fleet-default` namespace whose cluster resource has labels matching any entry under `targets:`.
|
||||
|
||||
```yaml title="gitrepo.yaml"
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm
|
||||
namespace: fleet-default
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- multi-cluster/helm
|
||||
targets:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
|
||||
|
||||
Once the GitRepo resource is applied to the upstream cluster, Fleet will start to monitor the repository and create deployments:
|
||||
|
||||
<CodeBlock language="bash">
|
||||
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
|
||||
</CodeBlock>
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="helm-external" label="Helm External" default>
|
||||
|
||||
An <a href="https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm-external">example using a Helm chart that is downloaded from a third party source and customizing it per target cluster</a>. The customization is similar to the previous example.
|
||||
|
||||
To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the `fleet-default` namespace whose cluster resource has labels matching any entry under `targets:`.
|
||||
|
||||
```yaml title="gitrepo.yaml"
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm-external
|
||||
namespace: fleet-default
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- multi-cluster/helm-external
|
||||
targets:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
|
||||
|
||||
Once the GitRepo resource is applied to the upstream cluster, Fleet will start to monitor the repository and create deployments:
|
||||
|
||||
<CodeBlock language="bash">
|
||||
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
|
||||
</CodeBlock>
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="helm-kustomize" label="Helm & Kustomize" default>
|
||||
|
||||
An example using <a href="https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm-kustomize">kustomize to modify a third party Helm chart</a>.
|
||||
It will deploy the Kubernetes sample guestbook application, packaged as a Helm chart downloaded from a third-party source, and will modify the Helm chart using Kustomize. The app will be deployed into the fleet-helm-kustomize-example namespace.
|
||||
|
||||
The application will be customized as follows per environment:
|
||||
|
||||
* Dev clusters: Only the redis leader is deployed and not the followers.
|
||||
* Test clusters: Scale the front deployment to 3
|
||||
* Prod clusters: Scale the front deployment to 3 and set the service type to LoadBalancer
|
||||
|
||||
The `fleet.yaml` is used to control which overlays are used, depending on the cluster's labels:
|
||||
|
||||
```yaml title="fleet.yaml"
|
||||
namespace: fleet-mc-kustomize-example
|
||||
targetCustomizations:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
kustomize:
|
||||
dir: overlays/dev
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
kustomize:
|
||||
dir: overlays/test
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
kustomize:
|
||||
dir: overlays/prod
|
||||
```
|
||||
|
||||
To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the `fleet-default` namespace whose cluster resource has labels matching any entry under `targets:`.
|
||||
|
||||
```yaml title="gitrepo.yaml"
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: helm-kustomize
|
||||
namespace: fleet-default
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- multi-cluster/helm-kustomize
|
||||
targets:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
|
||||
|
||||
Once the GitRepo resource is applied to the upstream cluster, Fleet will start to monitor the repository and create deployments:
|
||||
|
||||
<CodeBlock language="bash">
|
||||
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
|
||||
</CodeBlock>
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="kustomize" label="Kustomize" default>
|
||||
|
||||
An <a href="https://github.com/rancher/fleet-examples/blob/master/multi-cluster/kustomize">example using Kustomize</a> and customizing it per target cluster.
|
||||
|
||||
The customization in `fleet.yaml` is identical to the "Helm & Kustomize" example.
|
||||
|
||||
To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the `fleet-default` namespace whose cluster resource has labels matching any entry under `targets:`.
|
||||
|
||||
```bash
|
||||
kubectl apply -n fleet-default -f - <<EOF
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: kustomize
|
||||
namespace: fleet-default
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- multi-cluster/kustomize
|
||||
targets:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
EOF
|
||||
```
|
||||
|
||||
Once the GitRepo resource above is applied to the upstream cluster, Fleet will start to monitor the repository and create deployments.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="manifests" label="Manifests" default>
|
||||
|
||||
An <a href="https://github.com/rancher/fleet-examples/tree/master/multi-cluster/manifests">example using raw Kubernetes YAML and customizing it per target cluster</a>.
|
||||
The application will be customized as follows per environment:
|
||||
|
||||
* Dev clusters: Only the redis leader is deployed and not the followers.
|
||||
* Test clusters: Scale the front deployment to 3
|
||||
* Prod clusters: Scale the front deployment to 3 and set the service type to LoadBalancer
|
||||
|
||||
The `fleet.yaml` is used to control which YAML overlays are used, depending on the cluster's labels:
|
||||
|
||||
```yaml title="fleet.yaml"
|
||||
namespace: fleet-mc-manifest-example
|
||||
targetCustomizations:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
yaml:
|
||||
overlays:
|
||||
# Refers to overlays/noreplication folder
|
||||
- noreplication
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
yaml:
|
||||
overlays:
|
||||
# Refers to overlays/scale3 folder
|
||||
- scale3
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
yaml:
|
||||
# Refers to overlays/servicelb, scale3 folders
|
||||
overlays:
|
||||
- servicelb
|
||||
- scale3
|
||||
```
|
||||
|
||||
To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the `fleet-default` namespace whose cluster resource has labels matching any entry under `targets:`.
|
||||
|
||||
```yaml title="gitrepo.yaml"
|
||||
kind: GitRepo
|
||||
apiVersion: fleet.cattle.io/v1alpha1
|
||||
metadata:
|
||||
name: manifests
|
||||
namespace: fleet-default
|
||||
spec:
|
||||
repo: https://github.com/rancher/fleet-examples
|
||||
paths:
|
||||
- multi-cluster/manifests
|
||||
targets:
|
||||
- name: dev
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: dev
|
||||
|
||||
- name: test
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: test
|
||||
|
||||
- name: prod
|
||||
clusterSelector:
|
||||
matchLabels:
|
||||
env: prod
|
||||
```
|
||||
|
||||
<CodeBlock language="bash">
|
||||
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
|
||||
</CodeBlock>
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
|
@ -0,0 +1,14 @@
|
|||
# Uninstall
|
||||
|
||||
Fleet is packaged as two Helm charts, so uninstall is accomplished by
|
||||
uninstalling the appropriate Helm charts. To uninstall Fleet, run the following
|
||||
two commands:
|
||||
|
||||
```shell
|
||||
helm -n cattle-fleet-system uninstall fleet
|
||||
helm -n cattle-fleet-system uninstall fleet-crd
|
||||
```
|
||||
|
||||
:::caution
|
||||
Uninstalling the CRDs will remove all deployed workloads.
|
||||
:::
|
||||
|
|
@ -0,0 +1,107 @@
|
|||
# Using Webhooks Instead of Polling
|
||||
|
||||
By default, Fleet utilizes polling (default: every 15 seconds) to pull from a Git repo. This is a convenient default that works reasonably well for a small number of repos (up to a few tens).
|
||||
|
||||
For installations with multiple tens up to hundreds of Git repos, and in general to reduce latency (the time between a push to Git and Fleet reacting to it), configuring webhooks is recommended instead of polling.
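
If you stay with polling, the interval can also be tuned per `GitRepo` through `spec.pollingInterval`. A sketch, assuming a `GitRepo` named `my-gitrepo` in the `fleet-default` workspace:

```bash
# Poll the repository every 5 minutes instead of the default 15 seconds.
kubectl patch gitrepos.fleet.cattle.io my-gitrepo -n fleet-default \
  --type=merge -p '{"spec":{"pollingInterval":"5m"}}'
```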
|
||||
|
||||
Fleet currently supports Azure DevOps, GitHub, GitLab, Bitbucket, Bitbucket Server, and Gogs.
|
||||
|
||||
### 1. Configure the webhook service. Fleet uses a gitjob service to handle webhook requests. Create an ingress that points to the gitjob service.
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: webhook-ingress
|
||||
namespace: cattle-fleet-system
|
||||
spec:
|
||||
rules:
|
||||
- host: your.domain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: gitjob
|
||||
port:
|
||||
number: 80
|
||||
```
|
||||
|
||||
If you want to make the webhook available under the same hostname as Rancher or another service, you can use the following YAML with the URL http://your.domain.com/gitjob. The YAML below is specific to the Nginx Ingress Controller:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/use-regex: "true"
|
||||
nginx.ingress.kubernetes.io/rewrite-target: /$2
|
||||
name: webhook-ingress
|
||||
namespace: cattle-fleet-system
|
||||
spec:
|
||||
rules:
|
||||
- host: your.domain.com
|
||||
http:
|
||||
paths:
|
||||
- path: /gitjob(/|$)(.*)
|
||||
pathType: ImplementationSpecific
|
||||
backend:
|
||||
service:
|
||||
name: gitjob
|
||||
port:
|
||||
number: 80
|
||||
```
|
||||
|
||||
:::info
|
||||
|
||||
You can configure [TLS](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls) on ingress.
|
||||
|
||||
:::
|
||||
|
||||
### 2. Go to your webhook provider and configure the webhook callback URL. Here is a GitHub example.
|
||||
|
||||

|
||||
|
||||
Configuring a secret is optional. This is used to validate the webhook payload as the payload should not be trusted by default.
|
||||
If your webhook server is publicly accessible to the Internet, then it is recommended to configure the secret. If you do configure the
|
||||
secret, follow step 3.
|
||||
|
||||
:::note
|
||||
|
||||
Only `application/json` is supported, due to a limitation of the webhook library.
|
||||
|
||||
:::
|
||||
|
||||
:::caution
|
||||
|
||||
If you configure the webhook, the polling interval will be automatically adjusted to 1 hour.
|
||||
|
||||
:::
|
||||
|
||||
### 3. (Optional) Configure the webhook secret. The secret is used to validate the webhook payload. Make sure to put it in a Kubernetes secret called `gitjob-webhook` in the `cattle-fleet-system` namespace.
|
||||
|
||||
| Provider | K8s Secret Key |
|
||||
|-----------------|--------------------|
|
||||
| GitHub | `github` |
|
||||
| GitLab | `gitlab` |
|
||||
| BitBucket | `bitbucket` |
|
||||
| BitBucketServer | `bitbucket-server` |
|
||||
| Gogs | `gogs` |
|
||||
| Azure DevOps | `azure-username` |
|
||||
| Azure DevOps | `azure-password` |
|
||||
|
||||
For example, to create a secret containing the GitHub secret value used to validate the webhook payload, run:
|
||||
|
||||
```shell
|
||||
kubectl create secret generic gitjob-webhook -n cattle-fleet-system --from-literal=github=webhooksecretvalue
|
||||
```
|
||||
|
||||
For Azure DevOps:
|
||||
- Enable basic authentication in Azure
|
||||
- Create a secret containing the credentials for the basic authentication
|
||||
```shell
|
||||
kubectl create secret generic gitjob-webhook -n cattle-fleet-system --from-literal=azure-username=user --from-literal=azure-password=pass123
|
||||
```
|
||||
|
||||
### 4. Go to your git provider and test the connection. You should receive a successful HTTP response code.
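
A plain request against the ingress can confirm the endpoint is reachable before wiring up the provider; the hostname is a placeholder, and a real webhook delivery additionally carries provider-specific headers and a JSON payload:

```bash
# Expect an HTTP status code in the response headers
# (use https://your.domain.com/gitjob/ for the rewrite-based ingress).
curl -i https://your.domain.com/
```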
|
||||
|
|
@ -0,0 +1,158 @@
|
|||
{
|
||||
"docs": [
|
||||
"index",
|
||||
{
|
||||
"type": "category",
|
||||
"label": "Tutorials",
|
||||
"collapsed": false,
|
||||
"items": [
|
||||
"quickstart",
|
||||
"tut-deployment",
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "uninstall"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"type": "category",
|
||||
"label": "Explanations",
|
||||
"collapsed": false,
|
||||
"items": [
|
||||
"architecture",
|
||||
"concepts",
|
||||
"ref-bundle-stages",
|
||||
"gitrepo-content",
|
||||
"namespaces",
|
||||
"resources-during-deployment"
|
||||
]
|
||||
},
|
||||
{
|
||||
"type": "category",
|
||||
"label": "How-tos for Operators",
|
||||
"collapsed": false,
|
||||
"items": [
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "installation"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cluster-registration"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cluster-group"
|
||||
},
|
||||
"multi-user"
|
||||
]
|
||||
},
|
||||
{
|
||||
"type": "category",
|
||||
"label": "How-tos for Users",
|
||||
"collapsed": false,
|
||||
"items": [
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "gitrepo-add"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "gitrepo-targets"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "bundle-diffs"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "webhook"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "imagescan"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "bundle-add"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"type": "category",
|
||||
"label": "Reference",
|
||||
"collapsed": false,
|
||||
"items": [
|
||||
{
|
||||
"CLI": [
|
||||
"cli/fleet-agent/fleet-agent",
|
||||
"cli/fleet-agent/fleet-agent_clusterstatus",
|
||||
"cli/fleet-agent/fleet-agent_register",
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-cli/fleet"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-cli/fleet_apply"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-cli/fleet_cleanup"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-cli/fleet_deploy"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-cli/fleet_target"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-cli/fleet_test"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-controller/fleet-manager"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-controller/fleet-manager_agentmanagement"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cli/fleet-controller/fleet-manager_cleanup"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "cluster-bundles-state"
|
||||
},
|
||||
{
|
||||
"type": "doc",
|
||||
"id": "resource-counts-and-resources-list"
|
||||
},
|
||||
"ref-registration",
|
||||
"ref-configuration",
|
||||
"ref-resources",
|
||||
"ref-crds",
|
||||
"ref-fleet-yaml",
|
||||
"ref-gitrepo",
|
||||
"ref-bundle"
|
||||
]
|
||||
},
|
||||
"troubleshooting",
|
||||
{
|
||||
"type": "category",
|
||||
"label": "Changelog",
|
||||
"items": [
|
||||
{
|
||||
"type": "autogenerated",
|
||||
"dirName": "changelogs/changelogs"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
|
@ -1,4 +1,5 @@
|
|||
[
|
||||
"0.10",
|
||||
"0.9",
|
||||
"0.8",
|
||||
"0.7",
|
||||
|
|
|
|||