Restructure docs
merge and capitalize headers: installation, registration, deployment, tutorial
This commit is contained in:
parent
accc99e696
commit
8a3933a97b
@ -1,13 +0,0 @@
# Advanced Users

Using Fleet outside of Rancher is highly discouraged for users who do not need to perform advanced actions. However, some advanced use cases may need to be performed outside of Rancher, also known as standalone Fleet, or Fleet without Rancher. This section highlights such use cases.

The following are examples of advanced use cases:

- Nested GitRepo CRs

> Managing Fleet within Fleet (nested GitRepo usage) is not currently supported. We will update the documentation if support becomes available.

- [Single cluster installation](./single-cluster-install.md)
- [Multi-cluster installation](./multi-cluster-install.md)

Please refer to the [installation](./installation.md) and [uninstall](./uninstall.md) documentation for additional information.
@ -1,24 +1,26 @@
import {versions} from '@site/src/fleetVersions';
import CodeBlock from '@theme/CodeBlock';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Agent Initiated

A downstream cluster is registered by installing an agent via Helm, using the **cluster registration token** and optionally a **client ID** or **cluster labels**.

Refer to the [overview page](./cluster-overview.md#agent-initiated-registration) for background information on the agent-initiated registration style.

## Cluster Registration Token and Client ID

The **cluster registration token** is a credential that authorizes the downstream cluster agent to initiate the registration process. It is required.

The cluster registration token is manifested as a `values.yaml` file that is passed to the `helm install` process. Alternatively, the token can be passed directly to the `helm install` command via `--set token="$token"`.

There are two styles of registering an agent. You can have the cluster for this agent created dynamically, in which case you will probably want to specify **cluster labels** upon registration. Or you can have the agent register to a predefined cluster in the Fleet manager, in which case you will need a **client ID**. The former approach is typically the easiest.

## Install Agent For a New Cluster

The Fleet agent is installed as a Helm chart. The following sections explain how to determine and set its parameters.
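As a sketch of preparing these inputs before running `helm install` (the file contents and the presence of a top-level `token:` key in `values.yaml` are assumptions for illustration; use the real file obtained from the registration token secret):

```shell
# Stand-in for the values.yaml obtained from the cluster registration token
cat > values.yaml <<'EOF'
token: abc123
apiServerURL: https://fleet.example.com:6443
EOF

# Extract the token field so it can be passed via --set token="$token"
# instead of --values values.yaml
token=$(sed -n 's/^token: //p' values.yaml)
echo "$token"
# prints: abc123
```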
@ -65,6 +67,8 @@ to change which cluster Helm is installing to.

Finally, install the agent using Helm.

<Tabs>
<TabItem value="helm" label="Install" default>
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
    $CLUSTER_LABELS \\
@ -73,14 +77,18 @@ Finally, install the agent using Helm.
    --set apiServerURL="$API_SERVER_URL" \\
    fleet-agent`} {versions.next.fleetAgent}
</CodeBlock>
</TabItem>
<TabItem value="validate" label="Validate">

You can check the status of the Fleet pods by running the commands below:

```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n cattle-fleet-system logs -l app=fleet-agent
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
```

</TabItem>
</Tabs>

The agent should now be deployed.

Additionally, you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed to the Fleet
@ -94,7 +102,7 @@ NAME                   BUNDLES-READY   NODES-READY   SAMPLE-NODE             LAST-SEEN
cluster-ab13e54400f1   1/1             1/1           k3d-cluster2-server-0   2020-08-31T19:23:10Z
```

## Install Agent For a Predefined Cluster

Client IDs predefine clusters in the Fleet manager, with existing labels and repos already targeted to them. A client ID is not required and is just one approach to managing clusters.
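A client ID is simply a unique string you choose to identify the cluster. A minimal sketch of generating one (the naming scheme below is illustrative, not a Fleet convention):

```shell
# Any stable, unique string works; uuidgen is another common choice
CLUSTER_CLIENT_ID="my-cluster-$(date +%s)"
echo "$CLUSTER_CLIENT_ID"
```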
@ -146,6 +154,8 @@ to change which cluster Helm is installing to.

Finally, install the agent using Helm.

<Tabs>
<TabItem value="helm2" label="Install" default>
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
    --set clientID="$CLUSTER_CLIENT_ID" \\
@ -153,13 +163,18 @@ Finally, install the agent using Helm.
    fleet-agent`} {versions.next.fleetAgent}
</CodeBlock>
</TabItem>
<TabItem value="validate2" label="Validate">

You can check the status of the Fleet pods by running the commands below:

```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n cattle-fleet-system logs -l app=fleet-agent
kubectl -n cattle-fleet-system get pods -l app=fleet-agent
```

</TabItem>
</Tabs>

The agent should now be deployed.

Additionally, you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` is pointed to the Fleet
@ -172,3 +187,69 @@ kubectl -n clusters get clusters.fleet.cattle.io
NAME         BUNDLES-READY   NODES-READY   SAMPLE-NODE             LAST-SEEN              STATUS
my-cluster   1/1             1/1           k3d-cluster2-server-0   2020-08-31T19:23:10Z
```

## Cluster Registration Tokens

:::info

__Not needed for Manager initiated registration__:
For manager initiated registrations the token is managed by the Fleet manager and does not need to be manually created and obtained.

:::

For an agent initiated registration the downstream cluster must have a cluster registration token. Cluster registration tokens are used to establish a new identity for a cluster. Internally, cluster registration tokens are managed by creating Kubernetes service accounts that have the permissions to create `ClusterRegistrationRequests` within a specific namespace. Once the cluster is registered, a new `ServiceAccount` is created for that cluster and used as the cluster's unique identity. The agent is designed to forget the cluster registration token after registration. While the agent will not retain a reference to the cluster registration token after a successful registration, note that other system bootstrap scripts usually do.

Since the cluster registration token is forgotten, if you need to re-register a cluster you must give the cluster a new registration token.

### Token TTL

Cluster registration tokens can be reused by any cluster in a namespace. A token can be given a TTL so that it expires after a specific time.

### Create a New Token

The `ClusterRegistrationToken` is a namespaced type and should be created in the same namespace in which you will create `GitRepo` and `ClusterGroup` resources. For in-depth details on how namespaces are used in Fleet, refer to the documentation on [namespaces](./namespaces.md). Create a new token with the YAML below.

```yaml
kind: ClusterRegistrationToken
apiVersion: "fleet.cattle.io/v1alpha1"
metadata:
  name: new-token
  namespace: clusters
spec:
  # A duration string for how long this token is valid for. A value <= 0 or null means infinite time.
  ttl: 240h
```

After the `ClusterRegistrationToken` is created, Fleet will create a corresponding `Secret` with the same name. As the `Secret` creation is performed asynchronously, you will need to wait until it is available before using it.

One way to do so is via the following one-liner:

```shell
while ! kubectl --namespace=clusters get secret new-token; do sleep 5; done
```

### Obtaining Token Value (Agent values.yaml)

The token value contains YAML content for a `values.yaml` file that is expected to be passed to `helm install` to install the Fleet agent on a downstream cluster.

This value is contained in the `values` field of the `Secret` mentioned above. To obtain the YAML content for the above example, run the following one-liner:

```shell
kubectl --namespace clusters get secret new-token -o 'jsonpath={.data.values}' | base64 --decode > values.yaml
```

Once the `values.yaml` is ready, it can be used repeatedly by clusters to register until the TTL expires.
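To illustrate what the decode step above does: the secret's `values` field is plain base64-encoded YAML (the payload here is a stand-in, not a real token):

```shell
# Round-trip: encode a stand-in values.yaml payload and decode it again,
# mirroring how kubectl's jsonpath output is decoded above
printf 'token: abc123\n' | base64 | base64 --decode
# prints: token: abc123
```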
@ -1,7 +1,5 @@
# Architecture

Fleet has two primary components: the Fleet manager and the cluster agents. These components work in a two-stage pull model. The Fleet manager pulls from git, and the cluster agents pull from the Fleet manager.
@ -1,4 +1,4 @@
# Generating Diffs to Ignore Modified GitRepos

Continuous Delivery in Rancher is powered by Fleet. When a user adds a GitRepo CR, Continuous Delivery creates the associated Fleet bundles.

@ -18,6 +18,13 @@ Fleet bundles support the ability to specify a custom [jsonPointer patch](http:/

With the patch, users can instruct Fleet to ignore object modifications.

## Simple Example

https://github.com/rancher/fleet-examples/tree/master/bundle-diffs

## Gatekeeper Example

In this example, we are trying to deploy opa-gatekeeper to our clusters using Continuous Delivery.

The opa-gatekeeper bundle associated with the opa GitRepo is in a modified state.
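A hedged sketch of what such a patch looks like in `fleet.yaml` (the resource name and namespace are illustrative; the `diff.comparePatches` structure follows the bundle-diffs example repository):

```yaml
diff:
  comparePatches:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-app
    namespace: default
    operations:
    # Ignore changes to replicas, e.g. when an autoscaler modifies them
    - {"op": "remove", "path": "/spec/replicas"}
```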
@ -15,7 +15,7 @@ fleet-manager [flags]
```
      --debug               Turn on debug logging
      --debug-level int     If debugging is enabled, set klog -v=X
      --disable-bootstrap   disable agent on local cluster
      --disable-gitops      disable gitops components
  -h, --help                help for fleet-manager
      --kubeconfig string   Kubeconfig file
@ -1,65 +0,0 @@
# Cluster Registration Tokens
@ -1,75 +0,0 @@
# Examples

### Lifecycle of a Fleet Bundle

To demonstrate the lifecycle of a Fleet bundle, we will use [multi-cluster/helm](https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm) as a case study.

1. The user creates a [GitRepo](./gitrepo-add.md#create-gitrepo-instance) that points to the multi-cluster/helm repository.
2. The `gitjob-controller` syncs changes from the GitRepo, detecting changes via polling or a [webhook event](./webhook.md). For every commit, the `gitjob-controller` creates a job that clones the git repository, reads content from the repo such as `fleet.yaml` and other manifests, and creates the Fleet [bundle](./cluster-bundles-state.md#bundles).

   > **Note:** The job pod, with the image name `rancher/tekton-utils`, runs in the same namespace as the GitRepo.

3. The `fleet-controller` then syncs changes from the bundle. According to the targets, the `fleet-controller` creates `BundleDeployment` resources, each a combination of a bundle and a target cluster.
4. The `fleet-agent` pulls the `BundleDeployment` from the Fleet control plane and deploys the bundle manifests as a [Helm chart](https://helm.sh/docs/intro/install/) into the downstream clusters.
5. The `fleet-agent` continues to monitor the application bundle and reports statuses back in the following order: BundleDeployment > bundle > GitRepo > cluster.

### Deploy Kubernetes Manifests Across Clusters with Customization

[Fleet in Rancher](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/) allows users to manage clusters easily as if they were one cluster. Users can deploy bundles, which can be comprised of deployment manifests or any other Kubernetes resources, across clusters using grouping configuration.

To demonstrate how to deploy Kubernetes manifests across different clusters using Fleet, we will use [multi-cluster/helm/fleet.yaml](https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm/fleet.yaml) as a case study.

**Situation:** A user has three clusters with three different labels: `env=dev`, `env=test`, and `env=prod`. The user wants to deploy a frontend application with a backend database across these clusters.

**Expected behavior:**

- After deploying to the `dev` cluster, database replication is not enabled.
- After deploying to the `test` cluster, database replication is enabled.
- After deploying to the `prod` cluster, database replication is enabled and load balancer services are exposed.

**Advantage of Fleet:**

Instead of deploying the app on each cluster, Fleet allows you to deploy across all clusters with these steps:

1. Deploy a GitRepo pointing to `https://github.com/rancher/fleet-examples.git` and specify the path `multi-cluster/helm`.
2. Under `multi-cluster/helm`, a Helm chart will deploy the frontend app service and backend database service.
3. The following rule will be defined in `fleet.yaml`:

```yaml
targetCustomizations:
- name: dev
  helm:
    values:
      replication: false
  clusterSelector:
    matchLabels:
      env: dev

- name: test
  helm:
    values:
      replicas: 3
  clusterSelector:
    matchLabels:
      env: test

- name: prod
  helm:
    values:
      serviceType: LoadBalancer
      replicas: 3
  clusterSelector:
    matchLabels:
      env: prod
```

**Result:**

Fleet will deploy the Helm chart with your customized `values.yaml` to the different clusters.

> **Note:** Configuration management is not limited to deployments but can be expanded to general configuration management. Fleet can apply configuration management through customization among any set of clusters automatically.

### Additional Examples

Examples using raw Kubernetes YAML, Helm charts, Kustomize, and combinations of the three are in the [Fleet Examples repo](https://github.com/rancher/fleet-examples/).
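Step 1 of the lifecycle above can be sketched as a minimal `GitRepo` CR (the metadata name and namespace are illustrative):

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-example
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples.git
  paths:
  - multi-cluster/helm
```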
@ -1,4 +1,4 @@
# GitRepo Contents

Fleet will create bundles from a git repository. This happens either explicitly by specifying paths, or when a `fleet.yaml` is found.

@ -54,7 +54,7 @@ __How changes are applied to `values.yaml`__:

:::

```yaml title="fleet.yaml"
# The default namespace to be applied to resources. This field is not used to
# enforce or lock down the deployment to a specific namespace, but instead
# provide the default value of the namespace field if one is not specified
@ -1,5 +1,7 @@
# Mapping to Downstream Clusters

[Fleet in Rancher](https://rancher.com/docs/rancher/v2.6/en/deploy-across-clusters/fleet/) allows users to manage clusters easily as if they were one cluster. Users can deploy bundles, which can be comprised of deployment manifests or any other Kubernetes resources, across clusters using grouping configuration.

:::info

__Multi-cluster Only__:

@ -9,7 +11,7 @@ This approach only applies if you are running Fleet in a multi-cluster style

When deploying `GitRepos` to downstream clusters, the clusters must be mapped to a target.

## Defining Targets

The deployment targets of a `GitRepo` are defined using the `spec.targets` field to match clusters or cluster groups. The YAML specification is shown below.

@ -67,7 +69,7 @@ clusterSelector: {}
clusterSelector: null
```

## Default Target

If no target is set for the `GitRepo`, the default targets value below is applied.

@ -79,3 +81,62 @@ targets:

This means that if you wish to set up a default location that non-configured GitRepos will go to, just create a cluster group called `default` and add clusters to it.
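Such a default cluster group can be sketched as below (the namespace and selector label are illustrative):

```yaml
kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: default
  namespace: clusters
spec:
  selector:
    matchLabels:
      env: default
```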

## Customization per Cluster

To demonstrate how to deploy Kubernetes manifests across different clusters with customization using Fleet, we will use [multi-cluster/helm/fleet.yaml](https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm/fleet.yaml).

**Situation:** A user has three clusters with three different labels: `env=dev`, `env=test`, and `env=prod`. The user wants to deploy a frontend application with a backend database across these clusters.

**Expected behavior:**

- After deploying to the `dev` cluster, database replication is not enabled.
- After deploying to the `test` cluster, database replication is enabled.
- After deploying to the `prod` cluster, database replication is enabled and load balancer services are exposed.

**Advantage of Fleet:**

Instead of deploying the app on each cluster, Fleet allows you to deploy across all clusters with these steps:

1. Deploy a GitRepo pointing to `https://github.com/rancher/fleet-examples.git` and specify the path `multi-cluster/helm`.
2. Under `multi-cluster/helm`, a Helm chart will deploy the frontend app service and backend database service.
3. The following rule will be defined in `fleet.yaml`:

```yaml
targetCustomizations:
- name: dev
  helm:
    values:
      replication: false
  clusterSelector:
    matchLabels:
      env: dev

- name: test
  helm:
    values:
      replicas: 3
  clusterSelector:
    matchLabels:
      env: test

- name: prod
  helm:
    values:
      serviceType: LoadBalancer
      replicas: 3
  clusterSelector:
    matchLabels:
      env: prod
```

**Result:**

Fleet will deploy the Helm chart with your customized `values.yaml` to the different clusters.

> **Note:** Configuration management is not limited to deployments but can be expanded to general configuration management. Fleet can apply configuration management through customization among any set of clusters automatically.

## Additional Examples

Examples using raw Kubernetes YAML, Helm charts, Kustomize, and combinations of the three are in the [Fleet Examples repo](https://github.com/rancher/fleet-examples/).
@ -1,4 +1,4 @@
# Using Image Scan to Update Container Image References

Image scan in Fleet allows you to scan your image repository, fetch the desired image, and update your git repository, without the need to manually update your manifests.

@ -112,4 +112,4 @@ spec:
```

Try pushing a new image tag, for example `<image>:<new-tag>`. After a short while, a new commit should be pushed to your git repository changing the tag in `deployment.yaml`. Once the change lands in the git repository, Fleet will pick it up and deploy it to your cluster.
import {versions} from '@site/src/fleetVersions';
import CodeBlock from '@theme/CodeBlock';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Installation Details

The installation is broken up into two different use cases: single and multi-cluster.
The single cluster install is for when you wish to use GitOps to manage a single cluster,
in which case you do not need a centralized manager cluster. In the multi-cluster use case
you will set up a centralized manager cluster to which you can register clusters.

If you are just learning Fleet, the single cluster install is the recommended starting
point. You can move from a single cluster to a multi-cluster setup down the line.

Single-cluster is the default installation. The same cluster will run both the Fleet
manager and the Fleet agent. The cluster will communicate with the Git server to
deploy resources to this local cluster. This is the simplest setup and very
useful for dev/test and small scale setups. This use case is supported as a valid
use case for production.

## Prerequisites

<Tabs>
<TabItem value="helm" label="Helm 3" default>
Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server side component, and is
fairly straightforward. To install the Helm 3 CLI follow the <a href="https://helm.sh/docs/intro/install">official install instructions</a>.
</TabItem>
<TabItem value="kubernetes" label="Kubernetes">
Fleet is a controller running on a Kubernetes cluster, so an existing cluster is required. For the
single cluster use case you will install Fleet to the cluster which you intend to manage with GitOps.
Any Kubernetes community supported version of Kubernetes will work, in practice this means {versions.next.kubernetes} or greater.
</TabItem>
</Tabs>

## Install

Install the following two Helm charts.

<Tabs>
<TabItem value="install" label="Install" default>
First install the Fleet CustomResourceDefinitions.
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet-crd`} {versions.next.fleetCRD}
</CodeBlock>

Second install the Fleet controllers.
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet`} {versions.next.fleet}
</CodeBlock>
</TabItem>
<TabItem value="verify" label="Verify">

Fleet should now be ready to use for the single cluster use case. You can check the status of the Fleet controller pods by
running the below commands.

```bash
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```

```
NAME                                READY   STATUS    RESTARTS   AGE
fleet-controller-64f49d756b-n57wq   1/1     Running   0          3m21s
```
</TabItem>
</Tabs>

You can now [register some git repos](./gitrepo-add.md) in the `fleet-local` namespace to start deploying Kubernetes resources.

## Configuration for Multi-Cluster

:::caution
Downstream clusters in Rancher are automatically registered in Fleet. Users can access Fleet under `Continuous Delivery` on Rancher.

The multi-cluster install described below is **only** covered in standalone Fleet, which is untested by Rancher QA.
:::

:::info
The setup is the same as for a single cluster.
After installing the Fleet manager, you will then need to register remote downstream clusters with the Fleet manager.

However, to allow for [manager-initiated registration](./manager-initiated) of downstream clusters, a few extra settings are required. Without the API server URL and the CA, only [agent-initiated registration](./agent-initiated) of downstream clusters is possible.
:::

### API Server URL and CA certificate

In order for your Fleet management installation to work, it is important
that the correct API server URL and CA certificate are configured. The Fleet agents
will communicate with the Kubernetes API server URL. This means the Kubernetes
API server must be accessible to the downstream clusters. You will also need
to obtain the CA certificate of the API server. The easiest way to obtain this information
is typically from your kubeconfig file (`$HOME/.kube/config`). The `server`,
`certificate-authority-data`, or `certificate-authority` fields will have these values.

```yaml title="$HOME/.kube/config"
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://example.com:6443
```

#### Extract CA certificate

Please note that the `certificate-authority-data` field is base64 encoded and will need to be
decoded before you save it into a file. This can be done by saving the base64 encoded contents to
a file and then running

```shell
base64 -d encoded-file > ca.pem
```

Next, retrieve the CA certificate from your kubeconfig.

<Tabs>
<TabItem value="extractca" label="Extract First">
If you have `jq` and `base64` available then this one-liner will pull all CA certificates from your
`KUBECONFIG` and place them in a file named `ca.pem`.

```shell
kubectl config view -o json --raw | jq -r '.clusters[].cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
</TabItem>
<TabItem value="extractcas" label="Multiple Entries">
Or, if you have a multi-cluster setup, you can use this command:

```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["certificate-authority-data"]' | base64 -d > ca.pem
```
</TabItem>
</Tabs>

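To see why the decode step round-trips cleanly, here is a self-contained demonstration with a stand-in certificate (the content below is a placeholder, not a real certificate):

```shell
# Stand-in certificate body; a kubeconfig stores this base64 encoded.
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----\n' > ca-orig.pem
base64 -w0 ca-orig.pem > encoded-file   # like certificate-authority-data
base64 -d encoded-file > ca.pem         # the decode step from above
diff ca-orig.pem ca.pem && echo "round trip OK"
```

`-w0` disables line wrapping in GNU `base64`; macOS users would use `base64 -i` / `-D` instead.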
#### Extract API Server

If you have a multi-cluster setup, you can use this command:

```shell
# replace CLUSTERNAME with the name of the cluster according to your KUBECONFIG
API_SERVER_URL=$(kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="CLUSTERNAME").cluster["server"]')
# Leave empty if your API server is signed by a well known CA
API_SERVER_CA="ca.pem"
```

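If you want to see exactly what that `jq` filter selects, here it is run against a synthetic kubeconfig-shaped JSON file (cluster names and URLs are made up):

```shell
# Synthetic kubeconfig-shaped JSON, for illustration only.
cat > kubeconfig.json <<'EOF'
{"clusters":[
  {"name":"dev","cluster":{"server":"https://dev.example.com:6443"}},
  {"name":"prod","cluster":{"server":"https://prod.example.com:6443"}}
]}
EOF
# Same filter shape as above, selecting one cluster's server field:
jq -r '.clusters[] | select(.name=="prod").cluster["server"]' kubeconfig.json
```

Against a real setup you would pipe `kubectl config view -o json --raw` into `jq` instead of reading a file.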
#### Validate

First validate the server URL is correct.

```shell
curl -fLk "$API_SERVER_URL/version"
```

The output of this command should be JSON with the version of the Kubernetes server, or a `401 Unauthorized` error.
If you do not get either of these results, then please ensure you have the correct URL. The API server port is typically
6443 for Kubernetes.

Next validate that the CA certificate is proper by running the below command. If your API server is signed by a
well known CA then omit the `--cacert "$API_SERVER_CA"` part of the command.

```shell
curl -fL --cacert "$API_SERVER_CA" "$API_SERVER_URL/version"
```

If you get a valid JSON response or a `401 Unauthorized` then it worked. The Unauthorized error is
only because the curl command is not setting proper credentials, but this validates that the TLS
connection works and the `ca.pem` is correct for this URL. If you get a `SSL certificate problem` then
the `ca.pem` is not correct. The contents of the `$API_SERVER_CA` file should look similar to the below:

```pem title="ca.pem"
-----BEGIN CERTIFICATE-----
MIIBVjCB/qADAgECAgEAMAoGCCqGSM49BAMCMCMxITAfBgNVBAMMGGszcy1zZXJ2
ZXItY2FAMTU5ODM5MDQ0NzAeFw0yMDA4MjUyMTIwNDdaFw0zMDA4MjMyMTIwNDda
MCMxITAfBgNVBAMMGGszcy1zZXJ2ZXItY2FAMTU5ODM5MDQ0NzBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABDXlQNkXnwUPdbSgGz5Rk6U9ldGFjF6y1YyF36cNGk4E
0lMgNcVVD9gKuUSXEJk8tzHz3ra/+yTwSL5xQeLHBl+jIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0cAMEQCIFMtZ5gGDoDs
ciRyve+T4xbRNVHES39tjjup/LuN4tAgAiAteeB3jgpTMpZyZcOOHl9gpZ8PgEcN
KDs/pb3fnMTtpA==
-----END CERTIFICATE-----
```

### Install

In the following example it is assumed that the API server URL from the `KUBECONFIG` is `https://example.com:6443`
and that the CA certificate is in the file `ca.pem`. If your API server URL is signed by a well-known CA you can
omit the `apiServerCA` parameter below or just create an empty `ca.pem` file (i.e. `touch ca.pem`).

Set up the environment with your specific values, e.g.:

```shell
API_SERVER_URL="https://example.com:6443"
API_SERVER_CA="ca.pem"
```

Once you have validated the API server URL and API server CA parameters, install the following two
Helm charts.

<Tabs>
<TabItem value="install2" label="Install" default>
First install the Fleet CustomResourceDefinitions.
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet-crd`} {versions.next.fleetCRD}
</CodeBlock>

Second install the Fleet controllers.
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  --set apiServerURL="$API_SERVER_URL" \\
  --set-file apiServerCA="$API_SERVER_CA" \\
  fleet`} {versions.next.fleet}
</CodeBlock>
</TabItem>

<TabItem value="verify2" label="Verify">
Fleet should be ready to use. You can check the status of the Fleet controller pods by running the below commands.

```bash
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```

```
NAME                                READY   STATUS    RESTARTS   AGE
fleet-controller-64f49d756b-n57wq   1/1     Running   0          3m21s
```
</TabItem>
</Tabs>

At this point the Fleet manager should be ready. You can now [register clusters](./cluster-overview.md) and [git repos](./gitrepo-add.md) with
the Fleet manager.

# Manager Initiated

Refer to the [overview page](./cluster-overview.md#manager-initiated-registration) for background information on the manager initiated registration style.
If you are using Fleet standalone without Rancher, it must be installed as described in [installation details](installation.md).

## Kubeconfig Secret

## Example

### Kubeconfig Secret

```yaml
kind: Secret
apiVersion: v1
data:
  value: YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIHNlcnZlcjogaHR0cHM6Ly9leGFtcGxlLmNvbTo2NDQzCiAgbmFtZTogY2x1c3Rlcgpjb250ZXh0czoKLSBjb250ZXh0OgogICAgY2x1c3RlcjogY2x1c3RlcgogICAgdXNlcjogdXNlcgogIG5hbWU6IGRlZmF1bHQKY3VycmVudC1jb250ZXh0OiBkZWZhdWx0CmtpbmQ6IENvbmZpZwpwcmVmZXJlbmNlczoge30KdXNlcnM6Ci0gbmFtZTogdXNlcgogIHVzZXI6CiAgICB0b2tlbjogc29tZXRoaW5nCg==
```

### Cluster
```yaml
apiVersion: fleet.cattle.io/v1alpha1
```
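The `value` field in the Kubeconfig Secret above is nothing more than a base64 encoded kubeconfig. A sketch of producing it from a file (the kubeconfig content and names below are minimal stand-ins, not a working config):

```shell
# Minimal stand-in kubeconfig; use the real downstream cluster's kubeconfig.
cat > downstream-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
EOF
# The base64 string to paste into the secret's `value` field:
base64 -w0 downstream-kubeconfig
# Alternatively, let kubectl do the encoding (secret name/namespace illustrative):
# kubectl -n clusters create secret generic my-cluster-kubeconfig --from-file=value=downstream-kubeconfig
```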
The primary types are all scoped to a namespace. All selectors for `GitRepo` targets will be evaluated against
the `Clusters` and `ClusterGroups` in the same namespaces. This means that if you give `create` or `update` privileges
to a `GitRepo` type in a namespace, that end user can modify the selector to match any cluster in that namespace.
In practice, this means that if you want two teams to self-manage their own `GitRepo` registrations without being
able to target each other's clusters, they should be in different namespaces.

## Special Namespaces

An overview of the [namespaces](namespaces.md) used by fleet and their resources.

### fleet-local (local workspace, cluster registration namespace)

The **fleet-local** namespace is a special namespace used for the single cluster use case or to bootstrap

This namespace holds secrets for the cluster registration process. It should contain no other resources in it,
especially secrets.

### Cluster Namespaces

For every cluster that is registered, a namespace is created by the Fleet manager for that cluster.
These namespaces are named in the form `cluster-${namespace}-${cluster}-${random}`. The purpose of this
namespace is that all `BundleDeployments` for that cluster are put into this namespace and
then the downstream cluster is given access to watch and update `BundleDeployments` in that namespace only.
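For example, the naming pattern expands like this (the values below, including the random suffix, are made up; Fleet generates its own):

```shell
# Illustrative expansion of the cluster namespace naming pattern.
namespace="clusters"; cluster="my-cluster"; random="1a2b3c4d"
echo "cluster-${namespace}-${cluster}-${random}"
# prints: cluster-clusters-my-cluster-1a2b3c4d
```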
## Cross Namespace Deployments

It is possible to create a GitRepo that will deploy across namespaces. The primary purpose of this is so that a
central privileged team can manage common configuration for many clusters that are managed by different teams.

```yaml
defaultClientSecretName: ""
defaultServiceAccount: ""
```

### Allowed Target Namespaces

This can be used to limit a deployment to a set of namespaces on a downstream cluster.
If an `allowedTargetNamespaces` restriction is present, all `GitRepos` must
import {versions} from '@site/src/fleetVersions';
import CodeBlock from '@theme/CodeBlock';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Quick Start

Who needs documentation, let's just run this thing!

## Install

Get helm if you don't have it. Helm 3 is just a CLI and won't do bad insecure
things to your cluster.

<Tabs>
<TabItem value="linux" label="Linux/Mac" default>
<CodeBlock language="bash">
brew install helm
</CodeBlock>
</TabItem>
<TabItem value="windows" label="Windows">
<CodeBlock language="bash">
choco install kubernetes-helm
</CodeBlock>
</TabItem>
</Tabs>

Install the Fleet Helm charts (there are two because we separate out the CRDs for ultimate flexibility).

<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet`} {versions.next.fleet}
</CodeBlock>

## Add a Git Repo to Watch

Change `spec.repo` to your git repo of choice. Kubernetes manifest files that should
be deployed should be in `/manifests` in your repo.

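A minimal GitRepo sketch you could adapt (the name, repo URL and paths below are placeholders; apply it with kubectl against your own cluster):

```shell
# Write out an illustrative GitRepo manifest.
cat > gitrepo.yaml <<'EOF'
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  namespace: fleet-local
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - simple
EOF
cat gitrepo.yaml
# kubectl apply -f gitrepo.yaml   # requires a cluster with Fleet installed
```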
# Bundle Lifecycle

A bundle is an internal resource used for the orchestration of resources from git. When a GitRepo is scanned, it will produce one or more bundles.

To demonstrate the lifecycle of a Fleet bundle, we will use [multi-cluster/helm](https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm) as a case study.

1. The user creates a [GitRepo](./gitrepo-add.md#create-gitrepo-instance) that points to the multi-cluster/helm repository.
2. The `gitjob-controller` syncs changes from the GitRepo and detects changes from polling or a [webhook event](./webhook.md). With every commit change, the `gitjob-controller` creates a job that clones the git repository, reads content from the repo such as `fleet.yaml` and other manifests, and creates the Fleet [bundle](./cluster-bundles-state.md#bundles).

   >**Note:** The job pod with the image name `rancher/tekton-utils` will be in the same namespace as the GitRepo.

3. The `fleet-controller` then syncs changes from the bundle. According to the targets, the `fleet-controller` creates `BundleDeployment` resources, which are a combination of a bundle and a target cluster.
4. The `fleet-agent` then pulls the `BundleDeployment` from the Fleet control plane. The agent deploys bundle manifests as a [Helm chart](https://helm.sh/docs/intro/install/) from the `BundleDeployment` into the downstream clusters.
5. The `fleet-agent` continues to monitor the application bundle and reports statuses back in the following order: `BundleDeployment` > `Bundle` > `GitRepo` > `Cluster`.

This diagram shows the different rendering stages a bundle goes through until deployment.

![Bundle rendering stages](/img/FleetBundleStages.svg)

# Cluster Registration Internals

Detailed analysis of the registration process for clusters. This shows the interaction of controllers, resources and service accounts during the registration of a new downstream cluster or the local cluster.
It's important to note that there are multiple ways to start this:

* Creating a bootstrap config. Fleet does this for the local agent.
* Creating a `Cluster` resource with a kubeconfig. Rancher does this for downstream clusters.
* Creating a `Cluster` resource with an id.
* Creating a `ClusterRegistration` resource.

# Custom Resources

This shows the resources, including the internal ones, involved in creating a deployment from a git repository.

@ -1,65 +0,0 @@
import {versions} from '@site/src/fleetVersions';
import CodeBlock from '@theme/CodeBlock';

# Single Cluster Install



In this use case you have only one cluster. The cluster will run both the Fleet
manager and the Fleet agent. The cluster will communicate with the Git server to
deploy resources to this local cluster. This is the simplest setup and very
useful for dev/test and small scale setups. This use case is supported as a valid
use case for production.

## Prerequisites

### Helm 3

Fleet is distributed as a Helm chart. Helm 3 is a CLI, has no server-side component, and is
fairly straightforward. To install the Helm 3 CLI follow the
[official install instructions](https://helm.sh/docs/intro/install/). The TL;DR is

macOS
```
brew install helm
```
Windows
```
choco install kubernetes-helm
```

### Kubernetes

Fleet is a controller running on a Kubernetes cluster, so an existing cluster is required. For the
single cluster use case you will install Fleet to the cluster which you intend to manage with GitOps.
Any Kubernetes community-supported version of Kubernetes will work; in practice this means 1.15 or greater.

## Install

Install the following two Helm charts.

First, install the Fleet CustomResourceDefinitions.
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet-crd`} {versions.next.fleetCRD}
</CodeBlock>

Second, install the Fleet controllers.
<CodeBlock language="bash">
{`helm -n cattle-fleet-system install --create-namespace --wait \\
  fleet`} {versions.next.fleet}
</CodeBlock>

Fleet should now be ready to use for a single cluster. You can check the status of the Fleet controller pods by
running the commands below.

```shell
kubectl -n cattle-fleet-system logs -l app=fleet-controller
kubectl -n cattle-fleet-system get pods -l app=fleet-controller
```

```
NAME                                READY   STATUS    RESTARTS   AGE
fleet-controller-64f49d756b-n57wq   1/1     Running   0          3m21s
```

You can now [register some git repos](./gitrepo-add.md) in the `fleet-local` namespace to start deploying Kubernetes resources.

@ -232,3 +232,8 @@ You can force a redeployment of an agent for a given cluster by setting `redeployAgentGeneration`.
```sh
kubectl patch clusters.fleet.cattle.io -n fleet-local local --type=json -p '[{"op": "add", "path": "/spec/redeployAgentGeneration", "value": -1}]'
```

+### Nested GitRepo CRs
+
+Managing Fleet within Fleet (nested `GitRepo` usage) is not currently supported. We will update the documentation if support becomes available.

@ -0,0 +1,467 @@
import CodeBlock from '@theme/CodeBlock';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Creating a Deployment

To deploy workloads onto downstream clusters, first create a Git repo, then create a GitRepo resource and apply it.

This tutorial uses the [fleet-examples](https://github.com/rancher/fleet-examples) repository.

:::note
For more details on how to structure the repository and configure the deployment of each bundle see [GitRepo Contents](./gitrepo-structure).
For more details on the options that are available per Git repository see [Adding a GitRepo](./gitrepo-add).
:::

## Single-Cluster Examples

All examples will deploy content to clusters with no per-cluster customizations. This is a good starting point to understand the basics of structuring Git repos for Fleet.

<Tabs groupId="examples">
<TabItem value="helm" label="Helm" default>

An example using Helm. We are deploying the <a href="https://github.com/rancher/fleet-examples/tree/master/single-cluster/helm">helm example</a> to the local cluster.

The repository contains a helm chart and an optional `fleet.yaml` to configure the deployment:

```yaml title="fleet.yaml"
namespace: fleet-helm-example

# Custom helm options
helm:
  # The release name to use. If empty, a generated release name will be used
  releaseName: guestbook

  # The directory of the chart in the repo. Also any valid go-getter supported
  # URL can be used there to specify where to download the chart from.
  # If repo below is set, this value is the chart name in the repo
  chart: ""

  # An https URL to a valid Helm repository to download the chart from
  repo: ""

  # Used if repo is set to look up the version of the chart
  version: ""

  # Force recreate resources that cannot be updated
  force: false

  # How long for helm to wait for the release to be active. If the value
  # is less than or equal to zero, we will not wait in Helm
  timeoutSeconds: 0

  # Custom values that will be passed as values.yaml to the installation
  values:
    replicas: 2
```

To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-local` namespace contains the local cluster resource. The local fleet-agent will create the deployment in the `fleet-helm-example` namespace.

```bash
kubectl apply -n fleet-local -f - <<EOF
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - single-cluster/helm
EOF
```

</TabItem>
<TabItem value="helm-multi-chart" label="Helm Multi Chart" default>

An <a href="https://github.com/rancher/fleet-examples/blob/master/single-cluster/helm-multi-chart">example deploying multiple charts</a> from a single repo. This is similar to the previous example, but will deploy three helm charts from the sub folders, each configured by its own `fleet.yaml`.

```bash
kubectl apply -n fleet-local -f - <<EOF
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-multi-chart
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - single-cluster/helm-multi-chart
EOF
```

</TabItem>
<TabItem value="helm-kustomize" label="Helm & Kustomize" default>

An example using <a href="https://github.com/rancher/fleet-examples/blob/master/single-cluster/helm-kustomize">Kustomize to modify a third party Helm chart</a>.
It will deploy the Kubernetes sample guestbook application, packaged as a Helm chart downloaded from a third party source, and will modify the helm chart using Kustomize. The app will be deployed into the `fleet-helm-kustomize-example` namespace.

```bash
kubectl apply -n fleet-local -f - <<EOF
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-kustomize
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - single-cluster/helm-kustomize
EOF
```

</TabItem>
<TabItem value="kustomize" label="Kustomize" default>

An <a href="https://github.com/rancher/fleet-examples/blob/master/single-cluster/kustomize">example using Kustomize</a>.

Note that the `fleet.yaml` has a `kustomize:` key to specify the path to the required `kustomization.yaml`:

```yaml title="fleet.yaml"
kustomize:
  # To use a kustomization.yaml different from the one in the root folder
  dir: ""
```

```bash
kubectl apply -n fleet-local -f - <<EOF
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: kustomize
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - single-cluster/kustomize
EOF
```

</TabItem>
<TabItem value="manifests" label="Manifests" default>

An <a href="https://github.com/rancher/fleet-examples/tree/master/single-cluster/manifests">example using raw Kubernetes YAML</a>.

```bash
kubectl apply -n fleet-local -f - <<EOF
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: manifests
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - single-cluster/manifests
EOF
```

</TabItem>
</Tabs>

## Multi-Cluster Examples

The examples below will deploy a Git repo to multiple clusters at once and configure the app differently for each target.

<Tabs groupId="examples">
<TabItem value="helm" label="Helm" default>

An example using Helm. We are deploying the <a href="https://github.com/rancher/fleet-examples/tree/master/multi-cluster/helm">helm example</a> and customizing it per target cluster.

The repository contains a helm chart and an optional `fleet.yaml` to configure the deployment. The `fleet.yaml` is used to configure different deployment options, depending on the cluster's labels:

```yaml title="fleet.yaml"
namespace: fleet-mc-helm-example
targetCustomizations:
- name: dev
  helm:
    values:
      replication: false
  clusterSelector:
    matchLabels:
      env: dev

- name: test
  helm:
    values:
      replicas: 3
  clusterSelector:
    matchLabels:
      env: test

- name: prod
  helm:
    values:
      serviceType: LoadBalancer
      replicas: 3
  clusterSelector:
    matchLabels:
      env: prod
```

To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the fleet-default namespace whose cluster resource has labels matching any entry under `targets:`.

```yaml title="gitrepo.yaml"
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - multi-cluster/helm
  targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev

  - name: test
    clusterSelector:
      matchLabels:
        env: test

  - name: prod
    clusterSelector:
      matchLabels:
        env: prod
```

By applying the GitRepo resource to the upstream cluster, Fleet will start to monitor the repository and create deployments:

<CodeBlock language="bash">
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
</CodeBlock>

</TabItem>
<TabItem value="helm-external" label="Helm External" default>

An <a href="https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm-external">example using a Helm chart that is downloaded from a third party source and customized per target cluster</a>. The customization is similar to the previous example.

To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the fleet-default namespace whose cluster resource has labels matching any entry under `targets:`.

```yaml title="gitrepo.yaml"
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-external
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - multi-cluster/helm-external
  targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev

  - name: test
    clusterSelector:
      matchLabels:
        env: test

  - name: prod
    clusterSelector:
      matchLabels:
        env: prod
```

By applying the GitRepo resource to the upstream cluster, Fleet will start to monitor the repository and create deployments:

<CodeBlock language="bash">
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
</CodeBlock>

</TabItem>
<TabItem value="helm-kustomize" label="Helm & Kustomize" default>

An example using <a href="https://github.com/rancher/fleet-examples/blob/master/multi-cluster/helm-kustomize">Kustomize to modify a third party Helm chart</a>.
It will deploy the Kubernetes sample guestbook application, packaged as a Helm chart downloaded from a third party source, and will modify the helm chart using Kustomize. The app will be deployed into the `fleet-helm-kustomize-example` namespace.

The application will be customized as follows per environment:

* Dev clusters: Only the redis leader is deployed and not the followers.
* Test clusters: Scale the frontend deployment to 3.
* Prod clusters: Scale the frontend deployment to 3 and set the service type to LoadBalancer.

The `fleet.yaml` is used to control which overlays are used, depending on the cluster's labels:

```yaml title="fleet.yaml"
namespace: fleet-mc-kustomize-example
targetCustomizations:
- name: dev
  clusterSelector:
    matchLabels:
      env: dev
  kustomize:
    dir: overlays/dev

- name: test
  clusterSelector:
    matchLabels:
      env: test
  kustomize:
    dir: overlays/test

- name: prod
  clusterSelector:
    matchLabels:
      env: prod
  kustomize:
    dir: overlays/prod
```

To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the fleet-default namespace whose cluster resource has labels matching any entry under `targets:`.

```yaml title="gitrepo.yaml"
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: helm-kustomize
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - multi-cluster/helm-kustomize
  targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev

  - name: test
    clusterSelector:
      matchLabels:
        env: test

  - name: prod
    clusterSelector:
      matchLabels:
        env: prod
```

By applying the GitRepo resource to the upstream cluster, Fleet will start to monitor the repository and create deployments:

<CodeBlock language="bash">
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
</CodeBlock>

</TabItem>
<TabItem value="kustomize" label="Kustomize" default>

An <a href="https://github.com/rancher/fleet-examples/blob/master/multi-cluster/kustomize">example using Kustomize</a> and customizing it per target cluster.

The customization in `fleet.yaml` is identical to the "Helm & Kustomize" example.

To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the fleet-default namespace whose cluster resource has labels matching any entry under `targets:`.

```bash
kubectl apply -n fleet-default -f - <<EOF
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: kustomize
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - multi-cluster/kustomize
  targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev

  - name: test
    clusterSelector:
      matchLabels:
        env: test

  - name: prod
    clusterSelector:
      matchLabels:
        env: prod
EOF
```

After applying the GitRepo resource, Fleet will start to monitor the repository and create deployments.

</TabItem>
<TabItem value="manifests" label="Manifests" default>

An <a href="https://github.com/rancher/fleet-examples/tree/master/multi-cluster/manifests">example using raw Kubernetes YAML and customizing it per target cluster</a>.
The application will be customized as follows per environment:

* Dev clusters: Only the redis leader is deployed and not the followers.
* Test clusters: Scale the frontend deployment to 3.
* Prod clusters: Scale the frontend deployment to 3 and set the service type to LoadBalancer.

The `fleet.yaml` is used to control which `yaml` overlays are used, depending on the cluster's labels:

```yaml title="fleet.yaml"
namespace: fleet-mc-manifest-example
targetCustomizations:
- name: dev
  clusterSelector:
    matchLabels:
      env: dev
  yaml:
    overlays:
    # Refers to overlays/noreplication folder
    - noreplication

- name: test
  clusterSelector:
    matchLabels:
      env: test
  yaml:
    overlays:
    # Refers to overlays/scale3 folder
    - scale3

- name: prod
  clusterSelector:
    matchLabels:
      env: prod
  yaml:
    # Refers to overlays/servicelb, scale3 folders
    overlays:
    - servicelb
    - scale3
```

To create the deployment, we apply the custom resource to the upstream cluster. The `fleet-default` namespace, by default, contains the downstream cluster resources. The chart will be deployed to all clusters in the fleet-default namespace whose cluster resource has labels matching any entry under `targets:`.

```yaml title="gitrepo.yaml"
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: manifests
  namespace: fleet-default
spec:
  repo: https://github.com/rancher/fleet-examples
  paths:
  - multi-cluster/manifests
  targets:
  - name: dev
    clusterSelector:
      matchLabels:
        env: dev

  - name: test
    clusterSelector:
      matchLabels:
        env: test

  - name: prod
    clusterSelector:
      matchLabels:
        env: prod
```

<CodeBlock language="bash">
{`kubectl apply -n fleet-default -f gitrepo.yaml`}
</CodeBlock>

</TabItem>
</Tabs>

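In the multi-cluster examples above, targeting relies on labels set on the downstream clusters' `Cluster` resources in the `fleet-default` namespace. As a sketch, a cluster that matches the `dev` target could carry a label like this (the cluster name is illustrative, not taken from the docs):

```yaml
kind: Cluster
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: cluster-one        # illustrative name of a registered cluster
  namespace: fleet-default
  labels:
    env: dev               # matched by clusterSelector.matchLabels
```

Labels can also be added to an already registered cluster, e.g. with `kubectl label clusters.fleet.cattle.io -n fleet-default cluster-one env=dev`.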
@ -7,4 +7,8 @@ two commands:
```shell
helm -n cattle-fleet-system uninstall fleet
helm -n cattle-fleet-system uninstall fleet-crd
```

+:::caution
+Uninstalling the CRDs will remove all deployed workloads.
+:::

@ -1,4 +1,4 @@
-# Webhook
+# Using Webhooks Instead of Polling

By default, Fleet utilizes polling (default: 15 seconds) to pull from a Git repo. However, this can be configured to utilize a webhook instead. Fleet currently supports GitHub,
GitLab, Bitbucket, Bitbucket Server and Gogs.

@ -67,4 +67,4 @@ For example, to create a secret containing a GitHub secret to validate the webhook:
kubectl create secret generic gitjob-webhook -n cattle-fleet-system --from-literal=github=webhooksecretvalue
```

### 4. Go to your git provider and test the connection. You should get an HTTP response code.

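Once a webhook delivers push events, periodic polling is usually redundant. A hedged sketch of switching a repo to webhook-only updates, assuming the GitRepo spec's `disablePolling` field (check the GitRepo reference for your Fleet version; name and namespace are illustrative):

```yaml
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: example
  namespace: fleet-local
spec:
  repo: https://github.com/rancher/fleet-examples
  # Rely on the webhook instead of the 15-second polling loop
  disablePolling: true
```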
@ -14,8 +14,8 @@
     "write-heading-ids": "docusaurus write-heading-ids"
   },
   "dependencies": {
-    "@docusaurus/core": "^2.2.0",
+    "@docusaurus/core": "^2.3.1",
-    "@docusaurus/preset-classic": "^2.2.0",
+    "@docusaurus/preset-classic": "^2.3.1",
     "@mdx-js/react": "^1.6.22",
     "clsx": "^1.2.1",
     "prism-react-renderer": "^1.3.5",
@ -23,7 +23,7 @@
     "react-dom": "^17.0.2"
   },
   "devDependencies": {
-    "@docusaurus/module-type-aliases": "^2.2.0"
+    "@docusaurus/module-type-aliases": "^2.3.1"
   },
   "browserslist": {
     "production": [

83 sidebars.js

@ -1,71 +1,65 @@
 module.exports = {
   docs: [
     'index',
-    'quickstart',
-    'concepts',
-    'architecture',
-    'examples',
     {
       type: 'category',
-      label: 'Operator Guide',
+      label: 'Tutorials',
+      collapsed: false,
       items:[
+        'quickstart',
+        'tut-deployment',
+        {type:'doc', id:'uninstall'},
+      ],
+    },
+    {
+      type: 'category',
+      label: 'Explanations',
+      collapsed: false,
+      items:[
+        'architecture',
+        'concepts',
+        'ref-bundle-stages',
+        'ref-components',
+        'namespaces',
+        'ref-resources',
+      ],
+    },
+    {
+      type: 'category',
+      label: 'How-tos for Operators',
+      collapsed: false,
+      items:[
+        {type: 'doc', id: 'installation'},
         {
           'Registering Clusters':
           [
             {type: 'doc', id: 'cluster-overview'},
-            {type: 'doc', id: 'cluster-tokens'},
             {type: 'doc', id: 'agent-initiated'},
             {type: 'doc', id: 'manager-initiated'},
           ],
         },
         {type:'doc', id:'cluster-group'},
-        'namespaces',
         'multi-tenancy',
       ],
     },
     {
       type: 'category',
-      label: 'User Guide',
+      label: 'How-tos for Users',
+      collapsed: false,
       items:[
         {type:'doc', id:'gitrepo-add'},
         {type:'doc', id:'gitrepo-structure'},
         {type:'doc', id:'gitrepo-targets'},
         {type:'doc', id:'bundle-diffs'},
         {type:'doc', id:'webhook'},
         {type:'doc', id:'imagescan'},
       ],
-    },
-    'troubleshooting',
-    {
-      type: 'category',
-      label: 'Advanced Users',
-      items:[
-        'advanced-users',
-        {
-          'Installation':
-          [
-            {type:'doc', id:'installation'},
-            {type:'doc', id:'single-cluster-install'},
-            {type:'doc', id:'multi-cluster-install'},
-            {type:'doc', id:'uninstall'},
-          ],
-        },
-      ],
     },
     {
       type: 'category',
       label: 'Reference',
+      collapsed: false,
       items:[
-        {type:'doc', id:'cluster-bundles-state'},
-        'ref-crd-gitrepo',
-        'ref-fleet-yaml',
-        'ref-bundle-stages',
-        'ref-components',
-        'ref-namespaces',
-        'ref-resources',
-        'ref-configuration',
-        'ref-registration',
-        "ref-crds",
         {
           'CLI':
           [
@ -80,7 +74,14 @@ module.exports = {
             {type: 'doc', id: 'cli/fleet-controller/fleet-manager'},
           ],
         },
+        {type:'doc', id:'cluster-bundles-state'},
+        'ref-registration',
+        'ref-configuration',
+        "ref-crds",
+        'ref-fleet-yaml',
+        'ref-crd-gitrepo',
       ],
     },
+    'troubleshooting',
   ],
 };

@ -8,5 +8,6 @@ export const versions = {
     "fleet": "https://github.com/rancher/fleet/releases/download/v0.6.0-rc.4/fleet-0.6.0-rc.4.tgz",
     "fleetAgent": "https://github.com/rancher/fleet/releases/download/v0.6.0-rc.4/fleet-agent-0.6.0-rc.4.tgz",
     "fleetCRD": "https://github.com/rancher/fleet/releases/download/v0.6.0-rc.4/fleet-crd-0.6.0-rc.4.tgz",
+    "kubernetes": "1.20.5",
   },
 }

364
yarn.lock
364
yarn.lock
|
|
@ -1195,10 +1195,10 @@
|
||||||
"@docsearch/css" "3.3.0"
|
"@docsearch/css" "3.3.0"
|
||||||
algoliasearch "^4.0.0"
|
algoliasearch "^4.0.0"
|
||||||
|
|
||||||
"@docusaurus/core@2.2.0", "@docusaurus/core@^2.2.0":
|
"@docusaurus/core@2.3.1", "@docusaurus/core@^2.3.1":
|
||||||
version "2.2.0"
|
version "2.3.1"
|
||||||
resolved "https://registry.yarnpkg.com/@docusaurus/core/-/core-2.2.0.tgz#64c9ee31502c23b93c869f8188f73afaf5fd4867"
|
resolved "https://registry.yarnpkg.com/@docusaurus/core/-/core-2.3.1.tgz#32849f2ffd2f086a4e55739af8c4195c5eb386f2"
|
||||||
integrity sha512-Vd6XOluKQqzG12fEs9prJgDtyn6DPok9vmUWDR2E6/nV5Fl9SVkhEQOBxwObjk3kQh7OY7vguFaLh0jqdApWsA==
|
integrity sha512-0Jd4jtizqnRAr7svWaBbbrCCN8mzBNd2xFLoT/IM7bGfFie5y58oz97KzXliwiLY3zWjqMXjQcuP1a5VgCv2JA==
|
||||||
dependencies:
|
dependencies:
|
||||||
"@babel/core" "^7.18.6"
|
"@babel/core" "^7.18.6"
|
||||||
"@babel/generator" "^7.18.7"
|
"@babel/generator" "^7.18.7"
|
||||||
|
|
@ -1210,13 +1210,13 @@
|
||||||
"@babel/runtime" "^7.18.6"
|
"@babel/runtime" "^7.18.6"
|
||||||
"@babel/runtime-corejs3" "^7.18.6"
|
"@babel/runtime-corejs3" "^7.18.6"
|
||||||
"@babel/traverse" "^7.18.8"
|
"@babel/traverse" "^7.18.8"
|
||||||
"@docusaurus/cssnano-preset" "2.2.0"
|
"@docusaurus/cssnano-preset" "2.3.1"
|
||||||
"@docusaurus/logger" "2.2.0"
|
"@docusaurus/logger" "2.3.1"
|
||||||
"@docusaurus/mdx-loader" "2.2.0"
|
"@docusaurus/mdx-loader" "2.3.1"
|
||||||
"@docusaurus/react-loadable" "5.5.2"
|
"@docusaurus/react-loadable" "5.5.2"
|
||||||
"@docusaurus/utils" "2.2.0"
|
"@docusaurus/utils" "2.3.1"
|
||||||
"@docusaurus/utils-common" "2.2.0"
|
"@docusaurus/utils-common" "2.3.1"
|
||||||
"@docusaurus/utils-validation" "2.2.0"
|
"@docusaurus/utils-validation" "2.3.1"
|
||||||
"@slorber/static-site-generator-webpack-plugin" "^4.0.7"
|
"@slorber/static-site-generator-webpack-plugin" "^4.0.7"
|
||||||
"@svgr/webpack" "^6.2.1"
|
"@svgr/webpack" "^6.2.1"
|
||||||
autoprefixer "^10.4.7"
|
autoprefixer "^10.4.7"
|
||||||
|
|
@ -1237,7 +1237,7 @@
|
||||||
del "^6.1.1"
|
del "^6.1.1"
|
||||||
detect-port "^1.3.0"
|
detect-port "^1.3.0"
|
||||||
escape-html "^1.0.3"
|
escape-html "^1.0.3"
|
||||||
eta "^1.12.3"
|
eta "^2.0.0"
|
||||||
file-loader "^6.2.0"
|
file-loader "^6.2.0"
|
||||||
fs-extra "^10.1.0"
|
fs-extra "^10.1.0"
|
||||||
html-minifier-terser "^6.1.0"
|
html-minifier-terser "^6.1.0"
|
||||||
|
|
@@ -1272,33 +1272,33 @@
     webpack-merge "^5.8.0"
     webpackbar "^5.0.2"
 
-"@docusaurus/cssnano-preset@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/cssnano-preset/-/cssnano-preset-2.2.0.tgz#fc05044659051ae74ab4482afcf4a9936e81d523"
-  integrity sha512-mAAwCo4n66TMWBH1kXnHVZsakW9VAXJzTO4yZukuL3ro4F+JtkMwKfh42EG75K/J/YIFQG5I/Bzy0UH/hFxaTg==
+"@docusaurus/cssnano-preset@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/cssnano-preset/-/cssnano-preset-2.3.1.tgz#e042487655e3e062417855e12edb3f6eee8f5ecb"
+  integrity sha512-7mIhAROES6CY1GmCjR4CZkUfjTL6B3u6rKHK0ChQl2d1IevYXq/k/vFgvOrJfcKxiObpMnE9+X6R2Wt1KqxC6w==
   dependencies:
     cssnano-preset-advanced "^5.3.8"
     postcss "^8.4.14"
     postcss-sort-media-queries "^4.2.1"
     tslib "^2.4.0"
 
-"@docusaurus/logger@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/logger/-/logger-2.2.0.tgz#ea2f7feda7b8675485933b87f06d9c976d17423f"
-  integrity sha512-DF3j1cA5y2nNsu/vk8AG7xwpZu6f5MKkPPMaaIbgXLnWGfm6+wkOeW7kNrxnM95YOhKUkJUophX69nGUnLsm0A==
+"@docusaurus/logger@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/logger/-/logger-2.3.1.tgz#d76aefb452e3734b4e0e645efc6cbfc0aae52869"
+  integrity sha512-2lAV/olKKVr9qJhfHFCaqBIl8FgYjbUFwgUnX76+cULwQYss+42ZQ3grHGFvI0ocN2X55WcYe64ellQXz7suqg==
   dependencies:
     chalk "^4.1.2"
     tslib "^2.4.0"
 
-"@docusaurus/mdx-loader@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/mdx-loader/-/mdx-loader-2.2.0.tgz#fd558f429e5d9403d284bd4214e54d9768b041a0"
-  integrity sha512-X2bzo3T0jW0VhUU+XdQofcEeozXOTmKQMvc8tUnWRdTnCvj4XEcBVdC3g+/jftceluiwSTNRAX4VBOJdNt18jA==
+"@docusaurus/mdx-loader@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/mdx-loader/-/mdx-loader-2.3.1.tgz#7ec6acee5eff0a280e1b399ea4dd690b15a793f7"
+  integrity sha512-Gzga7OsxQRpt3392K9lv/bW4jGppdLFJh3luKRknCKSAaZrmVkOQv2gvCn8LAOSZ3uRg5No7AgYs/vpL8K94lA==
   dependencies:
     "@babel/parser" "^7.18.8"
     "@babel/traverse" "^7.18.8"
-    "@docusaurus/logger" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
+    "@docusaurus/logger" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
     "@mdx-js/mdx" "^1.6.22"
     escape-html "^1.0.3"
     file-loader "^6.2.0"
@@ -1313,13 +1313,13 @@
     url-loader "^4.1.1"
     webpack "^5.73.0"
 
-"@docusaurus/module-type-aliases@2.2.0", "@docusaurus/module-type-aliases@^2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/module-type-aliases/-/module-type-aliases-2.2.0.tgz#1e23e54a1bbb6fde1961e4fa395b1b69f4803ba5"
-  integrity sha512-wDGW4IHKoOr9YuJgy7uYuKWrDrSpsUSDHLZnWQYM9fN7D5EpSmYHjFruUpKWVyxLpD/Wh0rW8hYZwdjJIQUQCQ==
+"@docusaurus/module-type-aliases@2.3.1", "@docusaurus/module-type-aliases@^2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/module-type-aliases/-/module-type-aliases-2.3.1.tgz#986186200818fed999be2e18d6c698eaf4683a33"
+  integrity sha512-6KkxfAVOJqIUynTRb/tphYCl+co3cP0PlHiMDbi+SzmYxMdgIrwYqH9yAnGSDoN6Jk2ZE/JY/Azs/8LPgKP48A==
   dependencies:
     "@docusaurus/react-loadable" "5.5.2"
-    "@docusaurus/types" "2.2.0"
+    "@docusaurus/types" "2.3.1"
     "@types/history" "^4.7.11"
     "@types/react" "*"
     "@types/react-router-config" "*"
@@ -1327,18 +1327,18 @@
     react-helmet-async "*"
     react-loadable "npm:@docusaurus/react-loadable@5.5.2"
 
-"@docusaurus/plugin-content-blog@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.2.0.tgz#dc55982e76771f4e678ac10e26d10e1da2011dc1"
-  integrity sha512-0mWBinEh0a5J2+8ZJXJXbrCk1tSTNf7Nm4tYAl5h2/xx+PvH/Bnu0V+7mMljYm/1QlDYALNIIaT/JcoZQFUN3w==
+"@docusaurus/plugin-content-blog@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-blog/-/plugin-content-blog-2.3.1.tgz#236b8ee4f20f7047aa9c285ae77ae36683ad48a3"
+  integrity sha512-f5LjqX+9WkiLyGiQ41x/KGSJ/9bOjSD8lsVhPvYeUYHCtYpuiDKfhZE07O4EqpHkBx4NQdtQDbp+aptgHSTuiw==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/logger" "2.2.0"
-    "@docusaurus/mdx-loader" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
-    "@docusaurus/utils-common" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/logger" "2.3.1"
+    "@docusaurus/mdx-loader" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
+    "@docusaurus/utils-common" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     cheerio "^1.0.0-rc.12"
     feed "^4.2.2"
     fs-extra "^10.1.0"
@@ -1349,18 +1349,18 @@
     utility-types "^3.10.0"
     webpack "^5.73.0"
 
-"@docusaurus/plugin-content-docs@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.2.0.tgz#0fcb85226fcdb80dc1e2d4a36ef442a650dcc84d"
-  integrity sha512-BOazBR0XjzsHE+2K1wpNxz5QZmrJgmm3+0Re0EVPYFGW8qndCWGNtXW/0lGKhecVPML8yyFeAmnUCIs7xM2wPw==
+"@docusaurus/plugin-content-docs@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-docs/-/plugin-content-docs-2.3.1.tgz#feae1555479558a55182f22f8a07acc5e0d7444d"
+  integrity sha512-DxztTOBEruv7qFxqUtbsqXeNcHqcVEIEe+NQoI1oi2DBmKBhW/o0MIal8lt+9gvmpx3oYtlwmLOOGepxZgJGkw==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/logger" "2.2.0"
-    "@docusaurus/mdx-loader" "2.2.0"
-    "@docusaurus/module-type-aliases" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/logger" "2.3.1"
+    "@docusaurus/mdx-loader" "2.3.1"
+    "@docusaurus/module-type-aliases" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     "@types/react-router-config" "^5.0.6"
     combine-promises "^1.1.0"
     fs-extra "^10.1.0"
@@ -1371,84 +1371,95 @@
     utility-types "^3.10.0"
     webpack "^5.73.0"
 
-"@docusaurus/plugin-content-pages@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.2.0.tgz#e3f40408787bbe229545dd50595f87e1393bc3ae"
-  integrity sha512-+OTK3FQHk5WMvdelz8v19PbEbx+CNT6VSpx7nVOvMNs5yJCKvmqBJBQ2ZSxROxhVDYn+CZOlmyrC56NSXzHf6g==
+"@docusaurus/plugin-content-pages@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-content-pages/-/plugin-content-pages-2.3.1.tgz#f534a37862be5b3f2ba5b150458d7527646b6f39"
+  integrity sha512-E80UL6hvKm5VVw8Ka8YaVDtO6kWWDVUK4fffGvkpQ/AJQDOg99LwOXKujPoICC22nUFTsZ2Hp70XvpezCsFQaA==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/mdx-loader" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/mdx-loader" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     fs-extra "^10.1.0"
     tslib "^2.4.0"
     webpack "^5.73.0"
 
-"@docusaurus/plugin-debug@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-debug/-/plugin-debug-2.2.0.tgz#b38741d2c492f405fee01ee0ef2e0029cedb689a"
-  integrity sha512-p9vOep8+7OVl6r/NREEYxf4HMAjV8JMYJ7Bos5fCFO0Wyi9AZEo0sCTliRd7R8+dlJXZEgcngSdxAUo/Q+CJow==
+"@docusaurus/plugin-debug@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-debug/-/plugin-debug-2.3.1.tgz#26fef904713e148f6dee44957506280f8b7853bb"
+  integrity sha512-Ujpml1Ppg4geB/2hyu2diWnO49az9U2bxM9Shen7b6qVcyFisNJTkVG2ocvLC7wM1efTJcUhBO6zAku2vKJGMw==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
     fs-extra "^10.1.0"
     react-json-view "^1.21.3"
     tslib "^2.4.0"
 
-"@docusaurus/plugin-google-analytics@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.2.0.tgz#63c7137eff5a1208d2059fea04b5207c037d7954"
-  integrity sha512-+eZVVxVeEnV5nVQJdey9ZsfyEVMls6VyWTIj8SmX0k5EbqGvnIfET+J2pYEuKQnDIHxy+syRMoRM6AHXdHYGIg==
+"@docusaurus/plugin-google-analytics@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-analytics/-/plugin-google-analytics-2.3.1.tgz#e2e7db4cf6a7063e8ba5e128d4e413f4d6a0c862"
+  integrity sha512-OHip0GQxKOFU8n7gkt3TM4HOYTXPCFDjqKbMClDD3KaDnyTuMp/Zvd9HSr770lLEscgPWIvzhJByRAClqsUWiQ==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     tslib "^2.4.0"
 
-"@docusaurus/plugin-google-gtag@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.2.0.tgz#7b086d169ac5fe9a88aca10ab0fd2bf00c6c6b12"
-  integrity sha512-6SOgczP/dYdkqUMGTRqgxAS1eTp6MnJDAQMy8VCF1QKbWZmlkx4agHDexihqmYyCujTYHqDAhm1hV26EET54NQ==
+"@docusaurus/plugin-google-gtag@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-gtag/-/plugin-google-gtag-2.3.1.tgz#b8da54a60c0a50aca609c3643faef78cb4f247a0"
+  integrity sha512-uXtDhfu4+Hm+oqWUySr3DNI5cWC/rmP6XJyAk83Heor3dFjZqDwCbkX8yWPywkRiWev3Dk/rVF8lEn0vIGVocA==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     tslib "^2.4.0"
 
-"@docusaurus/plugin-sitemap@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.2.0.tgz#876da60937886032d63143253d420db6a4b34773"
-  integrity sha512-0jAmyRDN/aI265CbWZNZuQpFqiZuo+5otk2MylU9iVrz/4J7gSc+ZJ9cy4EHrEsW7PV8s1w18hIEsmcA1YgkKg==
+"@docusaurus/plugin-google-tag-manager@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-google-tag-manager/-/plugin-google-tag-manager-2.3.1.tgz#f19bc01cc784fa4734187c5bc637f0574857e15d"
+  integrity sha512-Ww2BPEYSqg8q8tJdLYPFFM3FMDBCVhEM4UUqKzJaiRMx3NEoly3qqDRAoRDGdIhlC//Rf0iJV9cWAoq2m6k3sw==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/logger" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
-    "@docusaurus/utils-common" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
+    tslib "^2.4.0"
+
+"@docusaurus/plugin-sitemap@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/plugin-sitemap/-/plugin-sitemap-2.3.1.tgz#f526ab517ca63b7a3460d585876f5952cb908aa0"
+  integrity sha512-8Yxile/v6QGYV9vgFiYL+8d2N4z4Er3pSHsrD08c5XI8bUXxTppMwjarDUTH/TRTfgAWotRbhJ6WZLyajLpozA==
+  dependencies:
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/logger" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
+    "@docusaurus/utils-common" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     fs-extra "^10.1.0"
     sitemap "^7.1.1"
     tslib "^2.4.0"
 
-"@docusaurus/preset-classic@^2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/preset-classic/-/preset-classic-2.2.0.tgz#bece5a043eeb74430f7c6c7510000b9c43669eb7"
-  integrity sha512-yKIWPGNx7BT8v2wjFIWvYrS+nvN04W+UameSFf8lEiJk6pss0kL6SG2MRvyULiI3BDxH+tj6qe02ncpSPGwumg==
+"@docusaurus/preset-classic@^2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/preset-classic/-/preset-classic-2.3.1.tgz#f0193f06093eb55cafef66bd1ad9e0d33198bf95"
+  integrity sha512-OQ5W0AHyfdUk0IldwJ3BlnZ1EqoJuu2L2BMhqLbqwNWdkmzmSUvlFLH1Pe7CZSQgB2YUUC/DnmjbPKk/qQD0lQ==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/plugin-content-blog" "2.2.0"
-    "@docusaurus/plugin-content-docs" "2.2.0"
-    "@docusaurus/plugin-content-pages" "2.2.0"
-    "@docusaurus/plugin-debug" "2.2.0"
-    "@docusaurus/plugin-google-analytics" "2.2.0"
-    "@docusaurus/plugin-google-gtag" "2.2.0"
-    "@docusaurus/plugin-sitemap" "2.2.0"
-    "@docusaurus/theme-classic" "2.2.0"
-    "@docusaurus/theme-common" "2.2.0"
-    "@docusaurus/theme-search-algolia" "2.2.0"
-    "@docusaurus/types" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/plugin-content-blog" "2.3.1"
+    "@docusaurus/plugin-content-docs" "2.3.1"
+    "@docusaurus/plugin-content-pages" "2.3.1"
+    "@docusaurus/plugin-debug" "2.3.1"
+    "@docusaurus/plugin-google-analytics" "2.3.1"
+    "@docusaurus/plugin-google-gtag" "2.3.1"
+    "@docusaurus/plugin-google-tag-manager" "2.3.1"
+    "@docusaurus/plugin-sitemap" "2.3.1"
+    "@docusaurus/theme-classic" "2.3.1"
+    "@docusaurus/theme-common" "2.3.1"
+    "@docusaurus/theme-search-algolia" "2.3.1"
+    "@docusaurus/types" "2.3.1"
 
 "@docusaurus/react-loadable@5.5.2", "react-loadable@npm:@docusaurus/react-loadable@5.5.2":
   version "5.5.2"
@@ -1458,23 +1469,23 @@
     "@types/react" "*"
     prop-types "^15.6.2"
 
-"@docusaurus/theme-classic@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/theme-classic/-/theme-classic-2.2.0.tgz#a048bb1bc077dee74b28bec25f4b84b481863742"
-  integrity sha512-kjbg/qJPwZ6H1CU/i9d4l/LcFgnuzeiGgMQlt6yPqKo0SOJIBMPuz7Rnu3r/WWbZFPi//o8acclacOzmXdUUEg==
+"@docusaurus/theme-classic@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-classic/-/theme-classic-2.3.1.tgz#8e6e194236e702c0d4e8d7b7cbb6886ae456e598"
+  integrity sha512-SelSIDvyttb7ZYHj8vEUhqykhAqfOPKk+uP0z85jH72IMC58e7O8DIlcAeBv+CWsLbNIl9/Hcg71X0jazuxJug==
   dependencies:
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/mdx-loader" "2.2.0"
-    "@docusaurus/module-type-aliases" "2.2.0"
-    "@docusaurus/plugin-content-blog" "2.2.0"
-    "@docusaurus/plugin-content-docs" "2.2.0"
-    "@docusaurus/plugin-content-pages" "2.2.0"
-    "@docusaurus/theme-common" "2.2.0"
-    "@docusaurus/theme-translations" "2.2.0"
-    "@docusaurus/types" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
-    "@docusaurus/utils-common" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/mdx-loader" "2.3.1"
+    "@docusaurus/module-type-aliases" "2.3.1"
+    "@docusaurus/plugin-content-blog" "2.3.1"
+    "@docusaurus/plugin-content-docs" "2.3.1"
+    "@docusaurus/plugin-content-pages" "2.3.1"
+    "@docusaurus/theme-common" "2.3.1"
+    "@docusaurus/theme-translations" "2.3.1"
+    "@docusaurus/types" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
+    "@docusaurus/utils-common" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     "@mdx-js/react" "^1.6.22"
     clsx "^1.2.1"
     copy-text-to-clipboard "^3.0.1"
@@ -1489,17 +1500,17 @@
     tslib "^2.4.0"
     utility-types "^3.10.0"
 
-"@docusaurus/theme-common@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/theme-common/-/theme-common-2.2.0.tgz#2303498d80448aafdd588b597ce9d6f4cfa930e4"
-  integrity sha512-R8BnDjYoN90DCL75gP7qYQfSjyitXuP9TdzgsKDmSFPNyrdE3twtPNa2dIN+h+p/pr+PagfxwWbd6dn722A1Dw==
+"@docusaurus/theme-common@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-common/-/theme-common-2.3.1.tgz#82f52d80226efef8c4418c4eacfc5051aa215f7f"
+  integrity sha512-RYmYl2OR2biO+yhmW1aS5FyEvnrItPINa+0U2dMxcHpah8reSCjQ9eJGRmAgkZFchV1+aIQzXOI1K7LCW38O0g==
   dependencies:
-    "@docusaurus/mdx-loader" "2.2.0"
-    "@docusaurus/module-type-aliases" "2.2.0"
-    "@docusaurus/plugin-content-blog" "2.2.0"
-    "@docusaurus/plugin-content-docs" "2.2.0"
-    "@docusaurus/plugin-content-pages" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
+    "@docusaurus/mdx-loader" "2.3.1"
+    "@docusaurus/module-type-aliases" "2.3.1"
+    "@docusaurus/plugin-content-blog" "2.3.1"
+    "@docusaurus/plugin-content-docs" "2.3.1"
+    "@docusaurus/plugin-content-pages" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
     "@types/history" "^4.7.11"
     "@types/react" "*"
     "@types/react-router-config" "*"
@@ -1507,42 +1518,43 @@
     parse-numeric-range "^1.3.0"
     prism-react-renderer "^1.3.5"
     tslib "^2.4.0"
+    use-sync-external-store "^1.2.0"
     utility-types "^3.10.0"
 
-"@docusaurus/theme-search-algolia@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.2.0.tgz#77fd9f7a600917e6024fe3ac7fb6cfdf2ce84737"
-  integrity sha512-2h38B0tqlxgR2FZ9LpAkGrpDWVdXZ7vltfmTdX+4RsDs3A7khiNsmZB+x/x6sA4+G2V2CvrsPMlsYBy5X+cY1w==
+"@docusaurus/theme-search-algolia@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-search-algolia/-/theme-search-algolia-2.3.1.tgz#d587b40913119e9287d14670e277b933d8f453f0"
+  integrity sha512-JdHaRqRuH1X++g5fEMLnq7OtULSGQdrs9AbhcWRQ428ZB8/HOiaN6mj3hzHvcD3DFgu7koIVtWPQnvnN7iwzHA==
   dependencies:
     "@docsearch/react" "^3.1.1"
-    "@docusaurus/core" "2.2.0"
-    "@docusaurus/logger" "2.2.0"
-    "@docusaurus/plugin-content-docs" "2.2.0"
-    "@docusaurus/theme-common" "2.2.0"
-    "@docusaurus/theme-translations" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
-    "@docusaurus/utils-validation" "2.2.0"
+    "@docusaurus/core" "2.3.1"
+    "@docusaurus/logger" "2.3.1"
+    "@docusaurus/plugin-content-docs" "2.3.1"
+    "@docusaurus/theme-common" "2.3.1"
+    "@docusaurus/theme-translations" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
+    "@docusaurus/utils-validation" "2.3.1"
     algoliasearch "^4.13.1"
     algoliasearch-helper "^3.10.0"
     clsx "^1.2.1"
-    eta "^1.12.3"
+    eta "^2.0.0"
     fs-extra "^10.1.0"
     lodash "^4.17.21"
     tslib "^2.4.0"
     utility-types "^3.10.0"
 
-"@docusaurus/theme-translations@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/theme-translations/-/theme-translations-2.2.0.tgz#5fbd4693679806f80c26eeae1381e1f2c23d83e7"
-  integrity sha512-3T140AG11OjJrtKlY4pMZ5BzbGRDjNs2co5hJ6uYJG1bVWlhcaFGqkaZ5lCgKflaNHD7UHBHU9Ec5f69jTdd6w==
+"@docusaurus/theme-translations@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/theme-translations/-/theme-translations-2.3.1.tgz#b2b1ecc00a737881b5bfabc19f90b20f0fe02bb3"
+  integrity sha512-BsBZzAewJabVhoGG1Ij2u4pMS3MPW6gZ6sS4pc+Y7czevRpzxoFNJXRtQDVGe7mOpv/MmRmqg4owDK+lcOTCVQ==
   dependencies:
     fs-extra "^10.1.0"
     tslib "^2.4.0"
 
-"@docusaurus/types@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/types/-/types-2.2.0.tgz#02c577a4041ab7d058a3c214ccb13647e21a9857"
-  integrity sha512-b6xxyoexfbRNRI8gjblzVOnLr4peCJhGbYGPpJ3LFqpi5nsFfoK4mmDLvWdeah0B7gmJeXabN7nQkFoqeSdmOw==
+"@docusaurus/types@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/types/-/types-2.3.1.tgz#785ade2e0f4e35e1eb7fb0d04c27d11c3991a2e8"
+  integrity sha512-PREbIRhTaNNY042qmfSE372Jb7djZt+oVTZkoqHJ8eff8vOIc2zqqDqBVc5BhOfpZGPTrE078yy/torUEZy08A==
   dependencies:
     "@types/history" "^4.7.11"
     "@types/react" "*"
@@ -1553,31 +1565,32 @@
     webpack "^5.73.0"
     webpack-merge "^5.8.0"
 
-"@docusaurus/utils-common@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/utils-common/-/utils-common-2.2.0.tgz#a401c1b93a8697dd566baf6ac64f0fdff1641a78"
-  integrity sha512-qebnerHp+cyovdUseDQyYFvMW1n1nv61zGe5JJfoNQUnjKuApch3IVsz+/lZ9a38pId8kqehC1Ao2bW/s0ntDA==
+"@docusaurus/utils-common@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/utils-common/-/utils-common-2.3.1.tgz#1abe66846eb641547e4964d44f3011938e58e50b"
+  integrity sha512-pVlRpXkdNcxmKNxAaB1ya2hfCEvVsLDp2joeM6K6uv55Oc5nVIqgyYSgSNKZyMdw66NnvMfsu0RBylcwZQKo9A==
   dependencies:
     tslib "^2.4.0"
 
-"@docusaurus/utils-validation@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/utils-validation/-/utils-validation-2.2.0.tgz#04d4d103137ad0145883971d3aa497f4a1315f25"
-  integrity sha512-I1hcsG3yoCkasOL5qQAYAfnmVoLei7apugT6m4crQjmDGxq+UkiRrq55UqmDDyZlac/6ax/JC0p+usZ6W4nVyg==
+"@docusaurus/utils-validation@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/utils-validation/-/utils-validation-2.3.1.tgz#b65c718ba9b84b7a891bccf5ac6d19b57ee7d887"
+  integrity sha512-7n0208IG3k1HVTByMHlZoIDjjOFC8sbViHVXJx0r3Q+3Ezrx+VQ1RZ/zjNn6lT+QBCRCXlnlaoJ8ug4HIVgQ3w==
   dependencies:
-    "@docusaurus/logger" "2.2.0"
-    "@docusaurus/utils" "2.2.0"
+    "@docusaurus/logger" "2.3.1"
+    "@docusaurus/utils" "2.3.1"
     joi "^17.6.0"
     js-yaml "^4.1.0"
     tslib "^2.4.0"
 
-"@docusaurus/utils@2.2.0":
-  version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@docusaurus/utils/-/utils-2.2.0.tgz#3d6f9b7a69168d5c92d371bf21c556a4f50d1da6"
-  integrity sha512-oNk3cjvx7Tt1Lgh/aeZAmFpGV2pDr5nHKrBVx6hTkzGhrnMuQqLt6UPlQjdYQ3QHXwyF/ZtZMO1D5Pfi0lu7SA==
+"@docusaurus/utils@2.3.1":
+  version "2.3.1"
+  resolved "https://registry.yarnpkg.com/@docusaurus/utils/-/utils-2.3.1.tgz#24b9cae3a23b1e6dc88f95c45722c7e82727b032"
+  integrity sha512-9WcQROCV0MmrpOQDXDGhtGMd52DHpSFbKLfkyaYumzbTstrbA5pPOtiGtxK1nqUHkiIv8UwexS54p0Vod2I1lg==
   dependencies:
-    "@docusaurus/logger" "2.2.0"
+    "@docusaurus/logger" "2.3.1"
     "@svgr/webpack" "^6.2.1"
+    escape-string-regexp "^4.0.0"
     file-loader "^6.2.0"
     fs-extra "^10.1.0"
     github-slugger "^1.4.0"
@@ -3627,10 +3640,10 @@ esutils@^2.0.2:
   resolved "https://registry.yarnpkg.com/esutils/-/esutils-2.0.3.tgz#74d2eb4de0b8da1293711910d50775b9b710ef64"
   integrity sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==
 
-eta@^1.12.3:
-  version "1.12.3"
-  resolved "https://registry.yarnpkg.com/eta/-/eta-1.12.3.tgz#2982d08adfbef39f9fa50e2fbd42d7337e7338b1"
-  integrity sha512-qHixwbDLtekO/d51Yr4glcaUJCIjGVJyTzuqV4GPlgZo1YpgOKG+avQynErZIYrfM6JIJdtiG2Kox8tbb+DoGg==
+eta@^2.0.0:
+  version "2.0.0"
+  resolved "https://registry.yarnpkg.com/eta/-/eta-2.0.0.tgz#376865fadebc899e5b6dfce82fae64cbbe47e594"
+  integrity sha512-NqE7S2VmVwgMS8yBxsH4VgNQjNjLq1gfGU0u9I6Cjh468nPRMoDfGdK9n1p/3Dvsw3ebklDkZsFAnKJ9sefjBA==
 
 etag@~1.8.1:
   version "1.8.1"
@@ -7199,6 +7212,11 @@ use-latest@^1.2.1:
   dependencies:
     use-isomorphic-layout-effect "^1.1.1"
 
+use-sync-external-store@^1.2.0:
+  version "1.2.0"
+  resolved "https://registry.yarnpkg.com/use-sync-external-store/-/use-sync-external-store-1.2.0.tgz#7dbefd6ef3fe4e767a0cf5d7287aacfb5846928a"
+  integrity sha512-eEgnFxGQ1Ife9bzYs6VLi8/4X6CObHMw9Qr9tPY43iKwsPw8xE8+EFsf/2cFZ5S3esXgpWgtSCtLNS41F+sKPA==
+
 util-deprecate@^1.0.1, util-deprecate@^1.0.2, util-deprecate@~1.0.1:
   version "1.0.2"
   resolved "https://registry.yarnpkg.com/util-deprecate/-/util-deprecate-1.0.2.tgz#450d4dc9fa70de732762fbd2d4a28981419a0ccf"