Merge pull request #1986 from Poor12/improve-docs
Improve docs about helm chart
This commit is contained in: commit e84381ccd0
@@ -8,7 +8,7 @@ Karmada aims to provide turnkey automation for multi-cluster application managem
 Switch to the `root` directory of the repo.
 ```console
-helm install karmada -n karmada-system --create-namespace ./charts
+helm install karmada -n karmada-system --create-namespace ./charts/karmada
 ```

 ## Prerequisites

@@ -22,7 +22,7 @@ To install the chart with the release name `karmada` in namespace `karmada-syste
 Switch to the `root` directory of the repo.
 ```console
-helm install karmada -n karmada-system --create-namespace ./charts
+helm install karmada -n karmada-system --create-namespace ./charts/karmada
 ```

 Get kubeconfig from the cluster:

@@ -45,7 +45,7 @@ components: [
 Execute command (switch to the `root` directory of the repo, and sets the `current-context` in a kubeconfig file)
 ```console
 kubectl config use-context host
-helm install karmada-descheduler -n karmada-system ./charts
+helm install karmada-descheduler -n karmada-system ./charts/karmada
 ```

 ## Uninstalling the Chart

@@ -91,7 +91,7 @@ agent:
 Execute command (switch to the `root` directory of the repo, and sets the `current-context` in a kubeconfig file)
 ```console
 kubectl config use-context member
-helm install karmada-agent -n karmada-system --create-namespace ./charts
+helm install karmada-agent -n karmada-system --create-namespace ./charts/karmada
 ```
 ### 2. Install component
 Edited values.yaml for karmada-scheduler-estimator

@@ -121,7 +121,7 @@ schedulerEstimator:
 Execute command (switch to the `root` directory of the repo, and sets the `current-context` in a kubeconfig file)
 ```console
 kubectl config use-context host
-helm install karmada-scheduler-estimator -n karmada-system ./charts
+helm install karmada-scheduler-estimator -n karmada-system ./charts/karmada
 ```

 ## Configuration

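All of the install commands above now point at `./charts/karmada`. After any of them completes, a quick sanity check can confirm the release came up; this is a hedged sketch, with the release name and namespace taken from the commands above:

```shell
# Hedged verification sketch: release name and namespace follow the install commands above.
helm status karmada -n karmada-system
kubectl get pods -n karmada-system
```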
@@ -58,11 +58,11 @@ sed -i'' -e "s/{{caBundle}}/${ca_string}/g" ./"charts/_crds/patches/webhook_in_c
 Generate the final CRD by `kubectl kustomize` command, e.g:
 ```bash
-kubectl kustomize ./charts/_crds
+kubectl kustomize ./charts/karmada/_crds
 ```
 Or, you can apply to `karmada-apiserver` by:
 ```bash
-kubectl kustomize ./charts/_crds | kubectl apply -f -
+kubectl kustomize ./charts/karmada/_crds | kubectl apply -f -
 ```

 ### Upgrading Components

@@ -1,31 +1,52 @@
 # Use Flux to support Helm chart propagation

-[Flux](https://fluxcd.io/) is most useful when used as a deployment tool at the end of a Continuous Delivery pipeline. Flux will make sure that your new container images and config changes are propagated to the cluster. With Flux, Karmada can easily realize the ability to distribute applications packaged by helm across clusters. Not only that, with Karmada's OverridePolicy, users can customize applications for specific clusters and manage cross-cluster applications on the unified Karmada control plane.
+[Flux](https://fluxcd.io/) is most useful when used as a deployment tool at the end of a Continuous Delivery pipeline. Flux will make sure that your new container images and config changes are propagated to the cluster. With Flux, Karmada can easily realize the ability to distribute applications packaged by Helm across clusters. Not only that, with Karmada's OverridePolicy, users can customize applications for specific clusters and manage cross-cluster applications on the unified Karmada control plane.

 ## Start up Karmada clusters
-You just need to clone the Karmada repo, and run the following script in the `karmada` directory.

+To start up Karmada, you can refer to [here](https://github.com/karmada-io/karmada/blob/master/docs/installation/installation.md).
+If you just want to try Karmada, we recommend building a development environment by `hack/local-up-karmada.sh`.
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+After that, you will start a Kubernetes cluster by kind to run the Karmada control plane and create member clusters managed by Karmada.
+
+```sh
+kubectl get clusters --kubeconfig ~/.kube/karmada.config
+```
+
+You can use the command above to check registered clusters, and you will get similar output as follows:
+
 ```
-hack/local-up-karmada.sh
+NAME      VERSION   MODE   READY   AGE
+member1   v1.23.4   Push   True    7m38s
+member2   v1.23.4   Push   True    7m35s
+member3   v1.23.4   Pull   True    7m27s
 ```

 ## Start up Flux

-Install the `flux` binary:
+In Karmada control plane, you need to install Flux CRDs but do not need controllers to reconcile them. They are treated as resource templates, not specific resource instances.
+Based on work API [here](https://github.com/kubernetes-sigs/work-api), they will be encapsulated as a work object delivered to member clusters and finally reconciled by Flux controllers in member clusters.

-```
-curl -s https://fluxcd.io/install.sh | sudo bash
-```
+```sh
+kubectl apply -k github.com/fluxcd/flux2/manifests/crds?ref=main --kubeconfig ~/.kube/karmada.config
+```

-Install the toolkit controllers in the `flux-system` namespace:
+For testing purposes, we'll install Flux on member clusters without storing its manifests in a Git repository:

-```
-flux install
-```
+```sh
+flux install --kubeconfig ~/.kube/members.config --context member1
+flux install --kubeconfig ~/.kube/members.config --context member2
+```

 Tips:

-1. The Flux toolkit controllers need to be installed on each cluster using the `flux install` command.
+1. If you want to manage Helm releases across your fleet of clusters, Flux must be installed on each cluster.

 2. If the Flux toolkit controllers are successfully installed, you should see the following Pods:

@@ -38,11 +59,11 @@ notification-controller-7ccfbfbb98-lpgjl 1/1 Running 0 15d
 source-controller-6b8d9cb5cc-7dbcb 1/1 Running 0 15d
 ```

-## Helm chart propagation
+## Helm release propagation

-If you want to propagate helm applications to member clusters, you can refer to the guide below.
+If you want to propagate Helm releases for your apps to member clusters, you can refer to the guide below.

-1. Define a HelmRepository source
+1. Define a Flux `HelmRepository` and a `HelmRelease` manifest in Karmada control plane. They will serve as resource templates.

 ```yaml
 apiVersion: source.toolkit.fluxcd.io/v1beta2
@@ -51,8 +72,26 @@ metadata:
   name: podinfo
 spec:
   interval: 1m
-  url: https://stefanprodan.github.io/podinfo
----
+  url: https://stefanprodan.github.io/podinfo
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+  name: podinfo
+spec:
+  interval: 5m
+  chart:
+    spec:
+      chart: podinfo
+      version: 5.0.3
+      sourceRef:
+        kind: HelmRepository
+        name: podinfo
+```
+
+2. Define a Karmada `PropagationPolicy` that will propagate them to member clusters:
+
+```yaml
 apiVersion: policy.karmada.io/v1alpha1
 kind: PropagationPolicy
 metadata:
@@ -67,24 +106,6 @@ spec:
       clusterNames:
         - member1
         - member2
 ```
-
-2. Define a HelmRelease source
-
-```yaml
-apiVersion: helm.toolkit.fluxcd.io/v2beta1
-kind: HelmRelease
-metadata:
-  name: podinfo
-spec:
-  interval: 5m
-  chart:
-    spec:
-      chart: podinfo
-      version: 5.0.3
-      sourceRef:
-        kind: HelmRepository
-        name: podinfo
----
 apiVersion: policy.karmada.io/v1alpha1
 kind: PropagationPolicy
@@ -102,53 +123,89 @@ spec:
         - member2
 ```

-3. Apply those YAMLs to the karmada-apiserver
+The above configuration is for propagating the Flux objects to member1 and member2 clusters.

+3. Apply those manifests to the Karmada-apiserver:
+
+```sh
+kubectl apply -f ../helm/ --kubeconfig ~/.kube/karmada.config
+```
+
+The output is similar to:
+
 ```
 $ kubectl apply -f ../helm/
 helmrelease.helm.toolkit.fluxcd.io/podinfo created
 helmrepository.source.toolkit.fluxcd.io/podinfo created
 propagationpolicy.policy.karmada.io/helm-release created
 propagationpolicy.policy.karmada.io/helm-repo created
 ```

-4. Switch to the distributed cluster
+4. Switch to the distributed cluster and verify:

+```sh
+helm --kubeconfig ~/.kube/members.config --kube-context member1 list
+```
+
+The output is similar to:
+
 ```
-helm --kubeconfig=/root/.kube/members.config --kube-context member1 list
 NAME      NAMESPACE   REVISION   UPDATED                                  STATUS     CHART           APP VERSION
 podinfo   default     1          2022-05-27 01:44:35.24229175 +0000 UTC   deployed   podinfo-5.0.3   5.0.3
 ```

-Also, you can use Karmada's OverridePolicy to customize applications for specific clusters. For example, if you just want to change replicas in member1, you can refer to the overridePolicy below.
+Based on Karmada's propagation policy, you can schedule Helm releases to your desired cluster flexibly, just like Kubernetes scheduling Pods to the desired node.
+
+## Customize the Helm release for specific clusters
+
+The example above shows how to distribute the same Helm release to multiple clusters in Karmada. Besides, you can use Karmada's OverridePolicy to customize applications for specific clusters.
+For example, if you just want to change replicas in member1, you can refer to the overridePolicy below.
+
+1. Define a Karmada `OverridePolicy`:

 ```yaml
 apiVersion: policy.karmada.io/v1alpha1
 kind: OverridePolicy
 metadata:
-  name: example-override
-  namespace: default
+  name: example-override
+  namespace: default
 spec:
-  resourceSelectors:
-    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
-      kind: HelmRelease
-      name: podinfo
-  overrideRules:
-    - targetCluster:
-        clusterNames:
-          - member1
-      overriders:
-        plaintext:
-          - path: "/spec/values"
-            operator: add
-            value:
-              replicaCount: 2
+  resourceSelectors:
+    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
+      kind: HelmRelease
+      name: podinfo
+  overrideRules:
+    - targetCluster:
+        clusterNames:
+          - member1
+      overriders:
+        plaintext:
+          - path: "/spec/values"
+            operator: add
+            value:
+              replicaCount: 2
 ```

-After that, you can find that replicas in member1 has changed.
+2. Apply the manifests to the Karmada-apiserver:
+
+```sh
+kubectl apply -f example-override.yaml --kubeconfig ~/.kube/karmada.config
+```
+
+The output is similar to:
+
+```
+overridepolicy.policy.karmada.io/example-override configured
+```
+
+3. After applying the above policy in Karmada control plane, you will find that replicas in member1 have changed to 2, but those in member2 keep the same.
+
+```sh
+kubectl --kubeconfig ~/.kube/members.config --context member1 get po
+```
+
+The output is similar to:
+
 ```
 $ kubectl --kubeconfig ~/.kube/members.config --context member1 get po
 NAME                       READY   STATUS    RESTARTS   AGE
 podinfo-68979685bc-6wz6s   1/1     Running   0          6m28s
 podinfo-68979685bc-dz9f6   1/1     Running   0          7m42s
@@ -158,38 +215,19 @@ podinfo-68979685bc-dz9f6 1/1 Running 0 7m42s

 Kustomize propagation is basically the same as helm chart propagation above. You can refer to the guide below.

-1. Define a Git repository source
+1. Define a Flux `GitRepository` and a `Kustomization` manifest in Karmada control plane:

 ```yaml
-apiVersion: source.toolkit.fluxcd.io/v1beta1
+apiVersion: source.toolkit.fluxcd.io/v1beta2
 kind: GitRepository
 metadata:
-  name: podinfo
+  name: podinfo
 spec:
   interval: 1m
   url: https://github.com/stefanprodan/podinfo
   ref:
     branch: master
 ---
-apiVersion: policy.karmada.io/v1alpha1
-kind: PropagationPolicy
-metadata:
-  name: kust-git
-spec:
-  resourceSelectors:
-    - apiVersion: source.toolkit.fluxcd.io/v1beta1
-      kind: GitRepository
-      name: podinfo
-  placement:
-    clusterAffinity:
-      clusterNames:
-        - member1
-        - member2
-```
-
-2. Define a kustomization
-
-```yaml
 apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
 kind: Kustomization
 metadata:
@@ -203,7 +241,11 @@ spec:
     name: podinfo
-  validation: client
+  timeout: 80s
----
+```
+
+2. Define a Karmada `PropagationPolicy` that will propagate them to member clusters:
+
+```yaml
 apiVersion: policy.karmada.io/v1alpha1
 kind: PropagationPolicy
 metadata:
@@ -218,22 +260,47 @@ spec:
       clusterNames:
         - member1
         - member2
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: kust-git
+spec:
+  resourceSelectors:
+    - apiVersion: source.toolkit.fluxcd.io/v1beta2
+      kind: GitRepository
+      name: podinfo
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
 ```

-3. Apply those YAMLs to the karmada-apiserver
+3. Apply those YAMLs to the karmada-apiserver:

+```sh
+kubectl apply -f kust/ --kubeconfig ~/.kube/karmada.config
+```
+
+The output is similar to:
+
 ```
 $ kubectl apply -f kust/
 gitrepository.source.toolkit.fluxcd.io/podinfo created
 kustomization.kustomize.toolkit.fluxcd.io/podinfo-dev created
 propagationpolicy.policy.karmada.io/kust-git created
 propagationpolicy.policy.karmada.io/kust-release created
 ```

-4. Switch to the distributed cluster
+4. Switch to the distributed cluster and verify:

+```sh
+kubectl --kubeconfig ~/.kube/members.config --context member1 get pod -n dev
+```
+
+The output is similar to:
+
 ```
 $ kubectl get pod -n dev
 NAME                      READY   STATUS    RESTARTS   AGE
 backend-69c7655cb-rbtrq   1/1     Running   0          15s
 cache-bdff5c8dc-mmnbm     1/1     Running   0          15s
@@ -205,7 +205,6 @@ In this case, we will use Kyverno v1.6.2. Related deployment files are from [her
             name: kubeconfig
             subPath: kubeconfig
-      initContainers:
-      - args:
+      - env:
         - name: METRICS_CONFIG
           value: kyverno-metrics
@@ -243,31 +242,31 @@ In this case, we will use Kyverno v1.6.2. Related deployment files are from [her
           secretName: kubeconfig
 ---
 apiVersion: v1
-stringData:
-  kubeconfig: |-
-    apiVersion: v1
-    clusters:
-    - cluster:
-        certificate-authority-data: {{ca_crt}}
-        server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
-      name: kind-karmada
-    contexts:
-    - context:
-        cluster: kind-karmada
-        user: kind-karmada
-      name: karmada
-    current-context: karmada
-    kind: Config
-    preferences: {}
-    users:
-    - name: kind-karmada
-      user:
-        client-certificate-data: {{client_cer}}
-        client-key-data: {{client_key}}
-kind: Secret
-metadata:
-  name: kubeconfig
-  namespace: kyverno
+stringData:
+  kubeconfig: |-
+    apiVersion: v1
+    clusters:
+    - cluster:
+        certificate-authority-data: {{ca_crt}}
+        server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
+      name: kind-karmada
+    contexts:
+    - context:
+        cluster: kind-karmada
+        user: kind-karmada
+      name: karmada
+    current-context: karmada
+    kind: Config
+    preferences: {}
+    users:
+    - name: kind-karmada
+      user:
+        client-certificate-data: {{client_cer}}
+        client-key-data: {{client_key}}
+kind: Secret
+metadata:
+  name: kubeconfig
+  namespace: kyverno
 ```

 For multi-cluster deployment, we need to add the `--serverIP` option, which is the address of the webhook server. So you need to ensure that the network from nodes in the karmada control plane to those in the karmada-host cluster is connected, and expose kyverno controller pods to the control plane, for example, using `nodePort` above. Then, fill in the secret which represents the kubeconfig pointing to karmada-apiserver, such as **ca_crt, client_cer and client_key** above.

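The `{{ca_crt}}`, `{{client_cer}}` and `{{client_key}}` placeholders in the Secret above must be replaced with base64-encoded credentials. A hedged, self-contained sketch of that substitution (the demo bytes and the `kubeconfig-secret.yaml` file name are stand-ins; in real use you would encode your karmada-apiserver certificate files instead):

```shell
# Stand-in manifest line containing one of the placeholders from the Secret above.
printf 'certificate-authority-data: {{ca_crt}}\n' > kubeconfig-secret.yaml

# In real use: ca_crt=$(base64 -w0 < path/to/ca.crt), and likewise for
# {{client_cer}} and {{client_key}}. Demo bytes are used here for illustration.
ca_crt=$(printf 'demo-ca-bytes' | base64 -w0)

# Use "|" as the sed delimiter, since base64 output may contain "/".
sed -i -e "s|{{ca_crt}}|${ca_crt}|g" kubeconfig-secret.yaml
cat kubeconfig-secret.yaml
```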
@@ -22,7 +22,7 @@ else
 fi

 echo "Starting to package into a Karmada chart archive"
-helm package ./charts --version "${version}" -d "${release_dir}"
+helm package ./charts/karmada --version "${version}" -d "${release_dir}"
 cd "${release_dir}"
 mv "karmada-${version}.tgz" ${tar_file}
 sha256sum "${tar_file}" > "${tar_file}.sha256"
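The script above pairs each packaged chart archive with a `.sha256` file. A self-contained sketch of that checksum flow (the `karmada-demo.tgz` file name is a stand-in for the real archive):

```shell
# Stand-in for the packaged chart archive; the real script uses "karmada-${version}.tgz".
echo "demo chart contents" > karmada-demo.tgz

# Record the checksum the same way the release script does...
sha256sum karmada-demo.tgz > karmada-demo.tgz.sha256

# ...and later verify a downloaded archive against the published checksum file.
sha256sum -c karmada-demo.tgz.sha256
```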