Quickstart optimization
Signed-off-by: lfbear <lfbear@gmail.com>
parent 46df7a6dff
commit f28cce6383
@@ -79,7 +79,7 @@ jobs:
         with:
           go-version: 1.16.x
       - name: setup e2e test environment
-        run: hack/karmada-bootstrap.sh
+        run: hack/local-up-karmada.sh
       - name: run e2e
        run: hack/run-e2e.sh
       - name: upload logs
README.md
@@ -93,12 +93,6 @@ This guide will cover:
 - Join a member cluster to `karmada` control plane.
 - Propagate an application by `karmada`.
 
-### Demo
-
-There are several demonstrations of common cases.
-
-
-
 ### Prerequisites
 - [Go](https://golang.org/) version v1.16+
 - [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version v1.19+
@@ -118,90 +112,40 @@ cd karmada
 
 #### 3. Deploy and run karmada control plane:
 
-run the following script: (It will create a host cluster by kind)
+run the following script:
 
 ```
-hack/local-up-karmada.sh
+# hack/local-up-karmada.sh
 ```
-The script `hack/local-up-karmada.sh` will do following tasks for you:
+This script will do following tasks for you:
 - Start a Kubernetes cluster to run the karmada control plane, aka. the `host cluster`.
 - Build karmada control plane components based on a current codebase.
 - Deploy karmada control plane components on `host cluster`.
+- Create member clusters and join to Karmada.
 
 If everything goes well, at the end of the script output, you will see similar messages as follows:
 ```
 Local Karmada is running.
 
-Kubeconfig for karmada is in file: /root/.kube/karmada.config, so you can run:
+To start using your karmada, run:
-export KUBECONFIG="/root/.kube/karmada.config"
+export KUBECONFIG=/root/.kube/karmada.config
-Or use kubectl with --kubeconfig=/root/.kube/karmada.config
+Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
-Please use 'kubectl config use-context <Context_Name>' to switch clusters.
-The following is context intro:
+To manage your member clusters, run:
-------------------------------------------------------
+export KUBECONFIG=/root/.kube/members.config
-| Context Name      |         Purpose                |
+Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
-|----------------------------------------------------|
-| karmada-host      | the cluster karmada install in |
-|----------------------------------------------------|
-| karmada-apiserver | karmada control plane          |
-------------------------------------------------------
 ```
 
-There are two contexts:
+There are two contexts about karmada:
 - karmada-apiserver `kubectl config use-context karmada-apiserver`
 - karmada-host `kubectl config use-context karmada-host`
 
 The `karmada-apiserver` is the **main kubeconfig** to be used when interacting with karamda control plane, while `karmada-host` is only used for debugging karmada installation with the host cluster. You can check all clusters at any time by running: `kubectl config view`. To switch cluster contexts, run `kubectl config use-context [CONTEXT_NAME]`
 
-#### Tips
-- Please make sure you can access google cloud registry: k8s.gcr.io
-- Install script will download golang package, if your server is in the mainland China, you may set go proxy like this `export GOPROXY=https://goproxy.cn`
 
-### Join member cluster
+### Demo
-In the following steps, we are going to create a member cluster and then join the cluster to
-karmada control plane.
 
-#### 1. Create member cluster
+
-We are going to create a cluster named `member1` and we want the `KUBECONFIG` file
-in `$HOME/.kube/karmada.config`. Run following command:
-```
-hack/create-cluster.sh member1 $HOME/.kube/karmada.config
-```
-The script `hack/create-cluster.sh` will create a cluster by kind.
 
-#### 2. Join member cluster to karmada control plane
 
-You can choose one of mode: [push](#21-push-mode-karmada-controls-the-member-cluster-initiative-by-using-karmadactl) or
-[pull](#22-pull-mode-installing-karmada-agent-in-the-member-cluster), either will help you join a member cluster.
 
-##### 2.1. Push Mode: Karmada controls the member cluster initiative by using `karmadactl`
 
-The command `karmadactl` will help to join the member cluster to karmada control plane,
-before that, we should switch to karmada apiserver:
-```
-kubectl config use-context karmada-apiserver
-```
 
-Then, install `karmadactl` command and join the member cluster:
-```
-go get github.com/karmada-io/karmada/cmd/karmadactl
-karmadactl join member1 --cluster-kubeconfig=$HOME/.kube/karmada.config
-```
-The `karmadactl join` command will create a `Cluster` object to reflect the member cluster.
 
-##### 2.2. Pull Mode: Installing karmada-agent in the member cluster
 
-The following script will install the `karamda-agent` to your member cluster, you need to specify the kubeconfig and the cluster context of the karmada control plane and member cluster.
-```
-hack/deploy-karmada-agent.sh <karmada_apiserver_kubeconfig> <karmada_apiserver_context_name> <member_cluster_kubeconfig> <member_cluster_context_name>
-```
 
-#### 3. Check member cluster status
-Now, check the member clusters from karmada control plane by following command:
-```
-$ kubectl get clusters
-NAME      VERSION   MODE   READY   AGE
-member1   v1.20.2   Push   True    66s
-```
 
 ### Propagate application
 In the following steps, we are going to propagate a deployment by karmada.
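For context, here is how the two kubeconfig files written by `hack/local-up-karmada.sh` are typically used. This is an illustrative sketch assuming the default paths and cluster names shown above (`karmada.config`, `members.config`, member1/member2/member3); it is not part of the diff itself.

```shell
# Work against the karmada control plane (main kubeconfig).
export KUBECONFIG="$HOME/.kube/karmada.config"
kubectl config use-context karmada-apiserver   # karmada control plane
kubectl get clusters                           # list joined member clusters

# Switch to the host cluster only to debug the installation itself.
kubectl config use-context karmada-host

# Operate member clusters directly through the second kubeconfig.
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member1
```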
@@ -223,7 +167,7 @@ You can check deployment status from karmada, don't need to access member cluste
 ```
 $ kubectl get deployment
 NAME    READY   UP-TO-DATE   AVAILABLE   AGE
-nginx   1/1     1            1           43s
+nginx   2/2     2            2           20s
 ```
 
 ## Kubernetes compatibility
File diff suppressed because one or more lines are too long
[new image file added: 309 KiB]
@@ -6,13 +6,13 @@ Users are able to **export** and **import** services between clusters with [Mult
 
 ### Karmada has been installed
 
-We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/karmada-bootstrap.sh` script which is also used to run our E2E cases.
+We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
 
 ### Member Cluster Network
 
 Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.
 
-- If you use the `hack/karmada-bootstrap.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of the `member1` and `member2` will be connected.
+- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of the `member1` and `member2` will be connected.
 - If you use `Kind` tool to create member clusters, you can refer to the [hack/util.sh](https://github.com/karmada-io/karmada/blob/af5f544cbe68d6b37a730b42dcc1ead0fac16915/hack/util.sh#L452-L470) script to enable the container networks.
 
 ### The ServiceExport and ServiceImport CRDs have been installed
@@ -0,0 +1,15 @@
+# Troubleshooting
+
+## I can't access some resources when installing Karmada
+
+- Pulling images from Google Container Registry (k8s.gcr.io)
+
+  You may run the following commands to change the image registry in mainland China:
+  ```shell
+  sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-etcd.yaml
+  sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-apiserver.yaml
+  sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/kube-controller-manager.yaml
+  ```
+- To download the golang package in mainland China, run the following command before installing:
+  ```shell
+  export GOPROXY=https://goproxy.cn
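As a quick sanity check of the registry substitution above, one can confirm that the manifests no longer reference k8s.gcr.io. This is only an illustrative check, not part of the new file:

```shell
# Expect no matches once the sed commands have been applied.
grep "k8s.gcr.io" artifacts/deploy/karmada-etcd.yaml \
  artifacts/deploy/karmada-apiserver.yaml \
  artifacts/deploy/kube-controller-manager.yaml || echo "image registry replaced"
```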
@@ -10,7 +10,7 @@ You have installed Argo CD following the instructions in [Getting Started](https
 ### Karmada Installation
 In this example, we are using a Karmada environment with at lease `3` member clusters joined.
 
-You can set up the environment by `hack/karmada-bootstrap.sh`, which is also used to run our E2E cases.
+You can set up the environment by `hack/local-up-karmada.sh`, which is also used to run our E2E cases.
 
 ```bash
 # kubectl get clusters
@@ -10,12 +10,12 @@ ensures development quality.
 
 ## For end-user
 
-- [`local-up-karmada.sh`](local-up-karmada.sh) This script will quickly set up a local development environment based on the current codebase.
+- [`local-up-karmada.sh`](local-up-karmada.sh) This script will quickly set up a local development environment with member clusters based on the current codebase.
 
 - [`remote-up-karmada.sh`](remote-up-karmada.sh) This script will install Karmada to a standalone K8s cluster, this cluster
 may be real, remote , and even for production. It is worth noting for the connectivity from your client to Karmada API server,
-it will create a load balancer service with an external IP by default, if your want to customize this service, you may add
+it will create a load balancer service with an external IP by default; if you want a `ClusterIP` type service instead, run `export CLUSTER_IP_ONLY=true` before running the script.
-the annotations at the metadata part of service `karmada-apiserver` in
+If you want to customize the load balancer service, you may add the annotations at the metadata part of service `karmada-apiserver` in
 [`../artifacts/deploy/karmada-apiserver.yaml`](../artifacts/deploy/karmada-apiserver.yaml) before the installing. The
 following is an example.
 ```
@@ -29,6 +29,13 @@ ensures development quality.
     # Tencent cloud (you need to replace words 'xxxxxxxx')
     #service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxxxx
 ```
+The usage of `remote-up-karmada.sh`:
+```
+# hack/remote-up-karmada.sh <kubeconfig> <context_name>
+```
+`kubeconfig` is the kubeconfig of the cluster you want to install Karmada to.
+
+`context_name` is the name of the context in `kubeconfig`.
 
 - [`deploy-karmada-agent.sh`](deploy-karmada-agent.sh) This script will install Karmada Agent to the specific cluster.
 
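To make the usage line above concrete, here is a hypothetical invocation; the kubeconfig path and context name are placeholders, and `CLUSTER_IP_ONLY` is the switch described earlier in this README:

```shell
# "$HOME/.kube/config" and my-cluster stand in for your own <kubeconfig> and <context_name>.
hack/remote-up-karmada.sh "$HOME/.kube/config" my-cluster

# Optionally expose karmada-apiserver as a ClusterIP service instead of a load balancer.
export CLUSTER_IP_ONLY=true
hack/remote-up-karmada.sh "$HOME/.kube/config" my-cluster
```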
@@ -38,16 +45,13 @@ ensures development quality.
 the installing step.
 
 ## For CI pipeline
-- [`karmada-bootstrap.sh`](karmada-bootstrap.sh) This script will quickly pull up a local Karmada environment too,
-what is different from `local-up-karmada.sh` is it will pull up member clusters. This is usually for testing,
-of course, you may also use it for your local environment.
+- [`local-up-karmada.sh`](local-up-karmada.sh) This script is also used for testing.
 
 - [`run-e2e.sh`](run-e2e.sh) This script runs e2e test against on Karmada control plane. You should prepare your environment
-in advance with `karmada-bootstrap.sh`.
+in advance with `local-up-karmada.sh`.
 
 ## Some internal scripts
 These scripts are not intended used by end-users, just for the development
-- [`deploy-karmada.sh`](deploy-karmada.sh) Underlying common implementation for `local-up-karmada.sh`, `remote-up-karmada.sh`
-and `karmada-bootstrap.sh`
+- [`deploy-karmada.sh`](deploy-karmada.sh) Underlying common implementation for `local-up-karmada.sh` and `remote-up-karmada.sh`.
 
 - [`util.sh`](util.sh) All util functions.
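Putting the CI-facing scripts together, the intended flow is roughly the sketch below; it assumes the default kubeconfig path that `local-up-karmada.sh` writes:

```shell
# Bring up the local karmada environment (host cluster plus member clusters), then run e2e.
hack/local-up-karmada.sh
export KUBECONFIG="$HOME/.kube/karmada.config"
hack/run-e2e.sh
```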
@@ -1,119 +0,0 @@
-#!/usr/bin/env bash
-
-set -o errexit
-set -o nounset
-set -o pipefail
-
-# This script starts a local karmada control plane based on current codebase and with a certain number of clusters joined.
-# Parameters: [HOST_IPADDRESS](optional) if you want to export clusters' API server port to specific IP address
-# This script depends on utils in: ${REPO_ROOT}/hack/util.sh
-# 1. used by developer to setup develop environment quickly.
-# 2. used by e2e testing to setup test environment automatically.
-
-REPO_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
-source "${REPO_ROOT}"/hack/util.sh
-
-# variable define
-KUBECONFIG_PATH=${KUBECONFIG_PATH:-"${HOME}/.kube"}
-MAIN_KUBECONFIG=${MAIN_KUBECONFIG:-"${KUBECONFIG_PATH}/karmada.config"}
-HOST_CLUSTER_NAME=${HOST_CLUSTER_NAME:-"karmada-host"}
-KARMADA_APISERVER_CLUSTER_NAME=${KARMADA_APISERVER_CLUSTER_NAME:-"karmada-apiserver"}
-MEMBER_CLUSTER_KUBECONFIG=${MEMBER_CLUSTER_KUBECONFIG:-"${KUBECONFIG_PATH}/members.config"}
-MEMBER_CLUSTER_1_NAME=${MEMBER_CLUSTER_1_NAME:-"member1"}
-MEMBER_CLUSTER_2_NAME=${MEMBER_CLUSTER_2_NAME:-"member2"}
-PULL_MODE_CLUSTER_NAME=${PULL_MODE_CLUSTER_NAME:-"member3"}
-HOST_IPADDRESS=${1:-}
-
-CLUSTER_VERSION=${CLUSTER_VERSION:-"kindest/node:v1.19.1"}
-KIND_LOG_FILE=${KIND_LOG_FILE:-"/tmp/karmada"}
-
-#step0: prepare
-# Make sure go exists
-util::cmd_must_exist "go"
-# install kind and kubectl
-util::install_tools sigs.k8s.io/kind v0.11.1
-# get arch name and os name in bootstrap
-BS_ARCH=$(go env GOARCH)
-BS_OS=$(go env GOOS)
-# check arch and os name before installing
-util::install_environment_check "${BS_ARCH}" "${BS_OS}"
-# we choose v1.18.0, because in kubectl after versions 1.18 exist a bug which will give wrong output when using jsonpath.
-# bug details: https://github.com/kubernetes/kubernetes/pull/98057
-util::install_kubectl "v1.18.0" "${BS_ARCH}" "${BS_OS}"
-
-#step1. create host cluster and member clusters in parallel
-# host IP address: script parameter ahead of macOS IP
-if [[ -z "${HOST_IPADDRESS}" ]]; then
-  util::get_macos_ipaddress # Adapt for macOS
-  HOST_IPADDRESS=${MAC_NIC_IPADDRESS:-}
-fi
-#prepare for kindClusterConfig
-TEMP_PATH=$(mktemp -d)
-echo -e "Preparing kindClusterConfig in path: ${TEMP_PATH}"
-cp -rf "${REPO_ROOT}"/artifacts/kindClusterConfig/member1.yaml "${TEMP_PATH}"/member1.yaml
-cp -rf "${REPO_ROOT}"/artifacts/kindClusterConfig/member2.yaml "${TEMP_PATH}"/member2.yaml
-if [[ -n "${HOST_IPADDRESS}" ]]; then # If bind the port of clusters(karmada-host, member1 and member2) to the host IP
-  cp -rf "${REPO_ROOT}"/artifacts/kindClusterConfig/karmada-host.yaml "${TEMP_PATH}"/karmada-host.yaml
-  sed -i'' -e "s/{{host_ipaddress}}/${HOST_IPADDRESS}/g" "${TEMP_PATH}"/karmada-host.yaml
-  sed -i'' -e '/networking:/a\'$'\n'' apiServerAddress: '"${HOST_IPADDRESS}"''$'\n' "${TEMP_PATH}"/member1.yaml
-  sed -i'' -e '/networking:/a\'$'\n'' apiServerAddress: '"${HOST_IPADDRESS}"''$'\n' "${TEMP_PATH}"/member2.yaml
-  util::create_cluster "${HOST_CLUSTER_NAME}" "${MAIN_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/karmada-host.yaml
-else
-  util::create_cluster "${HOST_CLUSTER_NAME}" "${MAIN_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"
-fi
-util::create_cluster "${MEMBER_CLUSTER_1_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member1.yaml
-util::create_cluster "${MEMBER_CLUSTER_2_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member2.yaml
-util::create_cluster "${PULL_MODE_CLUSTER_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"
-
-#step2. make images and get karmadactl
-export VERSION="latest"
-export REGISTRY="swr.ap-southeast-1.myhuaweicloud.com/karmada"
-make images GOOS="linux" --directory="${REPO_ROOT}"
-
-GO111MODULE=on go install "github.com/karmada-io/karmada/cmd/karmadactl"
-GOPATH=$(go env GOPATH | awk -F ':' '{print $1}')
-KARMADACTL_BIN="${GOPATH}/bin/karmadactl"
-
-#step3. wait until the host cluster ready
-echo "Waiting for the host clusters to be ready..."
-util::check_clusters_ready "${MAIN_KUBECONFIG}" "${HOST_CLUSTER_NAME}"
-
-#step4. load components images to kind cluster
-export VERSION="latest"
-export REGISTRY="swr.ap-southeast-1.myhuaweicloud.com/karmada"
-kind load docker-image "${REGISTRY}/karmada-controller-manager:${VERSION}" --name="${HOST_CLUSTER_NAME}"
-kind load docker-image "${REGISTRY}/karmada-scheduler:${VERSION}" --name="${HOST_CLUSTER_NAME}"
-kind load docker-image "${REGISTRY}/karmada-webhook:${VERSION}" --name="${HOST_CLUSTER_NAME}"
-
-#step5. install karmada control plane components
-"${REPO_ROOT}"/hack/deploy-karmada.sh "${MAIN_KUBECONFIG}" "${HOST_CLUSTER_NAME}"
-
-#step6. wait until the member cluster ready and join member clusters
-util::check_clusters_ready "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_1_NAME}"
-util::check_clusters_ready "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_2_NAME}"
-
-# connecting networks between member1 and member2 clusters
-echo "connecting cluster networks..."
-util::add_routes "${MEMBER_CLUSTER_1_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_2_NAME}"
-util::add_routes "${MEMBER_CLUSTER_2_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_1_NAME}"
-echo "cluster networks connected"
-
-#join push mode member clusters
-export KUBECONFIG="${MAIN_KUBECONFIG}"
-kubectl config use-context "${KARMADA_APISERVER_CLUSTER_NAME}"
-${KARMADACTL_BIN} join member1 --cluster-kubeconfig="${MEMBER_CLUSTER_KUBECONFIG}"
-${KARMADACTL_BIN} join member2 --cluster-kubeconfig="${MEMBER_CLUSTER_KUBECONFIG}"
-
-# wait until the pull mode cluster ready
-util::check_clusters_ready "${MEMBER_CLUSTER_KUBECONFIG}" "${PULL_MODE_CLUSTER_NAME}"
-kind load docker-image "${REGISTRY}/karmada-agent:${VERSION}" --name="${PULL_MODE_CLUSTER_NAME}"
-
-#step7. deploy karmada agent in pull mode member clusters
-"${REPO_ROOT}"/hack/deploy-karmada-agent.sh "${MAIN_KUBECONFIG}" "${KARMADA_APISERVER_CLUSTER_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${PULL_MODE_CLUSTER_NAME}"
-
-function print_success() {
-  echo -e "\nLocal Karmada is running."
-  echo "To start using your karmada, run:"
-  echo -e " export KUBECONFIG=${MAIN_KUBECONFIG}"
-}
-
-print_success
@@ -1,90 +1,134 @@
 #!/usr/bin/env bash
 
 set -o errexit
 set -o nounset
 set -o pipefail
 
-# The path for KUBECONFIG.
+# This script starts a local karmada control plane based on current codebase and with a certain number of clusters joined.
+# Parameters: [HOST_IPADDRESS](optional) if you want to export clusters' API server port to specific IP address
+# This script depends on utils in: ${REPO_ROOT}/hack/util.sh
+# 1. used by developer to setup develop environment quickly.
+# 2. used by e2e testing to setup test environment automatically.
+
+REPO_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
+source "${REPO_ROOT}"/hack/util.sh
+
+# variable define
 KUBECONFIG_PATH=${KUBECONFIG_PATH:-"${HOME}/.kube"}
-# The host cluster name which used to install karmada control plane components.
+MAIN_KUBECONFIG=${MAIN_KUBECONFIG:-"${KUBECONFIG_PATH}/karmada.config"}
 HOST_CLUSTER_NAME=${HOST_CLUSTER_NAME:-"karmada-host"}
+KARMADA_APISERVER_CLUSTER_NAME=${KARMADA_APISERVER_CLUSTER_NAME:-"karmada-apiserver"}
+MEMBER_CLUSTER_KUBECONFIG=${MEMBER_CLUSTER_KUBECONFIG:-"${KUBECONFIG_PATH}/members.config"}
+MEMBER_CLUSTER_1_NAME=${MEMBER_CLUSTER_1_NAME:-"member1"}
+MEMBER_CLUSTER_2_NAME=${MEMBER_CLUSTER_2_NAME:-"member2"}
+PULL_MODE_CLUSTER_NAME=${PULL_MODE_CLUSTER_NAME:-"member3"}
+HOST_IPADDRESS=${1:-}
 
-# This script starts a local karmada control plane.
-# Usage: hack/local-up-karmada.sh
-# Example: hack/local-up-karmada.sh (start local karmada)
 
-SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
-# The KUBECONFIG path for the 'host cluster'.
-HOST_CLUSTER_KUBECONFIG="${KUBECONFIG_PATH}/karmada.config"
 CLUSTER_VERSION=${CLUSTER_VERSION:-"kindest/node:v1.19.1"}
 KIND_LOG_FILE=${KIND_LOG_FILE:-"/tmp/karmada"}
 
+#step0: prepare
+# proxy setting in China mainland
+
+if [[ -n ${CHINA_MAINLAND:-} ]]; then
+  export GOPROXY=https://goproxy.cn # set go proxy
+  # set mirror registry of k8s.gcr.io
+  sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-etcd.yaml
+  sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-apiserver.yaml
+  sed -i'' -e "s#k8s.gcr.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/kube-controller-manager.yaml
+fi
 
 # Make sure go exists
-source "${SCRIPT_ROOT}"/hack/util.sh
 util::cmd_must_exist "go"
+# install kind and kubectl
+util::install_tools sigs.k8s.io/kind v0.11.1
+# get arch name and os name in bootstrap
+BS_ARCH=$(go env GOARCH)
+BS_OS=$(go env GOOS)
+# check arch and os name before installing
+util::install_environment_check "${BS_ARCH}" "${BS_OS}"
+# we choose v1.18.0, because in kubectl after versions 1.18 exist a bug which will give wrong output when using jsonpath.
+# bug details: https://github.com/kubernetes/kubernetes/pull/98057
+util::install_kubectl "v1.18.0" "${BS_ARCH}" "${BS_OS}"
 
-# Check kind exists or install
+#step1. create host cluster and member clusters in parallel
-if [[ ! -x $(command -v kind) ]]; then
+# host IP address: script parameter ahead of macOS IP
-  util::install_kind "v0.11.1"
+if [[ -z "${HOST_IPADDRESS}" ]]; then
-fi
 
-# Make sure KUBECONFIG path exists.
-if [ ! -d "$KUBECONFIG_PATH" ]; then
-  mkdir -p "$KUBECONFIG_PATH"
-fi
 
 util::get_macos_ipaddress # Adapt for macOS
+  HOST_IPADDRESS=${MAC_NIC_IPADDRESS:-}
-# create a cluster to deploy karmada control plane components.
-if [[ -n "${MAC_NIC_IPADDRESS}" ]]; then # install on macOS
-  TEMP_PATH=$(mktemp -d)
-  cp -rf "${SCRIPT_ROOT}"/artifacts/kindClusterConfig/karmada-host.yaml "${TEMP_PATH}"/karmada-host.yaml
-  sed -i'' -e "s/{{host_ipaddress}}/${MAC_NIC_IPADDRESS}/g" "${TEMP_PATH}"/karmada-host.yaml
-  util::create_cluster "${HOST_CLUSTER_NAME}" "${HOST_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}/karmada-host.yaml"
-else
-  util::create_cluster "${HOST_CLUSTER_NAME}" "${HOST_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"
 fi
+#prepare for kindClusterConfig
+TEMP_PATH=$(mktemp -d)
+echo -e "Preparing kindClusterConfig in path: ${TEMP_PATH}"
+cp -rf "${REPO_ROOT}"/artifacts/kindClusterConfig/member1.yaml "${TEMP_PATH}"/member1.yaml
+cp -rf "${REPO_ROOT}"/artifacts/kindClusterConfig/member2.yaml "${TEMP_PATH}"/member2.yaml
+if [[ -n "${HOST_IPADDRESS}" ]]; then # If bind the port of clusters(karmada-host, member1 and member2) to the host IP
+  cp -rf "${REPO_ROOT}"/artifacts/kindClusterConfig/karmada-host.yaml "${TEMP_PATH}"/karmada-host.yaml
+  sed -i'' -e "s/{{host_ipaddress}}/${HOST_IPADDRESS}/g" "${TEMP_PATH}"/karmada-host.yaml
+  sed -i'' -e '/networking:/a\'$'\n'' apiServerAddress: '"${HOST_IPADDRESS}"''$'\n' "${TEMP_PATH}"/member1.yaml
+  sed -i'' -e '/networking:/a\'$'\n'' apiServerAddress: '"${HOST_IPADDRESS}"''$'\n' "${TEMP_PATH}"/member2.yaml
+  util::create_cluster "${HOST_CLUSTER_NAME}" "${MAIN_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/karmada-host.yaml
+else
+  util::create_cluster "${HOST_CLUSTER_NAME}" "${MAIN_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"
+fi
+util::create_cluster "${MEMBER_CLUSTER_1_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member1.yaml
+util::create_cluster "${MEMBER_CLUSTER_2_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}" "${TEMP_PATH}"/member2.yaml
+util::create_cluster "${PULL_MODE_CLUSTER_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${CLUSTER_VERSION}" "${KIND_LOG_FILE}"
 
-# make controller-manager image
+#step2. make images and get karmadactl
 export VERSION="latest"
 export REGISTRY="swr.ap-southeast-1.myhuaweicloud.com/karmada"
-make images GOOS="linux" --directory="${SCRIPT_ROOT}"
+make images GOOS="linux" --directory="${REPO_ROOT}"
 
-echo "Waiting for the host clusters to be ready... it may take a long time for pulling the kind image"
+GO111MODULE=on go install "github.com/karmada-io/karmada/cmd/karmadactl"
-util::check_clusters_ready "${HOST_CLUSTER_KUBECONFIG}" "${HOST_CLUSTER_NAME}"
+GOPATH=$(go env GOPATH | awk -F ':' '{print $1}')
+KARMADACTL_BIN="${GOPATH}/bin/karmadactl"
 
-# load controller-manager image
+#step3. wait until the host cluster ready
+echo "Waiting for the host clusters to be ready..."
+util::check_clusters_ready "${MAIN_KUBECONFIG}" "${HOST_CLUSTER_NAME}"
+
+#step4. load components images to kind cluster
+export VERSION="latest"
+export REGISTRY="swr.ap-southeast-1.myhuaweicloud.com/karmada"
 kind load docker-image "${REGISTRY}/karmada-controller-manager:${VERSION}" --name="${HOST_CLUSTER_NAME}"
 
-# load scheduler image
 kind load docker-image "${REGISTRY}/karmada-scheduler:${VERSION}" --name="${HOST_CLUSTER_NAME}"
 
-# load webhook image
 kind load docker-image "${REGISTRY}/karmada-webhook:${VERSION}" --name="${HOST_CLUSTER_NAME}"
 
-# deploy karmada control plane
+#step5. install karmada control plane components
-"${SCRIPT_ROOT}"/hack/deploy-karmada.sh "${HOST_CLUSTER_KUBECONFIG}" "${HOST_CLUSTER_NAME}"
+"${REPO_ROOT}"/hack/deploy-karmada.sh "${MAIN_KUBECONFIG}" "${HOST_CLUSTER_NAME}"
-kubectl config use-context karmada-apiserver --kubeconfig="${HOST_CLUSTER_KUBECONFIG}"
+
+#step6. wait until the member cluster ready and join member clusters
+util::check_clusters_ready "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_1_NAME}"
+util::check_clusters_ready "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_2_NAME}"
+
+# connecting networks between member1 and member2 clusters
+echo "connecting cluster networks..."
+util::add_routes "${MEMBER_CLUSTER_1_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_2_NAME}"
+util::add_routes "${MEMBER_CLUSTER_2_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${MEMBER_CLUSTER_1_NAME}"
+echo "cluster networks connected"
+
+#join push mode member clusters
+export KUBECONFIG="${MAIN_KUBECONFIG}"
+kubectl config use-context "${KARMADA_APISERVER_CLUSTER_NAME}"
+${KARMADACTL_BIN} join member1 --cluster-kubeconfig="${MEMBER_CLUSTER_KUBECONFIG}"
+${KARMADACTL_BIN} join member2 --cluster-kubeconfig="${MEMBER_CLUSTER_KUBECONFIG}"
+
+# wait until the pull mode cluster ready
+util::check_clusters_ready "${MEMBER_CLUSTER_KUBECONFIG}" "${PULL_MODE_CLUSTER_NAME}"
+kind load docker-image "${REGISTRY}/karmada-agent:${VERSION}" --name="${PULL_MODE_CLUSTER_NAME}"
+
+#step7. deploy karmada agent in pull mode member clusters
+"${REPO_ROOT}"/hack/deploy-karmada-agent.sh "${MAIN_KUBECONFIG}" "${KARMADA_APISERVER_CLUSTER_NAME}" "${MEMBER_CLUSTER_KUBECONFIG}" "${PULL_MODE_CLUSTER_NAME}"
+
 function print_success() {
-  echo
+  echo -e "---\n"
   echo "Local Karmada is running."
-  echo
+  echo -e "\nTo start using your karmada, run:"
-  echo "Kubeconfig for karmada in file: ${HOST_CLUSTER_KUBECONFIG}, so you can run:"
+  echo -e " export KUBECONFIG=${MAIN_KUBECONFIG}"
-  cat <<EOF
+  echo "Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster."
-export KUBECONFIG="${HOST_CLUSTER_KUBECONFIG}"
+  echo -e "\nTo manage your member clusters, run:"
-EOF
+  echo -e " export KUBECONFIG=${MEMBER_CLUSTER_KUBECONFIG}"
-  echo "Or use kubectl with --kubeconfig=${HOST_CLUSTER_KUBECONFIG}"
+  echo "Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster."
-  echo "Please use 'kubectl config use-context <Context_Name>' to switch cluster to operate, the following is context intro:"
-  cat <<EOF
-------------------------------------------------------
-| Context Name      |         Purpose                |
-|----------------------------------------------------|
-| karmada-host      | the cluster karmada install in |
-|----------------------------------------------------|
-| karmada-apiserver | karmada control plane          |
-------------------------------------------------------
-EOF
 }
 
 print_success
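For reference, typical invocations of the rewritten script, based on the optional `HOST_IPADDRESS` argument and the `CHINA_MAINLAND` variable it reads; the IP address below is only an example:

```shell
# Default local setup: host cluster, member1/member2 (push mode) and member3 (pull mode).
hack/local-up-karmada.sh

# Bind the clusters' API server ports to a specific host IP (optional first argument).
hack/local-up-karmada.sh 192.168.1.10

# Enable the mainland-China go proxy and k8s.gcr.io mirror substitutions.
CHINA_MAINLAND=1 hack/local-up-karmada.sh
```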
@@ -86,12 +86,12 @@ function util::install_kubectl {
   local ARCH=${2}
   local OS=${3:-linux}
   echo "Installing 'kubectl ${KUBECTL_VERSION}' for you, may require the root privileges"
-  curl --retry 5 -sSLo ./kubectl -w "%{http_code}" https://dl.k8s.io/release/"$KUBECTL_VERSION"/bin/"$OS"/"$ARCH"/kubectl | grep '200'
+  curl --retry 5 -sSLo ./kubectl -w "%{http_code}" https://dl.k8s.io/release/"$KUBECTL_VERSION"/bin/"$OS"/"$ARCH"/kubectl | grep '200' > /dev/null
   ret=$?
   if [ ${ret} -eq 0 ]; then
     chmod +x ./kubectl
     echo "$PATH" | grep '/usr/local/bin' || export PATH=$PATH:/usr/local/bin
-    sudo rm -rf "$(which kubectl)"
+    sudo rm -rf "$(which kubectl 2> /dev/null)"
     sudo mv ./kubectl /usr/local/bin/kubectl
   else
     echo "Failed to install kubectl, can not download the binary file at https://dl.k8s.io/release/$KUBECTL_VERSION/bin/$OS/$ARCH/kubectl"
@@ -107,12 +107,12 @@ function util::install_kind {
   os_name=$(go env GOOS)
   local arch_name
   arch_name=$(go env GOARCH)
-  curl --retry 5 -sSLo ./kind -w "%{http_code}" "https://kind.sigs.k8s.io/dl/${kind_version}/kind-${os_name:-linux}-${arch_name:-amd64}" | grep '200'
+  curl --retry 5 -sSLo ./kind -w "%{http_code}" "https://kind.sigs.k8s.io/dl/${kind_version}/kind-${os_name:-linux}-${arch_name:-amd64}" | grep '200' > /dev/null
   ret=$?
   if [ ${ret} -eq 0 ]; then
     chmod +x ./kind
     echo "$PATH" | grep '/usr/local/bin' || export PATH=$PATH:/usr/local/bin
-    sudo rm -f "$(which kind)"
+    sudo rm -f "$(which kind 2> /dev/null)"
     sudo mv ./kind /usr/local/bin/kind
   else
     echo "Failed to install kind, can not download the binary file at https://kind.sigs.k8s.io/dl/${kind_version}/kind-${os_name:-linux}-${arch_name:-amd64}"
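Both helpers rely on the same pattern: the HTTP status code printed by `curl -w "%{http_code}"` is matched by `grep`, whose exit status signals success, and the change above only silences grep's output. A standalone sketch of the pattern, with a hypothetical URL:

```shell
# Download a file and treat an HTTP 200 response as success, without echoing the status code.
curl --retry 5 -sSLo ./artifact -w "%{http_code}" "https://example.com/artifact" | grep '200' > /dev/null
ret=$?
if [ ${ret} -eq 0 ]; then
  echo "download succeeded"
else
  echo "download failed"
fi
```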
@@ -5,7 +5,7 @@ metadata:
   labels:
     app: nginx
 spec:
-  replicas: 1
+  replicas: 2
   selector:
     matchLabels:
       app: nginx
@@ -11,3 +11,17 @@ spec:
     clusterAffinity:
       clusterNames:
         - member1
+        - member2
+    replicaScheduling:
+      replicaDivisionPreference: Weighted
+      replicaSchedulingType: Divided
+      weightPreference:
+        staticWeightList:
+          - targetCluster:
+              clusterNames:
+                - member1
+            weight: 1
+          - targetCluster:
+              clusterNames:
+                - member2
+            weight: 1
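Given the weighted `Divided` scheduling added above and the sample deployment now requesting 2 replicas, each member cluster should end up with one replica. An illustrative way to verify, assuming the default kubeconfig files from this commit:

```shell
# From the karmada control plane, the aggregated status should report 2/2 ready.
export KUBECONFIG="$HOME/.kube/karmada.config"
kubectl config use-context karmada-apiserver
kubectl get deployment nginx

# Each member cluster should be running a single nginx replica.
export KUBECONFIG="$HOME/.kube/members.config"
kubectl --context member1 get deployment nginx
kubectl --context member2 get deployment nginx
```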