Add docs about working with submariner

Signed-off-by: changzhen <changzhen5@huawei.com>
changzhen 2021-09-16 20:30:06 +08:00
parent a0959d6214
commit 7359bf44cf
3 changed files with 87 additions and 2 deletions


@@ -13,7 +13,7 @@ We can install Karmada by referring to [quick-start](https://github.com/karmada-
Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.
- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of the `member1` and `member2` will be connected.
- If you use `Kind` tool to create member clusters, you can refer to the [hack/util.sh](https://github.com/karmada-io/karmada/blob/af5f544cbe68d6b37a730b42dcc1ead0fac16915/hack/util.sh#L452-L470) script to enable the container networks.
- You can use `Submariner` or other related open source projects to connect the networks between member clusters.
### The ServiceExport and ServiceImport CRDs have been installed


@@ -0,0 +1,85 @@
# Use Submariner to connect the network between Karmada member clusters
This document uses an example to demonstrate how to use `Submariner` to connect the network between member clusters.
[Submariner](https://github.com/submariner-io/submariner) flattens the networks between the connected clusters, and enables IP reachability between Pods and Services.
## Install Karmada
### Install karmada control plane
Following the steps in [Install karmada control plane](https://github.com/karmada-io/karmada#install-karmada-control-plane) in the Quick Start, you can get a running Karmada control plane.
### Join member cluster
In the following steps, we are going to create a member cluster and then join it to the Karmada control plane.
1. Create member cluster
We are going to create a cluster named `cluster1` with its KUBECONFIG file at `$HOME/.kube/cluster1.config`. Run the following command:
```
# hack/create-cluster.sh cluster1 $HOME/.kube/cluster1.config
```
This will create a cluster using kind.
2. Join the member cluster to the Karmada control plane
Export `KUBECONFIG` and switch to the `karmada-apiserver` context:
```
# export KUBECONFIG=$HOME/.kube/karmada.config
# kubectl config use-context karmada-apiserver
```
Then, install the `karmadactl` command and join the member cluster:
```
# go get github.com/karmada-io/karmada/cmd/karmadactl
# karmadactl join cluster1 --cluster-kubeconfig=$HOME/.kube/cluster1.config
```
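Note: with Go 1.17 or later, installing binaries via `go get` is deprecated; `go install github.com/karmada-io/karmada/cmd/karmadactl@latest` can be used instead.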
In addition to the original member clusters, ensure that at least two member clusters have joined Karmada.
In this example, two member clusters have joined Karmada:
```
# kubectl get clusters
NAME       VERSION   MODE   READY   AGE
cluster1   v1.21.1   Push   True    16s
cluster2   v1.21.1   Push   True    5s
...
```
## Deploy Submariner
We are going to deploy the `Submariner` components on the `host cluster` and `member clusters` using the `subctl` CLI, as this is the recommended deployment method according to the [Submariner official documentation](https://github.com/submariner-io/submariner/tree/b4625514061c1d85c10432a78ca0ad46e679367a#installation).
`Submariner` uses a central Broker component to facilitate the exchange of metadata between the Gateway Engines deployed in participating clusters. The Broker must be deployed on a single Kubernetes cluster whose API server is reachable by all the clusters connected by Submariner, so we deploy it on the `karmada-host` cluster.
### Install subctl
Please refer to the [SUBCTL Installation](https://submariner.io/operations/deployment/subctl/).
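For convenience, the Submariner docs also provide an install script; a typical invocation (check the linked page for the current method) looks like:
```
curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
```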
### Use karmada-host as Broker
```
subctl deploy-broker --kubeconfig /root/.kube/karmada.config --kubecontext karmada-host
```
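`subctl deploy-broker` writes a `broker-info.subm` file to the current directory; this file is passed to `subctl join` when connecting the member clusters to the Broker in the next step.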
### Join cluster1 and cluster2 to the Broker
```
subctl join --kubeconfig /root/.kube/cluster1.config broker-info.subm --natt=false
```
```
subctl join --kubeconfig /root/.kube/cluster2.config broker-info.subm --natt=false
```
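Once both clusters have joined, you can check the status of the gateway connections. A quick check, assuming a recent `subctl` version that supports the `show connections` subcommand:
```
subctl show connections --kubeconfig /root/.kube/cluster1.config
```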
## Connectivity test
Please refer to the [Multi-cluster Service Discovery](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md).
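Before following that guide, a quick sanity check of the flattened Pod network can be done directly. The sketch below is illustrative, assuming the `nginx` image is pullable and using a `<pod-ip>` placeholder you must fill in:
```
# Deploy a test Pod in cluster1 and note its Pod IP
kubectl --kubeconfig=$HOME/.kube/cluster1.config create deployment nginx --image=nginx
kubectl --kubeconfig=$HOME/.kube/cluster1.config get pods -l app=nginx -o wide
# From cluster2, curl the Pod IP recorded above; an HTML response means the Pod networks are connected
kubectl --kubeconfig=$HOME/.kube/cluster2.config run curl-test --rm -i --image=curlimages/curl --restart=Never -- curl -s <pod-ip>
```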


@@ -95,7 +95,7 @@ kubectl config rename-context "kind-${CLUSTER_NAME}" "${CLUSTER_NAME}" --kubecon
# Kind cluster uses `127.0.0.1` as kube-apiserver endpoint by default, thus kind clusters can't reach each other.
# So we need to update endpoint with container IP.
container_ip=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "${CLUSTER_NAME}-control-plane")
-kubectl config set-cluster "kind-\"${CLUSTER_NAME}\"" --server="https://${container_ip}:6443" --kubeconfig="${KUBECONFIG}"
+kubectl config set-cluster "kind-${CLUSTER_NAME}" --server="https://${container_ip}:6443" --kubeconfig="${KUBECONFIG}"
deploy_weave_cni "${KUBECONFIG}" "${POD_CIDR}"