# IBM Cloud Private
This example demonstrates how to set up network connectivity between two IBM Cloud Private clusters and then compose them into a multicluster mesh using a single control plane with a VPN connectivity topology.
## Create the IBM Cloud Private Clusters
1.  Install two IBM Cloud Private clusters.

    Make sure the pod and service CIDR ranges do not overlap between the two clusters; the cross-cluster routing configured later requires every pod subnet to be unique. The ranges are set with `network_cidr` and `service_cluster_ip_range` in `cluster/config.yaml`:

    ```yaml
    ## Network in IPv4 CIDR format
    network_cidr: 10.1.0.0/16

    ## Kubernetes Settings
    service_cluster_ip_range: 10.0.0.1/24
    ```

2.  After the IBM Cloud Private cluster installation finishes, validate `kubectl` access to each cluster. In this example, consider two clusters, `cluster-1` and `cluster-2`.

    Check the cluster status:

    ```bash
    $ kubectl get nodes
    $ kubectl get pods --all-namespaces
    ```

    Repeat the two steps above to validate `cluster-2`.
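    For example, assuming the `kubectl` contexts for the two clusters are named `cluster-1` and `cluster-2` (hypothetical names; check `kubectl config get-contexts` for the actual names in your kubeconfig), you can switch between the clusters like this:

    ```bash
    # Hypothetical context names; adjust to match your kubeconfig.
    $ kubectl config use-context cluster-1
    $ kubectl get nodes

    $ kubectl config use-context cluster-2
    $ kubectl get nodes
    ```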
## Configure Pod Communication Across IBM Cloud Private Clusters
IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client on each node distributes IP route information for its pod subnets to all other nodes.
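If `calicoctl` is available on a node, you can inspect this BGP mesh directly; this is an optional check, not part of the original steps:

```bash
# On any cluster node; in node-to-node mesh mode every other node
# should appear as an established BGP peer.
$ sudo calicoctl node status
```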
To ensure pods can communicate across the two clusters, you need to configure IP routes on all nodes in both clusters. This takes two steps:
1.  Add IP routes from `cluster-1` to `cluster-2`.

2.  Add IP routes from `cluster-2` to `cluster-1`.
The following walks through adding the IP routes from `cluster-1` to `cluster-2` and validating pod-to-pod communication across the clusters. In Node-to-Node Mesh mode, each node holds IP routes to the pod subnets of its peer nodes in the cluster. In this example, both clusters have three nodes.
The hosts file for `cluster-1`:

```plain
9.111.255.21 gyliu-icp-1
9.111.255.129 gyliu-icp-2
9.111.255.29 gyliu-icp-3
```

The hosts file for `cluster-2`:

```plain
9.111.255.152 gyliu-ubuntu-3
9.111.255.155 gyliu-ubuntu-2
9.111.255.77 gyliu-ubuntu-1
```
Obtain the routing information on each of the three nodes in `cluster-1` with the command `ip route | grep bird`:

```bash
$ ip route | grep bird
10.1.43.0/26 via 9.111.255.29 dev tunl0 proto bird onlink
10.1.158.192/26 via 9.111.255.129 dev tunl0 proto bird onlink
blackhole 10.1.198.128/26 proto bird
```

```bash
$ ip route | grep bird
10.1.43.0/26 via 9.111.255.29 dev tunl0 proto bird onlink
blackhole 10.1.158.192/26 proto bird
10.1.198.128/26 via 9.111.255.21 dev tunl0 proto bird onlink
```

```bash
$ ip route | grep bird
blackhole 10.1.43.0/26 proto bird
10.1.158.192/26 via 9.111.255.129 dev tunl0 proto bird onlink
10.1.198.128/26 via 9.111.255.21 dev tunl0 proto bird onlink
```

There are three pod-subnet routes in total across the three nodes in `cluster-1`:

```plain
10.1.158.192/26 via 9.111.255.129 dev tunl0 proto bird onlink
10.1.198.128/26 via 9.111.255.21 dev tunl0 proto bird onlink
10.1.43.0/26 via 9.111.255.29 dev tunl0 proto bird onlink
```

Add those three IP routes on all nodes in `cluster-2` with the following commands:

```bash
$ ip route add 10.1.158.192/26 via 9.111.255.129
$ ip route add 10.1.198.128/26 via 9.111.255.21
$ ip route add 10.1.43.0/26 via 9.111.255.29
```
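Rather than running these commands by hand on every node, a small loop can distribute them. This is an optional convenience sketch, assuming passwordless SSH as `root` to the `cluster-2` nodes listed in the hosts file above. Note that routes added with `ip route add` do not survive a reboot, so you may also want to persist them in your distribution's network configuration.

```bash
# Sketch: push the cluster-1 pod-subnet routes to every cluster-2 node.
# Assumes passwordless SSH as root; node IPs are from the hosts file above.
for node in 9.111.255.152 9.111.255.155 9.111.255.77; do
  ssh root@${node} \
    "ip route add 10.1.158.192/26 via 9.111.255.129; \
     ip route add 10.1.198.128/26 via 9.111.255.21; \
     ip route add 10.1.43.0/26 via 9.111.255.29"
done
```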
You can use the same steps to add all IP routes from `cluster-2` to `cluster-1`. After the configuration is complete, all pods in the two clusters can communicate with each other.

Verify cross-cluster pod communication by pinging a pod IP in `cluster-2` from `cluster-1`. The following pod in `cluster-2` has the pod IP `20.1.47.150`:

```bash
$ kubectl get pods -owide -n kube-system | grep platform-ui
platform-ui-lqccp   1/1   Running   0   3d   20.1.47.150   9.111.255.77
```

From a node in `cluster-1`, ping the pod IP; the ping should succeed:

```bash
$ ping 20.1.47.150
PING 20.1.47.150 (20.1.47.150) 56(84) bytes of data.
64 bytes from 20.1.47.150: icmp_seq=1 ttl=63 time=0.759 ms
```
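The ping above runs from a node. To confirm connectivity from inside a pod as well, you can exec into any pod on `cluster-1` whose image includes the `ping` utility (the pod name below is a placeholder):

```bash
# Replace <pod-name> with a pod from `kubectl get pods -n kube-system`
# on cluster-1 whose image ships the ping utility.
$ kubectl -n kube-system exec <pod-name> -- ping -c 4 20.1.47.150
```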
The steps in this section enable pod communication across clusters by configuring a full IP routing mesh across all nodes in the two IBM Cloud Private clusters.
## Install Istio for multicluster
Follow the VPN-based multicluster installation steps to install and configure the local Istio control plane and the Istio remote on `cluster-1` and `cluster-2`. This example uses `cluster-1` as the local Istio control plane and `cluster-2` as the Istio remote.
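Those steps require the pod IP addresses of the control plane components on `cluster-1`, since the remote cluster's sidecars reach them directly over the routed pod network configured above. A sketch of gathering them, assuming the Istio 1.1-era component labels used by that installation guide:

```bash
# Run against cluster-1; the label selectors are assumptions based on
# the Istio release the multicluster guide targets.
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot \
    -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy \
    -o jsonpath='{.items[0].status.podIP}')
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry \
    -o jsonpath='{.items[0].status.podIP}')
```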
## Deploy the Bookinfo example across clusters
The following example enables automatic sidecar injection.
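For reference, automatic injection is typically enabled by labeling the namespace that will host the workloads on each cluster; here assuming the `default` namespace:

```bash
# Run on both cluster-1 and cluster-2 before deploying the workloads.
$ kubectl label namespace default istio-injection=enabled
```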
1.  Install `bookinfo` on the first cluster, `cluster-1`. Remove the `reviews-v3` deployment, which will be deployed on cluster `cluster-2` in the following step:

    ```bash
    $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
    $ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
    $ kubectl delete deployment reviews-v3
    ```

2.  Deploy the `reviews-v3` service along with any corresponding services on the remote `cluster-2` cluster:

    ```bash
    $ cat <<EOF | kubectl apply -f -
    ---
    ##################################################################################################
    # Ratings service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: ratings
      labels:
        app: ratings
    spec:
      ports:
      - port: 9080
        name: http
    ---
    ##################################################################################################
    # Reviews service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: reviews
      labels:
        app: reviews
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: reviews
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: reviews-v3
      labels:
        app: reviews
        version: v3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: reviews
            version: v3
        spec:
          containers:
          - name: reviews
            image: istio/examples-bookinfo-reviews-v3:1.10.1
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
    EOF
    ```
    Note: The `ratings` service definition is added to the remote cluster because `reviews-v3` is a client of `ratings`, and creating the service object creates a DNS entry. The Istio sidecar in the `reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup is resolved to a service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as in a federated Kubernetes environment.
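    Before continuing, it can be worth confirming that `reviews-v3` is running on `cluster-2` with its sidecar injected; with automatic injection enabled, the pod should report two ready containers:

    ```bash
    # Run against cluster-2; READY shows 2/2 once the sidecar is in place.
    $ kubectl get pods -l app=reviews
    ```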
3.  Determine the ingress IP and ports, and set the `INGRESS_HOST` and `INGRESS_PORT` variables for accessing the `istio-ingressgateway` gateway.
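    On IBM Cloud Private the ingress gateway is commonly exposed through a node port rather than an external load balancer; a sketch under that assumption:

    ```bash
    # Assumes a NodePort-style istio-ingressgateway service; for a
    # LoadBalancer service, read .status.loadBalancer.ingress[0].ip instead.
    $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
        -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
    $ export INGRESS_HOST=$(kubectl get nodes \
        -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
    ```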
4.  Access `http://<INGRESS_HOST>:<INGRESS_PORT>/productpage` repeatedly. Each version of `reviews` should be load balanced equally, including `reviews-v3` in the remote cluster (red stars). It may take several dozen accesses to demonstrate the equal load balancing between the `reviews` versions.
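    One way to observe the distribution without a browser is to check each response for the red-star markup; this assumes `reviews-v3` renders its ratings with `color="red"` as in the standard Bookinfo sample:

    ```bash
    # Roughly a third of the responses should report red stars (reviews-v3).
    for i in $(seq 1 30); do
      if curl -s "http://${INGRESS_HOST}:${INGRESS_PORT}/productpage" | grep -q 'color="red"'; then
        echo "reviews-v3 (red stars)"
      else
        echo "reviews-v1 or reviews-v2"
      fi
    done
    ```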