---
title: Google Kubernetes Engine
description: Set up a multicluster mesh over two GKE clusters.
weight: 65
---
This example shows how to configure a multicluster mesh with a single control plane topology over two Google Kubernetes Engine (GKE) clusters.
## Before you begin

In addition to the prerequisites for installing Istio, the following setup is required for this example:

- This sample requires a valid Google Cloud Platform project with billing enabled. If you are not an
  existing GCP user, you may be able to enroll for a $300 US Free Trial credit.

    - Create a Google Cloud Project to host your GKE clusters.

- Install and initialize the Google Cloud SDK.
## Create the GKE Clusters
- Set the default project for `gcloud` to perform actions on:

    {{< text bash >}}
    $ gcloud config set project myProject
    $ proj=$(gcloud config list --format='value(core.project)')
    {{< /text >}}
- Create two GKE clusters for use with the multicluster feature. Note: `--enable-ip-alias` is required
  to allow inter-cluster direct pod-to-pod communication. The `zone` value must be one of the GCP zones.

    {{< text bash >}}
    $ zone="us-east1-b"
    $ cluster="cluster-1"
    $ gcloud container clusters create $cluster --zone $zone --username "admin" \
      --machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
      --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
      --num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
    $ cluster="cluster-2"
    $ gcloud container clusters create $cluster --zone $zone --username "admin" \
      --machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
      --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
      --num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
    {{< /text >}}
- Wait for the clusters to transition to the `RUNNING` state by polling their statuses via the following command:

    {{< text bash >}}
    $ gcloud container clusters list
    {{< /text >}}
- Get the clusters' credentials (command details):

    {{< text bash >}}
    $ gcloud container clusters get-credentials cluster-1 --zone $zone
    $ gcloud container clusters get-credentials cluster-2 --zone $zone
    {{< /text >}}
- Validate `kubectl` access to each cluster and create a `cluster-admin` cluster role binding tied to
  the Kubernetes credentials associated with your GCP user (an optional check of the available contexts
  follows this list):

    - For cluster-1:

        {{< text bash >}}
        $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
        $ kubectl get pods --all-namespaces
        $ kubectl create clusterrolebinding gke-cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
        {{< /text >}}

    - For cluster-2:

        {{< text bash >}}
        $ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
        $ kubectl get pods --all-namespaces
        $ kubectl create clusterrolebinding gke-cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
        {{< /text >}}
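Optionally, before moving on, you can confirm that both new contexts were registered in your local
kubeconfig; the names used above (`gke_${proj}_${zone}_cluster-1` and `gke_${proj}_${zone}_cluster-2`)
should both appear in the output:

{{< text bash >}}
$ kubectl config get-contexts
{{< /text >}}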
## Create a Google Cloud firewall rule

To allow the pods on each cluster to communicate directly, create the following rule:

{{< text bash >}}
$ function join_by { local IFS="$1"; shift; echo "$*"; }
$ ALL_CLUSTER_CIDRS=$(gcloud container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq)
$ ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
$ ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --format='value(tags.items.[0])' | sort | uniq)
$ ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
$ gcloud compute firewall-rules create istio-multicluster-test-pods \
  --allow=tcp,udp,icmp,esp,ah,sctp \
  --direction=INGRESS \
  --priority=900 \
  --source-ranges="${ALL_CLUSTER_CIDRS}" \
  --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
{{< /text >}}
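Optionally, you can inspect the rule that was just created to confirm that the source ranges and target
tags were populated as expected:

{{< text bash >}}
$ gcloud compute firewall-rules describe istio-multicluster-test-pods
{{< /text >}}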
## Install the Istio control plane

The following generates an Istio installation manifest, installs it, and enables automatic sidecar injection in
the `default` namespace:

{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ cat install/kubernetes/helm/istio-init/files/crd-* > $HOME/istio_master.yaml
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system >> $HOME/istio_master.yaml
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio_master.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}

Wait for the pods to come up by polling their statuses via the following command:

{{< text bash >}}
$ kubectl get pods -n istio-system
{{< /text >}}
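If you prefer a single blocking command to manual polling, one possible approach (a sketch, assuming your
`kubectl` version supports `kubectl wait`) is to wait for the control plane deployments to report the
`Available` condition:

{{< text bash >}}
$ kubectl -n istio-system wait --for=condition=available --timeout=300s deployment --all
{{< /text >}}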
## Generate remote cluster manifest

- Get the IPs of the control plane pods (a quick check of these values follows this list):

    {{< text bash >}}
    $ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
    $ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer -o jsonpath='{.items[0].status.podIP}')
    $ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
    {{< /text >}}

- Generate the remote cluster manifest:

    {{< text bash >}}
    $ helm template install/kubernetes/helm/istio \
      --namespace istio-system --name istio-remote \
      --values install/kubernetes/helm/istio/values-istio-remote.yaml \
      --set global.remotePilotAddress=${PILOT_POD_IP} \
      --set global.remotePolicyAddress=${POLICY_POD_IP} \
      --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} > $HOME/istio-remote.yaml
    {{< /text >}}
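Before installing the generated manifest on the remote cluster, you can optionally confirm that all three
pod IP variables were populated; an empty value would leave the corresponding remote component without a
control plane address to connect to:

{{< text bash >}}
$ echo "pilot=${PILOT_POD_IP} policy=${POLICY_POD_IP} telemetry=${TELEMETRY_POD_IP}"
{{< /text >}}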
## Install remote cluster manifest

The following installs the minimal Istio components and enables automatic sidecar injection on
the `default` namespace in the remote cluster:

{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio-remote.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}
## Create remote cluster's kubeconfig for Istio Pilot

The `istio-remote` Helm chart creates a service account with minimal access for use by Istio Pilot
discovery.

- Prepare environment variables for building the `kubeconfig` file for the service account `istio-multi`:

    {{< text bash >}}
    $ export WORK_DIR=$(pwd)
    $ CLUSTER_NAME=$(kubectl config view --minify=true -o "jsonpath={.clusters[].name}")
    $ CLUSTER_NAME="${CLUSTER_NAME##*_}"
    $ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
    $ SERVER=$(kubectl config view --minify=true -o "jsonpath={.clusters[].cluster.server}")
    $ NAMESPACE=istio-system
    $ SERVICE_ACCOUNT=istio-multi
    $ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
    $ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o "jsonpath={.data['ca\.crt']}")
    $ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o "jsonpath={.data['token']}" | base64 --decode)
    {{< /text >}}

    {{< tip >}}
    An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.
    {{< /tip >}}

- Create a `kubeconfig` file in the working directory for the service account `istio-multi`:

    {{< text bash >}}
    $ cat <<EOF > ${KUBECFG_FILE}
    apiVersion: v1
    clusters:
       - cluster:
           certificate-authority-data: ${CA_DATA}
           server: ${SERVER}
         name: ${CLUSTER_NAME}
    contexts:
       - context:
           cluster: ${CLUSTER_NAME}
           user: ${CLUSTER_NAME}
         name: ${CLUSTER_NAME}
    current-context: ${CLUSTER_NAME}
    kind: Config
    preferences: {}
    users:
       - name: ${CLUSTER_NAME}
         user:
           token: ${TOKEN}
    EOF
    {{< /text >}}
At this point, the remote clusters' `kubeconfig` files have been created in the `${WORK_DIR}` directory.
The filename for a cluster is the same as the original `kubeconfig` cluster name.
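As an optional sanity check, you can have `kubectl` parse the generated file to confirm it is a
well-formed kubeconfig:

{{< text bash >}}
$ kubectl config view --kubeconfig=${KUBECFG_FILE}
{{< /text >}}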
## Configure Istio control plane to discover the remote cluster

Create a secret and label it properly for each remote cluster:

{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
$ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< /text >}}
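You can optionally confirm that the secret is in place and carries the label Istio uses for remote
cluster discovery:

{{< text bash >}}
$ kubectl get secrets -n ${NAMESPACE} -l istio/multiCluster=true
{{< /text >}}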
## Deploy Bookinfo Example Across Clusters
- Install Bookinfo on the first cluster. Remove the `reviews-v3` deployment; it will be deployed on the remote cluster instead:

    {{< text bash >}}
    $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
    $ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    $ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    $ kubectl delete deployment reviews-v3
    {{< /text >}}
- Create the `reviews-v3.yaml` manifest for deployment on the remote cluster:

    {{< text yaml plain "reviews-v3.yaml" >}}
    ##################################################################################################
    # Ratings service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: ratings
      labels:
        app: ratings
    spec:
      ports:
      - port: 9080
        name: http
    ---
    ##################################################################################################
    # Reviews service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: reviews
      labels:
        app: reviews
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: reviews
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: reviews-v3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: reviews
            version: v3
        spec:
          containers:
          - name: reviews
            image: istio/examples-bookinfo-reviews-v3:1.5.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
    {{< /text >}}

    Note: The `ratings` service definition is added to the remote cluster because `reviews-v3` is a
    client of `ratings`, and creating the service object creates a DNS entry. The Istio sidecar in the
    `reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup is resolved to a
    service address. This would not be necessary if a multicluster DNS solution were additionally set up,
    e.g. as in a federated Kubernetes environment.
- Install the `reviews-v3` deployment on the remote cluster:

    {{< text bash >}}
    $ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
    $ kubectl apply -f $HOME/reviews-v3.yaml
    {{< /text >}}
- Get the `istio-ingressgateway` service's external IP to access the `bookinfo` page and validate that
  Istio is including the remote cluster's `reviews-v3` instance in the load balancing of reviews versions:

    {{< text bash >}}
    $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
    $ kubectl get svc istio-ingressgateway -n istio-system
    {{< /text >}}

    Access `http://<GATEWAY_IP>/productpage` repeatedly and each version of `reviews` should be equally
    load balanced, including `reviews-v3` in the remote cluster (red stars). It may take several dozen
    accesses to demonstrate the equal load balancing between `reviews` versions. A command-line way to
    exercise the page is sketched after this list.
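If you prefer to drive traffic from the command line rather than a browser, a loop along the following
lines can exercise the page (a sketch that assumes the gateway is exposed over plain HTTP on port 80 and
that `curl` is installed); the version distribution itself is still easiest to confirm visually in the
browser:

{{< text bash >}}
$ GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ for i in $(seq 1 100); do curl -s -o /dev/null -w "%{http_code}\n" "http://${GATEWAY_IP}/productpage"; done | sort | uniq -c
{{< /text >}}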
## Uninstalling

The following should be done in addition to the uninstall of Istio as described in the VPN-based
multicluster uninstall section:

- Delete the Google Cloud firewall rule:

    {{< text bash >}}
    $ gcloud compute firewall-rules delete istio-multicluster-test-pods --quiet
    {{< /text >}}

- Delete the `cluster-admin` cluster role binding from each cluster no longer being used for Istio:

    {{< text bash >}}
    $ kubectl delete clusterrolebinding gke-cluster-admin-binding
    {{< /text >}}

- Delete any GKE clusters no longer in use. The following is an example delete command for the remote
  cluster, `cluster-2`:

    {{< text bash >}}
    $ gcloud container clusters delete cluster-2 --zone $zone
    {{< /text >}}