---
title: Enabling Multicluster on Google Kubernetes Engine
description: Example multicluster GKE install of Istio.
weight: 65
---

This example demonstrates how to use Istio's multicluster feature to join two Google Kubernetes Engine clusters together, using the Kubernetes multicluster installation instructions.

## Before you begin

In addition to the prerequisites for installing Istio, the following setup is required for this example:

- This sample requires a valid Google Cloud Platform project with billing enabled. If you are not an existing GCP user, you may be able to enroll for a $300 US Free Trial credit.

- Create a Google Cloud Project to host your GKE clusters.

- Install and initialize the Google Cloud SDK.

## Create the GKE Clusters

1.  Set the default project for `gcloud` to perform actions on:

    {{< text bash >}}
    $ gcloud config set project myProject
    $ proj=$(gcloud config list --format='value(core.project)')
    {{< /text >}}

1.  Create 2 GKE clusters for use with the multicluster feature. Note: `--enable-ip-alias` is required to allow inter-cluster direct pod-to-pod communication. The `zone` value must be one of the GCP zones.

    {{< text bash >}}
    $ zone="us-east1-b"
    $ cluster="cluster-1"
    $ gcloud container clusters create $cluster --zone $zone --username "admin" \
    --cluster-version "1.9.6-gke.1" --machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
    --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
    "https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
    "https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
    "https://www.googleapis.com/auth/trace.append" \
    --num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
    $ cluster="cluster-2"
    $ gcloud container clusters create $cluster --zone $zone --username "admin" \
    --cluster-version "1.9.6-gke.1" --machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
    --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
    "https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
    "https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
    "https://www.googleapis.com/auth/trace.append" \
    --num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
    {{< /text >}}

1.  Wait for the clusters to transition to the `RUNNING` state by polling their statuses via the following command:

    {{< text bash >}}
    $ gcloud container clusters list
    {{< /text >}}

1.  Get the clusters' credentials (command details):

    {{< text bash >}}
    $ gcloud container clusters get-credentials cluster-1 --zone $zone
    $ gcloud container clusters get-credentials cluster-2 --zone $zone
    {{< /text >}}

1.  Validate `kubectl` access to each cluster:

    1.  Check cluster-1:

        {{< text bash >}}
        $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
        $ kubectl get pods --all-namespaces
        {{< /text >}}

    1.  Check cluster-2:

        {{< text bash >}}
        $ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
        $ kubectl get pods --all-namespaces
        {{< /text >}}

1.  Create a `cluster-admin` cluster role binding tied to the Kubernetes credentials associated with your GCP user. Note: replace `mygcp@gmail.com` with the email tied to your Google Cloud account:

    {{< text bash >}}
    $ KUBE_USER="mygcp@gmail.com"
    $ kubectl create clusterrolebinding gke-cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user="${KUBE_USER}"
    {{< /text >}}

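If you want to confirm that the binding took effect, you can describe it; `gke-cluster-admin-binding` below is the name created in the last step:

{{< text bash >}}
$ # The output should list your GCP user email under "subjects".
$ kubectl get clusterrolebinding gke-cluster-admin-binding -o yaml
{{< /text >}}
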
## Create a Google Cloud firewall rule

To allow the pods on each cluster to directly communicate, create the following rule:
{{< text bash >}}
$ function join_by { local IFS="$1"; shift; echo "$*"; }
$ ALL_CLUSTER_CIDRS=$(gcloud container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq)
$ ALL_CLUSTER_CIDRS=$(join_by , ${ALL_CLUSTER_CIDRS})
$ ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --format='value(tags.items.[0])' | sort | uniq)
$ ALL_CLUSTER_NETTAGS=$(join_by , ${ALL_CLUSTER_NETTAGS})
$ gcloud compute firewall-rules create istio-multicluster-test-pods \
  --allow=tcp,udp,icmp,esp,ah,sctp \
  --direction=INGRESS \
  --priority=900 \
  --source-ranges="${ALL_CLUSTER_CIDRS}" \
  --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
{{< /text >}}

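To double-check the rule before continuing, you can describe it; this is an optional check and the output layout may vary with your `gcloud` version:

{{< text bash >}}
$ # sourceRanges should contain both cluster CIDRs, and targetTags both node tags.
$ gcloud compute firewall-rules describe istio-multicluster-test-pods
{{< /text >}}
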
## Install the Istio control plane

The following generates an Istio installation manifest, installs it, and enables automatic sidecar injection in the `default` namespace:

{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio_master.yaml
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio_master.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}

Wait for pods to come up by polling their statuses via the following command:
{{< text bash >}}
$ kubectl get pods -n istio-system
{{< /text >}}

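If your `kubectl` client is v1.11 or newer, a `kubectl wait` command can stand in for manual polling; this is a minimal sketch that waits on the control plane deployments rather than individual pods:

{{< text bash >}}
$ # Blocks until every deployment in istio-system reports Available, or gives up after 5 minutes.
$ kubectl wait --for=condition=Available deployment --all -n istio-system --timeout=300s
{{< /text >}}
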
## Generate remote cluster manifest

1.  Get the IPs of the control plane pods:

    {{< text bash >}}
    $ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
    $ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer -o jsonpath='{.items[0].status.podIP}')
    $ export STATSD_POD_IP=$(kubectl -n istio-system get pod -l istio=statsd-prom-bridge -o jsonpath='{.items[0].status.podIP}')
    $ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
    {{< /text >}}

1.  Generate the remote cluster manifest:

    {{< text bash >}}
    $ helm template install/kubernetes/helm/istio-remote --namespace istio-system \
    --name istio-remote \
    --set global.remotePilotAddress=${PILOT_POD_IP} \
    --set global.remotePolicyAddress=${POLICY_POD_IP} \
    --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
    --set global.proxy.envoyStatsd.enabled=true \
    --set global.proxy.envoyStatsd.host=${STATSD_POD_IP} > $HOME/istio-remote.yaml
    {{< /text >}}

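If the generated `istio-remote.yaml` ends up with empty addresses, the pod IP variables were most likely not populated; a quick sanity check before installing the manifest:

{{< text bash >}}
$ # Each value should be a pod IP; an empty field means the corresponding label selector matched nothing.
$ echo "pilot=${PILOT_POD_IP} policy=${POLICY_POD_IP} statsd=${STATSD_POD_IP} telemetry=${TELEMETRY_POD_IP}"
{{< /text >}}
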
## Install remote cluster manifest

The following installs the minimal Istio components and enables automatic sidecar injection in the `default` namespace of the remote cluster:

{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio-remote.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}

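To confirm the remote components came up on `cluster-2`, you can list the pods; the exact set of pods depends on the `istio-remote` chart values used above:

{{< text bash >}}
$ kubectl get pods -n istio-system
{{< /text >}}
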
## Create remote cluster's kubeconfig for Istio Pilot

The `istio-remote` Helm chart creates a service account with minimal access for use by Istio Pilot discovery.

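Before building the `kubeconfig`, you can verify that the `istio-multi` service account used in the steps below exists on the remote cluster:

{{< text bash >}}
$ kubectl get sa istio-multi -n istio-system
{{< /text >}}
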
1.  Prepare environment variables for building the `kubeconfig` file for the service account `istio-multi`:

    {{< text bash >}}
    $ export WORK_DIR=$(pwd)
    $ CLUSTER_NAME=$(kubectl config view --minify=true -o "jsonpath={.clusters[].name}")
    $ CLUSTER_NAME="${CLUSTER_NAME##*_}"
    $ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
    $ SERVER=$(kubectl config view --minify=true -o "jsonpath={.clusters[].cluster.server}")
    $ NAMESPACE=istio-system
    $ SERVICE_ACCOUNT=istio-multi
    $ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
    $ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o "jsonpath={.data['ca\.crt']}")
    $ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o "jsonpath={.data['token']}" | base64 --decode)
    {{< /text >}}

    NOTE: An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.

1.  Create a `kubeconfig` file in the working directory for the service account `istio-multi`:

    {{< text bash >}}
    $ cat <<EOF > ${KUBECFG_FILE}
    apiVersion: v1
    clusters:
      - cluster:
          certificate-authority-data: ${CA_DATA}
          server: ${SERVER}
        name: ${CLUSTER_NAME}
    contexts:
      - context:
          cluster: ${CLUSTER_NAME}
          user: ${CLUSTER_NAME}
        name: ${CLUSTER_NAME}
    current-context: ${CLUSTER_NAME}
    kind: Config
    preferences: {}
    users:
      - name: ${CLUSTER_NAME}
        user:
          token: ${TOKEN}
    EOF
    {{< /text >}}

At this point, the remote clusters' `kubeconfig` files have been created in the `${WORK_DIR}` directory. The filename for a cluster is the same as its original `kubeconfig` cluster name.

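To sanity-check a generated file without changing your active context, point `kubectl` at it explicitly; the file should parse and expose a single context named after the cluster:

{{< text bash >}}
$ kubectl config get-contexts --kubeconfig=${KUBECFG_FILE}
{{< /text >}}
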
## Configure Istio control plane to discover the remote cluster

Create a secret and label it properly for each remote cluster. The secret must be created on the cluster running the Istio control plane (`cluster-1`):

{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
$ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< /text >}}

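Pilot discovers remote clusters through secrets carrying the `istio/multiCluster=true` label, so listing labeled secrets on `cluster-1` is a quick way to confirm the registration:

{{< text bash >}}
$ kubectl get secrets -n ${NAMESPACE} -l istio/multiCluster=true
{{< /text >}}
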
## Deploy Bookinfo Example Across Clusters

1.  Install Bookinfo on the first cluster. Remove the `reviews-v3` deployment to deploy it on the remote:

    {{< text bash >}}
    $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
    $ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    $ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    $ kubectl delete deployment reviews-v3
    {{< /text >}}

1.  Create the `reviews-v3.yaml` manifest for deployment on the remote:

    {{< text yaml plain "reviews-v3.yaml" >}}
    ##################################################################################################
    # Ratings service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: ratings
      labels:
        app: ratings
    spec:
      ports:
      - port: 9080
        name: http
    ---
    ##################################################################################################
    # Reviews service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: reviews
      labels:
        app: reviews
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: reviews
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: reviews-v3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: reviews
            version: v3
        spec:
          containers:
          - name: reviews
            image: istio/examples-bookinfo-reviews-v3:1.5.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080
    {{< /text >}}

    Note: The `ratings` service definition is added to the remote cluster because `reviews-v3` is a client of `ratings`, and creating the service object creates a DNS entry. The Istio sidecar in the `reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup is resolved to a service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as in a federated Kubernetes environment.

1.  Install the `reviews-v3` deployment on the remote cluster:

    {{< text bash >}}
    $ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
    $ kubectl apply -f $HOME/reviews-v3.yaml
    {{< /text >}}

1.  Get the `istio-ingressgateway` service's external IP on `cluster-1` to access the `bookinfo` page and validate that Istio is including the remote cluster's `reviews-v3` instance in the load balancing of reviews versions:

    {{< text bash >}}
    $ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
    $ kubectl get svc istio-ingressgateway -n istio-system
    {{< /text >}}

Access `http://<GATEWAY_IP>/productpage` repeatedly and each version of `reviews` should be equally load balanced, including `reviews-v3` in the remote cluster (red stars). It may take several dozen accesses to demonstrate the equal load balancing between `reviews` versions.

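One way to drive repeated requests is a small `curl` loop; this sketch assumes the ingress gateway service on `cluster-1` has been assigned an external load balancer IP, and the `GATEWAY_IP` variable is introduced here only for illustration:

{{< text bash >}}
$ # Grab the external IP of the ingress gateway, then issue 20 requests to the product page.
$ export GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" "http://${GATEWAY_IP}/productpage"; done
{{< /text >}}

Checking which `reviews` version served each request still requires inspecting the rendered page (the star color), so refreshing the page in a browser works just as well.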