remove deprecated platform specific examples (#6468)

* remove deprecated platform specific examples

These examples were deprecated in the 1.4 release with
https://github.com/istio/istio.io/pull/5663. They were scheduled to be
removed in the next release (1.5).

* undo zh changes
This commit is contained in:
Jason Young 2020-02-18 16:40:55 -08:00 committed by GitHub
parent 2213f1fbba
commit 1d2830fc2b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
8 changed files with 0 additions and 1282 deletions


@@ -1,10 +0,0 @@
---
title: Platform-specific Examples (Deprecated)
description: Examples for specific platform installations of Istio.
weight: 110
keywords: [multicluster]
---
{{< warning >}}
These examples are platform-specific and deprecated. They will be removed in the next release.
{{< /warning >}}


@@ -1,102 +0,0 @@
---
title: Install Istio for Google Cloud Endpoints Services
description: Explains how to manually integrate Google Cloud Endpoints services with Istio.
weight: 10
aliases:
- /docs/guides/endpoints/index.html
- /docs/examples/endpoints/
---
This document shows how to manually integrate Istio with existing
Google Cloud Endpoints services.
## Before you begin
If you don't have an Endpoints service and want to try it out, you can follow
the [instructions](https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine)
to set up an Endpoints service on GKE.
After setup, you should have an API key stored in the `ENDPOINTS_KEY` environment variable and the service's external IP address stored in `EXTERNAL_IP`.
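For example, if you followed the Endpoints getting-started guide and exposed ESP through a `LoadBalancer` service named `esp-echo` (an assumption; adjust the service name and API key to match your deployment), you could capture these values like this:
{{< text bash >}}
$ export ENDPOINTS_KEY="<your-api-key>"  # the API key obtained during Endpoints setup
$ export EXTERNAL_IP=$(kubectl get service esp-echo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')  # assumes a LoadBalancer service named esp-echo
{{< /text >}}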
You may test the service using the following command:
{{< text bash >}}
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${EXTERNAL_IP}/echo?key=${ENDPOINTS_KEY}"
{{< /text >}}
To install Istio for GKE, follow our [Quick Start with Google Kubernetes Engine](/docs/setup/platform-setup/gke).
## HTTP endpoints service
1. Inject the service and the deployment into the mesh using `--includeIPRanges` by following the
[instructions](/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
so that egress traffic is allowed to call external services directly.
Otherwise, ESP will not be able to reach Google Cloud Service Control.
1. After injection, issue the same test command as above to ensure that calling ESP continues to work.
1. If you want to access the service through Istio ingress, create the following networking definitions:
{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
spec:
  hosts:
  - "*"
  gateways:
  - echo-gateway
  http:
  - match:
    - uri:
        prefix: /echo
    route:
    - destination:
        port:
          number: 80
        host: esp-echo
---
EOF
{{< /text >}}
1. Get the ingress gateway IP and port by following the [instructions](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports).
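If your `istio-ingressgateway` service is exposed through a `LoadBalancer` (an assumption; node-port environments differ), one way to populate the `INGRESS_HOST` and `INGRESS_PORT` variables used below is:
{{< text bash >}}
$ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
{{< /text >}}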
You can verify accessing the Endpoints service through Istio ingress:
{{< text bash >}}
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${INGRESS_HOST}:${INGRESS_PORT}/echo?key=${ENDPOINTS_KEY}"
{{< /text >}}
## HTTPS endpoints service using secured Ingress
The recommended way to securely access a mesh Endpoints service is through an ingress configured with TLS.
1. Install Istio with strict mutual TLS enabled. Confirm that the following command outputs either `STRICT` or empty:
{{< text bash >}}
$ kubectl get meshpolicy default -n istio-system -o=jsonpath='{.spec.peers[0].mtls.mode}'
{{< /text >}}
1. Re-inject the service and the deployment into the mesh using `--includeIPRanges` by following the
[instructions](/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
so that egress traffic is allowed to call external services directly.
Otherwise, ESP will not be able to reach Google Cloud Service Control.
1. After this, you will find that direct access through `EXTERNAL_IP` no longer works because the Istio proxy only accepts secure mesh connections.
Accessing the service through Istio ingress should continue to work since the ingress proxy initiates mutual TLS connections within the mesh.
1. To secure the access at the ingress, follow the [instructions](/docs/tasks/traffic-management/ingress/secure-ingress-mount/).


@@ -1,284 +0,0 @@
---
title: Google Kubernetes Engine
description: Set up a multicluster mesh over two GKE clusters.
weight: 65
keywords: [kubernetes,multicluster]
aliases:
- /docs/tasks/multicluster/gke/
- /docs/examples/multicluster/gke/
---
This example shows how to configure a multicluster mesh with a
[single-network deployment](/docs/ops/deployment/deployment-models/#single-network)
over 2 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) clusters.
## Before you begin
In addition to the prerequisites for installing Istio, the following setup is required for this example:
* This sample requires a valid Google Cloud Platform project with billing enabled. If you are
not an existing GCP user, you may be able to enroll for a $300 US [Free Trial](https://cloud.google.com/free/) credit.
* [Create a Google Cloud Project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) to
host your GKE clusters.
* Install and initialize the [Google Cloud SDK](https://cloud.google.com/sdk/install)
## Create the GKE clusters
1. Set the default project for `gcloud` to perform actions on:
{{< text bash >}}
$ gcloud config set project myProject
$ proj=$(gcloud config list --format='value(core.project)')
{{< /text >}}
1. Create 2 GKE clusters for use with the multicluster feature. _Note:_ `--enable-ip-alias` is required to
allow inter-cluster direct pod-to-pod communication. The `zone` value must be one of the
[GCP zones](https://cloud.google.com/compute/docs/regions-zones/).
{{< text bash >}}
$ zone="us-east1-b"
$ cluster="cluster-1"
$ gcloud container clusters create $cluster --zone $zone --username "admin" \
--machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
"https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
"https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
"https://www.googleapis.com/auth/trace.append" \
--num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
$ cluster="cluster-2"
$ gcloud container clusters create $cluster --zone $zone --username "admin" \
--machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
"https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
"https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
"https://www.googleapis.com/auth/trace.append" \
--num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
{{< /text >}}
1. Wait for clusters to transition to the `RUNNING` state by polling their statuses via the following command:
{{< text bash >}}
$ gcloud container clusters list
{{< /text >}}
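If you prefer to block until both clusters are ready, a small polling loop such as the following sketch works (the 15 second interval is arbitrary):
{{< text bash >}}
$ # poll until no cluster reports a status other than RUNNING
$ until [ -z "$(gcloud container clusters list --format='value(status)' | grep -v RUNNING)" ]; do sleep 15; done
{{< /text >}}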
1. Get the clusters' credentials ([command details](https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials)):
{{< text bash >}}
$ gcloud container clusters get-credentials cluster-1 --zone $zone
$ gcloud container clusters get-credentials cluster-2 --zone $zone
{{< /text >}}
1. Validate `kubectl` access to each cluster and create a `cluster-admin` cluster role binding tied to the Kubernetes credentials associated with your GCP user.
1. For cluster-1:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl get pods --all-namespaces
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
{{< /text >}}
1. For cluster-2:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl get pods --all-namespaces
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
{{< /text >}}
## Create a Google Cloud firewall rule
To allow the pods on each cluster to directly communicate, create the following rule:
{{< text bash >}}
$ function join_by { local IFS="$1"; shift; echo "$*"; }
$ ALL_CLUSTER_CIDRS=$(gcloud container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq)
$ ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
$ ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --format='value(tags.items.[0])' | sort | uniq)
$ ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
$ gcloud compute firewall-rules create istio-multicluster-test-pods \
--allow=tcp,udp,icmp,esp,ah,sctp \
--direction=INGRESS \
--priority=900 \
--source-ranges="${ALL_CLUSTER_CIDRS}" \
--target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
{{< /text >}}
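You can confirm that the rule was created as expected with:
{{< text bash >}}
$ gcloud compute firewall-rules describe istio-multicluster-test-pods
{{< /text >}}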
## Install the Istio control plane
The following generates an Istio installation manifest, installs it, and enables automatic sidecar injection in
the `default` namespace:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio_master.yaml
$ kubectl create ns istio-system
$ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
$ kubectl apply -f $HOME/istio_master.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}
Wait for pods to come up by polling their statuses via the following command:
{{< text bash >}}
$ kubectl get pods -n istio-system
{{< /text >}}
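If you want to wait non-interactively, a rough alternative is to poll until every pod has reached a terminal or running phase (this sketch checks pod phase only, not container readiness):
{{< text bash >}}
$ # wait until every pod in istio-system is either Running or Succeeded
$ until [ -z "$(kubectl -n istio-system get pods --field-selector=status.phase!=Running,status.phase!=Succeeded -o name)" ]; do sleep 10; done
{{< /text >}}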
## Generate remote cluster manifest
1. Get the IPs of the control plane pods:
{{< text bash >}}
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer -o jsonpath='{.items[0].status.podIP}')
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
{{< /text >}}
1. Generate remote cluster manifest:
{{< text bash >}}
$ helm template install/kubernetes/helm/istio \
--namespace istio-system --name istio-remote \
--values @install/kubernetes/helm/istio/values-istio-remote.yaml@ \
--set global.remotePilotAddress=${PILOT_POD_IP} \
--set global.remotePolicyAddress=${POLICY_POD_IP} \
--set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} > $HOME/istio-remote.yaml
{{< /text >}}
## Install remote cluster manifest
The following installs the minimal Istio components and enables automatic sidecar injection on
the namespace `default` in the remote cluster:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio-remote.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}
## Create remote cluster's kubeconfig for Istio Pilot
The `istio-remote` Helm chart creates a service account with minimal access for use by Istio Pilot
discovery.
1. Prepare environment variables for building the `kubeconfig` file for the service account `istio-multi`:
{{< text bash >}}
$ export WORK_DIR=$(pwd)
$ CLUSTER_NAME=$(kubectl config view --minify=true -o jsonpath='{.clusters[].name}')
$ CLUSTER_NAME="${CLUSTER_NAME##*_}"
$ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
$ SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
$ NAMESPACE=istio-system
$ SERVICE_ACCOUNT=istio-multi
$ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
$ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['ca\.crt']}")
$ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['token']}" | base64 --decode)
{{< /text >}}
{{< tip >}}
An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.
{{< /tip >}}
1. Create a `kubeconfig` file in the working directory for the service account `istio-multi`:
{{< text bash >}}
$ cat <<EOF > ${KUBECFG_FILE}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: ${SERVER}
  name: ${CLUSTER_NAME}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: ${CLUSTER_NAME}
  user:
    token: ${TOKEN}
EOF
{{< /text >}}
At this point, the remote cluster's `kubeconfig` file has been created in the `${WORK_DIR}` directory.
The filename is the same as the original `kubeconfig` cluster name.
## Configure Istio control plane to discover the remote cluster
Create a secret and label it properly for each remote cluster:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
$ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< /text >}}
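You can verify that the secret exists and carries the expected label with:
{{< text bash >}}
$ kubectl get secret -n istio-system -l istio/multiCluster=true
{{< /text >}}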
## Deploy the Bookinfo example across clusters
1. Install Bookinfo on the first cluster. Remove the `reviews-v3` deployment so it can be deployed on the remote cluster:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}
1. Install the `reviews-v3` deployment on the remote cluster.
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l service=ratings
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l service=reviews
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l account=reviews
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l app=reviews,version=v3
{{< /text >}}
_Note:_ The `ratings` service definition is added to the remote cluster because `reviews-v3` is a
client of `ratings` and creating the service object creates a DNS entry. The Istio sidecar in the
`reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup is resolved to a
service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as
in a federated Kubernetes environment.
1. Get the `istio-ingressgateway` service's external IP to access the `bookinfo` page to validate that Istio
is including the remote's `reviews-v3` instance in the load balancing of reviews versions:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl get svc istio-ingressgateway -n istio-system
{{< /text >}}
Access `http://<GATEWAY_IP>/productpage` repeatedly and each version of `reviews` should be equally load balanced,
including `reviews-v3` in the remote cluster (red stars). It may take several dozen accesses to demonstrate
the equal load balancing between `reviews` versions.
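As a convenience, the repeated access can be scripted. A minimal sketch, assuming the gateway service exposes a `LoadBalancer` IP:
{{< text bash >}}
$ export GATEWAY_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ for i in $(seq 1 30); do curl -s -o /dev/null -w "%{http_code}\n" "http://${GATEWAY_IP}/productpage"; done
{{< /text >}}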
## Uninstalling
The following should be done in addition to the uninstall of Istio as described in the
[VPN-based multicluster uninstall section](/docs/setup/install/multicluster/shared-vpn/):
1. Delete the Google Cloud firewall rule:
{{< text bash >}}
$ gcloud compute firewall-rules delete istio-multicluster-test-pods --quiet
{{< /text >}}
1. Delete the `cluster-admin` cluster role binding from each cluster no longer being used for Istio:
{{< text bash >}}
$ kubectl delete clusterrolebinding cluster-admin-binding
{{< /text >}}
1. Delete any GKE clusters no longer in use. The following is an example delete command for the remote cluster, `cluster-2`:
{{< text bash >}}
$ gcloud container clusters delete cluster-2 --zone $zone
{{< /text >}}


@@ -1,245 +0,0 @@
---
title: IBM Cloud Private
description: Example multicluster mesh over two IBM Cloud Private clusters.
weight: 70
keywords: [kubernetes,multicluster]
aliases:
- /docs/tasks/multicluster/icp/
- /docs/examples/multicluster/icp/
---
This example demonstrates how to set up network connectivity between two
[IBM Cloud Private](https://www.ibm.com/cloud/private) clusters
and then compose them into a multicluster mesh using a
[single-network deployment](/docs/ops/deployment/deployment-models/#single-network).
## Create the IBM Cloud Private clusters
1. [Install two IBM Cloud Private clusters](https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.2.0/installing/install.html).
{{< warning >}}
Make sure that each cluster's pod CIDR range and service CIDR range are unique and do not overlap
with those of any other cluster in the multicluster environment. These ranges are configured with `network_cidr` and
`service_cluster_ip_range` in `cluster/config.yaml`.
{{< /warning >}}
{{< text plain >}}
# Default IPv4 CIDR is 10.1.0.0/16
# Default IPv6 CIDR is fd03::0/112
network_cidr: 10.1.0.0/16
## Kubernetes Settings
# Default IPv4 Service Cluster Range is 10.0.0.0/16
# Default IPv6 Service Cluster Range is fd02::0/112
service_cluster_ip_range: 10.0.0.0/16
{{< /text >}}
1. After the IBM Cloud Private cluster installation finishes, validate `kubectl` access to each cluster. In this example, consider
two clusters, `cluster-1` and `cluster-2`.
1. [Configure `cluster-1` with `kubectl`](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/install_kubectl.html).
1. Check the cluster status:
{{< text bash >}}
$ kubectl get nodes
$ kubectl get pods --all-namespaces
{{< /text >}}
1. Repeat the above two steps to validate `cluster-2`.
## Configure pod communication across IBM Cloud Private clusters
IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client
on each node distributes IP routing information to all nodes.
To ensure pods can communicate across different clusters, you need to configure IP routes on all nodes
across the two clusters. In summary, you need the following two steps to configure pod communication across
the two IBM Cloud Private clusters:
1. Add IP routes from `cluster-1` to `cluster-2`.
1. Add IP routes from `cluster-2` to `cluster-1`.
{{< warning >}}
This approach works only if all the nodes within the multiple IBM Cloud Private clusters are located in the same subnet. It is not possible to add BGP routes directly for nodes located in different subnets because the next-hop IP addresses must be reachable with a single hop. Alternatively, you can use a VPN for pod communication across clusters. Refer to [this article](https://medium.com/ibm-cloud/setup-pop-to-pod-communication-across-ibm-cloud-private-clusters-add0b079ebf3) for more details.
{{< /warning >}}
The following shows how to add the IP routes from `cluster-1` to `cluster-2` to enable pod-to-pod communication
across clusters. With Node-to-Node Mesh mode, each node has IP routes to the peer nodes in
its cluster. In this example, both clusters have three nodes.
The `hosts` file for `cluster-1`:
{{< text plain >}}
172.16.160.23 micpnode1
172.16.160.27 micpnode2
172.16.160.29 micpnode3
{{< /text >}}
The `hosts` file for `cluster-2`:
{{< text plain >}}
172.16.187.14 nicpnode1
172.16.187.16 nicpnode2
172.16.187.18 nicpnode3
{{< /text >}}
1. Obtain routing information on each of the three nodes in `cluster-1` with the command `ip route | grep bird`. The output from each node follows:
{{< text bash >}}
$ ip route | grep bird
blackhole 10.1.103.128/26 proto bird
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
{{< /text >}}
{{< text bash >}}
$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
blackhole 10.1.192.0/26 proto bird
{{< /text >}}
{{< text bash >}}
$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
blackhole 10.1.176.64/26 proto bird
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
{{< /text >}}
1. In total, there are three IP routes for the three nodes in `cluster-1`:
{{< text plain >}}
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
{{< /text >}}
1. Add those three IP routes to all nodes in `cluster-2` with the following commands:
{{< text bash >}}
$ ip route add 10.1.176.64/26 via 172.16.160.29
$ ip route add 10.1.103.128/26 via 172.16.160.23
$ ip route add 10.1.192.0/26 via 172.16.160.27
{{< /text >}}
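To spot-check that the routes were added on a `cluster-2` node, you might grep for the example CIDRs:
{{< text bash >}}
$ ip route | grep -E '^10\.1\.(103\.128|176\.64|192\.0)/26'
{{< /text >}}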
1. You can use the same steps to add all IP routes from `cluster-2` to `cluster-1`. After the configuration
is complete, all the pods in the two different clusters can communicate with each other.
1. Verify cross-cluster pod communication by pinging a pod IP in `cluster-2` from `cluster-1`. The following is a pod
from `cluster-2` with the pod IP `20.1.58.247`.
{{< text bash >}}
$ kubectl -n kube-system get pod -owide | grep dns
kube-dns-ksmq6 1/1 Running 2 28d 20.1.58.247 172.16.187.14 <none>
{{< /text >}}
1. From a node in `cluster-1`, ping the pod IP; the ping should succeed.
{{< text bash >}}
$ ping 20.1.58.247
PING 20.1.58.247 (20.1.58.247) 56(84) bytes of data.
64 bytes from 20.1.58.247: icmp_seq=1 ttl=63 time=1.73 ms
{{< /text >}}
The steps above enable pod communication across the two clusters by configuring a full IP routing mesh
across all nodes in the two IBM Cloud Private clusters.
## Install Istio for multicluster
Follow the [single-network shared control plane instructions](/docs/setup/install/multicluster/shared-vpn/) to install and configure
the local Istio control plane and the Istio remote on `cluster-1` and `cluster-2`.
This guide assumes that the local Istio control plane is deployed in `cluster-1`, while the Istio remote is deployed in `cluster-2`.
## Deploy the Bookinfo example across clusters
The following example enables [automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection).
1. Install `bookinfo` on the first cluster, `cluster-1`. Remove the `reviews-v3` deployment, which will be deployed on `cluster-2` in the following step:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}
1. Deploy the `reviews-v3` service along with any corresponding services on the remote `cluster-2` cluster:
{{< text bash >}}
$ cat <<EOF | kubectl apply -f -
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.12.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
EOF
{{< /text >}}
_Note:_ The `ratings` service definition is added to the remote cluster because `reviews-v3` is a
client of the `ratings` service, so a DNS entry for the `ratings` service is required for `reviews-v3`. The Istio sidecar
in the `reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup resolves to a
service address. This would not be necessary if a multicluster DNS solution were additionally set up, for example as
in a federated Kubernetes environment.
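Before moving on, you can confirm that the remote `reviews-v3` pod is running (with your `kubectl` context pointing at `cluster-2`):
{{< text bash >}}
$ kubectl get pods -l app=reviews,version=v3
{{< /text >}}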
1. [Determine the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
to set the `INGRESS_HOST` and `INGRESS_PORT` variables used to access the gateway.
Access `http://<INGRESS_HOST>:<INGRESS_PORT>/productpage` repeatedly and each version of `reviews` should be equally load balanced,
including `reviews-v3` in the remote cluster (red stars). It may take several accesses (dozens) to demonstrate the equal load balancing
between `reviews` versions.
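The repeated access can also be scripted; a minimal sketch:
{{< text bash >}}
$ for i in $(seq 1 30); do curl -s -o /dev/null -w "%{http_code}\n" "http://${INGRESS_HOST}:${INGRESS_PORT}/productpage"; done
{{< /text >}}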


@@ -1,10 +0,0 @@
---
title: Platform-specific Examples (Deprecated)
description: Examples for specific platform installations of Istio.
weight: 110
keywords: [multicluster]
---
{{< warning >}}
These examples are platform-specific and deprecated. They will be removed in the next release.
{{< /warning >}}


@@ -1,102 +0,0 @@
---
title: Install Istio for Google Cloud Endpoints Services
description: Explains how to manually integrate Google Cloud Endpoints services with Istio.
weight: 10
aliases:
- /docs/guides/endpoints/index.html
- /docs/examples/endpoints/
---
This document shows how to manually integrate Istio with existing
Google Cloud Endpoints services.
## Before you begin
If you don't have an Endpoints service and want to try it out, you can follow
the [instructions](https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine)
to set up an Endpoints service on GKE.
After setup, you should have an API key stored in the `ENDPOINTS_KEY` environment variable and the service's external IP address stored in `EXTERNAL_IP`.
You may test the service using the following command:
{{< text bash >}}
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${EXTERNAL_IP}/echo?key=${ENDPOINTS_KEY}"
{{< /text >}}
To install Istio for GKE, follow our [Quick Start with Google Kubernetes Engine](/pt-br/docs/setup/platform-setup/gke).
## HTTP endpoints service
1. Inject the service and the deployment into the mesh using `--includeIPRanges` by following the
[instructions](/pt-br/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
so that egress traffic is allowed to call external services directly.
Otherwise, ESP will not be able to reach Google Cloud Service Control.
1. After injection, issue the same test command as above to ensure that calling ESP continues to work.
1. If you want to access the service through Istio ingress, create the following networking definitions:
{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo
spec:
  hosts:
  - "*"
  gateways:
  - echo-gateway
  http:
  - match:
    - uri:
        prefix: /echo
    route:
    - destination:
        port:
          number: 80
        host: esp-echo
---
EOF
{{< /text >}}
1. Get the ingress gateway IP and port by following the [instructions](/pt-br/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports).
You can verify accessing the Endpoints service through Istio ingress:
{{< text bash >}}
$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${INGRESS_HOST}:${INGRESS_PORT}/echo?key=${ENDPOINTS_KEY}"
{{< /text >}}
## HTTPS endpoints service using secured Ingress
The recommended way to securely access a mesh Endpoints service is through an ingress configured with TLS.
1. Install Istio with strict mutual TLS enabled. Confirm that the following command outputs either `STRICT` or empty:
{{< text bash >}}
$ kubectl get meshpolicy default -n istio-system -o=jsonpath='{.spec.peers[0].mtls.mode}'
{{< /text >}}
1. Re-inject the service and the deployment into the mesh using `--includeIPRanges` by following the
[instructions](/pt-br/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
so that egress traffic is allowed to call external services directly.
Otherwise, ESP will not be able to reach Google Cloud Service Control.
1. After this, you will find that direct access through `EXTERNAL_IP` no longer works because the Istio proxy only accepts secure mesh connections.
Accessing the service through Istio ingress should continue to work since the ingress proxy initiates mutual TLS connections within the mesh.
1. To secure the access at the ingress, follow the [instructions](/pt-br/docs/tasks/traffic-management/ingress/secure-ingress-mount/).


@@ -1,284 +0,0 @@
---
title: Google Kubernetes Engine
description: Set up a multicluster mesh over two GKE clusters.
weight: 65
keywords: [kubernetes,multicluster]
aliases:
- /docs/tasks/multicluster/gke/
- /docs/examples/multicluster/gke/
---
This example shows how to configure a multicluster mesh with a
[single-network deployment](/pt-br/docs/ops/deployment/deployment-models/#single-network)
over 2 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) clusters.
## Before you begin
In addition to the prerequisites for installing Istio, the following setup is required for this example:
* This sample requires a valid Google Cloud Platform project with billing enabled. If you are
not an existing GCP user, you may be able to enroll for a $300 US [Free Trial](https://cloud.google.com/free/) credit.
* [Create a Google Cloud Project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) to
host your GKE clusters.
* Install and initialize the [Google Cloud SDK](https://cloud.google.com/sdk/install)
## Create the GKE clusters
1. Set the default project for `gcloud` to perform actions on:
{{< text bash >}}
$ gcloud config set project myProject
$ proj=$(gcloud config list --format='value(core.project)')
{{< /text >}}
1. Create 2 GKE clusters for use with the multicluster feature. _Note:_ `--enable-ip-alias` is required to
allow inter-cluster direct pod-to-pod communication. The `zone` value must be one of the
[GCP zones](https://cloud.google.com/compute/docs/regions-zones/).
{{< text bash >}}
$ zone="us-east1-b"
$ cluster="cluster-1"
$ gcloud container clusters create $cluster --zone $zone --username "admin" \
--machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
"https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
"https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
"https://www.googleapis.com/auth/trace.append" \
--num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
$ cluster="cluster-2"
$ gcloud container clusters create $cluster --zone $zone --username "admin" \
--machine-type "n1-standard-2" --image-type "COS" --disk-size "100" \
--scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only",\
"https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring",\
"https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly",\
"https://www.googleapis.com/auth/trace.append" \
--num-nodes "4" --network "default" --enable-cloud-logging --enable-cloud-monitoring --enable-ip-alias --async
{{< /text >}}
1. Wait for clusters to transition to the `RUNNING` state by polling their statuses via the following command:
{{< text bash >}}
$ gcloud container clusters list
{{< /text >}}
1. Get the clusters' credentials ([command details](https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials)):
{{< text bash >}}
$ gcloud container clusters get-credentials cluster-1 --zone $zone
$ gcloud container clusters get-credentials cluster-2 --zone $zone
{{< /text >}}
1. Validate `kubectl` access to each cluster and create a `cluster-admin` cluster role binding tied to the Kubernetes credentials associated with your GCP user.
1. For cluster-1:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl get pods --all-namespaces
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
{{< /text >}}
1. For cluster-2:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl get pods --all-namespaces
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
{{< /text >}}
## Create a Google Cloud firewall rule
To allow the pods on each cluster to directly communicate, create the following rule:
{{< text bash >}}
$ function join_by { local IFS="$1"; shift; echo "$*"; }
$ ALL_CLUSTER_CIDRS=$(gcloud container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq)
$ ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
$ ALL_CLUSTER_NETTAGS=$(gcloud compute instances list --format='value(tags.items.[0])' | sort | uniq)
$ ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
$ gcloud compute firewall-rules create istio-multicluster-test-pods \
--allow=tcp,udp,icmp,esp,ah,sctp \
--direction=INGRESS \
--priority=900 \
--source-ranges="${ALL_CLUSTER_CIDRS}" \
--target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
{{< /text >}}
## Install the Istio control plane
The following generates an Istio installation manifest, installs it, and enables automatic sidecar injection in
the `default` namespace:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio_master.yaml
$ kubectl create ns istio-system
$ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
$ kubectl apply -f $HOME/istio_master.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}
Wait for pods to come up by polling their statuses via the following command:
{{< text bash >}}
$ kubectl get pods -n istio-system
{{< /text >}}
## Generate remote cluster manifest
1. Get the IPs of the control plane pods:
{{< text bash >}}
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer -o jsonpath='{.items[0].status.podIP}')
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
{{< /text >}}
1. Generate remote cluster manifest:
{{< text bash >}}
$ helm template install/kubernetes/helm/istio \
--namespace istio-system --name istio-remote \
--values @install/kubernetes/helm/istio/values-istio-remote.yaml@ \
--set global.remotePilotAddress=${PILOT_POD_IP} \
--set global.remotePolicyAddress=${POLICY_POD_IP} \
--set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} > $HOME/istio-remote.yaml
{{< /text >}}
## Install remote cluster manifest
The following installs the minimal Istio components and enables automatic sidecar injection on
the namespace `default` in the remote cluster:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl create ns istio-system
$ kubectl apply -f $HOME/istio-remote.yaml
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}
## Create remote cluster's kubeconfig for Istio Pilot
The `istio-remote` Helm chart creates a service account with minimal access for use by Istio Pilot
discovery.
1. Prepare environment variables for building the `kubeconfig` file for the service account `istio-multi`:
{{< text bash >}}
$ export WORK_DIR=$(pwd)
$ CLUSTER_NAME=$(kubectl config view --minify=true -o jsonpath='{.clusters[].name}')
$ CLUSTER_NAME="${CLUSTER_NAME##*_}"
$ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
$ SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
$ NAMESPACE=istio-system
$ SERVICE_ACCOUNT=istio-multi
$ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
$ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['ca\.crt']}")
$ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['token']}" | base64 --decode)
{{< /text >}}
{{< tip >}}
An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.
{{< /tip >}}
1. Create a `kubeconfig` file in the working directory for the service account `istio-multi`:
{{< text bash >}}
$ cat <<EOF > ${KUBECFG_FILE}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: ${SERVER}
  name: ${CLUSTER_NAME}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: ${CLUSTER_NAME}
  user:
    token: ${TOKEN}
EOF
{{< /text >}}
At this point, the remote cluster's `kubeconfig` file has been created in the `${WORK_DIR}` directory.
The filename is the same as the original `kubeconfig` cluster name.
## Configure Istio control plane to discover the remote cluster
Create a secret and label it properly for each remote cluster:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
$ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< /text >}}
## Deploy the Bookinfo example across clusters
1. Install Bookinfo on the first cluster. Remove the `reviews-v3` deployment so it can be deployed on the remote cluster:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}
1. Install the `reviews-v3` deployment on the remote cluster.
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-2"
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l service=ratings
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l service=reviews
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l account=reviews
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ -l app=reviews,version=v3
{{< /text >}}
_Note:_ The `ratings` service definition is added to the remote cluster because `reviews-v3` is a
client of `ratings` and creating the service object creates a DNS entry. The Istio sidecar in the
`reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup is resolved to a
service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as
in a federated Kubernetes environment.
1. Get the `istio-ingressgateway` service's external IP to access the `bookinfo` page to validate that Istio
is including the remote's `reviews-v3` instance in the load balancing of reviews versions:
{{< text bash >}}
$ kubectl config use-context "gke_${proj}_${zone}_cluster-1"
$ kubectl get svc istio-ingressgateway -n istio-system
{{< /text >}}
Access `http://<GATEWAY_IP>/productpage` repeatedly and each version of `reviews` should be equally load balanced,
including `reviews-v3` in the remote cluster (red stars). It may take several dozen accesses to demonstrate
the equal load balancing between `reviews` versions.
## Uninstalling
The following should be done in addition to the uninstall of Istio as described in the
[VPN-based multicluster uninstall section](/pt-br/docs/setup/install/multicluster/shared-vpn/):
1. Delete the Google Cloud firewall rule:
{{< text bash >}}
$ gcloud compute firewall-rules delete istio-multicluster-test-pods --quiet
{{< /text >}}
1. Delete the `cluster-admin` cluster role binding from each cluster no longer being used for Istio:
{{< text bash >}}
$ kubectl delete clusterrolebinding cluster-admin-binding
{{< /text >}}
1. Delete any GKE clusters no longer in use. The following is an example delete command for the remote cluster, `cluster-2`:
{{< text bash >}}
$ gcloud container clusters delete cluster-2 --zone $zone
{{< /text >}}


@@ -1,245 +0,0 @@
---
title: IBM Cloud Private
description: Example multicluster mesh over two IBM Cloud Private clusters.
weight: 70
keywords: [kubernetes,multicluster]
aliases:
- /docs/tasks/multicluster/icp/
- /docs/examples/multicluster/icp/
---
This example demonstrates how to set up network connectivity between two
[IBM Cloud Private](https://www.ibm.com/cloud/private) clusters
and then compose them into a multicluster mesh using a
[single-network deployment](/pt-br/docs/ops/deployment/deployment-models/#single-network).
## Create the IBM Cloud Private clusters
1. [Install two IBM Cloud Private clusters](https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.2.0/installing/install.html).
{{< warning >}}
Make sure that each cluster's pod CIDR range and service CIDR range are unique and do not overlap
with those of any other cluster in the multicluster environment. These ranges are configured with `network_cidr` and
`service_cluster_ip_range` in `cluster/config.yaml`.
{{< /warning >}}
{{< text plain >}}
# Default IPv4 CIDR is 10.1.0.0/16
# Default IPv6 CIDR is fd03::0/112
network_cidr: 10.1.0.0/16
## Kubernetes Settings
# Default IPv4 Service Cluster Range is 10.0.0.0/16
# Default IPv6 Service Cluster Range is fd02::0/112
service_cluster_ip_range: 10.0.0.0/16
{{< /text >}}
1. After the IBM Cloud Private cluster installation finishes, validate `kubectl` access to each cluster. In this example, consider
two clusters, `cluster-1` and `cluster-2`.
1. [Configure `cluster-1` with `kubectl`](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/install_kubectl.html).
1. Check the cluster status:
{{< text bash >}}
$ kubectl get nodes
$ kubectl get pods --all-namespaces
{{< /text >}}
1. Repeat the above two steps to validate `cluster-2`.
## Configure pod communication across IBM Cloud Private clusters
IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client
on each node distributes IP routing information to all nodes.
To ensure pods can communicate across different clusters, you need to configure IP routes on all nodes
across the two clusters. In summary, you need the following two steps to configure pod communication across
the two IBM Cloud Private clusters:
1. Add IP routes from `cluster-1` to `cluster-2`.
1. Add IP routes from `cluster-2` to `cluster-1`.
{{< warning >}}
This approach works only if all the nodes within the multiple IBM Cloud Private clusters are located in the same subnet. It is not possible to add BGP routes directly for nodes located in different subnets because the next-hop IP addresses must be reachable with a single hop. Alternatively, you can use a VPN for pod communication across clusters. Refer to [this article](https://medium.com/ibm-cloud/setup-pop-to-pod-communication-across-ibm-cloud-private-clusters-add0b079ebf3) for more details.
{{< /warning >}}
The following shows how to add the IP routes from `cluster-1` to `cluster-2` to enable pod-to-pod communication
across clusters. With Node-to-Node Mesh mode, each node has IP routes to the peer nodes in
its cluster. In this example, both clusters have three nodes.
The `hosts` file for `cluster-1`:
{{< text plain >}}
172.16.160.23 micpnode1
172.16.160.27 micpnode2
172.16.160.29 micpnode3
{{< /text >}}
The `hosts` file for `cluster-2`:
{{< text plain >}}
172.16.187.14 nicpnode1
172.16.187.16 nicpnode2
172.16.187.18 nicpnode3
{{< /text >}}
1. Obtain routing information on each of the three nodes in `cluster-1` with the command `ip route | grep bird`. The output from each node follows:
{{< text bash >}}
$ ip route | grep bird
blackhole 10.1.103.128/26 proto bird
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
{{< /text >}}
{{< text bash >}}
$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
blackhole 10.1.192.0/26 proto bird
{{< /text >}}
{{< text bash >}}
$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
blackhole 10.1.176.64/26 proto bird
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
{{< /text >}}
1. In total, there are three IP routes for the three nodes in `cluster-1`:
{{< text plain >}}
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
{{< /text >}}
1. Add those three IP routes to all nodes in `cluster-2` with the following commands:
{{< text bash >}}
$ ip route add 10.1.176.64/26 via 172.16.160.29
$ ip route add 10.1.103.128/26 via 172.16.160.23
$ ip route add 10.1.192.0/26 via 172.16.160.27
{{< /text >}}
1. You can use the same steps to add all IP routes from `cluster-2` to `cluster-1`. After the configuration
is complete, all the pods in the two different clusters can communicate with each other.
1. Verify cross-cluster pod communication by pinging a pod IP in `cluster-2` from `cluster-1`. The following is a pod
from `cluster-2` with the pod IP `20.1.58.247`.
{{< text bash >}}
$ kubectl -n kube-system get pod -owide | grep dns
kube-dns-ksmq6 1/1 Running 2 28d 20.1.58.247 172.16.187.14 <none>
{{< /text >}}
1. From a node in `cluster-1`, ping the pod IP; the ping should succeed.
{{< text bash >}}
$ ping 20.1.58.247
PING 20.1.58.247 (20.1.58.247) 56(84) bytes of data.
64 bytes from 20.1.58.247: icmp_seq=1 ttl=63 time=1.73 ms
{{< /text >}}
The steps above enable pod communication across the two clusters by configuring a full IP routing mesh
across all nodes in the two IBM Cloud Private clusters.
## Install Istio for multicluster
Follow the [single-network shared control plane instructions](/pt-br/docs/setup/install/multicluster/shared-vpn/) to install and configure
the local Istio control plane and the Istio remote on `cluster-1` and `cluster-2`.
This guide assumes that the local Istio control plane is deployed in `cluster-1`, while the Istio remote is deployed in `cluster-2`.
## Deploy the Bookinfo example across clusters
The following example enables [automatic sidecar injection](/pt-br/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection).
1. Install `bookinfo` on the first cluster, `cluster-1`. Remove the `reviews-v3` deployment, which will be deployed on `cluster-2` in the following step:
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
{{< /text >}}
1. Deploy the `reviews-v3` service along with any corresponding services on the remote `cluster-2` cluster:
{{< text bash >}}
$ cat <<EOF | kubectl apply -f -
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.12.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
EOF
{{< /text >}}
_Note:_ The `ratings` service definition is added to the remote cluster because `reviews-v3` is a
client of the `ratings` service, so a DNS entry for the `ratings` service is required for `reviews-v3`. The Istio sidecar
in the `reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup resolves to a
service address. This would not be necessary if a multicluster DNS solution were additionally set up, for example as
in a federated Kubernetes environment.
1. [Determine the ingress IP and ports](/pt-br/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
to set the `INGRESS_HOST` and `INGRESS_PORT` variables used to access the gateway.
Access `http://<INGRESS_HOST>:<INGRESS_PORT>/productpage` repeatedly and each version of `reviews` should be equally load balanced,
including `reviews-v3` in the remote cluster (red stars). It may take several accesses (dozens) to demonstrate the equal load balancing
between `reviews` versions.