---
title: Gateway-Connected Clusters
description: Configuring remote services in a gateway-connected multicluster mesh.
weight: 20
---
This example shows how to configure and call remote services in a multicluster mesh with a multiple control plane topology. To demonstrate cross-cluster access, the `sleep` service running in one cluster is configured to call the `httpbin` service running in a second cluster.
## Before you begin
- Set up a multicluster environment with two Istio clusters by following the multiple control planes with gateways instructions.
{{< boilerplate kubectl-multicluster-contexts >}}
## Configure the example services
- Deploy the `sleep` service in `cluster1`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 namespace foo
    $ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
    {{< /text >}}
- Deploy the `httpbin` service in `cluster2`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 namespace bar
    $ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}
- Export the `cluster2` gateway address:

    {{< text bash >}}
    $ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")
    {{< /text >}}

    This command sets the value to the gateway's public IP, but note that you can set it to a DNS name instead, if you have one.

    {{< tip >}}
    If `cluster2` is running in an environment that does not support external load balancers, you will need to use a nodePort to access the gateway. Instructions for obtaining the IP to use can be found in the Control Ingress Traffic guide. You will also need to change the service entry endpoint port in the following step from 15443 to its corresponding nodePort, i.e., the value returned by `kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'`. A sketch of this alternative is shown just after these steps.
    {{< /tip >}}
- Create a service entry for the `httpbin` service in `cluster1`.

    To allow `sleep` in `cluster1` to access `httpbin` in `cluster2`, we need to create a service entry for it. The host name of the service entry should be of the form `<name>.<namespace>.global`, where name and namespace correspond to the remote service's name and namespace respectively.

    For DNS resolution of services under the `*.global` domain, you need to assign these services an IP address.

    {{< tip >}}
    Each service (in the `.global` DNS domain) must have a unique IP within the cluster.
    {{< /tip >}}

    If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the loopback range `127.0.0.0/8` that are not already allocated. These IPs are non-routable outside of a pod. In this example we'll use IPs in `127.255.0.0/16`, which avoids conflicting with well known IPs such as `127.0.0.1` (`localhost`). Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      # Treat remote cluster services as part of the service mesh
      # as all clusters in the service mesh share the same root of trust.
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve
      # must be unique for each remote service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      - 127.255.0.2
      endpoints:
      # This is the routable address of the ingress gateway in cluster2 that
      # sits in front of the httpbin.bar service. Traffic from the sidecar will be
      # routed to this address.
      - address: ${CLUSTER2_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    {{< /text >}}

    The configuration above will result in all traffic in `cluster1` for `httpbin.bar.global`, on any port, being routed to the endpoint `<IP of cluster2 ingress gateway>:15443` over an mTLS connection.

    The gateway for port 15443 is a special SNI-aware Envoy, preconfigured and installed as part of the multicluster Istio installation step in the [before you begin](#before-you-begin) section. Traffic entering port 15443 will be load balanced among pods of the appropriate internal service of the target cluster (in this case, `httpbin.bar` in `cluster2`).

    {{< warning >}}
    Do not create a `Gateway` configuration for port 15443.
    {{< /warning >}}
- Verify that `httpbin` is accessible from the `sleep` service.

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl httpbin.bar.global:8000/headers
    {{< /text >}}
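If `cluster2` does not support external load balancers and you followed the nodePort tip above, the following is a minimal sketch of how you could set the two values instead. The `INGRESS_PORT` variable name is introduced here purely for illustration, and the sketch assumes the first node's `InternalIP` is reachable from `cluster1`:

{{< text bash >}}
$ # INGRESS_PORT is an illustrative name: the nodePort mapped to port 15443
$ export INGRESS_PORT=$(kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}')
$ # Assumes the first node's InternalIP is reachable from cluster1
$ export CLUSTER2_GW_ADDR=$(kubectl --context=$CTX_CLUSTER2 get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
{{< /text >}}

If you use this variant, also replace `15443` with the value of `$INGRESS_PORT` in the service entry endpoint.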
## Send remote cluster traffic using egress gateway

If you want to route traffic from `cluster1` via a dedicated egress gateway, instead of directly from the sidecars, use the following service entry for `httpbin.bar` instead of the one in the previous section.
{{< tip >}}
The egress gateway used in this configuration cannot also be used for other, non-inter-cluster, egress traffic.
{{< /tip >}}
{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  - 127.255.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      http1: 15443
EOF
{{< /text >}}
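To spot-check that requests are now leaving `cluster1` through the dedicated egress gateway rather than directly from the sidecar, you can repeat the verification request and inspect the egress gateway's logs. This is a rough sketch; it assumes Envoy access logging is enabled in your mesh, otherwise the second command prints nothing useful:

{{< text bash >}}
$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -s httpbin.bar.global:8000/headers
$ kubectl logs --context=$CTX_CLUSTER1 -n istio-system -l istio=egressgateway --tail=5
{{< /text >}}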
## Version-aware routing to remote services
If the remote service has multiple versions, you can add labels to the service entry endpoints. For example:
{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve
  # must be unique for each service.
  - 127.255.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
EOF
{{< /text >}}
You can then create virtual services and destination rules to define subsets of the `httpbin.bar.global` service, using the appropriate gateway label selectors. The instructions are the same as those used for routing to a local service. See multicluster version routing for a complete example.
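As an illustration, a destination rule and virtual service along the following lines would pin all `cluster1` traffic for `httpbin.bar.global` to the endpoints carrying the `cluster: cluster2` label. This is a minimal sketch using the label defined above; the subset name `cluster2` is arbitrary:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar
spec:
  host: httpbin.bar.global
  subsets:
  - name: cluster2   # arbitrary subset name for this sketch
    labels:
      cluster: cluster2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-bar
spec:
  hosts:
  - httpbin.bar.global
  http:
  - route:
    - destination:
        host: httpbin.bar.global
        subset: cluster2
EOF
{{< /text >}}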
## Cleanup
Execute the following commands to clean up the example services.
- Cleanup `cluster1`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
    $ kubectl delete --context=$CTX_CLUSTER1 ns foo
    {{< /text >}}
- Cleanup `cluster2`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    $ kubectl delete --context=$CTX_CLUSTER2 ns bar
    {{< /text >}}