---
title: Replicated control planes
description: Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.
weight: 2
aliases:
    - /docs/setup/kubernetes/multicluster-install/gateways/
    - /docs/examples/multicluster/gateways/
    - /docs/tasks/multicluster/gateways/
    - /docs/setup/kubernetes/install/multicluster/gateways/
keywords: [kubernetes,multicluster,gateway]
---

Follow this guide to install an Istio multicluster deployment with replicated control plane instances in every cluster, using gateways to connect services across clusters.

Instead of using a shared Istio control plane to manage the mesh, in this configuration each cluster has its own Istio control plane installation, each managing its own endpoints. All of the clusters are under shared administrative control for the purposes of policy enforcement and security.

A single Istio service mesh across the clusters is achieved by replicating shared services and namespaces and using a common root CA in all of the clusters. Cross-cluster communication occurs over Istio gateways of the respective clusters.

{{< image width="80%" link="./multicluster-with-gateways.svg" caption="Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods" >}}

## Prerequisites

* Two or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.

* Authority to deploy the Istio control plane on each Kubernetes cluster.

* The IP address of the `istio-ingressgateway` service in each cluster must be accessible from every other cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require special annotations to use them, so please consult your cloud provider's documentation for enabling NLBs for service object type load balancers. When deploying on platforms without NLB support, it may be necessary to modify the health checks for the load balancer to register the ingress gateway. A quick way to check the assigned address is shown in the example after this list.

* A root CA. Cross-cluster communication requires a mutual TLS connection between services. To enable mutual TLS communication across clusters, each cluster's Citadel will be configured with intermediate CA credentials generated by a shared root CA. For illustration purposes, you use a sample root CA certificate available in the Istio installation under the `samples/certs` directory.
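
For example, to confirm that the ingress gateway in a cluster has been assigned an externally reachable address (the service and namespace names below are the ones used by the default Istio installation), you can run:

{{< text bash >}}
$ kubectl get svc istio-ingressgateway -n istio-system
{{< /text >}}

The `EXTERNAL-IP` column must show an address or hostname that is reachable from the other clusters.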

## Deploy the Istio control plane in each cluster

  1. Generate intermediate CA certificates for each cluster's Citadel from your organization's root CA. The shared root CA enables mutual TLS communication across different clusters.

    For illustration purposes, the following instructions use the certificates from the Istio samples directory for both clusters. In real world deployments, you would likely use a different CA certificate for each cluster, all signed by a common root CA.
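
    If your organization's root CA is a simple openssl-based CA, the intermediate credentials for a cluster could be generated along the following lines. This is only a sketch: the file names (`root-key.pem`, `root-cert.pem`) and the certificate subject are illustrative, and your own PKI tooling and policies take precedence.

    {{< text bash >}}
    $ # Generate a key and signing request for this cluster's intermediate CA (illustrative subject).
    $ openssl genrsa -out ca-key.pem 4096
    $ openssl req -new -key ca-key.pem -out ca-csr.pem -subj "/O=Example Org/CN=Istio intermediate CA"
    $ # Sign the intermediate certificate with the shared root CA and mark it as a CA certificate.
    $ openssl x509 -req -days 730 -in ca-csr.pem -CA root-cert.pem -CAkey root-key.pem -CAcreateserial \
        -extfile <(printf "basicConstraints=critical,CA:true") -out ca-cert.pem
    $ # Build the chain file expected by the cacerts secret created in the next step.
    $ cat ca-cert.pem root-cert.pem > cert-chain.pem
    {{< /text >}}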

  2. Run the following commands in every cluster to deploy an identical Istio control plane configuration in all of them.

    {{< tip >}}
    Make sure that the current user has cluster administrator (`cluster-admin`) permissions; if not, grant them. On the GKE platform, for example, the following command can be used:

    {{< text bash >}}
    $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
    {{< /text >}}

    {{< /tip >}}

    * Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.

      {{< warning >}}
      The root and intermediate certificates from the samples directory are widely distributed and known. Do not use these certificates in production as your clusters would then be open to security vulnerabilities and compromise.
      {{< /warning >}}

      {{< text bash >}}
      $ kubectl create namespace istio-system
      $ kubectl create secret generic cacerts -n istio-system \
          --from-file=@samples/certs/ca-cert.pem@ \
          --from-file=@samples/certs/ca-key.pem@ \
          --from-file=@samples/certs/root-cert.pem@ \
          --from-file=@samples/certs/cert-chain.pem@
      {{< /text >}}
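
      To confirm that the secret was created with all four files before installing Istio, you can inspect it:

      {{< text bash >}}
      $ kubectl describe secret cacerts -n istio-system
      {{< /text >}}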

    * Install Istio:

      {{< text bash >}}
      $ istioctl manifest apply \
          -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-gateways.yaml
      {{< /text >}}

    For further details and customization options, refer to the installation instructions.
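
    After the installation completes in a cluster, you can verify that the control plane components, including the `istiocoredns` service used in the next section, are up and running:

    {{< text bash >}}
    $ kubectl get pods -n istio-system
    $ kubectl get svc istiocoredns -n istio-system
    {{< /text >}}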

## Set up DNS

Providing DNS resolution for services in remote clusters will allow existing applications to function unmodified, as applications typically expect to resolve services by their DNS names and access the resulting IP. Istio itself does not use the DNS for routing requests between services. Services local to a cluster share a common DNS suffix (e.g., svc.cluster.local). Kubernetes DNS provides DNS resolution for these services.

To provide a similar setup for services from remote clusters, you address services in remote clusters using names of the form `<name>.<namespace>.global`. Istio also ships with a CoreDNS server that will provide DNS resolution for these services. In order to utilize this DNS, Kubernetes' DNS must be configured to stub a domain for `.global`.

{{< warning >}}
Some cloud providers have different specific DNS domain stub capabilities and procedures for their Kubernetes services. Reference the cloud provider's documentation to determine how to stub DNS domains for each unique environment. The objective of this configuration is to stub a domain for `.global` on port `53` to reference or proxy the `istiocoredns` service in Istio's service namespace.
{{< /warning >}}

Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case):

{{< tabset cookie-name="platform" >}} {{< tab name="KubeDNS" cookie-value="kube-dns" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="CoreDNS (< 1.4.0)" cookie-value="coredns-prev-1.4.0" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="CoreDNS (>= 1.4.0)" cookie-value="coredns-after-1.4.0" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
{{< /text >}}

{{< /tab >}} {{< /tabset >}}
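
After updating the ConfigMap, you can confirm that the stub domain points at the `istiocoredns` service by checking the cluster IP that was substituted into the configuration above:

{{< text bash >}}
$ kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}
{{< /text >}}

If `*.global` names fail to resolve later on, this value is the first thing to check.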

## Configure application services

Every service in a given cluster that needs to be accessed from a different remote cluster requires a `ServiceEntry` configuration in the remote (client) cluster. The host used in the service entry should be of the form `<name>.<namespace>.global` where name and namespace correspond to the service's name and namespace respectively.

To demonstrate cross-cluster access, configure the [sleep service]({{< github_tree >}}/samples/sleep) running in one cluster to call the [httpbin]({{< github_tree >}}/samples/httpbin) service running in a second cluster. Before you begin:

  • Choose two of your Istio clusters, to be referred to as cluster1 and cluster2.

{{< boilerplate kubectl-multicluster-contexts >}}

### Configure the example services

  1. Deploy the sleep service in cluster1.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 namespace foo
    $ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
    {{< /text >}}

  2. Deploy the httpbin service in cluster2.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 namespace bar
    $ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}

  3. Export the cluster2 gateway address:

    {{< text bash >}}
    $ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
    {{< /text >}}

    This command sets the value to the gateway's public IP, but note that you can set it to a DNS name instead, if you have one.

    {{< tip >}}
    If cluster2 is running in an environment that does not support external load balancers, you will need to use a nodePort to access the gateway. Instructions for obtaining the IP to use can be found in the Control Ingress Traffic guide. You will also need to change the service entry endpoint port in the following step from 15443 to its corresponding nodePort (i.e., `kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'`).
    {{< /tip >}}
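
    For example, a sketch for the nodePort case might look like the following. It assumes the gateway pod's host IP is reachable from cluster1, and `CLUSTER2_SECURE_PORT` is an illustrative variable whose value you would use in place of 15443 in the next step:

    {{< text bash >}}
    $ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
    $ export CLUSTER2_SECURE_PORT=$(kubectl get --context=$CTX_CLUSTER2 svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.port==15443)].nodePort}')
    {{< /text >}}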

  4. Create a service entry for the httpbin service in cluster1.

    To allow sleep in cluster1 to access httpbin in cluster2, we need to create a service entry for it. The host name of the service entry should be of the form `<name>.<namespace>.global` where name and namespace correspond to the remote service's name and namespace respectively.

    For DNS resolution for services under the *.global domain, you need to assign these services an IP address.

    {{< tip >}} Each service (in the .global DNS domain) must have a unique IP within the cluster. {{< /tip >}}

    If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the class E address range `240.0.0.0/4`. Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.

    {{< warning >}} Multicast addresses (224.0.0.0 ~ 239.255.255.255) should not be used because there is no route to them by default. Loopback addresses (127.0.0.0/8) should also not be used because traffic sent to them may be redirected to the sidecar inbound listener. {{< /warning >}}

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      # Treat remote cluster services as part of the service mesh
      # as all clusters in the service mesh share the same root of trust.
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve to
      # must be unique for each remote service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      - 240.0.0.2
      endpoints:
      # This is the routable address of the ingress gateway in cluster2 that
      # sits in front of the httpbin.bar service. Traffic from the sidecar will be
      # routed to this address.
      - address: ${CLUSTER2_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    {{< /text >}}

    The configuration above will result in all traffic in cluster1 for `httpbin.bar.global` on any port being routed to the endpoint `<IPofCluster2IngressGateway>:15443` over a mutual TLS connection.

    The gateway for port 15443 is a special SNI-aware Envoy preconfigured and installed when you deployed the Istio control plane in the cluster. Traffic entering port 15443 will be load balanced among pods of the appropriate internal service of the target cluster (in this case, httpbin.bar in cluster2).

    {{< warning >}} Do not create a Gateway configuration for port 15443. {{< /warning >}}
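
    You can confirm that the cluster2 ingress gateway exposes this port, since it is part of the multicluster configuration applied earlier:

    {{< text bash >}}
    $ kubectl get --context=$CTX_CLUSTER2 svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.port==15443)]}'
    {{< /text >}}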

  5. Verify that httpbin is accessible from the sleep service.

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
    {{< /text >}}

## Send remote traffic via an egress gateway

If you want to route traffic from cluster1 via a dedicated egress gateway, instead of directly from the sidecars, use the following service entry for httpbin.bar instead of the one in the previous section.

{{< tip >}}
The egress gateway used in this configuration cannot also be used for other, non-inter-cluster, egress traffic.
{{< /tip >}}

If $CLUSTER2_GW_ADDR is an IP address, use the $CLUSTER2_GW_ADDR - IP address option. If $CLUSTER2_GW_ADDR is a hostname, use the $CLUSTER2_GW_ADDR - hostname option.

{{< tabset cookie-name="profile" >}}

{{< tab name="$CLUSTER2_GW_ADDR - IP address" cookie-value="option1" >}}

* Export the cluster1 egress gateway address:

    {{< text bash >}}
    $ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
        -n istio-system -o jsonpath='{.items[0].spec.clusterIP}')
    {{< /text >}}

* Apply the httpbin-bar service entry:

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: STATIC
      addresses:
      - 240.0.0.2
      endpoints:
      - address: ${CLUSTER2_GW_ADDR}
        network: external
        ports:
          http1: 15443 # Do not change this port value
      - address: ${CLUSTER1_EGW_ADDR}
        ports:
          http1: 15443
    EOF
    {{< /text >}}

{{< /tab >}}

{{< tab name="$CLUSTER2_GW_ADDR - hostname" cookie-value="option2" >}}
If ${CLUSTER2_GW_ADDR} is a hostname, you can use `resolution: DNS` for the endpoint resolution:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  - 240.0.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      http1: 15443
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}
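
One way to check that traffic now leaves cluster1 through the egress gateway, rather than directly from the sidecar, is to repeat the test request and then inspect the egress gateway's logs. This assumes Envoy access logging is enabled in your mesh:

{{< text bash >}}
$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
$ kubectl logs --context=$CTX_CLUSTER1 -n istio-system -l istio=egressgateway --tail=5
{{< /text >}}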

## Cleanup the example

Execute the following commands to clean up the example services.

* Cleanup cluster1:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
    $ kubectl delete --context=$CTX_CLUSTER1 ns foo
    {{< /text >}}

* Cleanup cluster2:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    $ kubectl delete --context=$CTX_CLUSTER2 ns bar
    {{< /text >}}

## Version-aware routing to remote services

If the remote service has multiple versions, you can add labels to the service entry endpoints. For example:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve to
  # must be unique for each service.
  - 240.0.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
EOF
{{< /text >}}

You can then create virtual services and destination rules to define subsets of the httpbin.bar.global service using the appropriate gateway label selectors. The instructions are the same as those used for routing to a local service. See multicluster version routing for a complete example.
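
For example, a minimal sketch that sends all traffic for `httpbin.bar.global` to the endpoints labeled `cluster: cluster2` in the service entry above might look like this (the resource names and the `cluster2` subset name are illustrative):

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar-global
spec:
  host: httpbin.bar.global
  subsets:
  # The subset selects service entry endpoints by their labels.
  - name: cluster2
    labels:
      cluster: cluster2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-bar-global
spec:
  hosts:
  - httpbin.bar.global
  http:
  - route:
    - destination:
        host: httpbin.bar.global
        subset: cluster2
EOF
{{< /text >}}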

## Uninstalling

Uninstall Istio by running the following commands on every cluster:

{{< text bash >}}
$ kubectl delete -f $HOME/istio.yaml
$ kubectl delete ns istio-system
{{< /text >}}
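
The guide installed Istio with `istioctl manifest apply` rather than from a saved manifest file, so if `$HOME/istio.yaml` does not exist in your environment you can regenerate the equivalent manifest from the same values file and delete it. This sketch assumes the Istio release directory and the same `istioctl` version are still available:

{{< text bash >}}
$ istioctl manifest generate \
    -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-gateways.yaml | kubectl delete -f -
$ kubectl delete ns istio-system
{{< /text >}}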

## Summary

Using Istio gateways, a common root CA, and service entries, you can configure a single Istio service mesh across multiple Kubernetes clusters. Once configured this way, traffic can be transparently routed to remote clusters without any application involvement. Although this approach requires a certain amount of manual configuration for remote service access, the service entry creation process could be automated.