---
title: Gateway Connectivity
description: Install an Istio mesh across multiple Kubernetes clusters using Istio Gateway to reach remote pods.
weight: 2
---
Follow this guide to install an Istio multicluster service mesh where the Kubernetes cluster services and the applications in each cluster can only communicate with remote clusters over the clusters' gateway IPs.
Instead of using a central Istio control plane to manage the mesh, in this configuration each cluster has an identical Istio control plane installation, each managing its own endpoints. All of the clusters are under a shared administrative control for the purposes of policy enforcement and security.
A single Istio service mesh across the clusters is achieved by replicating shared services and namespaces and using a common root CA in all of the clusters. Cross-cluster communication occurs over Istio Gateways of the respective clusters.
{{< image width="80%" link="./multicluster-with-gateways.svg" caption="Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods" >}}
## Prerequisites

- Two or more Kubernetes clusters running version 1.10 or newer.

- Authority to deploy the Istio control plane using Helm on each Kubernetes cluster.

- The IP address of the `istio-ingressgateway` service in each cluster must be accessible from every other cluster.

- A root CA. Cross-cluster communication requires a mutual TLS connection between services. To enable mutual TLS communication across clusters, each cluster's Citadel will be configured with intermediate CA credentials generated by a shared root CA. For illustration purposes, we use a sample root CA certificate available in the Istio installation under the `samples/certs` directory.
## Deploy the Istio control plane in each cluster
-   Generate intermediate CA certificates for each cluster's Citadel from your organization's root CA. The shared root CA enables mutual TLS communication across different clusters (see the sketch after this list for one possible way to generate the credentials with `openssl`).

    {{< tip >}}
    For illustration purposes, the following instructions use the certificates from the Istio samples directory for both clusters. In real world deployments, you would likely use a different CA certificate for each cluster, all signed by a common root CA.
    {{< /tip >}}
-   Generate a multicluster-gateways Istio configuration file using `helm`:

    {{< warning >}}
    If you're not sure if your `helm` dependencies are up to date, update them using the command shown in the Helm installation steps before running the following command.
    {{< /warning >}}

    {{< text bash >}}
    $ cat install/kubernetes/helm/istio-init/files/crd-* > $HOME/istio.yaml
    $ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
        -f @install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml@ >> $HOME/istio.yaml
    {{< /text >}}

    For further details and customization options, refer to the Installation with Helm instructions.
-   Run the following commands in every cluster to deploy an identical Istio control plane configuration in all of them.

    {{< warning >}}
    Make sure that the current user has cluster administrator (`cluster-admin`) permissions and grant them if not. On the GKE platform, for example, the following command can be used:

    {{< text bash >}}
    $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
    {{< /text >}}
    {{< /warning >}}
-   Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.

    {{< text bash >}}
    $ kubectl create namespace istio-system
    $ kubectl create secret generic cacerts -n istio-system \
        --from-file=@samples/certs/ca-cert.pem@ \
        --from-file=@samples/certs/ca-key.pem@ \
        --from-file=@samples/certs/root-cert.pem@ \
        --from-file=@samples/certs/cert-chain.pem@
    {{< /text >}}
-   Use the Istio installation yaml file generated in a previous step to install Istio:

    {{< text bash >}}
    $ kubectl apply -f $HOME/istio.yaml
    {{< /text >}}
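The exact procedure for producing each cluster's intermediate CA credentials depends on your organization's PKI. As a minimal sketch only, assuming the root CA key pair is available locally as `root-cert.pem` and `root-key.pem` (hypothetical file names), an intermediate CA for one cluster could be generated with `openssl` along these lines:

{{< text bash >}}
# Create a key and CSR for the cluster's intermediate CA (the subject is a made-up example).
$ openssl genrsa -out ca-key.pem 4096
$ openssl req -new -key ca-key.pem -out intermediate.csr \
    -subj "/O=Example Inc./CN=cluster1 Intermediate CA"
# Sign the CSR with the root CA, marking the result as a CA certificate.
$ openssl x509 -req -in intermediate.csr -CA root-cert.pem -CAkey root-key.pem \
    -CAcreateserial -days 3650 -out ca-cert.pem \
    -extfile <(printf "basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign")
# Assemble the chain file referenced by the cacerts secret created above.
$ cat ca-cert.pem root-cert.pem > cert-chain.pem
{{< /text >}}

In a real deployment you would repeat this with a distinct key pair per cluster and hand only that cluster's intermediate key material to its Citadel.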
## Setup DNS
Providing DNS resolution for services in remote clusters will allow
existing applications to function unmodified, as applications typically
expect to resolve services by their DNS names and access the resulting
IP. Istio itself does not use the DNS for routing requests between
services. Services local to a cluster share a common DNS suffix
(e.g., svc.cluster.local). Kubernetes DNS provides DNS resolution for these
services.
To provide a similar setup for services from remote clusters, we name services from remote clusters in the format `<name>.<namespace>.global`. Istio also ships with a CoreDNS server that will provide DNS resolution for these services. In order to utilize this DNS, Kubernetes' DNS needs to be configured to point to CoreDNS as the DNS server for the `.global` DNS domain.
Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case):
For clusters that use kube-dns:
{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
{{< /text >}}
For clusters that use CoreDNS:
{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
{{< /text >}}
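One quick way to confirm that the `istiocoredns` cluster IP was substituted correctly (shown here for the `kube-dns` variant) is to read the stub domain entry back:

{{< text bash >}}
$ kubectl get configmap kube-dns -n kube-system -o jsonpath='{.data.stubDomains}'
{{< /text >}}

The output should contain the same cluster IP that `kubectl get svc -n istio-system istiocoredns` reports.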
## Configure application services
Every service in a given cluster that needs to be accessed from a different remote cluster requires a `ServiceEntry` configuration in the remote cluster. The host used in the service entry should be of the form `<name>.<namespace>.global` where name and namespace correspond to the service's name and namespace respectively.
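As an illustration only, a service entry for a hypothetical `httpbin` service running on port 8000 in namespace `bar` of a remote cluster might look like the following sketch. The host, the non-routable `240.0.0.2` address, and the `${REMOTE_GW_ADDR}` placeholder for the remote cluster's `istio-ingressgateway` address are assumptions made for this example; adapt them to your own services.

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # Must be of the form <name>.<namespace>.global
  - httpbin.bar.global
  # Treated as part of the mesh because all clusters share the same root of trust.
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # An arbitrary, non-routable IP, unique per remote service within this cluster.
  # Traffic sent to it is captured by the sidecar and routed to the endpoint below.
  - 240.0.0.2
  endpoints:
  # The routable address of the remote cluster's istio-ingressgateway.
  - address: ${REMOTE_GW_ADDR}
    ports:
      http1: 15443 # The dedicated multicluster gateway port
{{< /text >}}

The remote gateway address used above can typically be obtained with a command similar to:

{{< text bash >}}
$ kubectl get svc --selector=app=istio-ingressgateway -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}'
{{< /text >}}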
To confirm that your multicluster configuration is working, we suggest you proceed to our simple multicluster using gateways example to test your setup.
## Uninstalling
Uninstall Istio by running the following commands on every cluster:
{{< text bash >}}
$ kubectl delete -f $HOME/istio.yaml
$ kubectl delete ns istio-system
{{< /text >}}
## Summary
Using Istio gateways, a common root CA, and service entries, you can configure a single Istio service mesh across multiple Kubernetes clusters. Once configured this way, traffic can be transparently routed to remote clusters without any application involvement. Although this approach requires a certain amount of manual configuration for remote service access, the service entry creation process could be automated.