Rewrite multicluster installation docs (#8180)
* Rewrite multicluster installation docs

  This completely replaces the existing docs for multi-primary, primary-remote, and single/multi-network. Changed blogs dependent on the `.global` hack to reference the 1.6 docs.

* addressing sven's comments
* renaming resources and changing heading style within tabs
* using <h3> for headings in tabs
|
@ -1,7 +1,7 @@
|
|||
<!-- WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. UPDATE THE OWNER ATTRIBUTE IN THE DOCUMENT FILES, INSTEAD -->
|
||||
# Istio.io Document Owners
|
||||
|
||||
There are 145 owned istio.io docs.
|
||||
There are 144 owned istio.io docs.
|
||||
|
||||
## istio/wg-docs-maintainers: 15 docs
|
||||
|
||||
|
@ -21,7 +21,7 @@ There are 145 owned istio.io docs.
|
|||
- [docs/examples/microservices-istio/single/index.md](https://preliminary.istio.io/latest/docs/examples/microservices-istio/single)
|
||||
- [docs/reference/glossary/index.md](https://preliminary.istio.io/latest/docs/reference/glossary)
|
||||
|
||||
## istio/wg-environments-maintainers: 37 docs
|
||||
## istio/wg-environments-maintainers: 36 docs
|
||||
|
||||
- [docs/examples/virtual-machines/bookinfo/index.md](https://preliminary.istio.io/latest/docs/examples/virtual-machines/bookinfo)
|
||||
- [docs/examples/virtual-machines/multi-network/index.md](https://preliminary.istio.io/latest/docs/examples/virtual-machines/multi-network)
|
||||
|
@ -42,8 +42,7 @@ There are 145 owned istio.io docs.
|
|||
- [docs/setup/additional-setup/sidecar-injection/index.md](https://preliminary.istio.io/latest/docs/setup/additional-setup/sidecar-injection)
|
||||
- [docs/setup/getting-started/index.md](https://preliminary.istio.io/latest/docs/setup/getting-started)
|
||||
- [docs/setup/install/istioctl/index.md](https://preliminary.istio.io/latest/docs/setup/install/istioctl)
|
||||
- [docs/setup/install/multicluster/gateways/index.md](https://preliminary.istio.io/latest/docs/setup/install/multicluster/gateways)
|
||||
- [docs/setup/install/multicluster/shared/index.md](https://preliminary.istio.io/latest/docs/setup/install/multicluster/shared)
|
||||
- [docs/setup/install/multicluster/index.md](https://preliminary.istio.io/latest/docs/setup/install/multicluster)
|
||||
- [docs/setup/install/operator/index.md](https://preliminary.istio.io/latest/docs/setup/install/operator)
|
||||
- [docs/setup/install/virtual-machine/index.md](https://preliminary.istio.io/latest/docs/setup/install/virtual-machine)
|
||||
- [docs/setup/platform-setup/MicroK8s/index.md](https://preliminary.istio.io/latest/docs/setup/platform-setup/MicroK8s)
|
||||
|
|
|
@ -84,7 +84,7 @@ Below is our list of existing features and their current phases. This informatio
|
|||
| [Kubernetes: Istio Control Plane Installation](/docs/setup/) | Stable
|
||||
| [Attribute Expression Language](https://istio.io/v1.6/docs/reference/config/policy-and-telemetry/expression-language/) | Stable
|
||||
| Mixer Out-of-Process Adapter Authoring Model | Beta
|
||||
| [Multicluster Mesh over VPN](/docs/setup/install/multicluster/) | Alpha
|
||||
| [Multicluster Mesh](/docs/setup/install/multicluster/) | Alpha
|
||||
| [Kubernetes: Istio Control Plane Upgrade](/docs/setup/) | Beta
|
||||
| Consul Integration | Alpha
|
||||
| Basic Configuration Resource Validation | Beta
|
||||
|
|
|
@ -37,7 +37,7 @@ running in one cluster, versions `v2` and `v3` running in a second cluster.
|
|||
To start, you'll need two Kubernetes clusters, both running a slightly customized configuration of Istio.
|
||||
|
||||
* Set up a multicluster environment with two Istio clusters by following the
|
||||
[replicated control planes](/docs/setup/install/multicluster/gateways/) instructions.
|
||||
[replicated control planes](/docs/setup/install/multicluster) instructions.
|
||||
|
||||
* The `kubectl` command is used to access both clusters with the `--context` flag.
|
||||
Use the following command to list your contexts:
|
||||
|
@ -271,7 +271,7 @@ is running on `cluster1` and we have not yet configured access to `cluster2`.
|
|||
|
||||
## Create a service entry and destination rule on `cluster1` for the remote reviews service
|
||||
|
||||
As described in the [setup instructions](/docs/setup/install/multicluster/gateways/#setup-dns),
|
||||
As described in the [setup instructions](https://istio.io/v1.6/docs/setup/install/multicluster/gateways/#setup-dns),
|
||||
remote services are accessed with a `.global` DNS name. In our case, it's `reviews.default.global`,
|
||||
so we need to create a service entry and destination rule for that host.
|
||||
The service entry will use the `cluster2` gateway as the endpoint address to access the service.
|
||||
|
@ -330,7 +330,7 @@ EOF
|
|||
The address `240.0.0.3` of the service entry can be any arbitrary unallocated IP.
|
||||
Using an IP from the class E address range 240.0.0.0/4 is a good choice.
|
||||
Check out the
|
||||
[gateway-connected multicluster example](/docs/setup/install/multicluster/gateways/#configure-the-example-services)
|
||||
[gateway-connected multicluster example](/docs/setup/install/multicluster)
|
||||
for more details.
|
||||
|
||||
Note that the labels of the subsets in the destination rule map to the service entry
|
||||
|
|
|
@ -15,7 +15,7 @@ This blog post explains how we solved these problems using [Admiral](https://git
|
|||
|
||||
## Background
|
||||
|
||||
Using Istio, we realized the configuration for multi-cluster was complex and challenging to maintain over time. As a result, we chose the model described in [Multi-Cluster Istio Service Mesh with replicated control planes](/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster) for scalability and other operational reasons. Following this model, we had to solve these key requirements before widely adopting an Istio service mesh:
|
||||
Using Istio, we realized the configuration for multi-cluster was complex and challenging to maintain over time. As a result, we chose the model described in [Multi-Cluster Istio Service Mesh with replicated control planes](https://istio.io/v1.6/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster) for scalability and other operational reasons. Following this model, we had to solve these key requirements before widely adopting an Istio service mesh:
|
||||
|
||||
- Creation of service DNS entries decoupled from the namespace, as described in [Features of multi-mesh deployments](/blog/2019/isolated-clusters/#features-of-multi-mesh-deployments).
|
||||
- Service discovery across many clusters.
|
||||
|
@ -164,6 +164,6 @@ spec:
|
|||
|
||||
## Summary
|
||||
|
||||
Admiral provides a new Global Traffic Routing and unique service naming functionality to address some challenges posed by the Istio model described in [multi-cluster deployment with replicated control planes](/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster). It removes the need for manual configuration synchronization between clusters and generates contextual configuration for each cluster. This makes it possible to operate a Service Mesh composed of many Kubernetes clusters.
|
||||
Admiral provides a new Global Traffic Routing and unique service naming functionality to address some challenges posed by the Istio model described in [multi-cluster deployment with replicated control planes](https://istio.io/v1.6/docs/setup/install/multicluster/gateways/#deploy-the-istio-control-plane-in-each-cluster). It removes the need for manual configuration synchronization between clusters and generates contextual configuration for each cluster. This makes it possible to operate a Service Mesh composed of many Kubernetes clusters.
|
||||
|
||||
We think the Istio/Service Mesh community would benefit from this approach, so we [open sourced Admiral](https://github.com/istio-ecosystem/admiral) and would love your feedback and support!
|
||||
|
|
|
@ -525,7 +525,7 @@ traffic for services running outside of the mesh, including the following tasks:
|
|||
- Run a mesh service in a Virtual Machine (VM) by
|
||||
[adding VMs to your mesh](/docs/examples/virtual-machines/).
|
||||
- Logically add services from a different cluster to the mesh to configure a
|
||||
[multicluster Istio mesh](/docs/setup/install/multicluster/gateways/#configure-the-example-services)
|
||||
[multicluster Istio mesh](/docs/setup/install/multicluster)
|
||||
on Kubernetes.
|
||||
|
||||
You don’t need to add a service entry for every external service that you want
|
||||
|
|
|
@ -1,21 +0,0 @@
|
|||
---
|
||||
title: Multicluster Installation
|
||||
description: Configure an Istio mesh spanning multiple Kubernetes clusters.
|
||||
weight: 30
|
||||
aliases:
|
||||
- /docs/setup/kubernetes/multicluster-install/
|
||||
- /docs/setup/kubernetes/multicluster/
|
||||
- /docs/setup/kubernetes/install/multicluster/
|
||||
keywords: [kubernetes,multicluster]
|
||||
test: n/a
|
||||
---
|
||||
|
||||
{{< tip >}}
|
||||
Note that these instructions are not mutually exclusive.
|
||||
In a large multicluster deployment composed of more than two clusters,
|
||||
a combination of the approaches can be used. For example,
|
||||
two clusters might share a control plane while a third has its own.
|
||||
{{< /tip >}}
|
||||
|
||||
Refer to the [multicluster deployment model](/docs/ops/deployment/deployment-models/#multiple-clusters)
|
||||
concept documentation for more information.
|
|
@ -1,559 +0,0 @@
|
|||
---
|
||||
title: Replicated control planes
|
||||
description: Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.
|
||||
weight: 2
|
||||
aliases:
|
||||
- /docs/setup/kubernetes/multicluster-install/gateways/
|
||||
- /docs/examples/multicluster/gateways/
|
||||
- /docs/tasks/multicluster/gateways/
|
||||
- /docs/setup/kubernetes/install/multicluster/gateways/
|
||||
keywords: [kubernetes,multicluster,gateway]
|
||||
owner: istio/wg-environments-maintainers
|
||||
test: no
|
||||
---
|
||||
|
||||
Follow this guide to install an Istio
|
||||
[multicluster deployment](/docs/ops/deployment/deployment-models/#multiple-clusters)
|
||||
using multiple {{< gloss "primary cluster" >}}primary clusters{{< /gloss >}},
|
||||
each with its own replicated [control plane](/docs/ops/deployment/deployment-models/#control-plane-models),
|
||||
and using gateways to connect services across clusters.
|
||||
|
||||
Instead of using a shared Istio control plane to manage the mesh,
|
||||
in this configuration each cluster has its own Istio control plane
|
||||
installation, each managing its own endpoints.
|
||||
All of the clusters are under a shared administrative control for the purposes of
|
||||
policy enforcement and security.
|
||||
|
||||
A single Istio service mesh across the clusters is achieved by replicating
|
||||
shared services and namespaces and using a common root CA in all of the clusters.
|
||||
Cross-cluster communication occurs over the Istio gateways of the respective clusters.
|
||||
|
||||
{{< image width="80%" link="./multicluster-with-gateways.svg" caption="Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods" >}}
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Two or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.
|
||||
|
||||
* Authority to [deploy the Istio control plane](/docs/setup/install/istioctl/)
|
||||
on **each** Kubernetes cluster.
|
||||
|
||||
* The IP address of the `istio-ingressgateway` service in each cluster must be accessible
|
||||
from every other cluster, ideally using L4 network load balancers (NLB).
|
||||
Not all cloud providers support NLBs and some require special annotations to use them,
|
||||
so please consult your cloud provider’s documentation for enabling NLBs for
|
||||
`LoadBalancer`-type services. When deploying on platforms without
|
||||
NLB support, it may be necessary to modify the health checks for the load
|
||||
balancer to register the ingress gateway.
|
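A quick way to check whether the ingress gateway has been assigned a reachable external IP in a given cluster (a sanity check, not a substitute for your provider's documentation):

{{< text bash >}}
$ kubectl get svc istio-ingressgateway -n istio-system
{{< /text >}}

An address in the `EXTERNAL-IP` column indicates that the load balancer has been provisioned.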
||||
|
||||
* A **Root CA**. Cross cluster communication requires a mutual TLS connection
|
||||
between services. To enable mutual TLS communication across clusters, each
|
||||
cluster's Istio CA will be configured with intermediate CA credentials
|
||||
generated by a shared root CA. For illustration purposes, you use a
|
||||
sample root CA certificate available in the Istio installation
|
||||
under the `samples/certs` directory.
|
||||
|
||||
## Deploy the Istio control plane in each cluster
|
||||
|
||||
1. Generate intermediate CA certificates for each cluster's Istio CA from your
|
||||
organization's root CA. The shared root CA enables mutual TLS communication
|
||||
across different clusters.
|
||||
|
||||
For illustration purposes, the following instructions use the certificates
|
||||
from the Istio samples directory for both clusters. In real world deployments,
|
||||
you would likely use a different CA certificate for each cluster, all signed
|
||||
by a common root CA.
|
||||
|
||||
1. Run the following commands in **every cluster** to deploy an identical Istio control plane
|
||||
configuration in all of them.
|
||||
|
||||
* Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See [Certificate Authority (CA) certificates](/docs/tasks/security/cert-management/plugin-ca-cert/) for more details.
|
||||
|
||||
{{< warning >}}
|
||||
The root and intermediate certificate from the samples directory are widely
|
||||
distributed and known. Do **not** use these certificates in production as
|
||||
your clusters would then be open to security vulnerabilities and compromise.
|
||||
{{< /warning >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl create namespace istio-system
|
||||
$ kubectl create secret generic cacerts -n istio-system \
|
||||
--from-file=@samples/certs/ca-cert.pem@ \
|
||||
--from-file=@samples/certs/ca-key.pem@ \
|
||||
--from-file=@samples/certs/root-cert.pem@ \
|
||||
--from-file=@samples/certs/cert-chain.pem@
|
||||
{{< /text >}}
|
||||
|
||||
* Install Istio:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl install \
|
||||
-f manifests/examples/multicluster/values-istio-multicluster-gateways.yaml
|
||||
{{< /text >}}
|
||||
|
||||
For further details and customization options, refer to the
|
||||
[installation instructions](/docs/setup/install/istioctl/).
|
||||
|
||||
## Setup DNS
|
||||
|
||||
Providing DNS resolution for services in remote clusters will allow
|
||||
existing applications to function unmodified, as applications typically
|
||||
expect to resolve services by their DNS names and access the resulting
|
||||
IP. Istio itself does not use DNS for routing requests between
|
||||
services. Services local to a cluster share a common DNS suffix
|
||||
(e.g., `svc.cluster.local`). Kubernetes DNS provides DNS resolution for these
|
||||
services.
|
||||
|
||||
To provide a similar setup for services from remote clusters, you name
|
||||
services from remote clusters in the format
|
||||
`<name>.<namespace>.global`. Istio also ships with a CoreDNS server that
|
||||
will provide DNS resolution for these services. In order to utilize this
|
||||
DNS, Kubernetes' DNS must be configured to *stub a domain* for `.global`.
|
||||
|
||||
{{< warning >}}
|
||||
Some cloud providers have different specific DNS domain stub capabilities
|
||||
and procedures for their Kubernetes services. Reference the cloud provider's
|
||||
documentation to determine how to stub DNS domains for each unique
|
||||
environment. The objective here is to stub a domain for `.global` on
|
||||
port `53` to reference or proxy the `istiocoredns` service in Istio's service
|
||||
namespace.
|
||||
{{< /warning >}}
|
||||
|
||||
Create one of the following ConfigMaps, or update an existing one, in each
|
||||
cluster that will be calling services in remote clusters
|
||||
(every cluster in the general case):
|
||||
|
||||
{{< tabset category-name="platform" >}}
|
||||
{{< tab name="KubeDNS" category-value="kube-dns" >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f - <<EOF
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: kube-dns
|
||||
namespace: kube-system
|
||||
data:
|
||||
stubDomains: |
|
||||
{"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="CoreDNS (< 1.4.0)" category-value="coredns-prev-1.4.0" >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f - <<EOF
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: coredns
|
||||
namespace: kube-system
|
||||
data:
|
||||
Corefile: |
|
||||
.:53 {
|
||||
errors
|
||||
health
|
||||
kubernetes cluster.local in-addr.arpa ip6.arpa {
|
||||
pods insecure
|
||||
upstream
|
||||
fallthrough in-addr.arpa ip6.arpa
|
||||
}
|
||||
prometheus :9153
|
||||
proxy . /etc/resolv.conf
|
||||
cache 30
|
||||
loop
|
||||
reload
|
||||
loadbalance
|
||||
}
|
||||
global:53 {
|
||||
errors
|
||||
cache 30
|
||||
proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
|
||||
}
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="CoreDNS (== 1.4.0)" cookie-value="coredns-1.4.0" >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f - <<EOF
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: coredns
|
||||
namespace: kube-system
|
||||
data:
|
||||
Corefile: |
|
||||
.:53 {
|
||||
errors
|
||||
health
|
||||
kubernetes cluster.local in-addr.arpa ip6.arpa {
|
||||
pods insecure
|
||||
upstream
|
||||
fallthrough in-addr.arpa ip6.arpa
|
||||
}
|
||||
prometheus :9153
|
||||
forward . /etc/resolv.conf
|
||||
cache 30
|
||||
loop
|
||||
reload
|
||||
loadbalance
|
||||
}
|
||||
global:53 {
|
||||
errors
|
||||
cache 30
|
||||
forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53
|
||||
}
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="CoreDNS (>= 1.4.0)" cookie-value="coredns-after-1.4.0" >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f - <<EOF
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: coredns
|
||||
namespace: kube-system
|
||||
data:
|
||||
Corefile: |
|
||||
.:53 {
|
||||
errors
|
||||
health
|
||||
ready
|
||||
kubernetes cluster.local in-addr.arpa ip6.arpa {
|
||||
pods insecure
|
||||
upstream
|
||||
fallthrough in-addr.arpa ip6.arpa
|
||||
}
|
||||
prometheus :9153
|
||||
forward . /etc/resolv.conf
|
||||
cache 30
|
||||
loop
|
||||
reload
|
||||
loadbalance
|
||||
}
|
||||
global:53 {
|
||||
errors
|
||||
cache 30
|
||||
forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53
|
||||
}
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
{{< /tabset >}}
|
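The stub configurations above embed the cluster IP of the `istiocoredns` service. To confirm that the service exists, and to see the IP that your configuration will reference, run:

{{< text bash >}}
$ kubectl get svc -n istio-system istiocoredns
{{< /text >}}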
||||
|
||||
## Configure application services
|
||||
|
||||
Every service in a given cluster that needs to be accessed from a different remote
|
||||
cluster requires a `ServiceEntry` configuration in the remote cluster.
|
||||
The host used in the service entry should be of the form `<name>.<namespace>.global`
|
||||
where name and namespace correspond to the service's name and namespace respectively.
|
||||
|
||||
To demonstrate cross cluster access, configure the
|
||||
[sleep service]({{< github_tree >}}/samples/sleep)
|
||||
running in one cluster to call the [httpbin]({{< github_tree >}}/samples/httpbin) service
|
||||
running in a second cluster. Before you begin:
|
||||
|
||||
* Choose two of your Istio clusters, to be referred to as `cluster1` and `cluster2`.
|
||||
|
||||
{{< boilerplate kubectl-multicluster-contexts >}}
|
||||
|
||||
### Configure the example services
|
||||
|
||||
1. Deploy the `sleep` service in `cluster1`.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl create --context=$CTX_CLUSTER1 namespace foo
|
||||
$ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
|
||||
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
|
||||
$ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
|
||||
{{< /text >}}
|
||||
|
||||
1. Deploy the `httpbin` service in `cluster2`.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl create --context=$CTX_CLUSTER2 namespace bar
|
||||
$ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
|
||||
$ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
|
||||
{{< /text >}}
|
||||
|
||||
1. Export the `cluster2` gateway address:
|
||||
|
||||
{{< text bash >}}
|
||||
$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
|
||||
-n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
|
||||
{{< /text >}}
|
||||
|
||||
This command sets the value to the gateway's public IP, but note that you can set it to
|
||||
a DNS name instead, if you have one.
|
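For example, on platforms that expose the gateway through a DNS name rather than an IP (a sketch; the `hostname` field is only populated on such platforms), you could export the hostname instead:

{{< text bash >}}
$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
{{< /text >}}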
||||
|
||||
{{< tip >}}
|
||||
If `cluster2` is running in an environment that does not
|
||||
support external load balancers, you will need to use a nodePort to access the gateway.
|
||||
Instructions for obtaining the IP to use can be found in the
|
||||
[Control Ingress Traffic](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
|
||||
guide. You will also need to change the service entry endpoint port in the following step from 15443
|
||||
to its corresponding nodePort
|
||||
(i.e., `kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'`).
|
||||
{{< /tip >}}
|
||||
|
||||
1. Create a service entry for the `httpbin` service in `cluster1`.
|
||||
|
||||
To allow `sleep` in `cluster1` to access `httpbin` in `cluster2`, we need to create
|
||||
a service entry for it. The host name of the service entry should be of the form
|
||||
`<name>.<namespace>.global` where name and namespace correspond to the
|
||||
remote service's name and namespace respectively.
|
||||
|
||||
For DNS resolution for services under the `*.global` domain, you need to assign these
|
||||
services an IP address.
|
||||
|
||||
{{< tip >}}
|
||||
Each service (in the `.global` DNS domain) must have a unique IP within the cluster.
|
||||
{{< /tip >}}
|
||||
|
||||
If the global services have actual VIPs, you can use those, but otherwise we suggest
|
||||
using IPs from the class E addresses range `240.0.0.0/4`.
|
||||
Application traffic for these IPs will be captured by the sidecar and routed to the
|
||||
appropriate remote service.
|
||||
|
||||
{{< warning >}}
|
||||
Multicast addresses (224.0.0.0 through 239.255.255.255) should not be used because there is no route to them by default.
|
||||
Loopback addresses (127.0.0.0/8) should also not be used because traffic sent to them may be redirected to the sidecar inbound listener.
|
||||
{{< /warning >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
|
||||
apiVersion: networking.istio.io/v1alpha3
|
||||
kind: ServiceEntry
|
||||
metadata:
|
||||
name: httpbin-bar
|
||||
spec:
|
||||
hosts:
|
||||
# must be of form name.namespace.global
|
||||
- httpbin.bar.global
|
||||
# Treat remote cluster services as part of the service mesh
|
||||
# as all clusters in the service mesh share the same root of trust.
|
||||
location: MESH_INTERNAL
|
||||
ports:
|
||||
- name: http1
|
||||
number: 8000
|
||||
protocol: http
|
||||
resolution: DNS
|
||||
addresses:
|
||||
# the IP address to which httpbin.bar.global will resolve
|
||||
# must be unique for each remote service, within a given cluster.
|
||||
# This address need not be routable. Traffic for this IP will be captured
|
||||
# by the sidecar and routed appropriately.
|
||||
- 240.0.0.2
|
||||
endpoints:
|
||||
# This is the routable address of the ingress gateway in cluster2 that
|
||||
# sits in front of the httpbin.bar service. Traffic from the sidecar will be
|
||||
# routed to this address.
|
||||
- address: ${CLUSTER2_GW_ADDR}
|
||||
ports:
|
||||
http1: 15443 # Do not change this port value
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
The configurations above will result in all traffic in `cluster1` for
|
||||
`httpbin.bar.global` on *any port* being routed to the endpoint
|
||||
`$CLUSTER2_GW_ADDR:15443` over a mutual TLS connection.
|
||||
|
||||
The gateway for port 15443 is a special SNI-aware Envoy proxy,
|
||||
preconfigured and installed when you deployed the Istio control plane in the cluster.
|
||||
Traffic entering port 15443 will be
|
||||
load balanced among pods of the appropriate internal service of the target
|
||||
cluster (in this case, `httpbin.bar` in `cluster2`).
|
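To sanity check that the ingress gateway in `cluster2` exposes this port, you can reuse the jsonpath pattern from the tip above:

{{< text bash >}}
$ kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.port==15443)]}'
{{< /text >}}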
||||
|
||||
{{< warning >}}
|
||||
Do not create a `Gateway` configuration for port 15443.
|
||||
{{< /warning >}}
|
||||
|
||||
1. Verify that `httpbin` is accessible from the `sleep` service.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
|
||||
{{< /text >}}
|
||||
|
||||
### Send remote traffic via an egress gateway
|
||||
|
||||
If you want to route traffic from `cluster1` via a dedicated egress gateway, instead of directly from the sidecars,
|
||||
use the following service entry for `httpbin.bar` instead of the one in the previous section.
|
||||
|
||||
{{< tip >}}
|
||||
The egress gateway used in this configuration cannot also be used for other, non-inter-cluster, egress traffic.
|
||||
{{< /tip >}}
|
||||
|
||||
If `$CLUSTER2_GW_ADDR` is an IP address, use the `$CLUSTER2_GW_ADDR - IP address` option. If `$CLUSTER2_GW_ADDR` is a hostname, use the `$CLUSTER2_GW_ADDR - hostname` option.
|
||||
|
||||
{{< tabset category-name="profile" >}}
|
||||
|
||||
{{< tab name="$CLUSTER2_GW_ADDR - IP address" category-value="option1" >}}
|
||||
* Export the `cluster1` egress gateway address:
|
||||
|
||||
{{< text bash >}}
|
||||
$ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
|
||||
-n istio-system -o jsonpath='{.items[0].spec.clusterIP}')
|
||||
{{< /text >}}
|
||||
|
||||
* Apply the `httpbin-bar` service entry:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
|
||||
apiVersion: networking.istio.io/v1alpha3
|
||||
kind: ServiceEntry
|
||||
metadata:
|
||||
name: httpbin-bar
|
||||
spec:
|
||||
hosts:
|
||||
# must be of form name.namespace.global
|
||||
- httpbin.bar.global
|
||||
location: MESH_INTERNAL
|
||||
ports:
|
||||
- name: http1
|
||||
number: 8000
|
||||
protocol: http
|
||||
resolution: STATIC
|
||||
addresses:
|
||||
- 240.0.0.2
|
||||
endpoints:
|
||||
- address: ${CLUSTER2_GW_ADDR}
|
||||
network: external
|
||||
ports:
|
||||
http1: 15443 # Do not change this port value
|
||||
- address: ${CLUSTER1_EGW_ADDR}
|
||||
ports:
|
||||
http1: 15443
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="$CLUSTER2_GW_ADDR - hostname" category-value="option2" >}}
|
||||
If `${CLUSTER2_GW_ADDR}` is a hostname, you can use `resolution: DNS` for the endpoint resolution:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
|
||||
apiVersion: networking.istio.io/v1alpha3
|
||||
kind: ServiceEntry
|
||||
metadata:
|
||||
name: httpbin-bar
|
||||
spec:
|
||||
hosts:
|
||||
# must be of form name.namespace.global
|
||||
- httpbin.bar.global
|
||||
location: MESH_INTERNAL
|
||||
ports:
|
||||
- name: http1
|
||||
number: 8000
|
||||
protocol: http
|
||||
resolution: DNS
|
||||
addresses:
|
||||
- 240.0.0.2
|
||||
endpoints:
|
||||
- address: ${CLUSTER2_GW_ADDR}
|
||||
network: external
|
||||
ports:
|
||||
http1: 15443 # Do not change this port value
|
||||
- address: istio-egressgateway.istio-system.svc.cluster.local
|
||||
ports:
|
||||
http1: 15443
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< /tabset >}}
|
||||
|
||||
### Cleanup the example
|
||||
|
||||
Execute the following commands to clean up the example services.
|
||||
|
||||
* Cleanup `cluster1`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
|
||||
$ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
|
||||
$ kubectl delete --context=$CTX_CLUSTER1 ns foo
|
||||
{{< /text >}}
|
||||
|
||||
* Cleanup `cluster2`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
|
||||
$ kubectl delete --context=$CTX_CLUSTER2 ns bar
|
||||
{{< /text >}}
|
||||
|
||||
* Cleanup the environment variables:
|
||||
|
||||
{{< text bash >}}
|
||||
$ unset SLEEP_POD CLUSTER2_GW_ADDR CLUSTER1_EGW_ADDR CTX_CLUSTER1 CTX_CLUSTER2
|
||||
{{< /text >}}
|
||||
|
||||
## Version-aware routing to remote services
|
||||
|
||||
If the remote service has multiple versions, you can add
|
||||
labels to the service entry endpoints.
|
||||
For example:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
|
||||
apiVersion: networking.istio.io/v1alpha3
|
||||
kind: ServiceEntry
|
||||
metadata:
|
||||
name: httpbin-bar
|
||||
spec:
|
||||
hosts:
|
||||
# must be of form name.namespace.global
|
||||
- httpbin.bar.global
|
||||
location: MESH_INTERNAL
|
||||
ports:
|
||||
- name: http1
|
||||
number: 8000
|
||||
protocol: http
|
||||
resolution: DNS
|
||||
addresses:
|
||||
# the IP address to which httpbin.bar.global will resolve
|
||||
# must be unique for each service.
|
||||
- 240.0.0.2
|
||||
endpoints:
|
||||
- address: ${CLUSTER2_GW_ADDR}
|
||||
labels:
|
||||
cluster: cluster2
|
||||
ports:
|
||||
http1: 15443 # Do not change this port value
|
||||
EOF
|
||||
{{< /text >}}
|
||||
|
||||
You can then create virtual services and destination rules
|
||||
to define subsets of the `httpbin.bar.global` service using the appropriate gateway label selectors.
|
||||
The instructions are the same as those used for routing to a local service.
|
||||
See [multicluster version routing](/blog/2019/multicluster-version-routing/)
|
||||
for a complete example.
|
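As a minimal sketch, a destination rule defining a subset that matches the `cluster: cluster2` endpoint label above might look like this (not a complete routing configuration):

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar
spec:
  host: httpbin.bar.global
  subsets:
  - name: cluster2
    labels:
      cluster: cluster2
EOF
{{< /text >}}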
||||
|
||||
## Uninstalling
|
||||
|
||||
Uninstall Istio by running the following commands on **every cluster**:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl manifest generate \
|
||||
-f manifests/examples/multicluster/values-istio-multicluster-gateways.yaml \
|
||||
| kubectl delete -f -
|
||||
{{< /text >}}
|
||||
|
||||
## Summary
|
||||
|
||||
Using Istio gateways, a common root CA, and service entries, you can configure
|
||||
a single Istio service mesh across multiple Kubernetes clusters.
|
||||
Once configured this way, traffic can be transparently routed to remote clusters
|
||||
without any application involvement.
|
||||
Although this approach requires a certain amount of manual configuration for
|
||||
remote service access, the service entry creation process could be automated.
|
(Binary image file removed, 250 KiB.)
|
@ -0,0 +1,744 @@
|
|||
---
|
||||
title: Multicluster Installation
|
||||
description: Install an Istio mesh across multiple Kubernetes clusters.
|
||||
weight: 30
|
||||
aliases:
|
||||
- /docs/setup/kubernetes/multicluster-install/
|
||||
- /docs/setup/kubernetes/multicluster/
|
||||
- /docs/setup/kubernetes/install/multicluster/
|
||||
keywords: [kubernetes,multicluster]
|
||||
test: yes
|
||||
owner: istio/wg-environments-maintainers
|
||||
---
|
||||
Follow this guide to install an Istio {{< gloss >}}service mesh{{< /gloss >}}
|
||||
that spans multiple {{< gloss "cluster" >}}clusters{{< /gloss >}}.
|
||||
|
||||
This guide covers some of the most common concerns when creating a
|
||||
{{< gloss >}}multicluster{{< /gloss >}} mesh:
|
||||
|
||||
- [Network topologies](/docs/ops/deployment/deployment-models#network-models):
|
||||
one or two networks
|
||||
|
||||
- [Control plane topologies](/docs/ops/deployment/deployment-models#control-plane-models):
|
||||
multi-primary, primary-remote
|
||||
|
||||
Before you begin, review the [deployment models guide](/docs/ops/deployment/deployment-models)
|
||||
which describes the foundational concepts used throughout this guide.
|
||||
|
||||
## Requirements
|
||||
|
||||
This guide requires that you have two Kubernetes clusters with any of the
|
||||
[supported Kubernetes versions](/docs/setup/platform-setup).
|
||||
|
||||
The API Server in each cluster must be accessible to the other clusters in the
|
||||
mesh. Many cloud providers make API Servers publicly accessible via network
|
||||
load balancers (NLB). If the API Server is not directly accessible, you will
|
||||
have to modify the installation procedure to enable access. For example, the
|
||||
[east-west](https://en.wikipedia.org/wiki/East-west_traffic) gateway used in
|
||||
the multi-network and primary-remote configurations below could also be used
|
||||
to enable access to the API Server.
|
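One simple reachability check, assuming the context names defined in the Environment Variables section below, is to query each API Server with `kubectl`:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" get nodes
$ kubectl --context="${CTX_CLUSTER2}" get nodes
{{< /text >}}

Note that this only verifies access from your own machine; access between the clusters themselves also depends on your network configuration.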
||||
|
||||
## Environment Variables
|
||||
|
||||
This guide will refer to two clusters named `CLUSTER1` and `CLUSTER2`. The following
|
||||
environment variables will be used throughout to simplify the instructions:
|
||||
|
||||
Variable | Description
|
||||
-------- | -----------
|
||||
`CTX_CLUSTER1` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `CLUSTER1` cluster.
|
||||
`CTX_CLUSTER2` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `CLUSTER2` cluster.
|
||||
|
||||
For example:
|
||||
|
||||
{{< text bash >}}
|
||||
$ export CTX_CLUSTER1=cluster1
|
||||
$ export CTX_CLUSTER2=cluster2
|
||||
{{< /text >}}
|
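If you are unsure of your context names, you can list them with:

{{< text bash >}}
$ kubectl config get-contexts -o name
{{< /text >}}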
||||
|
||||
## Configure Trust
|
||||
|
||||
A multicluster service mesh deployment requires that you establish trust
|
||||
between all clusters in the mesh. Depending on the requirements for your
|
||||
system, there may be multiple options available for establishing trust.
|
||||
See [certificate management](/docs/tasks/security/cert-management/) for
|
||||
detailed descriptions and instructions for all available options.
|
||||
Depending on which option you choose, the installation instructions for
|
||||
Istio may change slightly.
|
||||
|
||||
This guide will assume that you use a common root to generate intermediate
|
||||
certificates for each cluster. Follow the [instructions](/docs/tasks/security/cert-management/plugin-ca-cert/)
|
||||
to generate and push a CA certificate secret to both the `CLUSTER1` and `CLUSTER2`
|
||||
clusters.
|
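As a sketch, assuming you generated per-cluster intermediate certificates into `cluster1` and `cluster2` directories as those instructions describe, the secret for `CLUSTER1` would be created like this:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" create namespace istio-system
$ kubectl --context="${CTX_CLUSTER1}" create secret generic cacerts -n istio-system \
    --from-file=cluster1/ca-cert.pem \
    --from-file=cluster1/ca-key.pem \
    --from-file=cluster1/root-cert.pem \
    --from-file=cluster1/cert-chain.pem
# repeat with --context="${CTX_CLUSTER2}" and the cluster2 certificates
{{< /text >}}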
||||
|
||||
{{< tip >}}
|
||||
If you currently have a single cluster with a self-signed CA (as described
|
||||
in [Getting Started](/docs/setup/getting-started/)), you need to
|
||||
change the CA using one of the methods described in
|
||||
[certificate management](/docs/tasks/security/cert-management/). Changing the
|
||||
CA typically requires reinstalling Istio. The installation instructions
|
||||
below may have to be altered based on your choice of CA.
|
||||
{{< /tip >}}
|
||||
|
||||
## Install Istio
|
||||
|
||||
The steps for installing Istio on multiple clusters depend on your
|
||||
requirements for network and control plane topology. This section
|
||||
illustrates the options for two clusters. Meshes that span many clusters may
|
||||
employ more than one of these options. See
|
||||
[deployment models](/docs/ops/deployment/deployment-models) for more
|
||||
information.
|
||||
|
||||
{{< tabset category-name="install-istio" >}}
|
||||
|
||||
{{< tab name="Multi-Primary" category-value="multi-primary" >}}
|
||||
|
||||
The steps that follow will install the Istio control plane on both `CLUSTER1` and
|
||||
`CLUSTER2`, making each a {{< gloss >}}primary cluster{{< /gloss >}}. Both
|
||||
clusters reside on the `NETWORK1` network, meaning there is direct
|
||||
connectivity between the pods in both clusters.
|
||||
|
||||
Each control plane observes the API Servers in both clusters for endpoints.
|
||||
|
||||
Service workloads communicate directly (pod-to-pod) across cluster boundaries.
|
||||
|
||||
{{< image width="75%"
|
||||
link="multi-primary.svg"
|
||||
caption="Multiple primary clusters on the same network"
|
||||
>}}
|
||||
|
||||
<h3>Configure CLUSTER1 as a primary</h3>
|
||||
|
||||
Install Istio on `CLUSTER1`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster1.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER1
|
||||
network: NETWORK1
|
||||
meshNetworks:
|
||||
NETWORK1:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER1
|
||||
- fromRegistry: CLUSTER2
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Configure CLUSTER2 as a primary</h3>
|
||||
|
||||
Install Istio on `CLUSTER2`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster2.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER2
|
||||
network: NETWORK1
|
||||
meshNetworks:
|
||||
NETWORK1:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER2
|
||||
- fromRegistry: CLUSTER1
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Enable Endpoint Discovery</h3>
|
||||
|
||||
Install a remote secret in `CLUSTER2` that provides access to `CLUSTER1`’s API server.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl x create-remote-secret \
|
||||
--context=${CTX_CLUSTER1} \
|
||||
--name=CLUSTER1 | \
|
||||
kubectl apply -f - --context=${CTX_CLUSTER2}
|
||||
{{< /text >}}
|
||||
|
||||
Install a remote secret in `CLUSTER1` that provides access to `CLUSTER2`’s API server.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl x create-remote-secret \
|
||||
--context=${CTX_CLUSTER2} \
|
||||
--name=CLUSTER2 | \
|
||||
kubectl apply -f - --context=${CTX_CLUSTER1}
|
||||
{{< /text >}}
|
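To confirm that the secrets were created, you can list them in each cluster; remote secrets generated by `istioctl x create-remote-secret` carry the `istio/multiCluster=true` label:

{{< text bash >}}
$ kubectl get secret --context="${CTX_CLUSTER1}" -n istio-system -l istio/multiCluster=true
$ kubectl get secret --context="${CTX_CLUSTER2}" -n istio-system -l istio/multiCluster=true
{{< /text >}}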
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="Multi-Primary, Multi-Network" category-value="multi-primary-multi-network" >}}
|
||||
|
||||
The following steps will install the Istio control plane on both `CLUSTER1` and
|
||||
`CLUSTER2`, making each a {{< gloss >}}primary cluster{{< /gloss >}}. `CLUSTER1` is
|
||||
on the `NETWORK1` network, while `CLUSTER2` is on the `NETWORK2` network. This means there
|
||||
is no direct connectivity between pods across cluster boundaries.
|
||||
|
||||
Both `CLUSTER1` and `CLUSTER2` observe the API Servers in both clusters for endpoints.
|
||||
|
||||
Service workloads across cluster boundaries communicate indirectly, via
|
||||
dedicated gateways for [east-west](https://en.wikipedia.org/wiki/East-west_traffic)
|
||||
traffic. The gateway in each cluster must be reachable from the other cluster.
|
||||
|
||||
{{< image width="75%"
|
||||
link="multi-primary-multi-network.svg"
|
||||
caption="Multiple primary clusters on separate networks"
|
||||
>}}
|
||||
|
||||
<h3>Configure CLUSTER1 as a primary with services exposed</h3>
|
||||
|
||||
Install Istio on `CLUSTER1`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster1.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER1
|
||||
network: NETWORK1
|
||||
meshNetworks:
|
||||
NETWORK1:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER1
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
NETWORK2:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER2
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
|
||||
{{< /text >}}
|
||||
|
||||
Install a gateway in `CLUSTER1` that is dedicated to
|
||||
[east-west](https://en.wikipedia.org/wiki/East-west_traffic) traffic. By
|
||||
default, this gateway will be public on the Internet. Production systems may
|
||||
require additional access restrictions (e.g. via firewall rules) to prevent
|
||||
external attacks. Check with your cloud vendor to see what options are
|
||||
available.
|
||||
|
||||
{{< text bash >}}
|
||||
$ CLUSTER=CLUSTER1 NETWORK=NETWORK1 \
|
||||
samples/multicluster/gen-eastwest-gateway.sh | \
|
||||
kubectl apply --context=${CTX_CLUSTER1} -f -
|
||||
{{< /text >}}
|
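Wait for the east-west gateway to be assigned an external IP address before continuing; you can check its status with:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
{{< /text >}}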
||||
|
||||
Since the clusters are on separate networks, we need to expose all services
|
||||
(`*.local`) on the east-west gateway in both clusters. While this gateway is
|
||||
public on the Internet, services behind it can only be accessed by services
|
||||
with a trusted mTLS certificate and workload ID, just as if they were on the
|
||||
same network.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl --context=${CTX_CLUSTER1} apply -n istio-system -f \
|
||||
samples/multicluster/expose-services.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Configure CLUSTER2 as a primary with services exposed</h3>
|
||||
|
||||
Install Istio on `CLUSTER2`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster2.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER2
|
||||
network: NETWORK2
|
||||
meshNetworks:
|
||||
NETWORK1:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER1
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
NETWORK2:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER2
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
|
||||
{{< /text >}}
|
||||
|
||||
As we did with `CLUSTER1` above, install a gateway in `CLUSTER2` that is dedicated
|
||||
to east-west traffic and expose user services.
|
||||
|
||||
{{< text bash >}}
|
||||
$ CLUSTER=CLUSTER2 NETWORK=NETWORK2 \
|
||||
samples/multicluster/gen-eastwest-gateway.sh | \
|
||||
kubectl apply --context=${CTX_CLUSTER2} -f -
|
||||
{{< /text >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl --context=${CTX_CLUSTER2} apply -n istio-system -f \
|
||||
samples/multicluster/expose-services.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Enable Endpoint Discovery for CLUSTER1 and CLUSTER2</h3>
|
||||
|
||||
Install a remote secret in `CLUSTER2` that provides access to `CLUSTER1`’s API server.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl x create-remote-secret \
|
||||
--context=${CTX_CLUSTER1} \
|
||||
--name=CLUSTER1 | \
|
||||
kubectl apply -f - --context=${CTX_CLUSTER2}
|
||||
{{< /text >}}
|
||||
|
||||
Install a remote secret in `CLUSTER1` that provides access to `CLUSTER2`’s API server.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl x create-remote-secret \
|
||||
--context=${CTX_CLUSTER2} \
|
||||
--name=CLUSTER2 | \
|
||||
kubectl apply -f - --context=${CTX_CLUSTER1}
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="Primary-Remote" category-value="primary-remote" >}}
|
||||
|
||||
The following steps will install the Istio control plane on `CLUSTER1` (the
|
||||
{{< gloss >}}primary cluster{{< /gloss >}}) and configure `CLUSTER2` (the
|
||||
{{< gloss >}}remote cluster{{< /gloss >}}) to use the control plane in `CLUSTER1`.
|
||||
Both clusters reside on the `NETWORK1` network, meaning there is direct
|
||||
connectivity between the pods in both clusters.
|
||||
|
||||
`CLUSTER1` will be configured to observe the API Servers in both clusters for
|
||||
endpoints. In this way, the control plane will be able to provide service
|
||||
discovery for workloads in both clusters.
|
||||
|
||||
Service workloads communicate directly (pod-to-pod) across cluster boundaries.
|
||||
|
||||
Services in `CLUSTER2` will reach the control plane in `CLUSTER1` via a
|
||||
dedicated gateway for [east-west](https://en.wikipedia.org/wiki/East-west_traffic)
|
||||
traffic.
|
||||
|
||||
{{< image width="75%"
|
||||
link="primary-remote.svg"
|
||||
caption="Primary and remote clusters on the same network"
|
||||
>}}
|
||||
|
||||
<h3>Configure CLUSTER1 as a primary with control plane exposed</h3>
|
||||
|
||||
Install Istio on `CLUSTER1`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster1.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER1
|
||||
network: NETWORK1
|
||||
meshNetworks:
|
||||
NETWORK1:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER1
|
||||
- fromRegistry: CLUSTER2
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
|
||||
{{< /text >}}
|
||||
|
||||
Install a gateway in `CLUSTER1` that is dedicated to
|
||||
[east-west](https://en.wikipedia.org/wiki/East-west_traffic) traffic. By
|
||||
default, this gateway will be public on the Internet. Production systems may
|
||||
require additional access restrictions (e.g. via firewall rules) to prevent
|
||||
external attacks. Check with your cloud vendor to see what options are
|
||||
available.
|
||||
|
||||
{{< text bash >}}
|
||||
$ CLUSTER=CLUSTER1 NETWORK=NETWORK1 \
|
||||
samples/multicluster/gen-eastwest-gateway.sh | \
|
||||
kubectl apply --context=${CTX_CLUSTER1} -f -
|
||||
{{< /text >}}
|
||||
|
||||
Before we can install on `CLUSTER2`, we need to first expose the control plane in
|
||||
`CLUSTER1` so that services in `CLUSTER2` will be able to access service discovery:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=${CTX_CLUSTER1} -f \
|
||||
samples/multicluster/expose-istiod.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Configure CLUSTER2 as a remote</h3>
|
||||
|
||||
Save the address of `CLUSTER1`’s east-west gateway.
|
||||
|
||||
{{< text bash >}}
|
||||
$ export DISCOVERY_ADDRESS=$(kubectl \
|
||||
--context=${CTX_CLUSTER1} \
|
||||
-n istio-system get svc istio-eastwestgateway \
|
||||
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
{{< /text >}}
|
||||
|
||||
Now install a remote configuration on `CLUSTER2`.
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster2.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER2
|
||||
network: NETWORK1
|
||||
remotePilotAddress: ${DISCOVERY_ADDRESS}
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Enable Endpoint Discovery for CLUSTER2</h3>
|
||||
|
||||
Create a remote secret that will allow the control plane in `CLUSTER1` to access the
|
||||
API Server in `CLUSTER2` for endpoints.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl x create-remote-secret \
|
||||
--context=${CTX_CLUSTER2} \
|
||||
--name=CLUSTER2 | \
|
||||
kubectl apply -f - --context=${CTX_CLUSTER1}
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="Primary-Remote, Multi-Network" category-value="primary-remote-multi-network" >}}
|
||||
|
||||
The following steps will install the Istio control plane on `CLUSTER1` (the
|
||||
{{< gloss >}}primary cluster{{< /gloss >}}) and configure `CLUSTER2` (the
|
||||
{{< gloss >}}remote cluster{{< /gloss >}}) to use the control plane in `CLUSTER1`.
|
||||
`CLUSTER1` is on the `NETWORK1` network, while `CLUSTER2` is on the `NETWORK2` network.
|
||||
This means there is no direct connectivity between pods across cluster
|
||||
boundaries.
|
||||
|
||||
`CLUSTER1` will be configured to observe the API Servers in both clusters for
|
||||
endpoints. In this way, the control plane will be able to provide service
|
||||
discovery for workloads in both clusters.
|
||||
|
||||
Service workloads across cluster boundaries communicate indirectly, via
|
||||
dedicated gateways for [east-west](https://en.wikipedia.org/wiki/East-west_traffic)
|
||||
traffic. The gateway in each cluster must be reachable from the other cluster.
|
||||
|
||||
Services in `CLUSTER2` will reach the control plane in `CLUSTER1` via the
|
||||
same east-west gateway.
|
||||
|
||||
{{< image width="75%"
|
||||
link="primary-remote-multi-network.svg"
|
||||
caption="Primary and remote clusters on separate networks"
|
||||
>}}
|
||||
|
||||
<h3>Configure CLUSTER1 as a primary with control plane and services exposed</h3>
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster1.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER1
|
||||
network: NETWORK1
|
||||
meshNetworks:
|
||||
NETWORK1:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER1
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
NETWORK2:
|
||||
endpoints:
|
||||
- fromRegistry: CLUSTER2
|
||||
gateways:
|
||||
- registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
|
||||
port: 15443
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
|
||||
{{< /text >}}
|
||||
|
||||
Install a gateway in `CLUSTER1` that is dedicated to east-west traffic. By
|
||||
default, this gateway will be public on the Internet. Production systems may
|
||||
require additional access restrictions (e.g. via firewall rules) to prevent
|
||||
external attacks. Check with your cloud vendor to see what options are
|
||||
available.
|
||||
|
||||
{{< text bash >}}
|
||||
$ CLUSTER=CLUSTER1 NETWORK=NETWORK1 \
|
||||
samples/multicluster/gen-eastwest-gateway.sh | \
|
||||
kubectl apply --context=${CTX_CLUSTER1} -f -
|
||||
{{< /text >}}
|
||||
|
||||
Before we can install on `CLUSTER2`, we need to first expose the control plane in
|
||||
`CLUSTER1` so that services in `CLUSTER2` will be able to access service discovery:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=${CTX_CLUSTER1} -f \
|
||||
samples/multicluster/expose-istiod.yaml
|
||||
{{< /text >}}
|
||||
|
||||
Since the clusters are on separate networks, we also need to expose all user
|
||||
services (`*.local`) on the east-west gateway in both clusters. While this
|
||||
gateway is public on the Internet, services behind it can only be accessed by
|
||||
services with a trusted mTLS certificate and workload ID, just as if they were
|
||||
on the same network.
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl --context=${CTX_CLUSTER1} apply -n istio-system -f \
|
||||
samples/multicluster/expose-services.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Configure CLUSTER2 as a remote with services exposed</h3>
|
||||
|
||||
Save the address of `CLUSTER1`’s east-west gateway.
|
||||
|
||||
{{< text bash >}}
|
||||
$ export DISCOVERY_ADDRESS=$(kubectl \
|
||||
--context=${CTX_CLUSTER1} \
|
||||
-n istio-system get svc istio-eastwestgateway \
|
||||
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
|
||||
{{< /text >}}
|
||||
|
||||
Now install a remote configuration on `CLUSTER2`.
|
||||
|
||||
{{< text bash >}}
|
||||
$ cat <<EOF > ./cluster2.yaml
|
||||
apiVersion: install.istio.io/v1alpha1
|
||||
kind: IstioOperator
|
||||
spec:
|
||||
values:
|
||||
global:
|
||||
meshID: MESH1
|
||||
multiCluster:
|
||||
clusterName: CLUSTER2
|
||||
network: NETWORK2
|
||||
remotePilotAddress: ${DISCOVERY_ADDRESS}
|
||||
EOF
|
||||
$ istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
|
||||
{{< /text >}}
|
||||
|
||||
As we did with `CLUSTER1` above, install a gateway in `CLUSTER2` that is dedicated
|
||||
to east-west traffic and expose user services.
|
||||
|
||||
{{< text bash >}}
|
||||
$ CLUSTER=CLUSTER2 NETWORK=NETWORK2 \
|
||||
samples/multicluster/gen-eastwest-gateway.sh | \
|
||||
kubectl apply --context=${CTX_CLUSTER2} -f -
|
||||
{{< /text >}}
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl --context=${CTX_CLUSTER2} apply -n istio-system -f \
|
||||
samples/multicluster/expose-services.yaml
|
||||
{{< /text >}}
|
||||
|
||||
<h3>Enable Endpoint Discovery for CLUSTER2 on NETWORK2</h3>
|
||||
|
||||
Create a remote secret that will allow the control plane in `CLUSTER1` to access the API Server in `CLUSTER2` for endpoints.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl x create-remote-secret \
|
||||
--context=${CTX_CLUSTER2} \
|
||||
--name=CLUSTER2 | \
|
||||
kubectl apply -f - --context=${CTX_CLUSTER1}
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< /tabset >}}
|
||||
|
||||
## Verify the Installation
|
||||
|
||||
To verify that your Istio installation is working as intended, we will deploy
|
||||
the `HelloWorld` application `V1` to `CLUSTER1` and `V2` to `CLUSTER2`. Upon receiving a
|
||||
request, `HelloWorld` will include its version in its response.
|
||||
|
||||
We will also deploy the `Sleep` container to both clusters. We will use these
|
||||
pods as the source of requests to the `HelloWorld` service,
|
||||
simulating in-mesh traffic. Finally, after generating traffic, we will observe
|
||||
which cluster received the requests.
|
||||
|
||||
### Deploy the `HelloWorld` Service
|
||||
|
||||
In order to make the `HelloWorld` service callable from any cluster, the DNS
|
||||
lookup must succeed in each cluster (see
|
||||
[deployment models](/docs/ops/deployment/deployment-models#dns-with-multiple-clusters)
|
||||
for details). We will address this by deploying the `HelloWorld` Service to
|
||||
each cluster in the mesh.
|
||||
|
||||
To begin, create the `sample` namespace in each cluster:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl create --context=${CTX_CLUSTER1} namespace sample
|
||||
$ kubectl create --context=${CTX_CLUSTER2} namespace sample
|
||||
{{< /text >}}
|
||||
|
||||
Enable automatic sidecar injection for the `sample` namespace:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl label --context=${CTX_CLUSTER1} namespace sample \
|
||||
istio-injection=enabled
|
||||
$ kubectl label --context=${CTX_CLUSTER2} namespace sample \
|
||||
istio-injection=enabled
|
||||
{{< /text >}}
|
||||
|
||||
Create the `HelloWorld` service in both clusters:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=${CTX_CLUSTER1} \
|
||||
-f samples/helloworld/helloworld.yaml \
|
||||
-l app=helloworld -n sample
|
||||
$ kubectl apply --context=${CTX_CLUSTER2} \
|
||||
-f samples/helloworld/helloworld.yaml \
|
||||
-l app=helloworld -n sample
|
||||
{{< /text >}}
|
||||
|
||||
### Deploy `HelloWorld` `V1`
|
||||
|
||||
Deploy the `helloworld-v1` application to `CLUSTER1`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=${CTX_CLUSTER1} \
|
||||
-f samples/helloworld/helloworld.yaml \
|
||||
-l app=helloworld -l version=v1 -n sample
|
||||
{{< /text >}}
|
||||
|
||||
Confirm the `helloworld-v1` pod status:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get pod --context=${CTX_CLUSTER1} -n sample
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
helloworld-v1-86f77cd7bd-cpxhv 2/2 Running 0 40s
|
||||
{{< /text >}}
|
||||
|
||||
Wait until the status of `helloworld-v1` is `Running`.
|
||||
|
||||
### Deploy `HelloWorld` `V2`
|
||||
|
||||
Deploy the `helloworld-v2` application to `CLUSTER2`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply --context=${CTX_CLUSTER2} \
|
||||
-f samples/helloworld/helloworld.yaml \
|
||||
-l app=helloworld -l version=v2 -n sample
|
||||
{{< /text >}}
|
||||
|
||||
Confirm the `helloworld-v2` pod status:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get pod --context=${CTX_CLUSTER2} -n sample
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
helloworld-v2-758dd55874-6x4t8 2/2 Running 0 40s
|
||||
{{< /text >}}
|
||||
|
||||
Wait until the status of `helloworld-v2` is `Running`.

### Deploy `Sleep`

Deploy the `Sleep` application to both clusters:

{{< text bash >}}
$ kubectl apply --context=${CTX_CLUSTER1} \
    -f samples/sleep/sleep.yaml -n sample
$ kubectl apply --context=${CTX_CLUSTER2} \
    -f samples/sleep/sleep.yaml -n sample
{{< /text >}}

Confirm the status of the `Sleep` pod on `CLUSTER1`:

{{< text bash >}}
$ kubectl get pod --context=${CTX_CLUSTER1} -n sample -l app=sleep
sleep-754684654f-n6bzf           2/2     Running   0          5s
{{< /text >}}

Wait until the status of the `Sleep` pod is `Running`.

Confirm the status of the `Sleep` pod on `CLUSTER2`:

{{< text bash >}}
$ kubectl get pod --context=${CTX_CLUSTER2} -n sample -l app=sleep
sleep-754684654f-dzl9j           2/2     Running   0          5s
{{< /text >}}

Wait until the status of the `Sleep` pod is `Running`.

### Verifying Cross-Cluster Traffic

To verify that cross-cluster load balancing works as expected, call the
`HelloWorld` service several times using the `Sleep` pod. To ensure load
balancing is working properly, call the `HelloWorld` service from all
clusters in your deployment.

Send one request from the `Sleep` pod on `CLUSTER1` to the `HelloWorld` service:

{{< text bash >}}
$ kubectl exec --context=${CTX_CLUSTER1} -n sample -c sleep \
    "$(kubectl get pod --context=${CTX_CLUSTER1} -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl helloworld.sample:5000/hello
{{< /text >}}

Repeat this request several times and verify that the `HelloWorld` version
toggles between `v1` and `v2`:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}
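
If you would rather script the repetition, a small loop such as the following
sketch sends ten requests in a row (the `-s` flag simply silences the `curl`
progress output):

{{< text bash >}}
$ for i in {1..10}; do
    kubectl exec --context=${CTX_CLUSTER1} -n sample -c sleep \
      "$(kubectl get pod --context=${CTX_CLUSTER1} -n sample -l \
      app=sleep -o jsonpath='{.items[0].metadata.name}')" \
      -- curl -s helloworld.sample:5000/hello
  done
{{< /text >}}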

Now repeat this process from the `Sleep` pod on `CLUSTER2`:

{{< text bash >}}
$ kubectl exec --context=${CTX_CLUSTER2} -n sample -c sleep \
    "$(kubectl get pod --context=${CTX_CLUSTER2} -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl helloworld.sample:5000/hello
{{< /text >}}

Repeat this request several times and verify that the `HelloWorld` version
toggles between `v1` and `v2`:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}

**Congratulations!** You successfully installed and verified Istio on multiple
clusters!
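
If you later want to remove the verification workloads, deleting the `sample`
namespace in each cluster is sufficient. A minimal cleanup sketch:

{{< text bash >}}
$ kubectl delete namespace sample --context=${CTX_CLUSTER1}
$ kubectl delete namespace sample --context=${CTX_CLUSTER2}
{{< /text >}}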
After Width: | Height: | Size: 110 KiB |
After Width: | Height: | Size: 91 KiB |
After Width: | Height: | Size: 97 KiB |
After Width: | Height: | Size: 83 KiB |
Before Width: | Height: | Size: 116 KiB |
@ -1,544 +0,0 @@
---
title: Shared control plane (single and multiple networks)
description: Install an Istio mesh across multiple Kubernetes clusters with a shared control plane.
weight: 5
keywords: [kubernetes,multicluster,federation,vpn,gateway]
aliases:
    - /docs/setup/kubernetes/multicluster-install/vpn/
    - /docs/setup/kubernetes/install/multicluster/vpn/
    - /docs/setup/kubernetes/install/multicluster/shared-vpn/
    - /docs/examples/multicluster/split-horizon-eds/
    - /docs/tasks/multicluster/split-horizon-eds/
    - /docs/setup/kubernetes/install/multicluster/shared-gateways/
owner: istio/wg-environments-maintainers
test: no
---

Follow this guide to
set up a [multicluster Istio service mesh](/docs/ops/deployment/deployment-models/#multiple-clusters)
across multiple clusters with a shared control plane.

In this configuration, multiple Kubernetes {{< gloss "remote cluster" >}}remote clusters{{< /gloss >}}
connect to a shared Istio [control plane](/docs/ops/deployment/deployment-models/#control-plane-models)
running in a {{< gloss >}}primary cluster{{< /gloss >}}.
Remote clusters can be in the same network as the primary cluster or in different networks.
After one or more remote clusters are connected, the control plane of the primary cluster will
manage the service mesh across all {{< gloss "service endpoint" >}}service endpoints{{< /gloss >}}.

{{< image width="80%" link="./multicluster-with-vpn.svg" caption="Istio mesh spanning multiple Kubernetes clusters with direct network access to remote pods over VPN" >}}

## Prerequisites

* Two or more clusters running a supported Kubernetes version ({{< supported_kubernetes_versions >}}).

* All Kubernetes control plane API servers must be routable to each other.

* Clusters on the same network must be connected by an RFC1918 network, VPN, or an alternative,
  more advanced network technique meeting the following requirements:
    * Individual cluster Pod CIDR ranges and service CIDR ranges must be unique across the network and may not overlap.
    * All pod CIDRs in the same network must be routable to each other.

* Clusters on different networks must have `istio-ingressgateway` services which are accessible from every other
  cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require
  special annotations to use them, so please consult your cloud provider's documentation for enabling NLBs for
  service object type load balancers. When deploying on platforms without NLB support, it may be necessary to
  modify the health checks for the load balancer to register the ingress gateway.

## Preparation

### Certificate Authority

Generate intermediate CA certificates for each cluster's CA from your
organization's root CA. The shared root CA enables mutual TLS communication
across different clusters. For illustration purposes, the following instructions
use the certificates from the Istio samples directory for both clusters.

Run the following commands on each cluster in the mesh to install the certificates.
See [Certificate Authority (CA) certificates](/docs/tasks/security/cert-management/plugin-ca-cert/)
for more details on configuring an external CA.

{{< text bash >}}
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=@samples/certs/ca-cert.pem@ \
    --from-file=@samples/certs/ca-key.pem@ \
    --from-file=@samples/certs/root-cert.pem@ \
    --from-file=@samples/certs/cert-chain.pem@
{{< /text >}}

{{< warning >}}
The root and intermediate certificates from the samples directory are widely
distributed and known. Do **not** use these certificates in production as
your clusters would then be open to security vulnerabilities and compromise.
{{< /warning >}}

### Cross-cluster control plane access

Decide how to expose the primary cluster's Istiod discovery service to
the remote clusters. Pick one of the two options:

* Option (1) - Use the `istio-ingressgateway` gateway shared with data traffic.

* Option (2) - Use a cloud provider's internal load balancer on the Istiod
  service. For additional requirements and restrictions that may apply when using
  an internal load balancer between clusters, see
  [Kubernetes internal load balancer documentation](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer)
  and your cloud provider's documentation.

### Cluster and network naming

Determine the name of the clusters and networks in the mesh. These names will be used
in the mesh network configuration and when configuring the mesh's service registries.
Assign a unique name to each cluster. The name must be a
[DNS label name](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
In the example below the primary cluster is called `main0` and the remote cluster is `remote0`.

{{< text bash >}}
$ export MAIN_CLUSTER_CTX=<...>
$ export REMOTE_CLUSTER_CTX=<...>
{{< /text >}}

{{< text bash >}}
$ export MAIN_CLUSTER_NAME=main0
$ export REMOTE_CLUSTER_NAME=remote0
{{< /text >}}

If the clusters are on different networks, assign a unique network name to each network.

{{< text bash >}}
$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network2
{{< /text >}}

If the clusters are on the same network, use the same network name for both clusters.

{{< text bash >}}
$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network1
{{< /text >}}

## Deployment

### Primary cluster

Create the primary cluster's configuration. Pick one of the two options for cross-cluster
control plane access.

{{< tabset category-name="platform" >}}
{{< tab name="istio-ingressgateway" category-value="istio-ingressgateway" >}}

{{< text yaml >}}
cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted if
      # all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${MAIN_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="Internal Load Balancer" category-value="internal-load-balancer" >}}

{{< text yaml >}}
cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted if
      # all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${MAIN_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

  # Change the Istio service `type=LoadBalancer` and add the cloud provider specific annotations. See
  # https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer for more
  # information. The example below shows the configuration for GCP/GKE.
  components:
    pilot:
      k8s:
        service:
          type: LoadBalancer
        service_annotations:
          cloud.google.com/load-balancer-type: Internal
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Apply the primary cluster's configuration.

{{< text bash >}}
$ istioctl install -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX}
{{< /text >}}

If you selected the `istio-ingressgateway` option, expose the control plane using the provided sample configuration.

{{< text bash >}}
$ kubectl apply -f @samples/istiod-gateway/istiod-gateway.yaml@
{{< /text >}}

Wait for the control plane to be ready before proceeding.

{{< text bash >}}
$ kubectl get pod -n istio-system --context=${MAIN_CLUSTER_CTX}
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-7c8dd65766-lv9ck   1/1     Running   0          136m
istiod-f756bbfc4-thkmk                  1/1     Running   0          136m
{{< /text >}}

Set the `ISTIOD_REMOTE_EP` environment variable based on which remote control
plane configuration option was selected earlier.

{{< tabset category-name="platform" >}}

{{< tab name="istio-ingressgateway" category-value="istio-ingressgateway" >}}

{{< text bash >}}
$ export ISTIOD_REMOTE_EP=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "ISTIOD_REMOTE_EP is ${ISTIOD_REMOTE_EP}"
{{< /text >}}

{{< /tab >}}

{{< tab name="Internal Load Balancer" category-value="internal-load-balancer" >}}

{{< text bash >}}
$ export ISTIOD_REMOTE_EP=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istiod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo "ISTIOD_REMOTE_EP is ${ISTIOD_REMOTE_EP}"
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}
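
Note that on some platforms the load balancer publishes a hostname rather than an
IP address (AWS network load balancers, for example). In that case the `ip` field
above will be empty and a variation of the command using the `hostname` field is
needed; a sketch for the `istio-ingressgateway` option, assuming such an environment:

{{< text bash >}}
$ export ISTIOD_REMOTE_EP=$(kubectl get svc -n istio-system --context=${MAIN_CLUSTER_CTX} istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
{{< /text >}}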

### Remote cluster

Create the remote cluster's configuration.

{{< text yaml >}}
cat <<EOF> istio-remote0-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      # The remote cluster's name and network name must match the values specified in the
      # mesh network configuration of the primary cluster.
      multiCluster:
        clusterName: ${REMOTE_CLUSTER_NAME}
      network: ${REMOTE_CLUSTER_NETWORK}

      # Replace ISTIOD_REMOTE_EP with the value of ISTIOD_REMOTE_EP set earlier.
      remotePilotAddress: ${ISTIOD_REMOTE_EP}

  ## The istio-ingressgateway is not required in the remote cluster if both clusters are on
  ## the same network. To disable the istio-ingressgateway component, uncomment the lines below.
  #
  # components:
  #   ingressGateways:
  #   - name: istio-ingressgateway
  #     enabled: false
EOF
{{< /text >}}

Apply the remote cluster configuration.

{{< text bash >}}
$ istioctl install -f istio-remote0-cluster.yaml --context ${REMOTE_CLUSTER_CTX}
{{< /text >}}

Wait for the remote cluster to be ready.

{{< text bash >}}
$ kubectl get pod -n istio-system --context=${REMOTE_CLUSTER_CTX}
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-55f784779d-s5hwl   1/1     Running   0          91m
istiod-7b4bfd7b4f-fwmks                 1/1     Running   0          91m
{{< /text >}}

{{< tip >}}
The istiod deployment running in the remote cluster provides automatic sidecar injection and CA
services to the remote cluster's pods. These services were previously provided by the sidecar injector
and Citadel deployments, which no longer exist with Istiod. The remote cluster's pods
get their configuration from the primary cluster's Istiod for service discovery.
{{< /tip >}}

## Cross-cluster load balancing

### Configure ingress gateways

{{< tip >}}
Skip this next step and move on to configuring the service registries if both clusters are on the same network.
{{< /tip >}}

Cross-network traffic is securely routed through each destination cluster's ingress gateway. When clusters in a mesh are
on different networks, you need to configure port 443 on the ingress gateway to pass incoming traffic through to the
target service specified in a request's SNI header, for SNI values of the _local_
top-level domain (i.e., the [Kubernetes DNS domain](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)).
Mutual TLS connections will be used all the way from the source to the destination sidecar.

Apply the following configuration to each cluster.

{{< text yaml >}}
cat <<EOF> cluster-aware-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl apply -f cluster-aware-gateway.yaml --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f cluster-aware-gateway.yaml --context=${REMOTE_CLUSTER_CTX}
{{< /text >}}

### Configure cross-cluster service registries

To enable cross-cluster load balancing, the Istio control plane requires
access to all clusters in the mesh to discover services, endpoints, and
pod attributes. To configure access, create a secret for each remote
cluster with credentials to access the remote cluster's `kube-apiserver` and
install it in the primary cluster. This secret uses the credentials of the
`istio-reader-service-account` in the remote cluster. `--name` specifies the
remote cluster's name. It must match the cluster name in the primary cluster's
IstioOperator configuration.

{{< text bash >}}
$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
    kubectl apply -f - --context=${MAIN_CLUSTER_CTX}
{{< /text >}}

{{< warning >}}
Do not create a remote secret for the local cluster running the Istio control plane. Istio is always
aware of the local cluster's Kubernetes credentials.
{{< /warning >}}

## Deploy an example service

Deploy two instances of the `helloworld` service, one in each cluster. The difference
between the two instances is the version of their `helloworld` image.

### Deploy helloworld v2 in the remote cluster

1. Create a `sample` namespace with a sidecar auto-injection label:

    {{< text bash >}}
    $ kubectl create namespace sample --context=${REMOTE_CLUSTER_CTX}
    $ kubectl label namespace sample istio-injection=enabled --context=${REMOTE_CLUSTER_CTX}
    {{< /text >}}

1. Deploy `helloworld v2`:

    {{< text bash >}}
    $ kubectl create -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample --context=${REMOTE_CLUSTER_CTX}
    $ kubectl create -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample --context=${REMOTE_CLUSTER_CTX}
    {{< /text >}}

1. Confirm `helloworld v2` is running:

    {{< text bash >}}
    $ kubectl get pod -n sample --context=${REMOTE_CLUSTER_CTX}
    NAME                             READY   STATUS    RESTARTS   AGE
    helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s
    {{< /text >}}

### Deploy helloworld v1 in the primary cluster

1. Create a `sample` namespace with a sidecar auto-injection label:

    {{< text bash >}}
    $ kubectl create namespace sample --context=${MAIN_CLUSTER_CTX}
    $ kubectl label namespace sample istio-injection=enabled --context=${MAIN_CLUSTER_CTX}
    {{< /text >}}

1. Deploy `helloworld v1`:

    {{< text bash >}}
    $ kubectl create -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample --context=${MAIN_CLUSTER_CTX}
    $ kubectl create -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample --context=${MAIN_CLUSTER_CTX}
    {{< /text >}}

1. Confirm `helloworld v1` is running:

    {{< text bash >}}
    $ kubectl get pod -n sample --context=${MAIN_CLUSTER_CTX}
    NAME                            READY   STATUS    RESTARTS   AGE
    helloworld-v1-d4557d97b-pv2hr   2/2     Running   0          40s
    {{< /text >}}

### Cross-cluster routing in action

To demonstrate how traffic to the `helloworld` service is distributed across the two clusters,
call the `helloworld` service from another in-mesh `sleep` service.

1. Deploy the `sleep` service in both clusters:

    {{< text bash >}}
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context=${MAIN_CLUSTER_CTX}
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context=${REMOTE_CLUSTER_CTX}
    {{< /text >}}

1. Wait for the `sleep` service to start in each cluster:

    {{< text bash >}}
    $ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX}
    sleep-754684654f-n6bzf           2/2     Running   0          5s
    {{< /text >}}

    {{< text bash >}}
    $ kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX}
    sleep-754684654f-dzl9j           2/2     Running   0          5s
    {{< /text >}}

1. Call the `helloworld.sample` service several times from the primary cluster:

    {{< text bash >}}
    $ kubectl exec -it -n sample -c sleep --context=${MAIN_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    {{< /text >}}

1. Call the `helloworld.sample` service several times from the remote cluster:

    {{< text bash >}}
    $ kubectl exec -it -n sample -c sleep --context=${REMOTE_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    {{< /text >}}

If set up correctly, the traffic to the `helloworld.sample` service will be distributed between instances
on the main and remote clusters, resulting in responses with either `v1` or `v2` in the body:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
{{< /text >}}

You can also verify the IP addresses used to access the endpoints with `istioctl proxy-config`.

{{< text bash >}}
$ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o name | cut -f2 -d'/' | \
    xargs -I{} istioctl -n sample --context=${MAIN_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT             STATUS      OUTLIER CHECK     CLUSTER
10.10.0.90:5000      HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
192.23.120.32:443    HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
{{< /text >}}

In the primary cluster, the endpoints are the gateway IP of the remote cluster (`192.23.120.32:443`) and
the helloworld pod IP in the primary cluster (`10.10.0.90:5000`).

{{< text bash >}}
$ kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o name | cut -f2 -d'/' | \
    xargs -I{} istioctl -n sample --context=${REMOTE_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT             STATUS      OUTLIER CHECK     CLUSTER
10.32.0.9:5000       HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
192.168.1.246:443    HEALTHY     OK                outbound|5000||helloworld.sample.svc.cluster.local
{{< /text >}}

In the remote cluster, the endpoints are the gateway IP of the primary cluster (`192.168.1.246:443`) and
the pod IP in the remote cluster (`10.32.0.9:5000`).

**Congratulations!**

You have configured a multicluster Istio mesh, installed samples, and verified cross-cluster traffic routing.

## Additional considerations

### Automatic injection

The Istiod service in each cluster provides automatic sidecar injection for proxies in its own cluster.
Namespaces must be labeled in each cluster following the
[automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection) guide.

### Access services from different clusters

Kubernetes resolves DNS on a cluster basis. Because the DNS resolution is tied
to the cluster, you must define the service object in every cluster where a
client runs, regardless of the location of the service's endpoints. To ensure
this is the case, duplicate the service object to every cluster using
`kubectl`. Duplication ensures Kubernetes can resolve the service name in any
cluster. Since the service objects are defined in a namespace, you must define
the namespace if it doesn't exist, and include it in the service definitions in
all clusters.
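
For example, with the `helloworld` sample used above, applying only the objects
labeled `app=helloworld` (the same selector this guide already uses to create the
service) duplicates the service definition without deploying another workload
version. A sketch, where `${OTHER_CLUSTER_CTX}` is a placeholder for each
additional cluster's context:

{{< text bash >}}
$ kubectl create namespace sample --context=${OTHER_CLUSTER_CTX}
$ kubectl create -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample --context=${OTHER_CLUSTER_CTX}
{{< /text >}}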

### Security

The Istiod service in each cluster provides CA functionality to proxies in its own
cluster. The CA setup earlier ensures proxies across clusters in the mesh have the
same root of trust.
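
One way to spot-check this shared root of trust is to compare the `istio-ca-root-cert`
config map that istiod distributes to each namespace. A sketch, assuming both clusters
were configured with the `cacerts` secret from the preparation step (the hashes should match):

{{< text bash >}}
$ kubectl get configmap istio-ca-root-cert -n sample --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.data.root-cert\.pem}' | md5sum
$ kubectl get configmap istio-ca-root-cert -n sample --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.data.root-cert\.pem}' | md5sum
{{< /text >}}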

## Uninstalling the remote cluster

To uninstall the remote cluster, run the following commands:

{{< text bash >}}
$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
    kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
$ istioctl manifest generate -f istio-remote0-cluster.yaml --context=${REMOTE_CLUSTER_CTX} | \
    kubectl delete -f - --context=${REMOTE_CLUSTER_CTX}
$ kubectl delete namespace sample --context=${REMOTE_CLUSTER_CTX}
$ unset REMOTE_CLUSTER_CTX REMOTE_CLUSTER_NAME REMOTE_CLUSTER_NETWORK
$ rm istio-remote0-cluster.yaml
{{< /text >}}

To uninstall the primary cluster, run the following commands:

{{< text bash >}}
$ kubectl delete --ignore-not-found=true -f @samples/istiod-gateway/istiod-gateway.yaml@
$ istioctl manifest generate -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX} | \
    kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
$ kubectl delete namespace sample --context=${MAIN_CLUSTER_CTX}
$ unset MAIN_CLUSTER_CTX MAIN_CLUSTER_NAME MAIN_CLUSTER_NETWORK ISTIOD_REMOTE_EP
$ rm istio-main-cluster.yaml cluster-aware-gateway.yaml
{{< /text >}}
Before Width: | Height: | Size: 152 KiB |
@ -0,0 +1,427 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164

# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
#          docs/setup/install/multicluster/index.md
####################################################################################################

snip_environment_variables_1() {
export CTX_CLUSTER1=cluster1
export CTX_CLUSTER2=cluster2
}

snip_install_istio_1() {
cat <<EOF > ./cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER1
      network: NETWORK1
      meshNetworks:
        NETWORK1:
          endpoints:
          - fromRegistry: CLUSTER1
          - fromRegistry: CLUSTER2
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
EOF
istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
}

snip_install_istio_2() {
cat <<EOF > ./cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER2
      network: NETWORK1
      meshNetworks:
        NETWORK1:
          endpoints:
          - fromRegistry: CLUSTER2
          - fromRegistry: CLUSTER1
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
EOF
istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
}

snip_install_istio_3() {
istioctl x create-remote-secret \
    --context=${CTX_CLUSTER1} \
    --name=CLUSTER1 | \
    kubectl apply -f - --context=${CTX_CLUSTER2}
}

snip_install_istio_4() {
istioctl x create-remote-secret \
    --context=${CTX_CLUSTER2} \
    --name=CLUSTER2 | \
    kubectl apply -f - --context=${CTX_CLUSTER1}
}

snip_install_istio_5() {
cat <<EOF > ./cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER1
      network: NETWORK1
      meshNetworks:
        NETWORK1:
          endpoints:
          - fromRegistry: CLUSTER1
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
        NETWORK2:
          endpoints:
          - fromRegistry: CLUSTER2
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
EOF
istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
}

snip_install_istio_6() {
CLUSTER=CLUSTER1 NETWORK=NETWORK1 \
    samples/multicluster/gen-eastwest-gateway.sh | \
    kubectl apply --context=${CTX_CLUSTER1} -f -
}

snip_install_istio_7() {
kubectl --context=${CTX_CLUSTER1} apply -n istio-system -f \
    samples/multicluster/expose-services.yaml
}

snip_install_istio_8() {
cat <<EOF > ./cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER2
      network: NETWORK2
      meshNetworks:
        NETWORK1:
          endpoints:
          - fromRegistry: CLUSTER1
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
        NETWORK2:
          endpoints:
          - fromRegistry: CLUSTER2
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
EOF
istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
}

snip_install_istio_9() {
CLUSTER=CLUSTER2 NETWORK=NETWORK2 \
    samples/multicluster/gen-eastwest-gateway.sh | \
    kubectl apply --context=${CTX_CLUSTER2} -f -
}

snip_install_istio_10() {
kubectl --context=${CTX_CLUSTER2} apply -n istio-system -f \
    samples/multicluster/expose-services.yaml
}

snip_install_istio_11() {
istioctl x create-remote-secret \
    --context=${CTX_CLUSTER1} \
    --name=CLUSTER1 | \
    kubectl apply -f - --context=${CTX_CLUSTER2}
}

snip_install_istio_12() {
istioctl x create-remote-secret \
    --context=${CTX_CLUSTER2} \
    --name=CLUSTER2 | \
    kubectl apply -f - --context=${CTX_CLUSTER1}
}

snip_install_istio_13() {
cat <<EOF > ./cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER1
      network: NETWORK1
      meshNetworks:
        NETWORK1:
          endpoints:
          - fromRegistry: CLUSTER1
          - fromRegistry: CLUSTER2
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
EOF
istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
}

snip_install_istio_14() {
CLUSTER=CLUSTER1 NETWORK=NETWORK1 \
    samples/multicluster/gen-eastwest-gateway.sh | \
    kubectl apply --context=${CTX_CLUSTER1} -f -
}

snip_install_istio_15() {
kubectl apply --context=${CTX_CLUSTER1} -f \
    samples/multicluster/expose-istiod.yaml
}

snip_install_istio_16() {
export DISCOVERY_ADDRESS=$(kubectl \
    --context=${CTX_CLUSTER1} \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
}

snip_install_istio_17() {
cat <<EOF > ./cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER2
      network: NETWORK1
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
}

snip_install_istio_18() {
istioctl x create-remote-secret \
    --context=${CTX_CLUSTER2} \
    --name=CLUSTER2 | \
    kubectl apply -f - --context=${CTX_CLUSTER1}
}

snip_install_istio_19() {
cat <<EOF > ./cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER1
      network: NETWORK1
      meshNetworks:
        NETWORK1:
          endpoints:
          - fromRegistry: CLUSTER1
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
        NETWORK2:
          endpoints:
          - fromRegistry: CLUSTER2
          gateways:
          - registryServiceName: istio-eastwestgateway.istio-system.svc.cluster.local
            port: 15443
EOF
istioctl install --context=${CTX_CLUSTER1} -f cluster1.yaml
}

snip_install_istio_20() {
CLUSTER=CLUSTER1 NETWORK=NETWORK1 \
    samples/multicluster/gen-eastwest-gateway.sh | \
    kubectl apply --context=${CTX_CLUSTER1} -f -
}

snip_install_istio_21() {
kubectl apply --context=${CTX_CLUSTER1} -f \
    samples/multicluster/expose-istiod.yaml
}

snip_install_istio_22() {
kubectl --context=${CTX_CLUSTER1} apply -n istio-system -f \
    samples/multicluster/expose-services.yaml
}

snip_install_istio_23() {
export DISCOVERY_ADDRESS=$(kubectl \
    --context=${CTX_CLUSTER1} \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
}

snip_install_istio_24() {
cat <<EOF > ./cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: MESH1
      multiCluster:
        clusterName: CLUSTER2
      network: NETWORK2
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
istioctl install --context=${CTX_CLUSTER2} -f cluster2.yaml
}

snip_install_istio_25() {
CLUSTER=CLUSTER2 NETWORK=NETWORK2 \
    samples/multicluster/gen-eastwest-gateway.sh | \
    kubectl apply --context=${CTX_CLUSTER2} -f -
}

snip_install_istio_26() {
kubectl --context=${CTX_CLUSTER2} apply -n istio-system -f \
    samples/multicluster/expose-services.yaml
}

snip_install_istio_27() {
istioctl x create-remote-secret \
    --context=${CTX_CLUSTER2} \
    --name=CLUSTER2 | \
    kubectl apply -f - --context=${CTX_CLUSTER1}
}

snip_deploy_the_helloworld_service_1() {
kubectl create --context=${CTX_CLUSTER1} namespace sample
kubectl create --context=${CTX_CLUSTER2} namespace sample
}

snip_deploy_the_helloworld_service_2() {
kubectl label --context=${CTX_CLUSTER1} namespace sample \
    istio-injection=enabled
kubectl label --context=${CTX_CLUSTER2} namespace sample \
    istio-injection=enabled
}

snip_deploy_the_helloworld_service_3() {
kubectl apply --context=${CTX_CLUSTER1} \
    -f samples/helloworld/helloworld.yaml \
    -l app=helloworld -n sample
kubectl apply --context=${CTX_CLUSTER2} \
    -f samples/helloworld/helloworld.yaml \
    -l app=helloworld -n sample
}

snip_deploy_helloworld_v1_1() {
kubectl apply --context=${CTX_CLUSTER1} \
    -f samples/helloworld/helloworld.yaml \
    -l app=helloworld -l version=v1 -n sample
}

snip_deploy_helloworld_v1_2() {
kubectl get pod --context=${CTX_CLUSTER1} -n sample
}

! read -r -d '' snip_deploy_helloworld_v1_2_out <<\ENDSNIP
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v1-86f77cd7bd-cpxhv   2/2     Running   0          40s
ENDSNIP

snip_deploy_helloworld_v2_1() {
kubectl apply --context=${CTX_CLUSTER2} \
    -f samples/helloworld/helloworld.yaml \
    -l app=helloworld -l version=v2 -n sample
}

snip_deploy_helloworld_v2_2() {
kubectl get pod --context=${CTX_CLUSTER2} -n sample
}

! read -r -d '' snip_deploy_helloworld_v2_2_out <<\ENDSNIP
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-758dd55874-6x4t8   2/2     Running   0          40s
ENDSNIP

snip_deploy_sleep_1() {
kubectl apply --context=${CTX_CLUSTER1} \
    -f samples/sleep/sleep.yaml -n sample
kubectl apply --context=${CTX_CLUSTER2} \
    -f samples/sleep/sleep.yaml -n sample
}

snip_deploy_sleep_2() {
kubectl get pod --context=${CTX_CLUSTER1} -n sample -l app=sleep
}

! read -r -d '' snip_deploy_sleep_2_out <<\ENDSNIP
sleep-754684654f-n6bzf           2/2     Running   0          5s
ENDSNIP

snip_deploy_sleep_3() {
kubectl get pod --context=${CTX_CLUSTER2} -n sample -l app=sleep
}

! read -r -d '' snip_deploy_sleep_3_out <<\ENDSNIP
sleep-754684654f-dzl9j           2/2     Running   0          5s
ENDSNIP

snip_verifying_crosscluster_traffic_1() {
kubectl exec --context=${CTX_CLUSTER1} -n sample -c sleep \
    "$(kubectl get pod --context=${CTX_CLUSTER1} -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl helloworld.sample:5000/hello
}

! read -r -d '' snip_verifying_crosscluster_traffic_2 <<\ENDSNIP
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
ENDSNIP

snip_verifying_crosscluster_traffic_3() {
kubectl exec --context=${CTX_CLUSTER2} -n sample -c sleep \
    "$(kubectl get pod --context=${CTX_CLUSTER2} -n sample -l \
    app=sleep -o jsonpath='{.items[0].metadata.name}')" \
    -- curl helloworld.sample:5000/hello
}

! read -r -d '' snip_verifying_crosscluster_traffic_4 <<\ENDSNIP
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
ENDSNIP