mirror of https://github.com/istio/istio.io.git
Cleanup multicluster doc (#4638)
This commit is contained in:
parent
7d9cf41c86
commit
07d53225d3
@@ -43,7 +43,7 @@ policy. We introduced a new component called

sweet YAML, reducing the chance of configuration errors. Galley will also be
instrumental in [multicluster setups](/docs/setup/kubernetes/install/multicluster/),
gathering service discovery information from each Kubernetes cluster. We are
-also supporting additional multicluster topologies including [single control plane](/docs/concepts/multicluster-deployments/#single-control-plane-topology)
+also supporting additional multicluster topologies including [single control plane](/docs/concepts/multicluster-deployments/#shared-control-plane-topology)
and [multiple synchronized control planes](/docs/concepts/multicluster-deployments/#multiple-control-plane-topology)
without requiring a flat network.

@@ -329,7 +329,7 @@ EOF

The address `127.255.0.3` of the service entry can be any arbitrary unallocated IP.
Using an IP from the loopback range 127.0.0.0/8 is a good choice.
Check out the
-[gateway-connected multicluster example](/docs/tasks/multicluster/gateways/#configure-the-example-services)
+[gateway-connected multicluster example](/docs/setup/kubernetes/install/multicluster/gateways/#configure-the-example-services)
for more details.

Note that the labels of the subsets in the destination rule map to the service entry

@@ -29,8 +29,8 @@ concise list of things you should know before upgrading your deployment to Istio

- **Improved Multicluster Integration**. Consolidated the 1.0 `istio-remote`
  chart previously used for
-  [multicluster VPN](/docs/setup/kubernetes/install/multicluster/vpn/) and
-  [multicluster split horizon](/docs/tasks/multicluster/split-horizon-eds/) remote cluster installation
+  [multicluster VPN](/docs/setup/kubernetes/install/multicluster/shared-vpn/) and
+  [multicluster split horizon](/docs/setup/kubernetes/install/multicluster/shared-gateways/) remote cluster installation
  into the Istio Helm chart simplifying the operational experience.

## Traffic management

@@ -40,11 +40,13 @@ To achieve this behavior, a single logical control plane needs to manage all ser

however, the single logical control plane doesn't necessarily need to be a single physical
Istio control plane. There are two possible deployment approaches:

-1. Multiple synchronized Istio control planes that have replicated service and routing configurations.
+1. Multiple Istio control planes that have replicated service and routing configurations.

-1. A single Istio control plane that can access and configure all the services in the mesh.
+1. A shared Istio control plane that can access and configure the services in more than one cluster.

-Even within these two topologies, there is more than one way to configure a multicluster service mesh.
+Even with these two approaches, there is more than one way to configure a multicluster service mesh.
+In a large multicluster mesh, a combination of the approaches might even be used. For example,
+two clusters might share a control plane while a third has its own.
Which approach to use and how to configure it depends on the requirements of the application
and on the features and limitations of the underlying cloud deployment platform.

@@ -78,18 +80,18 @@ configuration. You configure service discovery of `foo.ns.global` by creating an

[service entry](/docs/concepts/traffic-management/#service-entries).

To configure this type of multicluster topology, visit our
-[multiple control planes with gateways instructions](/docs/setup/kubernetes/install/multicluster/gateways/).
+[multiple control planes instructions](/docs/setup/kubernetes/install/multicluster/gateways/).

-### Single control plane topology
+### Shared control plane topology

This multicluster configuration uses a single Istio control plane running on one of the clusters.
The control plane's Pilot manages services on the local and remote clusters and configures the
Envoy sidecars for all of the clusters.

-#### Single control plane with VPN connectivity
+#### Single-network shared control plane

The following topology works best in environments where all of the participating clusters
-have VPN connectivity so every pod in the mesh is reachable from anywhere else using the
+have VPN or similar connectivity so every pod in the mesh is reachable from anywhere else using the
same IP address.

{{< image width="80%"

@@ -100,16 +102,16 @@ same IP address.

In this topology, the Istio control plane is deployed on one of the clusters while all other
clusters run a simpler remote Istio configuration which connects them to the single Istio control plane
that manages all of the Envoys as a single mesh. The IP addresses on the various clusters must not
overlap and DNS resolution for services on remote clusters is not automatic. Users need to replicate
the services on every participating cluster.

To configure this type of multicluster topology, visit our
-[single control plane with VPN instructions](/docs/setup/kubernetes/install/multicluster/vpn/).
+[single-network shared control plane instructions](/docs/setup/kubernetes/install/multicluster/shared-vpn/).

-#### Single control plane without VPN connectivity
+#### Multi-network shared control plane

If setting up an environment with universal pod-to-pod connectivity is difficult or impossible,
-it may still be possible to configure a single control plane topology using Istio gateways and
+it may still be possible to configure a shared control plane topology using Istio gateways and
by enabling Istio Pilot's location-aware service routing feature.

This topology requires connectivity to Kubernetes API servers from all of the clusters. If this is

@@ -121,9 +123,8 @@ not possible, a multiple control plane topology is probably a better alternative

>}}

In this topology, a request from a sidecar in one cluster to a service in the same cluster
is forwarded to the local service IP as usual. If the destination workload is running in a
different cluster, the remote cluster Gateway IP is used to connect to the service instead.

To configure this type of multicluster topology, visit our
-[single control plane with gateways example](/docs/tasks/multicluster/split-horizon-eds/) to experiment
-with this feature.
+[multi-network shared control plane instructions](/docs/setup/kubernetes/install/multicluster/shared-gateways/).

@@ -992,7 +992,7 @@ outside of the mesh:

- Add a service running in a Virtual Machine (VM) to the mesh to [expand your mesh](/docs/setup/kubernetes/additional-setup/mesh-expansion/#running-services-on-a-mesh-expansion-machine).

- Logically add services from a different cluster to the mesh to configure a
-  [multicluster Istio mesh](/docs/tasks/multicluster/gateways/#configure-the-example-services)
+  [multicluster Istio mesh](/docs/setup/kubernetes/install/multicluster/gateways/#configure-the-example-services)
  on Kubernetes.

You don’t need to add a service entry for every external service that you

@@ -1,6 +1,6 @@

---
title: Multicluster Service Mesh
-description: A variety of fully working multicluster examples for Istio that you can experiment with.
+description: Multicluster service mesh examples for Istio that you can experiment with.
weight: 100
keywords: [multicluster]
---

@@ -4,12 +4,12 @@ description: Set up a multicluster mesh over two GKE clusters.

weight: 65
keywords: [kubernetes,multicluster]
aliases:
- /docs/examples/multicluster/gke/
- /docs/tasks/multicluster/gke/
---

This example shows how to configure a multicluster mesh with a
-[single control plane topology](/docs/concepts/multicluster-deployments/#single-control-plane-topology)
-over 2 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) clusters.
+[single-network shared control plane](/docs/concepts/multicluster-deployments/#single-network-shared-control-plane)
+topology over 2 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) clusters.

## Before you begin

@@ -313,7 +313,7 @@ $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}

## Uninstalling

The following should be done in addition to the uninstall of Istio as described in the
-[VPN-based multicluster uninstall section](/docs/setup/kubernetes/install/multicluster/vpn/):
+[VPN-based multicluster uninstall section](/docs/setup/kubernetes/install/multicluster/shared-vpn/):

1. Delete the Google Cloud firewall rule:
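
    A hedged sketch of the deletion, assuming the rule name chosen during setup (shown here as `istio-multicluster-test-pods`; substitute the name you used):

    {{< text bash >}}
    $ gcloud compute firewall-rules delete istio-multicluster-test-pods --quiet
    {{< /text >}}
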
@@ -4,13 +4,13 @@ description: Example multicluster mesh over two IBM Cloud Private clusters.

weight: 70
keywords: [kubernetes,multicluster]
aliases:
- /docs/examples/multicluster/icp/
- /docs/tasks/multicluster/icp/
---

This example demonstrates how to set up network connectivity between two
[IBM Cloud Private](https://www.ibm.com/cloud/private) clusters
and then compose them into a multicluster mesh using a
-[single control plane with VPN connectivity](/docs/concepts/multicluster-deployments/#single-control-plane-with-vpn-connectivity)
+[single-network shared control plane](/docs/concepts/multicluster-deployments/#single-network-shared-control-plane)
topology.

## Create the IBM Cloud Private Clusters

@@ -148,7 +148,7 @@ across all nodes in the two IBM Cloud Private Clusters.

## Install Istio for multicluster

-[Follow the VPN-based multicluster installation steps](/docs/setup/kubernetes/install/multicluster/vpn/) to install and configure
+Follow the [single-network shared control plane instructions](/docs/setup/kubernetes/install/multicluster/shared-vpn/) to install and configure
the local Istio control plane and Istio remote on `cluster-1` and `cluster-2`.

In this guide, it is assumed that the local Istio control plane is deployed in `cluster-1`, while the Istio remote is deployed in `cluster-2`.

@@ -71,7 +71,7 @@ Istio provides two additional built-in configuration profiles that are used excl

[multicluster service mesh](/docs/concepts/multicluster-deployments/#multicluster-service-mesh):

1. **remote**: used for configuring remote clusters of a
-   multicluster mesh with a [single control plane topology](/docs/concepts/multicluster-deployments/#single-control-plane-topology).
+   multicluster mesh with a [shared control plane topology](/docs/concepts/multicluster-deployments/#shared-control-plane-topology).

1. **multicluster-gateways**: used for configuring all of the clusters of a
   multicluster mesh with a [multiple control plane topology](/docs/concepts/multicluster-deployments/#multiple-control-plane-topology).

@@ -1,6 +1,6 @@

---
title: Customizable Install with Helm
-description: Instructions to install Istio using a Helm chart.
+description: Install and configure Istio for in-depth evaluation or production use.
weight: 20
keywords: [kubernetes,helm]
aliases:

@@ -1,6 +1,6 @@

---
title: Quick Start Evaluation Install
-description: Instructions to install and configure an Istio mesh in a Kubernetes cluster for evaluation.
+description: Instructions to install Istio in a Kubernetes cluster for evaluation.
weight: 10
keywords: [kubernetes]
aliases:

@@ -7,5 +7,13 @@ aliases:

- /docs/setup/kubernetes/multicluster/
keywords: [kubernetes,multicluster]
---

+{{< tip >}}
+Note that these instructions are not mutually exclusive.
+In a large multicluster mesh, composed from more than two clusters,
+a combination of the approaches can be used. For example,
+two clusters might share a control plane while a third has its own.
+{{< /tip >}}
+
Refer to the [multicluster service mesh](/docs/concepts/multicluster-deployments/) concept documentation
for more information.

@@ -1,24 +1,28 @@

---
-title: Gateway Connectivity
-description: Install an Istio mesh across multiple Kubernetes clusters using Istio Gateway to reach remote pods.
+title: Multiple control planes
+description: Install an Istio mesh across multiple Kubernetes clusters with individually deployed control planes.
weight: 2
aliases:
- /docs/setup/kubernetes/multicluster-install/gateways/
+- /docs/examples/multicluster/gateways/
+- /docs/tasks/multicluster/gateways/
keywords: [kubernetes,multicluster,federation,gateway]
---

-Follow this flow to install an Istio [multicluster service mesh](/docs/concepts/multicluster-deployments/)
-when services in a Kubernetes cluster use gateways to communicate with services in other clusters.
+Follow this guide to install an Istio
+[multicluster service mesh](/docs/concepts/multicluster-deployments/#multicluster-service-mesh)
+with individually deployed Istio control planes in every cluster and using gateways to
+connect services across clusters.

-Instead of using a central Istio control plane to manage the mesh,
-in this configuration each cluster has an **identical** Istio control plane
+Instead of using a shared Istio control plane to manage the mesh,
+in this configuration each cluster has its own Istio control plane
installation, each managing its own endpoints.
All of the clusters are under a shared administrative control for the purposes of
policy enforcement and security.

A single Istio service mesh across the clusters is achieved by replicating
shared services and namespaces and using a common root CA in all of the clusters.
-Cross-cluster communication occurs over Istio Gateways of the respective clusters.
+Cross-cluster communication occurs over Istio gateways of the respective clusters.

{{< image width="80%" link="./multicluster-with-gateways.svg" caption="Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods" >}}

@@ -39,27 +43,19 @@ on **each** Kubernetes cluster.

sample root CA certificate available in the Istio installation
under the `samples/certs` directory.

-## Deploy the Istio control plane in each cluster
+## Deploy the Istio control plane in each cluster {#deploy-istio}

1. Generate intermediate CA certificates for each cluster's Citadel from your
   organization's root CA. The shared root CA enables mutual TLS communication
   across different clusters.

    {{< tip >}}
    For illustration purposes, the following instructions use the certificates
    from the Istio samples directory for both clusters. In real world deployments,
    you would likely use a different CA certificate for each cluster, all signed
    by a common root CA.
    {{< /tip >}}

1. Generate a multicluster-gateways Istio configuration file using `helm`:

    {{< warning >}}
    If you're not sure if your `helm` dependencies are up to date, update them using the
    command shown in [Helm installation steps](/docs/setup/kubernetes/install/helm/#installation-steps)
    before running the following command.
    {{< /warning >}}

    {{< text bash >}}
    $ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
      -f @install/kubernetes/helm/istio/example-values/values-istio-multicluster-gateways.yaml@ > $HOME/istio.yaml

@@ -69,17 +65,17 @@ on **each** Kubernetes cluster.

   [Installation with Helm](/docs/setup/kubernetes/install/helm/) instructions.

1. Run the following commands in **every cluster** to deploy an identical Istio control plane
   configuration in all of them.

-    {{< warning >}}
-    Make sure that the current user has cluster administrator (`cluster-admin`) permissions and grant them if not. On the GKE platform, for example, the following command can be used:
+    {{< tip >}}
+    Make sure that the current user has cluster administrator (`cluster-admin`) permissions and grant them if not.
+    On the GKE platform, for example, the following command can be used:

    {{< text bash >}}
    $ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
    {{< /text >}}

-    {{< /warning >}}
+    {{< /tip >}}

    * Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See [Certificate Authority (CA) certificates](/docs/tasks/security/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key) for more details.
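
        A minimal sketch of that command, assuming the sample certificates referenced above and that the `istio-system` namespace already exists (the linked CA certificates task uses the same files):

        {{< text bash >}}
        $ kubectl create secret generic cacerts -n istio-system \
            --from-file=samples/certs/ca-cert.pem \
            --from-file=samples/certs/ca-key.pem \
            --from-file=samples/certs/root-cert.pem \
            --from-file=samples/certs/cert-chain.pem
        {{< /text >}}
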
@@ -197,8 +193,279 @@ cluster requires a `ServiceEntry` configuration in the remote cluster.

The host used in the service entry should be of the form `<name>.<namespace>.global`
where name and namespace correspond to the service's name and namespace respectively.

-Confirm your multicluster configuration is functional with the [multicluster using gateways
-task](/docs/tasks/multicluster/gateways).
+To demonstrate cross cluster access, configure the
+[sleep service]({{<github_tree>}}/samples/sleep)
+running in one cluster to call the [httpbin service]({{<github_tree>}}/samples/httpbin)
+running in a second cluster. Before you begin:

* Choose two of your Istio clusters, to be referred to as `cluster1` and `cluster2`.

{{< boilerplate kubectl-multicluster-contexts >}}
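
The boilerplate above defines the `$CTX_CLUSTER1` and `$CTX_CLUSTER2` environment variables used by every command that follows; a minimal sketch of what it amounts to (context names are placeholders for your own):

{{< text bash >}}
$ kubectl config get-contexts
$ export CTX_CLUSTER1=<cluster1-context-name>
$ export CTX_CLUSTER2=<cluster2-context-name>
{{< /text >}}
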

### Configure the example services

1. Deploy the `sleep` service in `cluster1`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 namespace foo
    $ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
    {{< /text >}}

1. Deploy the `httpbin` service in `cluster2`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 namespace bar
    $ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}

1. Export the `cluster2` gateway address:

    {{< text bash >}}
    $ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
    {{< /text >}}

    This command sets the value to the gateway's public IP, but note that you can set it to
    a DNS name instead, if you have one.
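
    On platforms whose load balancers publish a hostname instead of an IP, a sketch along these lines works (assuming the `hostname` field is populated in the service status, as with AWS-style load balancers):

    {{< text bash >}}
    $ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
    {{< /text >}}
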

    {{< tip >}}
    If `cluster2` is running in an environment that does not
    support external load balancers, you will need to use a nodePort to access the gateway.
    Instructions for obtaining the IP to use can be found in the
    [Control Ingress Traffic](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
    guide. You will also need to change the service entry endpoint port in the following step from 15443
    to its corresponding nodePort
    (i.e., `kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'`).
    {{< /tip >}}

1. Create a service entry for the `httpbin` service in `cluster2`.

    To allow `sleep` in `cluster1` to access `httpbin` in `cluster2`, we need to create
    a service entry for it. The host name of the service entry should be of the form
    `<name>.<namespace>.global` where name and namespace correspond to the
    remote service's name and namespace respectively.

    For DNS resolution for services under the `*.global` domain, you need to assign these
    services an IP address.

    {{< tip >}}
    Each service (in the `.global` DNS domain) must have a unique IP within the cluster.
    {{< /tip >}}

    If the global services have actual VIPs, you can use those, but otherwise we suggest
    using IPs from the loopback range `127.0.0.0/8` that are not already allocated.
    These IPs are non-routable outside of a pod.
    In this example we'll use IPs in `127.255.0.0/16` which avoids conflicting with
    well known IPs such as `127.0.0.1` (`localhost`).
    Application traffic for these IPs will be captured by the sidecar and routed to the
    appropriate remote service.

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      # Treat remote cluster services as part of the service mesh
      # as all clusters in the service mesh share the same root of trust.
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve
      # must be unique for each remote service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      - 127.255.0.2
      endpoints:
      # This is the routable address of the ingress gateway in cluster2 that
      # sits in front of the httpbin.bar service. Traffic from the sidecar will be
      # routed to this address.
      - address: ${CLUSTER2_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    {{< /text >}}

    The configurations above will result in all traffic in `cluster1` for
    `httpbin.bar.global` on *any port* to be routed to the endpoint
    `<IPofCluster2IngressGateway>:15443` over a mutual TLS connection.

    The gateway for port 15443 is a special SNI-aware Envoy
    preconfigured and installed as part of the multicluster Istio installation step
    in the [deploy the Istio control plane](#deploy-istio) section. Traffic entering port 15443 will be
    load balanced among pods of the appropriate internal service of the target
    cluster (in this case, `httpbin.bar` in `cluster2`).

    {{< warning >}}
    Do not create a `Gateway` configuration for port 15443.
    {{< /warning >}}

1. Verify that `httpbin` is accessible from the `sleep` service.

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
    {{< /text >}}
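
    If the mesh is wired up correctly you should see a successful response served through the remote sidecar; a sketch of the expected shape (exact headers vary by environment):

    {{< text plain >}}
    HTTP/1.1 200 OK
    server: envoy
    ...
    {{< /text >}}
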

### Send remote traffic via an egress gateway

If you want to route traffic from `cluster1` via a dedicated egress gateway, instead of directly from the sidecars,
use the following service entry for `httpbin.bar` instead of the one in the previous section.

{{< tip >}}
The egress gateway used in this configuration cannot also be used for other, non inter-cluster, egress traffic.
{{< /tip >}}

If `$CLUSTER2_GW_ADDR` is an IP address, use the `$CLUSTER2_GW_ADDR - IP address` option. If `$CLUSTER2_GW_ADDR` is a hostname, use the `$CLUSTER2_GW_ADDR - hostname` option.

{{< tabset cookie-name="profile" >}}

{{< tab name="$CLUSTER2_GW_ADDR - IP address" cookie-value="option1" >}}
* Export the `cluster1` egress gateway address:

    {{< text bash >}}
    $ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
        -n istio-system -o jsonpath='{.items[0].spec.clusterIP}')
    {{< /text >}}

* Apply the httpbin-bar service entry:

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: STATIC
      addresses:
      - 127.255.0.2
      endpoints:
      - address: ${CLUSTER2_GW_ADDR}
        network: external
        ports:
          http1: 15443 # Do not change this port value
      - address: ${CLUSTER1_EGW_ADDR}
        ports:
          http1: 15443
    EOF
    {{< /text >}}

{{< /tab >}}

{{< tab name="$CLUSTER2_GW_ADDR - hostname" cookie-value="option2" >}}
If the `${CLUSTER2_GW_ADDR}` is a hostname, you can use `resolution: DNS` for the endpoint resolution:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  - 127.255.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      http1: 15443
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

### Cleanup the example

Execute the following commands to clean up the example services.

* Cleanup `cluster1`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
    $ kubectl delete --context=$CTX_CLUSTER1 ns foo
    {{< /text >}}

* Cleanup `cluster2`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    $ kubectl delete --context=$CTX_CLUSTER2 ns bar
    {{< /text >}}

## Version-aware routing to remote services

If the remote service has multiple versions, you can add
labels to the service entry endpoints.
For example:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve
  # must be unique for each service.
  - 127.255.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
EOF
{{< /text >}}

You can then create virtual services and destination rules
to define subsets of the `httpbin.bar.global` service using the appropriate gateway label selectors.
The instructions are the same as those used for routing to a local service.
See [multicluster version routing](/blog/2019/multicluster-version-routing/)
for a complete example.
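
As a hedged illustration of that pattern, a destination rule keyed on the `cluster: cluster2` endpoint label from the service entry above might look like this (the rule name and subset name are assumptions for illustration):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar-global
spec:
  host: httpbin.bar.global
  subsets:
  - name: cluster2        # illustrative subset name; traffic sent here exits via the cluster2 gateway
    labels:
      cluster: cluster2   # matches the endpoint label in the service entry above
{{< /text >}}
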

## Uninstalling

Binary image file: 116 KiB before, 116 KiB after.

@@ -1,31 +1,34 @@

---
-title: Cluster-Aware Service Routing
-description: Leveraging Istio's Split-horizon EDS to create a multicluster mesh.
+title: Shared control plane (multi-network)
+description: Install an Istio mesh across multiple Kubernetes clusters using a shared control plane for disconnected cluster networks.
weight: 85
keywords: [kubernetes,multicluster]
aliases:
- /docs/examples/multicluster/split-horizon-eds/
- /docs/tasks/multicluster/split-horizon-eds/
---

-This example shows how to configure a multicluster mesh with a
-[single control plane topology](/docs/concepts/multicluster-deployments/#single-control-plane-topology)
-and using Istio's _Split-horizon EDS (Endpoints Discovery Service)_ feature (introduced in Istio 1.1) to
-route service requests to other clusters via their ingress gateway.
-Split-horizon EDS enables Istio to route requests to different endpoints, depending on the location of the request source.
+Follow this guide to configure a multicluster mesh using a
+[shared control plane topology](/docs/concepts/multicluster-deployments/#shared-control-plane-topology)
+with gateways to connect network-isolated clusters.
+Istio's location-aware service routing feature is used to route requests to different endpoints,
+depending on the location of the request source.

-By following the instructions in this example, you will setup a two cluster mesh as shown in the following diagram:
+By following the instructions in this guide, you will set up a two-cluster mesh as shown in the following diagram:

{{< image width="80%"
    link="./diagram.svg"
-   caption="Single Istio control plane topology spanning multiple Kubernetes clusters with Split-horizon EDS configured" >}}
+   caption="Shared Istio control plane topology spanning multiple Kubernetes clusters using gateways" >}}

-The primary cluster, `cluster1`, runs the full set of Istio control plane components while `cluster2` only runs Istio Citadel,
-Sidecar Injector, and Ingress gateway.
+The primary cluster, `cluster1`, runs the full set of Istio control plane components while `cluster2` only
+runs Istio Citadel, Sidecar Injector, and Ingress gateway.
No VPN connectivity nor direct network access between workloads in different clusters is required.

-## Before you begin
+## Prerequisites

-In addition to the prerequisites for installing Istio, the following is required for this example:
+* Two or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.
+
+* Authority to deploy the [Istio control plane using Helm](/docs/setup/kubernetes/install/helm/)
+
+* Two Kubernetes clusters (referred to as `cluster1` and `cluster2`).

@@ -35,9 +38,9 @@ In addition to the prerequisites for installing Istio, the following is required

{{< boilerplate kubectl-multicluster-contexts >}}

-## Example multicluster setup
+## Setup the multicluster mesh

-In this example you will install Istio with mutual TLS enabled for both the control plane and application pods.
+In this configuration you install Istio with mutual TLS enabled for both the control plane and application pods.
For the shared root CA, you create a `cacerts` secret on both `cluster1` and `cluster2` clusters using the same Istio
certificate from the Istio samples directory.
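
A sketch of creating that secret in both clusters from the sample certificates (an assumption for illustration: the `istio-system` namespace already exists in each cluster):

{{< text bash >}}
$ for CTX in $CTX_CLUSTER1 $CTX_CLUSTER2; do \
    kubectl create --context=$CTX secret generic cacerts -n istio-system \
        --from-file=samples/certs/ca-cert.pem \
        --from-file=samples/certs/ca-key.pem \
        --from-file=samples/certs/root-cert.pem \
        --from-file=samples/certs/cert-chain.pem; \
  done
{{< /text >}}
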
@@ -328,10 +331,10 @@ This will be used to access pilot on `cluster1` securely using the ingress gatew

Now that you have your `cluster1` and `cluster2` clusters set up, you can deploy an example service.

-## Example service
+## Deploy example service

In this demo you will see how traffic to a service can be distributed across the two clusters.
-As shown in the diagram, above, you will deploy two instances of the `helloworld` service, one on `cluster1` and one on `cluster2`.
+As shown in the diagram above, deploy two instances of the `helloworld` service,
+one on `cluster1` and one on `cluster2`.
The difference between the two instances is the version of their `helloworld` image.

### Deploy helloworld v2 in cluster 2

@@ -382,9 +385,10 @@ The difference between the two instances is the version of their `helloworld` im

    helloworld-v1-d4557d97b-pv2hr    2/2    Running    0    40s
    {{< /text >}}

-### Split-horizon EDS in action
+### Cross-cluster routing in action

-We will call the `helloworld.sample` service from another in-mesh `sleep` service.
+To demonstrate how traffic to the `helloworld` service is distributed across the two clusters,
+call the `helloworld` service from another in-mesh `sleep` service.

1. Deploy the `sleep` service in both clusters:
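
    A hedged sketch of that step, assuming the `sample` namespace used by the `helloworld` instances above already exists in both clusters:

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -f @samples/sleep/sleep.yaml@ -n sample
    $ kubectl apply --context=$CTX_CLUSTER2 -f @samples/sleep/sleep.yaml@ -n sample
    {{< /text >}}
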
@@ -420,11 +424,8 @@ We will call the `helloworld.sample` service from another in-mesh `sleep` servic

If set up correctly, the traffic to the `helloworld.sample` service will be distributed between instances on `cluster1` and `cluster2`
resulting in responses with either `v1` or `v2` in the body:

-{{< text sh >}}
+{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
-{{< /text >}}
-
-{{< text sh >}}
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
{{< /text >}}

@@ -436,7 +437,7 @@ $ kubectl logs --context=$CTX_CLUSTER1 -n sample $(kubectl get pod --context=$CT

[2018-11-25T12:38:06.745Z] "GET /hello HTTP/1.1" 200 - 0 60 171 170 "-" "curl/7.60.0" "6f93c9cc-d32a-4878-b56a-086a740045d2" "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59646 -
{{< /text >}}

-The gateway IP, `192.23.120.32:15443`, of `cluster2` is logged when v2 was called and the instance IP, `10.10.0.90:5000`, of `cluster1` is logged when v1 was called.
+In `cluster1`, the gateway IP of `cluster2` (`192.23.120.32:15443`) is logged when v2 was called and the instance IP in `cluster1` (`10.10.0.90:5000`) is logged when v1 was called.

{{< text bash >}}
$ kubectl logs --context=$CTX_CLUSTER2 -n sample $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy

@@ -444,11 +445,11 @@ $ kubectl logs --context=$CTX_CLUSTER2 -n sample $(kubectl get pod --context=$CT

[2019-05-25T08:06:12.834Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 181 180 "-" "curl/7.60.0" "ce480b56-fafd-468b-9996-9fea5257cb1e" "helloworld.sample:5000" "10.32.0.9:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36886 -
{{< /text >}}

-The gateway IP, `192.168.1.246:15443`, of `cluster1` is logged when v1 was called and the gateway IP, `10.32.0.9:5000`, of `cluster2` is logged when v2 was called.
+In `cluster2`, the gateway IP of `cluster1` (`192.168.1.246:15443`) is logged when v1 was called and the instance IP in `cluster2` (`10.32.0.9:5000`) is logged when v2 was called.

## Cleanup

-Execute the following commands to clean up the demo services __and__ the Istio components.
+Execute the following commands to clean up the example services __and__ the Istio components.

Cleanup the `cluster2` cluster:

@@ -1,13 +1,14 @@

---
-title: VPN Connectivity
-description: Install an Istio mesh across multiple Kubernetes clusters with direct network access to remote pods.
+title: Shared control plane (single-network)
+description: Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.
weight: 5
keywords: [kubernetes,multicluster,federation,vpn]
aliases:
- /docs/setup/kubernetes/multicluster-install/vpn/
+- /docs/setup/kubernetes/install/multicluster/vpn/
---

-Follow this flow to install an Istio [multicluster service mesh](/docs/concepts/multicluster-deployments/)
+Follow this guide to install an Istio [multicluster service mesh](/docs/concepts/multicluster-deployments/)
where the Kubernetes cluster services and the applications in each cluster
have the capability to expose their internal Kubernetes network to other
clusters.

Binary image file: 152 KiB before, 152 KiB after.

@@ -1,5 +1,5 @@

---
title: Platform-specific Instructions
-description: Additional installation flows for the supported Kubernetes platforms.
+description: Additional installation instructions for supported Kubernetes platforms.
weight: 40
---

@@ -8,11 +8,11 @@ aliases:

- /docs/setup/kubernetes/quick-start-alicloud-ack/
---

-Follow this flow to install and configure an Istio mesh in the
+Follow this guide to install and configure an Istio mesh in the
[Alibaba Cloud Kubernetes Container Service](https://www.alibabacloud.com/product/kubernetes)
using the `Application Catalog` module.

-This flow installs the current release version of Istio and deploys the
+This guide installs the current release version of Istio and deploys the
[Bookinfo](/docs/examples/bookinfo/) sample application.

## Prerequisites

@@ -9,7 +9,7 @@ aliases:

- /docs/setup/kubernetes/quick-start/
---

-Follow this flow to install and configure an Istio mesh Istio in the
+Follow this guide to install and configure an Istio mesh in the
[Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE).

## Prerequisites

@@ -8,7 +8,7 @@ aliases:

- /docs/setup/kubernetes/quick-start-ibm/
---

-Follow this flow to install and configure an Istio mesh in IBM Cloud.
+Follow this guide to install and configure an Istio mesh in IBM Cloud.

You can use the [managed Istio add-on for IBM Cloud Kubernetes Service](#managed-istio-add-on)
in IBM Cloud Public, [install Istio manually](#manual-istio-install) in IBM Cloud Public,

@@ -1,288 +0,0 @@

---
title: Gateway-Connected Clusters
description: Configuring remote services in a gateway-connected multicluster mesh.
weight: 20
keywords: [kubernetes,multicluster]
aliases:
- /docs/examples/multicluster/gateways/
---

This example shows how to configure and call remote services in a multicluster mesh with a
[multiple control plane topology](/docs/concepts/multicluster-deployments/#multiple-control-plane-topology).
To demonstrate cross cluster access,
the [sleep service]({{<github_tree>}}/samples/sleep)
running in one cluster is configured
to call the [httpbin service]({{<github_tree>}}/samples/httpbin)
running in a second cluster.

## Before you begin

* Set up a multicluster environment with two Istio clusters by following the
  [multiple control planes with gateways](/docs/setup/kubernetes/install/multicluster/gateways/) instructions.

{{< boilerplate kubectl-multicluster-contexts >}}

## Configure the example services

1. Deploy the `sleep` service in `cluster1`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 namespace foo
    $ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
    {{< /text >}}

1. Deploy the `httpbin` service in `cluster2`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 namespace bar
    $ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}

1. Export the `cluster2` gateway address:

    {{< text bash >}}
    $ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
    {{< /text >}}

    This command sets the value to the gateway's public IP, but note that you can set it to
    a DNS name instead, if you have one.

    {{< tip >}}
    If `cluster2` is running in an environment that does not
    support external load balancers, you will need to use a nodePort to access the gateway.
    Instructions for obtaining the IP to use can be found in the
    [Control Ingress Traffic](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
    guide. You will also need to change the service entry endpoint port in the following step from 15443
    to its corresponding nodePort
    (i.e., `kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'`).
    {{< /tip >}}

1. Create a service entry for the `httpbin` service in `cluster2`.

    To allow `sleep` in `cluster1` to access `httpbin` in `cluster2`, we need to create
    a service entry for it. The host name of the service entry should be of the form
    `<name>.<namespace>.global` where name and namespace correspond to the
    remote service's name and namespace respectively.

    For DNS resolution for services under the `*.global` domain, you need to assign these
    services an IP address.

    {{< tip >}}
    Each service (in the `.global` DNS domain) must have a unique IP within the cluster.
    {{< /tip >}}

    If the global services have actual VIPs, you can use those, but otherwise we suggest
    using IPs from the loopback range `127.0.0.0/8` that are not already allocated.
    These IPs are non-routable outside of a pod.
    In this example we'll use IPs in `127.255.0.0/16` which avoids conflicting with
    well known IPs such as `127.0.0.1` (`localhost`).
    Application traffic for these IPs will be captured by the sidecar and routed to the
    appropriate remote service.

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      # Treat remote cluster services as part of the service mesh
      # as all clusters in the service mesh share the same root of trust.
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve to
      # must be unique for each remote service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      - 127.255.0.2
      endpoints:
      # This is the routable address of the ingress gateway in cluster2 that
      # sits in front of sleep.foo service. Traffic from the sidecar will be
      # routed to this address.
      - address: ${CLUSTER2_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    {{< /text >}}

    The configurations above will result in all traffic in `cluster1` for
    `httpbin.bar.global` on *any port* to be routed to the endpoint
    `<IPofCluster2IngressGateway>:15443` over a mutual TLS connection.

    The gateway for port 15443 is a special SNI-aware Envoy
    preconfigured and installed as part of the multicluster Istio installation step
    in the [before you begin](#before-you-begin) section. Traffic entering port 15443 will be
    load balanced among pods of the appropriate internal service of the target
    cluster (in this case, `httpbin.bar` in `cluster2`).

    {{< warning >}}
    Do not create a `Gateway` configuration for port 15443.
    {{< /warning >}}

1. Verify that `httpbin` is accessible from the `sleep` service.

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
    {{< /text >}}

## Send remote cluster traffic using egress gateway

If you want to route traffic from `cluster1` via a dedicated egress gateway, instead of directly from the sidecars,
use the following service entry for `httpbin.bar` instead of the one in the previous section.

{{< tip >}}
The egress gateway used in this configuration cannot also be used for other, non inter-cluster, egress traffic.
{{< /tip >}}

If `$CLUSTER2_GW_ADDR` is an IP address, use the `$CLUSTER2_GW_ADDR - IP address` option. If `$CLUSTER2_GW_ADDR` is a hostname, use the `$CLUSTER2_GW_ADDR - hostname` option.

{{< tabset cookie-name="profile" >}}

{{< tab name="$CLUSTER2_GW_ADDR - IP address" cookie-value="option1" >}}
* Export the `cluster1` egress gateway address:

    {{< text bash >}}
    $ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
        -n istio-system -o yaml -o jsonpath='{.items[0].spec.clusterIP}')
    {{< /text >}}

* Apply the httpbin-bar service entry:

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: STATIC
      addresses:
      - 127.255.0.2
      endpoints:
      - address: ${CLUSTER2_GW_ADDR}
        network: external
        ports:
          http1: 15443 # Do not change this port value
      - address: ${CLUSTER1_EGW_ADDR}
        ports:
          http1: 15443
    EOF
    {{< /text >}}

{{< /tab >}}

{{< tab name="$CLUSTER2_GW_ADDR - hostname" cookie-value="option2" >}}
If the `${CLUSTER2_GW_ADDR}` is a hostname, you can use `resolution: DNS` for the endpoint resolution:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  - 127.255.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      http1: 15443
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Version-aware routing to remote services

If the remote service has multiple versions, you can add
labels to the service entry endpoints.
For example:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve to
  # must be unique for each service.
  - 127.255.0.2
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    labels:
      cluster: cluster2
    ports:
      http1: 15443 # Do not change this port value
EOF
{{< /text >}}

You can then create virtual services and destination rules
to define subsets of the `httpbin.bar.global` service using the appropriate gateway label selectors.
The instructions are the same as those used for routing to a local service.
See [multicluster version routing](/blog/2019/multicluster-version-routing/)
for a complete example.

## Cleanup

Execute the following commands to clean up the example services.

* Cleanup `cluster1`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
    $ kubectl delete --context=$CTX_CLUSTER1 ns foo
    {{< /text >}}

* Cleanup `cluster2`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    $ kubectl delete --context=$CTX_CLUSTER2 ns bar
    {{< /text >}}

@@ -58,7 +58,7 @@ keywords: [profiles,install,helm]

Istio provides two additional built-in configuration profiles dedicated to setting up a [multicluster service mesh](/docs/concepts/multicluster-deployments/#multicluster-service-mesh):

-1. **remote**: used to set up a multicluster mesh with a [single control plane topology](/docs/concepts/multicluster-deployments/#single-control-plane-topology).
+1. **remote**: used to set up a multicluster mesh with a [single control plane topology](/docs/concepts/multicluster-deployments/#shared-control-plane-topology).

1. **multicluster-gateways**: used to set up a multicluster mesh with a [multiple control plane topology](/docs/concepts/multicluster-deployments/#multiple-control-plane-topology).

@@ -10,7 +10,7 @@ keywords: [kubernetes,multicluster,federation,vpn]

In this configuration, multiple Kubernetes control planes running a remote configuration connect to a **single** Istio control plane. Once one or more remote Kubernetes clusters are connected to the Istio control plane, Envoy can communicate with the single control plane and form a service mesh spanning multiple clusters.

{{< image width="80%"
-    link="multicluster-with-vpn.svg"
+    link="/docs/setup/kubernetes/install/multicluster/shared-vpn/multicluster-with-vpn.svg"
    caption="Istio mesh spanning multiple Kubernetes clusters with direct access to remote pods over VPN"
    >}}

@@ -12,7 +12,7 @@ aliases:

By following the instructions in this example, you will set up a two-cluster mesh as shown in the following diagram:

{{< image width="80%"
-    link="diagram.svg"
+    link="/docs/setup/kubernetes/install/multicluster/shared-gateways/diagram.svg"
    caption="A single Istio control plane, configured with split-horizon EDS, spanning multiple Kubernetes clusters" >}}

The primary cluster `cluster1` runs the full set of Istio control plane components, while `cluster2` runs only Istio Citadel, Sidecar Injector, and Ingress gateway. No VPN connectivity is required, and no direct network access between workloads in different clusters is needed.