mirror of https://github.com/istio/istio.io.git
Multicluster via gateways example (#3077)
* Multicluster via gateways example
* tweaks
* address review comments
This commit is contained in:
parent
733d6779a7
commit
c100527c92
@ -0,0 +1,215 @@

---
title: Gateway-Connected Clusters
description: Configuring remote services in a gateway-connected multicluster mesh.
weight: 20
keywords: [kubernetes,multicluster]
---

This example shows how to configure and call remote services in a multicluster mesh with a
[multiple control plane topology](/docs/concepts/multicluster-deployments/#multiple-control-plane-topology).
To demonstrate cross-cluster access, the [sleep service]({{<github_tree>}}/samples/sleep)
running in one cluster is configured to call the [httpbin service]({{<github_tree>}}/samples/httpbin)
running in a second cluster.

## Before you begin

* Set up a multicluster environment with two Istio clusters by following the
    [multiple control planes with gateways](/docs/setup/kubernetes/multicluster-install/gateways/) instructions.

* The `kubectl` command will be used to access both clusters with the `--context` flag.
    Export the following environment variables with the context names of your configuration:

    {{< text bash >}}
    $ export CTX_CLUSTER1=<cluster1 context name>
    $ export CTX_CLUSTER2=<cluster2 context name>
    {{< /text >}}

## Configure the example services

1. Deploy the `sleep` service in `cluster1`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 namespace foo
    $ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    {{< /text >}}

1. Deploy the `httpbin` service in `cluster2` and save the IP address of the cluster's ingress gateway.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 namespace bar
    $ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
    $ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    $ export GATEWAY_IP_CLUSTER2=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")
    {{< /text >}}

1. Create a service entry for the `httpbin` service in `cluster1`.

    To allow `sleep` in `cluster1` to access `httpbin` in `cluster2`, we need to create
    a service entry for it. The host name of the service entry should be of the form
    `<name>.<namespace>.global`, where name and namespace correspond to the
    remote service's name and namespace respectively.

    To provide DNS resolution for services under the
    `*.global` domain, you need to assign these services an IP address. We
    suggest assigning an IP address from the `127.255.0.0/16` subnet. These IPs
    are non-routable outside of a pod. Application traffic for these IPs will
    be captured by the sidecar and routed to the appropriate remote service.

    > Each service (in the `.global` DNS domain) must have a unique IP within the cluster.

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin-bar
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      # Treat remote cluster services as part of the service mesh
      # as all clusters in the service mesh share the same root of trust.
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve;
      # must be unique for each remote service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      - 127.255.0.2
      endpoints:
      # This is the routable address of the ingress gateway in cluster2 that
      # sits in front of the httpbin.bar service. Traffic from the sidecar will be
      # routed to this address.
      - address: ${GATEWAY_IP_CLUSTER2}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    {{< /text >}}

    The configuration above will result in all traffic in `cluster1` for
    `httpbin.bar.global` on *any port* being routed to the endpoint
    `<IPofCluster2IngressGateway>:15443` over an mTLS connection.

    > Do not create a `Gateway` configuration for port 15443.

    The gateway for port 15443 is a special SNI-aware Envoy proxy
    preconfigured and installed as part of the multicluster Istio installation step
    in the [before you begin](#before-you-begin) section. Traffic entering port 15443 will be
    load balanced among pods of the appropriate internal service of the target
    cluster (in this case, `httpbin.bar` in `cluster2`).

1. Verify that `httpbin` is accessible from the `sleep` service.

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER1 $(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name}) \
        -n foo -c sleep -- curl httpbin.bar.global:8000/ip
    {{< /text >}}

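The unique-IP rule above is easy to automate. As a minimal sketch (this helper is illustrative and not part of the Istio samples), you can derive a distinct `127.255.0.0/16` address from a running index when generating one service entry per remote service:

```bash
# Illustrative helper, not from the Istio samples: map an index to a
# non-routable address in the 127.255.0.0/16 subnet for use in a
# service entry's `addresses` field. Give each remote service in a
# given cluster a distinct index to satisfy the uniqueness requirement.
assign_global_ip() {
  local n=$1
  echo "127.255.$(( n / 256 )).$(( n % 256 ))"
}

assign_global_ip 2    # prints 127.255.0.2, the address used above
assign_global_ip 300  # prints 127.255.1.44
```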
## Send remote cluster traffic using an egress gateway

If you want to route traffic from `cluster1` through a dedicated
egress gateway, instead of directly from the sidecars,
use the following service entry for `httpbin.bar` instead of the one in the previous section.

> The egress gateway used in this configuration cannot also be used for other, non-inter-cluster, egress traffic.

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  - 127.255.0.2
  endpoints:
  - address: ${GATEWAY_IP_CLUSTER2}
    network: external
    ports:
      http1: 15443 # Do not change this port value
  - address: istio-egressgateway.istio-system.svc.cluster.local
    ports:
      http1: 15443
EOF
{{< /text >}}

## Version-aware routing to remote services

If the remote service has multiple versions, you can add one or more
labels to the service entry endpoint.
For example:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve;
  # must be unique for each service.
  - 127.255.0.2
  endpoints:
  - address: ${GATEWAY_IP_CLUSTER2}
    labels:
      version: beta
      some: thing
      foo: bar
    ports:
      http1: 15443 # Do not change this port value
EOF
{{< /text >}}

You can then follow the steps outlined in the
[request routing](/docs/tasks/traffic-management/request-routing/) task
to create appropriate virtual services and destination rules.
Use destination rules to define subsets of the `httpbin.bar.global` service with
the appropriate label selectors.
The instructions are identical to those used for routing to a local service.

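To make this concrete, here is a minimal sketch of a destination rule and virtual service that pin traffic for `httpbin.bar.global` to a single subset. The resource names and the lone `beta` subset are illustrative, assuming the `version: beta` endpoint label shown in the service entry above:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar-global
spec:
  host: httpbin.bar.global
  subsets:
  - name: beta
    labels:
      version: beta
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-bar-global
spec:
  hosts:
  - httpbin.bar.global
  http:
  - route:
    - destination:
        host: httpbin.bar.global
        subset: beta
{{< /text >}}

Apply both in `cluster1` (namespace `foo`), since routing decisions for the remote service are made by the client-side sidecars.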
## Cleanup

Execute the following commands to clean up the example services.

* Clean up `cluster1`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
    $ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
    $ kubectl delete --context=$CTX_CLUSTER1 ns foo
    {{< /text >}}

* Clean up `cluster2`:

    {{< text bash >}}
    $ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
    $ kubectl delete --context=$CTX_CLUSTER2 ns bar
    {{< /text >}}

@ -1,5 +1,5 @@
---
title: Cluster-Aware Service Routing
description: Leveraging Istio's Split-horizon EDS to create a multicluster mesh.
weight: 85
keywords: [kubernetes,multicluster]

@ -373,7 +373,7 @@ $ kubectl logs --context=$CTX_LOCAL -n sample sleep-57f9d6fd6b-q4k4h istio-proxy
[2018-11-25T12:38:06.745Z] "GET /hello HTTP/1.1" 200 - 0 60 171 170 "-" "curl/7.60.0" "6f93c9cc-d32a-4878-b56a-086a740045d2" "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59646 -
{{< /text >}}

The remote gateway IP, `192.23.120.32:443`, is logged when v2 was called and the local instance IP, `10.10.0.90:5000`, is logged when v1 was called.

## Cleanup

@ -33,7 +33,7 @@ on **each** Kubernetes cluster.
* A **Root CA**. Cross-cluster communication requires an mTLS connection
  between services. To enable mTLS communication across clusters, each
  cluster's Citadel will be configured with intermediate CA credentials
  generated by a shared root CA. For illustration purposes, we use a
  sample root CA certificate, available as part of the Istio installation
  under the `samples/certs` directory.

@ -41,7 +41,7 @@ on **each** Kubernetes cluster.

1. Generate intermediate CA certs for each cluster's Citadel from your
   organization's root CA. The shared root CA enables mTLS communication
   across different clusters. For illustration purposes, we use
   the sample root certificates as the intermediate certificate.

1. In every cluster, create a Kubernetes secret for your generated CA certs

@ -67,9 +67,9 @@ the sample root certificates as the intermediate certificate.
For further details and customization options, refer to the [Installation
with Helm](/docs/setup/kubernetes/helm-install/) instructions.

## Setup DNS

Providing DNS resolution for services in remote clusters will allow
existing applications to function unmodified, as applications typically
expect to resolve services by their DNS names and access the resulting
IP. Istio itself does not use the DNS for routing requests between

@ -77,7 +77,7 @@ services. Services local to a cluster share a common DNS suffix
(e.g., `svc.cluster.local`). Kubernetes DNS provides DNS resolution for these
services.

To provide a similar setup for services from remote clusters, we name
services from remote clusters in the format
`<name>.<namespace>.global`. Istio also ships with a CoreDNS server that
will provide DNS resolution for these services. In order to utilize this

@ -98,161 +98,20 @@ data:
|
||||||
EOF
|
EOF
|
||||||
{{< /text >}}
|
{{< /text >}}
|
||||||
|
|
||||||
## Adding services from other clusters
|
## Configure application services
|
||||||
|
|
||||||
Each service in the remote cluster that needs to be accessed from a given
|
Every service in a given cluster that needs to be accessed from a different remote
|
||||||
cluster requires a `ServiceEntry` configuration. The host used in the
|
cluster requires a `ServiceEntry` configuration in the remote cluster.
|
||||||
service entry should be of the form `<name>.<namespace>.global` where name
|
The host used in the service entry should be of the form `<name>.<namespace>.global`
|
||||||
and namespace correspond to the remote service's name and namespace
|
where name and namespace correspond to the service's name and namespace respectively.
|
||||||
respectively. In order to provide DNS resolution for services under the
|
Visit our [multicluster using gateways](/docs/examples/multicluster/gateways/)
|
||||||
`*.global` domain, you need to assign these services an IP address. We
|
example for detailed configuration instructions.
|
||||||
suggest assigning an IP address from the 127.255.0.0/16 subnet. These IPs
|
|
||||||
are non-routable outside of a pod. Application traffic for these IPs will
|
|
||||||
be captured by the sidecar and routed to the appropriate remote service
|
|
||||||
|
|
||||||
> Each service (in the .global DNS domain) must have a unique IP within the cluster.
|
|
||||||
|
|
||||||
For example, the diagram above depicts two services `foo.ns1` in `cluster1`
|
|
||||||
and `bar.ns2` in `cluster2`. In order to access `bar.ns2` from `cluster1`,
|
|
||||||
add the following service entry to `cluster1`:
|
|
||||||
|
|
||||||
{{< text yaml >}}
|
|
||||||
apiVersion: networking.istio.io/v1alpha3
|
|
||||||
kind: ServiceEntry
|
|
||||||
metadata:
|
|
||||||
name: bar-ns2
|
|
||||||
spec:
|
|
||||||
hosts:
|
|
||||||
# must be of form name.namespace.global
|
|
||||||
- bar.ns2.global
|
|
||||||
# Treat remote cluster services as part of the service mesh
|
|
||||||
# as all clusters in the service mesh share the same root of trust.
|
|
||||||
location: MESH_INTERNAL
|
|
||||||
ports:
|
|
||||||
- name: http1
|
|
||||||
number: 8080
|
|
||||||
protocol: http
|
|
||||||
- name: tcp2
|
|
||||||
number: 9999
|
|
||||||
protocol: tcp
|
|
||||||
resolution: DNS
|
|
||||||
addresses:
|
|
||||||
# the IP address to which bar.ns2.global will resolve to
|
|
||||||
# must be unique for each remote service, within a given cluster.
|
|
||||||
# This address need not be routable. Traffic for this IP will be captured
|
|
||||||
# by the sidecar and routed appropriately.
|
|
||||||
- 127.255.0.2
|
|
||||||
endpoints:
|
|
||||||
# This is the routable address of the ingress gateway in cluster2 that
|
|
||||||
# sits in front of bar.ns2 service. Traffic from the sidecar will be routed
|
|
||||||
# to this address.
|
|
||||||
- address: <IPofCluster2IngressGateway>
|
|
||||||
ports:
|
|
||||||
http1: 15443 # Do not change this port value
|
|
||||||
tcp2: 15443 # Do not change this port value
|
|
||||||
{{< /text >}}
|
|
||||||
|
|
||||||
If you wish to route all egress traffic from `cluster1` via a dedicated
|
|
||||||
egress gateway, use the following service entry for `bar.ns2`
|
|
||||||
|
|
||||||
{{< text yaml >}}
|
|
||||||
apiVersion: networking.istio.io/v1alpha3
|
|
||||||
kind: ServiceEntry
|
|
||||||
metadata:
|
|
||||||
name: bar-ns2
|
|
||||||
spec:
|
|
||||||
hosts:
|
|
||||||
# must be of form name.namespace.global
|
|
||||||
- bar.ns2.global
|
|
||||||
location: MESH_INTERNAL
|
|
||||||
ports:
|
|
||||||
- name: http1
|
|
||||||
number: 8080
|
|
||||||
protocol: http
|
|
||||||
- name: tcp2
|
|
||||||
number: 9999
|
|
||||||
protocol: tcp
|
|
||||||
resolution: DNS
|
|
||||||
addresses:
|
|
||||||
- 127.255.0.2
|
|
||||||
endpoints:
|
|
||||||
- address: <IPofCluster2IngressGateway>
|
|
||||||
network: external
|
|
||||||
ports:
|
|
||||||
http1: 15443 # Do not change this port value
|
|
||||||
tcp2: 15443 # Do not change this port value
|
|
||||||
- address: istio-egressgateway.istio-system.svc.cluster.local
|
|
||||||
ports:
|
|
||||||
http1: 15443
|
|
||||||
tcp2: 15443
|
|
||||||
{{< /text >}}
|
|
||||||
|
|
||||||
Verify the setup by trying to access `bar.ns2.global` or `bar.ns2` from any
|
|
||||||
pod on `cluster1`. Both DNS names should resolve to 127.255.0.2, the
|
|
||||||
address used in the service entry configuration.
|
|
||||||
|
|
||||||
The configurations above will result in all traffic in `cluster1` for
|
|
||||||
`bar.ns2.global` on *any port* to be routed to the endpoint
|
|
||||||
`<IPofCluster2IngressGateway>:15443` over an mTLS connection.
|
|
||||||
|
|
||||||
The gateway for port 15443 is a special SNI-aware Envoy that has been
|
|
||||||
preconfigured and installed as part of the Istio installation step
|
|
||||||
described in the prerequisite section. Traffic entering port 15443 will be
|
|
||||||
load balanced among pods of the appropriate internal service of the target
|
|
||||||
cluster (in this case, `bar.ns2`).
|
|
||||||
|
|
||||||
> Do not create a Gateway configuration for port 15443.
|
|
||||||
|
|
||||||
## Version-aware routing to remote services
|
|
||||||
|
|
||||||
If the remote service being added has multiple versions, add one or more
|
|
||||||
labels to the service entry endpoint, and follow the steps outlined in the
|
|
||||||
[request routing](/docs/tasks/traffic-management/request-routing/) section
|
|
||||||
to create appropriate virtual services and destination rules. For example,
|
|
||||||
|
|
||||||
{{< text yaml >}}
|
|
||||||
apiVersion: networking.istio.io/v1alpha3
|
|
||||||
kind: ServiceEntry
|
|
||||||
metadata:
|
|
||||||
name: bar-ns2
|
|
||||||
spec:
|
|
||||||
hosts:
|
|
||||||
# must be of form name.namespace.global
|
|
||||||
- bar.ns2.global
|
|
||||||
location: MESH_INTERNAL
|
|
||||||
ports:
|
|
||||||
- name: http1
|
|
||||||
number: 8080
|
|
||||||
protocol: http
|
|
||||||
- name: tcp2
|
|
||||||
number: 9999
|
|
||||||
protocol: tcp
|
|
||||||
resolution: DNS
|
|
||||||
addresses:
|
|
||||||
# the IP address to which bar.ns2.global will resolve to
|
|
||||||
# must be unique for each service.
|
|
||||||
- 127.255.0.2
|
|
||||||
endpoints:
|
|
||||||
- address: <IPofCluster2IngressGateway>
|
|
||||||
labels:
|
|
||||||
version: beta
|
|
||||||
some: thing
|
|
||||||
foo: bar
|
|
||||||
ports:
|
|
||||||
http1: 15443 # Do not change this port value
|
|
||||||
tcp2: 15443 # Do not change this port value
|
|
||||||
{{< /text >}}
|
|
||||||
|
|
||||||
Use destination rules to create subsets for `bar.ns2` service with
|
|
||||||
appropriate label selectors. The set of steps to follow are identical to
|
|
||||||
those used for a local service.
|
|
||||||
|
|
||||||
## Summary
|
## Summary
|
||||||
|
|
||||||
Using Istio gateways, a common root CA, and service entries, you configured
|
Using Istio gateways, a common root CA, and service entries, you can configure
|
||||||
a single Istio service mesh across multiple Kubernetes clusters. Although
|
a single Istio service mesh across multiple Kubernetes clusters.
|
||||||
the above procedure involved a certain amount of manual work, the entire
|
Once configured this way, traffic can be transparently routed to remote clusters
|
||||||
process could be automated by creating service entries for each service in
|
|
||||||
the system, with a unique IP allocated from the 127.255.0.0/16 subnet. Once
|
|
||||||
configured this way, traffic can be transparently routed to remote clusters
|
|
||||||
without any application involvement.
|
without any application involvement.
|
||||||
|
Although this approach requires a certain amount of manual configuration for
|
||||||
|
remote service access, the service entry creation process could be automated.
|
||||||
|
|
|
||||||
Loading…
Reference in New Issue