mirror of https://github.com/istio/istio.io.git
update multicluster shared control plane docs (#6843)

* update multicluster shared control plane docs
* Merged single and multiple network instructions. They are nearly identical except for specifying the mesh network configuration.
* Removed use of pod IPs for cross-cluster control plane. Added three options that are more appropriate for production use.
* use `istioctl x create-remote-secret` instead of copy/paste bash
* Updated the master and remote cluster configuration examples to be declarative instead of imperative. Users can copy/paste the examples, edit, commit to scm, and apply to the clusters.
* Applied review suggestions to content/en/docs/setup/install/multicluster/shared/index.md (Co-Authored-By: Frank Budinsky <frankb@ca.ibm.com>, Lin Sun <linsun@us.ibm.com>)
* update networks and add selfSigned
* fix config and remove option 3
* fix formatting and grammar
* fix lint errors
* move additional considerations after the sample services

Co-authored-by: Frank Budinsky <frankb@ca.ibm.com>
Co-authored-by: Lin Sun <linsun@us.ibm.com>

This commit is contained in:

parent 0c3f225672
commit e7f8c82451

@@ -32,7 +32,7 @@ your specific needs. The following built-in configuration profiles are currently

 1. **remote**: used for configuring remote clusters of a
    [multicluster mesh](/docs/ops/deployment/deployment-models/#multiple-clusters) with a
-   [shared control plane](/docs/setup/install/multicluster/shared-vpn/) configuration.
+   [shared control plane](/docs/setup/install/multicluster/shared/) configuration.

 1. **empty**: deploys nothing. This can be useful as a base profile for custom configuration.

@@ -1,451 +0,0 @@

---
title: Shared control plane (multi-network)
description: Install an Istio mesh across multiple Kubernetes clusters using a shared control plane for disconnected cluster networks.
weight: 85
keywords: [kubernetes,multicluster]
aliases:
    - /docs/examples/multicluster/split-horizon-eds/
    - /docs/tasks/multicluster/split-horizon-eds/
    - /docs/setup/kubernetes/install/multicluster/shared-gateways/
---

Follow this guide to configure a multicluster mesh using a shared
[control plane](/docs/ops/deployment/deployment-models/#control-plane-models)
with gateways to connect network-isolated clusters.
Istio's location-aware service routing feature is used to route requests to different endpoints,
depending on the location of the request source.

By following the instructions in this guide, you will set up a two-cluster mesh as shown in the following diagram:

{{< image width="80%"
    link="./diagram.svg"
    caption="Shared Istio control plane topology spanning multiple Kubernetes clusters using gateways" >}}

The primary cluster, `cluster1`, runs the full set of Istio control plane components while `cluster2` only
runs Istio Citadel, Sidecar Injector, and Ingress gateway.
No VPN connectivity or direct network access between workloads in different clusters is required.

## Prerequisites

* Two or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.

* Authority to [deploy the Istio control plane](/docs/setup/install/istioctl/).

* Two Kubernetes clusters (referred to as `cluster1` and `cluster2`).

    {{< warning >}}
    The Kubernetes API server of `cluster2` MUST be accessible from `cluster1` in order to run this configuration.
    {{< /warning >}}

{{< boilerplate kubectl-multicluster-contexts >}}

## Setup the multicluster mesh

In this configuration you install Istio with mutual TLS enabled for both the control plane and application pods.
For the shared root CA, you create a `cacerts` secret on both the `cluster1` and `cluster2` clusters using the same Istio
certificate from the Istio samples directory.

The instructions below also set up `cluster2` with a selector-less service and an endpoint for `istio-pilot.istio-system`
that has the address of the `cluster1` Istio ingress gateway.
This will be used to access Pilot on `cluster1` securely using the ingress gateway without mutual TLS termination.

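For reference, a selector-less service pairs a `Service` that has no pod selector with a manually managed
`Endpoints` object. The sketch below illustrates the kind of objects the remote profile creates for this purpose;
the port number (`15011`, the secured Pilot port) and the gateway address are assumptions for illustration only,
not something you apply by hand when following this guide.

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: istio-pilot
  namespace: istio-system
spec:
  # No selector: the endpoints below are managed manually.
  ports:
  - name: tcp-pilot
    port: 15011
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: istio-pilot
  namespace: istio-system
subsets:
- addresses:
  - ip: 192.168.1.246   # illustrative: the cluster1 ingress gateway address
  ports:
  - name: tcp-pilot
    port: 15011
    protocol: TCP
{{< /text >}}
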
### Setup cluster 1 (primary)

1. Deploy Istio to `cluster1`:

    {{< warning >}}
    When you enable the additional components necessary for multicluster operation, the resource footprint
    of the Istio control plane may increase beyond the capacity of the default Kubernetes cluster you created when
    completing the [Platform setup](/docs/setup/platform-setup/) steps.
    If the Istio services aren't getting scheduled due to insufficient CPU or memory, consider
    adding more nodes to your cluster or upgrading to larger memory instances as necessary.
    {{< /warning >}}

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 ns istio-system
    $ kubectl create --context=$CTX_CLUSTER1 secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
    $ istioctl manifest apply --context=$CTX_CLUSTER1 \
      -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml
    {{< /text >}}

    {{< warning >}}
    Note that the gateway addresses are set to `0.0.0.0`. These are temporary placeholder values that will
    later be updated with the public IPs of the `cluster1` and `cluster2` gateways after they are deployed
    in the following section.
    {{< /warning >}}

    Wait for the Istio pods on `cluster1` to become ready:

    {{< text bash >}}
    $ kubectl get pods --context=$CTX_CLUSTER1 -n istio-system
    NAME                                      READY   STATUS    RESTARTS   AGE
    istio-citadel-55d8b59798-6hnx4            1/1     Running   0          83s
    istio-galley-c74b77787-lrtr5              2/2     Running   0          82s
    istio-ingressgateway-684f5df677-shzhm     1/1     Running   0          83s
    istio-pilot-5495bc8885-2rgmf              2/2     Running   0          82s
    istio-policy-69cdf5db4c-x4sct             2/2     Running   2          83s
    istio-sidecar-injector-5749cf7cfc-pgd95   1/1     Running   0          82s
    istio-telemetry-646db5ddbd-gvp6l          2/2     Running   1          83s
    prometheus-685585888b-4tvf7               1/1     Running   0          83s
    {{< /text >}}

1. Create an ingress gateway to access service(s) in `cluster2`:

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: cluster-aware-gateway
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 443
          name: tls
          protocol: TLS
        tls:
          mode: AUTO_PASSTHROUGH
        hosts:
        - "*.local"
    EOF
    {{< /text >}}

    This `Gateway` configures port 443 to pass incoming traffic through to the target service specified in a
    request's SNI header, for SNI values of the _local_ top-level domain
    (i.e., the [Kubernetes DNS domain](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)),
    for example `helloworld.sample.svc.cluster.local`.
    Mutual TLS connections will be used all the way from the source to the destination sidecar.

    Although applied to `cluster1`, this `Gateway` instance will also affect `cluster2` because both clusters communicate with the
    same Pilot.

1. Determine the ingress IP and port for `cluster1`.

    1. Set the current context of `kubectl` to `CTX_CLUSTER1`:

        {{< text bash >}}
        $ export ORIGINAL_CONTEXT=$(kubectl config current-context)
        $ kubectl config use-context $CTX_CLUSTER1
        {{< /text >}}

    1. Follow the instructions in
       [Determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
       to set the `INGRESS_HOST` and `SECURE_INGRESS_PORT` environment variables.

    1. Restore the previous `kubectl` context:

        {{< text bash >}}
        $ kubectl config use-context $ORIGINAL_CONTEXT
        $ unset ORIGINAL_CONTEXT
        {{< /text >}}

    1. Print the values of `INGRESS_HOST` and `SECURE_INGRESS_PORT`:

        {{< text bash >}}
        $ echo The ingress gateway of cluster1: address=$INGRESS_HOST, port=$SECURE_INGRESS_PORT
        {{< /text >}}

1. Update the gateway address in the mesh network configuration. Edit the `istio` ConfigMap:

    {{< text bash >}}
    $ kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio
    {{< /text >}}

    Update the gateway's address and port of `network1` to reflect the `cluster1` ingress host and port,
    respectively, then save and quit. Note that the address appears in two places, the second under `values.yaml:`.
    A sketch of the fragment you are editing is shown after this list.

    Once saved, Pilot will automatically read the updated network configuration.

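For orientation, the `meshNetworks:` fragment of the `istio` ConfigMap that you edit in the last step above
looks roughly like the following. The addresses shown are placeholders and the registry names come from the
installation values, so treat this as a sketch rather than content to paste:

{{< text yaml >}}
meshNetworks: |-
  networks:
    network1:
      endpoints:
      - fromRegistry: Kubernetes
      gateways:
      - address: 192.168.1.246   # replace 0.0.0.0 with the value of INGRESS_HOST
        port: 443                # replace with the value of SECURE_INGRESS_PORT
    network2:
      endpoints:
      - fromRegistry: n2-k8s-config
      gateways:
      - address: 0.0.0.0         # updated later, once the cluster2 gateway is deployed
        port: 443
{{< /text >}}
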
### Setup cluster 2

1. Export the `cluster1` gateway address:

    {{< text bash >}}
    $ export LOCAL_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-ingressgateway \
        -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}') && echo ${LOCAL_GW_ADDR}
    {{< /text >}}

    This command sets the value to the gateway's public IP and displays it.

    {{< warning >}}
    The command fails if the load balancer configuration doesn't include an IP address. The implementation of DNS name support is pending.
    {{< /warning >}}

1. Deploy Istio to `cluster2`:

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 ns istio-system
    $ kubectl create --context=$CTX_CLUSTER2 secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
    $ CLUSTER_NAME=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].name}')
    $ istioctl manifest apply --context=$CTX_CLUSTER2 \
      --set profile=remote \
      --set values.gateways.enabled=true \
      --set values.security.selfSigned=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
      --set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
      --set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
      --set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
      --set values.global.network="network2" \
      --set values.global.multiCluster.clusterName=${CLUSTER_NAME}
    {{< /text >}}

    Wait for the Istio pods on `cluster2`, except for `istio-ingressgateway`, to become ready:

    {{< text bash >}}
    $ kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio!=ingressgateway
    NAME                                      READY   STATUS    RESTARTS   AGE
    istio-citadel-55d8b59798-nlk2z            1/1     Running   0          26s
    istio-sidecar-injector-5749cf7cfc-s6r7p   1/1     Running   0          25s
    {{< /text >}}

    {{< warning >}}
    `istio-ingressgateway` will not be ready until you configure the Istio control plane in `cluster1` to watch
    `cluster2`. You will do this in the next section.
    {{< /warning >}}

1. Determine the ingress IP and port for `cluster2`.

    1. Set the current context of `kubectl` to `CTX_CLUSTER2`:

        {{< text bash >}}
        $ export ORIGINAL_CONTEXT=$(kubectl config current-context)
        $ kubectl config use-context $CTX_CLUSTER2
        {{< /text >}}

    1. Follow the instructions in
       [Determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
       to set the `INGRESS_HOST` and `SECURE_INGRESS_PORT` environment variables.

    1. Restore the previous `kubectl` context:

        {{< text bash >}}
        $ kubectl config use-context $ORIGINAL_CONTEXT
        $ unset ORIGINAL_CONTEXT
        {{< /text >}}

    1. Print the values of `INGRESS_HOST` and `SECURE_INGRESS_PORT`:

        {{< text bash >}}
        $ echo The ingress gateway of cluster2: address=$INGRESS_HOST, port=$SECURE_INGRESS_PORT
        {{< /text >}}

1. Update the gateway address in the mesh network configuration. Edit the `istio` ConfigMap:

    {{< text bash >}}
    $ kubectl edit cm -n istio-system --context=$CTX_CLUSTER1 istio
    {{< /text >}}

    Update the gateway's address and port of `network2` to reflect the `cluster2` ingress host and port,
    respectively, then save and quit. Note that the address appears in two places, the second under `values.yaml:`.

    Once saved, Pilot will automatically read the updated network configuration.

1. Prepare environment variables for building the `n2-k8s-config` file for the service account `istio-reader-service-account`:

    {{< text bash >}}
    $ CLUSTER_NAME=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].name}')
    $ SERVER=$(kubectl --context=$CTX_CLUSTER2 config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
    $ SECRET_NAME=$(kubectl --context=$CTX_CLUSTER2 get sa istio-reader-service-account -n istio-system -o jsonpath='{.secrets[].name}')
    $ CA_DATA=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['ca\.crt']}")
    $ TOKEN=$(kubectl get --context=$CTX_CLUSTER2 secret ${SECRET_NAME} -n istio-system -o jsonpath="{.data['token']}" | base64 --decode)
    {{< /text >}}

    {{< idea >}}
    An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.
    {{< /idea >}}

1. Create the `n2-k8s-config` file in the working directory:

    {{< text bash >}}
    $ cat <<EOF > n2-k8s-config
    apiVersion: v1
    kind: Config
    clusters:
      - cluster:
          certificate-authority-data: ${CA_DATA}
          server: ${SERVER}
        name: ${CLUSTER_NAME}
    contexts:
      - context:
          cluster: ${CLUSTER_NAME}
          user: ${CLUSTER_NAME}
        name: ${CLUSTER_NAME}
    current-context: ${CLUSTER_NAME}
    users:
      - name: ${CLUSTER_NAME}
        user:
          token: ${TOKEN}
    EOF
    {{< /text >}}

### Start watching cluster 2

1. Execute the following commands to add and label the `cluster2` Kubernetes secret
   (an `istioctl`-based alternative is sketched after this list).
   After executing these commands, Istio Pilot on `cluster1` will begin watching `cluster2` for services and instances,
   just as it does for `cluster1`.

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 secret generic n2-k8s-secret --from-file n2-k8s-config -n istio-system
    $ kubectl label --context=$CTX_CLUSTER1 secret n2-k8s-secret istio/multiCluster=true -n istio-system
    {{< /text >}}

1. Wait for `istio-ingressgateway` to become ready:

    {{< text bash >}}
    $ kubectl get pods --context=$CTX_CLUSTER2 -n istio-system -l istio=ingressgateway
    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-5c667f4f84-bscff   1/1     Running   0          16m
    {{< /text >}}

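Depending on your `istioctl` version, the experimental `istioctl x create-remote-secret` command can generate an
equivalent, pre-labeled secret from the remote cluster's credentials, avoiding the hand-built kubeconfig above.
The cluster name used below is an assumption for the sketch; inspect the generated output before applying it.

{{< text bash >}}
$ istioctl x create-remote-secret --context=$CTX_CLUSTER2 --name n2 | \
    kubectl apply --context=$CTX_CLUSTER1 -f -
{{< /text >}}
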
Now that you have your `cluster1` and `cluster2` clusters set up, you can deploy an example service.

## Deploy example service

As shown in the diagram above, deploy two instances of the `helloworld` service,
one on `cluster1` and one on `cluster2`.
The difference between the two instances is the version of their `helloworld` image.

### Deploy helloworld v2 in cluster 2

1. Create a `sample` namespace with a sidecar auto-injection label:

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 ns sample
    $ kubectl label --context=$CTX_CLUSTER2 namespace sample istio-injection=enabled
    {{< /text >}}

1. Deploy `helloworld v2`:

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER2 -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample
    $ kubectl create --context=$CTX_CLUSTER2 -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample
    {{< /text >}}

1. Confirm `helloworld v2` is running:

    {{< text bash >}}
    $ kubectl get po --context=$CTX_CLUSTER2 -n sample
    NAME                             READY   STATUS    RESTARTS   AGE
    helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s
    {{< /text >}}

### Deploy helloworld v1 in cluster 1

1. Create a `sample` namespace with a sidecar auto-injection label:

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 ns sample
    $ kubectl label --context=$CTX_CLUSTER1 namespace sample istio-injection=enabled
    {{< /text >}}

1. Deploy `helloworld v1`:

    {{< text bash >}}
    $ kubectl create --context=$CTX_CLUSTER1 -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample
    $ kubectl create --context=$CTX_CLUSTER1 -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample
    {{< /text >}}

1. Confirm `helloworld v1` is running:

    {{< text bash >}}
    $ kubectl get po --context=$CTX_CLUSTER1 -n sample
    NAME                            READY   STATUS    RESTARTS   AGE
    helloworld-v1-d4557d97b-pv2hr   2/2     Running   0          40s
    {{< /text >}}

### Cross-cluster routing in action

To demonstrate how traffic to the `helloworld` service is distributed across the two clusters,
call the `helloworld` service from another in-mesh `sleep` service.

1. Deploy the `sleep` service in both clusters:

    {{< text bash >}}
    $ kubectl apply --context=$CTX_CLUSTER1 -f @samples/sleep/sleep.yaml@ -n sample
    $ kubectl apply --context=$CTX_CLUSTER2 -f @samples/sleep/sleep.yaml@ -n sample
    {{< /text >}}

1. Wait for the `sleep` service to start in each cluster:

    {{< text bash >}}
    $ kubectl get po --context=$CTX_CLUSTER1 -n sample -l app=sleep
    sleep-754684654f-n6bzf   2/2   Running   0   5s
    {{< /text >}}

    {{< text bash >}}
    $ kubectl get po --context=$CTX_CLUSTER2 -n sample -l app=sleep
    sleep-754684654f-dzl9j   2/2   Running   0   5s
    {{< /text >}}

1. Call the `helloworld.sample` service several times from `cluster1`:

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER1 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    {{< /text >}}

1. Call the `helloworld.sample` service several times from `cluster2`:

    {{< text bash >}}
    $ kubectl exec --context=$CTX_CLUSTER2 -it -n sample -c sleep $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    {{< /text >}}

If set up correctly, the traffic to the `helloworld.sample` service will be distributed between instances on `cluster1` and `cluster2`,
resulting in responses with either `v1` or `v2` in the body:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
{{< /text >}}

You can also verify the IP addresses used to access the endpoints by printing the log of the sleep's `istio-proxy` container.

{{< text bash >}}
$ kubectl logs --context=$CTX_CLUSTER1 -n sample $(kubectl get pod --context=$CTX_CLUSTER1 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
[2018-11-25T12:37:52.077Z] "GET /hello HTTP/1.1" 200 - 0 60 190 189 "-" "curl/7.60.0" "6e096efe-f550-4dfa-8c8c-ba164baf4679" "helloworld.sample:5000" "192.23.120.32:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59496 -
[2018-11-25T12:38:06.745Z] "GET /hello HTTP/1.1" 200 - 0 60 171 170 "-" "curl/7.60.0" "6f93c9cc-d32a-4878-b56a-086a740045d2" "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59646 -
{{< /text >}}

In `cluster1`, the gateway IP of `cluster2` (`192.23.120.32:15443`) was logged when v2 was called and the instance IP in `cluster1` (`10.10.0.90:5000`) was logged when v1 was called.

{{< text bash >}}
$ kubectl logs --context=$CTX_CLUSTER2 -n sample $(kubectl get pod --context=$CTX_CLUSTER2 -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
[2019-05-25T08:06:11.468Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 177 176 "-" "curl/7.60.0" "58cfb92b-b217-4602-af67-7de8f63543d8" "helloworld.sample:5000" "192.168.1.246:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36840 -
[2019-05-25T08:06:12.834Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 181 180 "-" "curl/7.60.0" "ce480b56-fafd-468b-9996-9fea5257cb1e" "helloworld.sample:5000" "10.32.0.9:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36886 -
{{< /text >}}

In `cluster2`, the gateway IP of `cluster1` (`192.168.1.246:15443`) was logged when v1 was called and the instance IP in `cluster2` (`10.32.0.9:5000`) was logged when v2 was called.

## Cleanup

Execute the following commands to clean up the example services __and__ the Istio components.

Clean up the `cluster2` cluster:

{{< text bash >}}
$ istioctl manifest generate --context=$CTX_CLUSTER2 \
  --set profile=remote \
  --set values.gateways.enabled=true \
  --set values.security.selfSigned=false \
  --set values.global.createRemoteSvcEndpoints=true \
  --set values.global.remotePilotCreateSvcEndpoint=true \
  --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
  --set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
  --set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
  --set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
  --set values.global.network="network2" | kubectl --context=$CTX_CLUSTER2 delete -f -
$ kubectl delete --context=$CTX_CLUSTER2 ns sample
$ rm n2-k8s-config
$ unset CTX_CLUSTER2 CLUSTER_NAME SERVER SECRET_NAME CA_DATA TOKEN INGRESS_HOST SECURE_INGRESS_PORT INGRESS_PORT LOCAL_GW_ADDR
{{< /text >}}

Clean up the `cluster1` cluster:

{{< text bash >}}
$ istioctl manifest generate --context=$CTX_CLUSTER1 \
  -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-primary.yaml | kubectl --context=$CTX_CLUSTER1 delete -f -
$ kubectl delete --context=$CTX_CLUSTER1 ns sample
$ unset CTX_CLUSTER1
{{< /text >}}

@@ -1,505 +0,0 @@

---
title: Shared control plane (single-network)
description: Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.
weight: 5
keywords: [kubernetes,multicluster,federation,vpn]
aliases:
    - /docs/setup/kubernetes/multicluster-install/vpn/
    - /docs/setup/kubernetes/install/multicluster/vpn/
    - /docs/setup/kubernetes/install/multicluster/shared-vpn/
---

Follow this guide to install an Istio [multicluster service mesh](/docs/ops/deployment/deployment-models/#multiple-clusters)
where the Kubernetes cluster services and the applications in each cluster
have the capability to expose their internal Kubernetes network to other
clusters.

In this configuration, multiple Kubernetes clusters running
a remote configuration connect to a shared Istio
[control plane](/docs/ops/deployment/deployment-models/#control-plane-models).
Once one or more remote Kubernetes clusters are connected to the
Istio control plane, Envoy can then form a mesh network across multiple clusters.

{{< image width="80%" link="./multicluster-with-vpn.svg" caption="Istio mesh spanning multiple Kubernetes clusters with direct network access to remote pods over VPN" >}}

## Prerequisites

* Two or more clusters running a supported Kubernetes version ({{< supported_kubernetes_versions >}}).

* The ability to [deploy the Istio control plane](/docs/setup/install/istioctl/)
  on **one** of the clusters.

* An RFC1918 network, VPN, or an alternative more advanced network technique
  meeting the following requirements:

    * Individual cluster Pod CIDR ranges and service CIDR ranges must be unique
      across the multicluster environment and may not overlap.

    * All pod CIDRs in every cluster must be routable to each other.

    * All Kubernetes control plane API servers must be routable to each other.

This guide describes how to install a multicluster Istio topology using the
remote configuration profile provided by Istio.

## Deploy the local control plane

[Install the Istio control plane](/docs/setup/install/istioctl/)
on **one** Kubernetes cluster.

### Set environment variables {#environment-var}

Wait for the Istio control plane to finish initializing before following the
steps in this section.

You must run these operations on the Istio control plane cluster to capture the
Istio control plane service endpoints, for example, the Pilot and Policy Pod IP
endpoints.

Set the environment variables with the following commands:

{{< text bash >}}
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
$ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy -o jsonpath='{.items[0].status.podIP}')
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
{{< /text >}}

Normally, automatic sidecar injection on the remote clusters is enabled. To
perform a manual sidecar injection, refer to the [manual sidecar example](#manual-sidecar).

## Install the Istio remote

You must deploy the `istio-remote` component to each remote Kubernetes
cluster. Install the component as follows:

1. Use the following command on the remote cluster to install
   the Istio control plane service endpoints:

    {{< text bash >}}
    $ istioctl manifest apply \
      --set profile=remote \
      --set values.global.controlPlaneSecurityEnabled=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set gateways.enabled=false
    {{< /text >}}

    {{< tip >}}
    All clusters must have the same namespace for the Istio
    components. It is possible to override the `istio-system` name on the main
    cluster as long as the namespace is the same for all Istio components in
    all clusters.
    {{< /tip >}}

1. The following command example labels the `default` namespace. Use similar
   commands to label all the remote cluster's namespaces requiring automatic
   sidecar injection.

    {{< text bash >}}
    $ kubectl label namespace default istio-injection=enabled
    {{< /text >}}

    Repeat for all Kubernetes namespaces that need to set up automatic sidecar
    injection.

### Installation configuration parameters

You must configure the remote cluster's sidecars' interaction with the Istio
control plane, including the following endpoints in the `istio-remote` profile:
`pilot`, `policy`, `telemetry` and the tracing service. The profile
enables automatic sidecar injection in the remote cluster by default. You can
disable the automatic sidecar injection via a separate setting.

The following table shows the `istioctl` configuration values for remote clusters:

| Install setting | Accepted Values | Default | Purpose of Value |
| --- | --- | --- | --- |
| `values.global.remotePilotAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's pilot Pod IP address or remote cluster DNS resolvable hostname |
| `values.global.remotePolicyAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's policy Pod IP address or remote cluster DNS resolvable hostname |
| `values.global.remoteTelemetryAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's telemetry Pod IP address or remote cluster DNS resolvable hostname |
| `values.sidecarInjectorWebhook.enabled` | true, false | true | Specifies whether to enable automatic sidecar injection on the remote cluster |
| `values.global.remotePilotCreateSvcEndpoint` | true, false | false | If set, a selector-less service and endpoint for `istio-pilot` are created with the `remotePilotAddress` IP, which ensures the `istio-pilot.<namespace>` is DNS resolvable in the remote cluster. |
| `values.global.createRemoteSvcEndpoints` | true, false | false | If set, selector-less services and endpoints for `istio-pilot`, `istio-telemetry`, `istio-policy` are created with the corresponding remote IPs: `remotePilotAddress`, `remoteTelemetryAddress`, `remotePolicyAddress`, which ensures the service names are DNS resolvable in the remote cluster. |

## Generate configuration files for remote clusters {#kubeconfig}

The Istio control plane requires access to all clusters in the mesh to
discover services, endpoints, and pod attributes. The following steps
describe how to generate a `kubeconfig` configuration file for the Istio control plane to use a remote cluster.

Perform this procedure on each remote cluster to add the cluster to the service
mesh. This procedure requires the `cluster-admin` user access permission to
the remote cluster.

1. Set the environment variables needed to build the `kubeconfig` file for the
   `istio-reader-service-account` service account with the following commands:

    {{< text bash >}}
    $ export WORK_DIR=$(pwd)
    $ CLUSTER_NAME=$(kubectl config view --minify=true -o jsonpath='{.clusters[].name}')
    $ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
    $ SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
    $ NAMESPACE=istio-system
    $ SERVICE_ACCOUNT=istio-reader-service-account
    $ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
    $ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['ca\.crt']}")
    $ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['token']}" | base64 --decode)
    {{< /text >}}

    {{< tip >}}
    An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.
    {{< /tip >}}

1. Create a `kubeconfig` file in the working directory for the
   `istio-reader-service-account` service account with the following command:

    {{< text bash >}}
    $ cat <<EOF > ${KUBECFG_FILE}
    apiVersion: v1
    clusters:
      - cluster:
          certificate-authority-data: ${CA_DATA}
          server: ${SERVER}
        name: ${CLUSTER_NAME}
    contexts:
      - context:
          cluster: ${CLUSTER_NAME}
          user: ${CLUSTER_NAME}
        name: ${CLUSTER_NAME}
    current-context: ${CLUSTER_NAME}
    kind: Config
    preferences: {}
    users:
      - name: ${CLUSTER_NAME}
        user:
          token: ${TOKEN}
    EOF
    {{< /text >}}

1. _(Optional)_ Create a file with environment variables to create the remote cluster's secret:

    {{< text bash >}}
    $ cat <<EOF > remote_cluster_env_vars
    export CLUSTER_NAME=${CLUSTER_NAME}
    export KUBECFG_FILE=${KUBECFG_FILE}
    export NAMESPACE=${NAMESPACE}
    EOF
    {{< /text >}}

At this point, you created the remote clusters' `kubeconfig` files in the
current directory. The filename of the `kubeconfig` file is the same as the
original cluster name.

## Instantiate the credentials {#credentials}

Perform this procedure on the cluster running the Istio control plane. This
procedure uses the `WORK_DIR`, `CLUSTER_NAME`, and `NAMESPACE` environment
values set and the file created for the remote cluster's secret from the
[previous section](#kubeconfig).

If you created the environment variables file for the remote cluster's
secret, source the file with the following command:

{{< text bash >}}
$ source remote_cluster_env_vars
{{< /text >}}

You can install Istio in a different namespace. This procedure uses the
`istio-system` namespace.

{{< warning >}}
Do not store and label the secrets for the local cluster
running the Istio control plane. Istio is always aware of the local cluster's
Kubernetes credentials.
{{< /warning >}}

Create a secret and label it properly for each remote cluster:

{{< text bash >}}
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
$ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< /text >}}

{{< warning >}}
The Kubernetes secret data keys must conform with the
`DNS-1123 subdomain` [format](https://tools.ietf.org/html/rfc1123#page-13). For
example, the filename can't have underscores. Resolve any issue with the
filename simply by changing the filename to conform with the format.
{{< /warning >}}

## Uninstalling the remote cluster

To uninstall the remote cluster, run the following command:

{{< text bash >}}
$ istioctl manifest generate \
  --set profile=remote \
  --set values.global.controlPlaneSecurityEnabled=false \
  --set values.global.createRemoteSvcEndpoints=true \
  --set values.global.remotePilotCreateSvcEndpoint=true \
  --set values.global.remotePilotAddress=${PILOT_POD_IP} \
  --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
  --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
  --set gateways.enabled=false | kubectl delete -f -
{{< /text >}}

## Manual sidecar injection example {#manual-sidecar}

The following example shows how to use the `istioctl manifest` command to generate
the manifest for a remote cluster with the automatic sidecar injection
disabled. Additionally, the example shows how to use the `configmaps` of the
remote cluster with the [`istioctl kube-inject`](/docs/reference/commands/istioctl/#istioctl-kube-inject) command to generate any
application manifests for the remote cluster.

Perform the following procedure against the remote cluster.

Before you begin, set the endpoint IP environment variables as described in the
[set the environment variables section](#environment-var).

1. Install the Istio remote profile:

    {{< text bash >}}
    $ istioctl manifest apply \
      --set profile=remote \
      --set values.global.controlPlaneSecurityEnabled=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set gateways.enabled=false
    {{< /text >}}

1. [Generate](#kubeconfig) the `kubeconfig` configuration file for each remote
   cluster.

1. [Instantiate the credentials](#credentials) for each remote cluster.

### Manually inject the sidecars into the application manifests

The following example `istioctl` command injects the sidecars into the
application manifests. Run the following commands in a shell with the
`kubeconfig` context set up for the remote cluster.

{{< text bash >}}
$ ORIGINAL_SVC_MANIFEST=mysvc-v1.yaml
$ istioctl kube-inject --injectConfigMapName istio-sidecar-injector --meshConfigMapName istio -f ${ORIGINAL_SVC_MANIFEST} | kubectl apply -f -
{{< /text >}}

## Access services from different clusters

Kubernetes resolves DNS on a cluster basis. Because the DNS resolution is tied
to the cluster, you must define the service object in every cluster where a
client runs, regardless of the location of the service's endpoints. To ensure
this is the case, duplicate the service object to every cluster using
`kubectl`. Duplication ensures Kubernetes can resolve the service name in any
cluster. Since the service objects are defined in a namespace, you must define
the namespace if it doesn't exist, and include it in the service definitions in
all clusters.

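As an illustration, a stub like the following would be applied to every cluster whose clients need to resolve
`mysvc` (reusing the hypothetical `mysvc` name from the manual injection example); the namespace and port are
assumptions for the sketch, and must match the definition in the cluster that actually runs the endpoints:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: mysvc
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
{{< /text >}}
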
## Deployment considerations

The previous procedures provide a simple and step-by-step guide to deploy a
multicluster environment. A production environment might require additional
steps or more complex deployment options. The procedures gather the endpoint
IPs of the Istio services and use them to invoke `istioctl`. This process creates
Istio services on the remote clusters. As part of creating those services and
endpoints in the remote cluster, Kubernetes adds DNS entries to the `kube-dns`
configuration object.

This allows the `kube-dns` configuration object in the remote clusters to
resolve the Istio service names for all Envoy sidecars in those remote
clusters. Since Kubernetes pods don't have stable IPs, restart of any Istio
service pod in the control plane cluster causes its endpoint to change.
Therefore, any connection made from remote clusters to that endpoint is
broken. This behavior is documented in [Istio issue #4822](https://github.com/istio/istio/issues/4822).

To either avoid or resolve this scenario, several options are available. This
section provides a high level overview of these options:

* Update the DNS entries
* Use a load balancer service type
* Expose the Istio services via a gateway

### Update the DNS entries

Upon any failure or restart of the local Istio control plane, `kube-dns` on the remote clusters must be
updated with the correct endpoint mappings for the Istio services. There
are a number of ways this can be done. The most obvious is to rerun the
`istioctl` command in the remote cluster after the Istio services on the control plane
cluster have restarted.

### Use load balancer service type

In Kubernetes, you can declare a service with a service type of `LoadBalancer`.
See the Kubernetes documentation on [service types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
for more information.

A simple solution to the pod restart issue is to use load balancers for the
Istio services. Then, you can use the load balancers' IPs as the Istio
services' endpoint IPs to configure the remote clusters. You may need load
balancer IPs for these Istio services:

* `istio-pilot`
* `istio-telemetry`
* `istio-policy`

Currently, the Istio installation doesn't provide an option to specify service
types for the Istio services. You can manually specify the service types in the
Istio manifests.

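For example, assuming the Istio services live in the default `istio-system` namespace, the service types could be
switched after installation with `kubectl patch`; this is only a sketch, and in practice you would carry the change
in whatever manifests or overlays you use to manage the installation:

{{< text bash >}}
$ kubectl -n istio-system patch svc istio-pilot -p '{"spec": {"type": "LoadBalancer"}}'
$ kubectl -n istio-system patch svc istio-policy -p '{"spec": {"type": "LoadBalancer"}}'
$ kubectl -n istio-system patch svc istio-telemetry -p '{"spec": {"type": "LoadBalancer"}}'
{{< /text >}}
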
### Expose the Istio services via a gateway

This method uses the Istio ingress gateway functionality. The remote clusters
have the `istio-pilot`, `istio-telemetry` and `istio-policy` services
pointing to the load balanced IP of the Istio ingress gateway, so all of these
services point to the same IP.
You must then create destination rules in the main cluster's ingress gateway to
reach the proper Istio service.

This method provides two alternatives:

* Re-use the default Istio ingress gateway installed with the provided
  manifests. You only need to add the correct destination rules.

* Create another Istio ingress gateway specifically for the multicluster.

## Security

Istio supports deployment of mutual TLS between the control plane components as
well as between sidecar injected application pods.

### Control plane security

To enable control plane security follow these general steps:

1. Deploy the Istio control plane cluster with:

    * The control plane security enabled.

    * The `citadel` certificate self-signing disabled.

    * A secret named `cacerts` in the Istio control plane namespace with the
      [Certificate Authority (CA) certificates](/docs/tasks/security/plugin-ca-cert/).

1. Deploy the Istio remote clusters with:

    * The control plane security enabled.

    * The `citadel` certificate self-signing disabled.

    * A secret named `cacerts` in the Istio control plane namespace with the
      [CA certificates](/docs/tasks/security/plugin-ca-cert/).
      The Certificate Authority (CA) of the main cluster or a root CA must sign
      the CA certificate for the remote clusters too.

    * The Istio pilot service hostname must be resolvable via DNS. DNS
      resolution is required because Istio configures the sidecar to verify the
      certificate subject names using the `istio-pilot.<namespace>` subject
      name format.

    * Set control plane IPs or resolvable host names.

### Mutual TLS between application pods

To enable mutual TLS for all application pods, follow these general steps:

1. Deploy the Istio control plane cluster with:

    * Mutual TLS globally enabled.

    * The Citadel certificate self-signing disabled.

    * A secret named `cacerts` in the Istio control plane namespace with the
      [CA certificates](/docs/tasks/security/plugin-ca-cert/).

1. Deploy the Istio remote clusters with:

    * Mutual TLS globally enabled.

    * The Citadel certificate self-signing disabled.

    * A secret named `cacerts` in the Istio control plane namespace with the
      [CA certificates](/docs/tasks/security/plugin-ca-cert/).
      The CA of the main cluster or a root CA must sign the CA certificate for
      the remote clusters too.

{{< tip >}}
The CA certificate steps are identical for both control plane security and
application pod security steps.
{{< /tip >}}

### Example deployment

This example procedure installs Istio with both the control plane mutual TLS
and the application pod mutual TLS enabled. The procedure sets up a remote
cluster with a selector-less service and endpoint. Istio Pilot uses the service
and endpoint to allow the remote sidecars to resolve the
`istio-pilot.istio-system` hostname via Istio's local Kubernetes DNS.

#### Primary cluster: deploy the control plane cluster

1. Create the `cacerts` secret using the Istio certificate samples in the
   `istio-system` namespace:

    {{< text bash >}}
    $ kubectl create ns istio-system
    $ kubectl create secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
    {{< /text >}}

1. Deploy the Istio control plane with its security features enabled:

    {{< text bash >}}
    $ istioctl manifest apply \
      --set values.global.controlPlaneSecurityEnabled=true \
      --set values.security.selfSigned=false
    {{< /text >}}

#### Remote cluster: deploy Istio components

1. Create the `cacerts` secret using the Istio certificate samples in the
   `istio-system` namespace:

    {{< text bash >}}
    $ kubectl create ns istio-system
    $ kubectl create secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
    {{< /text >}}

1. Set the environment variables for the IP addresses of the pods as described
   in the [setting environment variables section](#environment-var).

1. The following command deploys the remote cluster's components with security
   features for the control plane enabled, and enables the
   creation of an Istio Pilot selector-less service and endpoint to get a
   DNS entry in the remote cluster.

    {{< text bash >}}
    $ istioctl manifest apply \
      --set profile=remote \
      --set values.global.controlPlaneSecurityEnabled=true \
      --set values.security.selfSigned=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set gateways.enabled=false
    {{< /text >}}

1. To generate the `kubeconfig` configuration file for the remote cluster,
   follow the steps in the [Kubernetes configuration section](#kubeconfig).

#### Primary cluster: instantiate credentials

You must instantiate credentials for each remote cluster. Follow the
[instantiate credentials procedure](#credentials)
to complete the deployment.

**Congratulations!**

You have configured all the Istio components in both clusters to use mutual TLS
between application sidecars, the control plane components, and other
application sidecars.

@@ -0,0 +1,437 @@

---
title: Shared control plane (single and multiple networks)
description: Install an Istio mesh across multiple Kubernetes clusters with a shared control plane.
weight: 5
keywords: [kubernetes,multicluster,federation,vpn,gateway]
aliases:
    - /docs/setup/kubernetes/multicluster-install/vpn/
    - /docs/setup/kubernetes/install/multicluster/vpn/
    - /docs/setup/kubernetes/install/multicluster/shared-vpn/
    - /docs/examples/multicluster/split-horizon-eds/
    - /docs/tasks/multicluster/split-horizon-eds/
    - /docs/setup/kubernetes/install/multicluster/shared-gateways/
---

Set up a [multicluster Istio service mesh](/docs/ops/deployment/deployment-models/#multiple-clusters)
across multiple clusters with a shared control plane. In this configuration, multiple Kubernetes clusters running
a remote configuration connect to a shared Istio [control plane](/docs/ops/deployment/deployment-models/#control-plane-models)
running in a main cluster. Clusters may be on the same network as, or on different networks from, other
clusters in the mesh. Once one or more remote Kubernetes clusters are connected to the Istio control plane,
Envoy can then form a mesh.

{{< image width="80%" link="./multicluster-with-vpn.svg" caption="Istio mesh spanning multiple Kubernetes clusters with direct network access to remote pods over VPN" >}}

## Prerequisites

* Two or more clusters running a supported Kubernetes version ({{< supported_kubernetes_versions >}}).

* All Kubernetes control plane API servers must be routable to each other.

* Clusters on the same network must be connected by an RFC1918 network, VPN, or an alternative more advanced
  network technique meeting the following requirements:

    * Individual cluster Pod CIDR ranges and service CIDR ranges must be unique across the network and may not overlap.

    * All pod CIDRs in the same network must be routable to each other.

* Clusters on different networks must have `istio-ingressgateway` services which are accessible from every other
  cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require
  special annotations to use them, so please consult your cloud provider's documentation for enabling NLBs for
  service object type load balancers (a sketch follows this list). When deploying on platforms without NLB support,
  it may be necessary to modify the health checks for the load balancer to register the ingress gateway.

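As an illustration only, the annotation below is one commonly used way to request an NLB for the ingress gateway
`Service` on AWS; the exact annotation, and whether it can be applied after the service is created, depends on
your provider, so treat this as a sketch rather than a recommendation:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # Provider specific; this form selects an AWS Network Load Balancer.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: tls
    port: 443
{{< /text >}}
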
## Preparation

### Certificate Authority

Generate intermediate CA certificates for each cluster's CA from your
organization's root CA. The shared root CA enables mutual TLS communication
across different clusters. For illustration purposes, the following instructions
use the certificates from the Istio samples directory for both clusters. Run the
following commands on each cluster in the mesh to install the certificates.
See [Certificate Authority (CA) certificates](/docs/tasks/security/plugin-ca-cert/)
for more details on configuring an external CA.

{{< text bash >}}
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=@samples/certs/ca-cert.pem@ \
    --from-file=@samples/certs/ca-key.pem@ \
    --from-file=@samples/certs/root-cert.pem@ \
    --from-file=@samples/certs/cert-chain.pem@
{{< /text >}}

{{< warning >}}
The root and intermediate certificate from the samples directory are widely
distributed and known. Do **not** use these certificates in production as
your clusters would then be open to security vulnerabilities and compromise.
{{< /warning >}}

### Cross-cluster control plane access

Decide how to expose the main cluster's Istiod discovery service to
the remote clusters. Choose **one** of the two options below:

* Option (1) - Use the `istio-ingressgateway` gateway shared with data traffic.

* Option (2) - Use a cloud provider's internal load balancer on the Istiod service.

### Cluster and network naming

Determine the name of the clusters and networks in the mesh. These names will be used
in the mesh network configuration and when configuring the mesh's service registries.
Assign a unique name to each cluster. The name must be a
[DNS label name](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
In the example below the main cluster is called `main0` and the remote cluster is `remote0`.

{{< text bash >}}
$ export MAIN_CLUSTER_CTX=<...>
$ export REMOTE_CLUSTER_CTX=<...>
{{< /text >}}

{{< text bash >}}
$ export MAIN_CLUSTER_NAME=main0
$ export REMOTE_CLUSTER_NAME=remote0
{{< /text >}}

If the clusters are on different networks, assign a unique network name for each network.

{{< text bash >}}
$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network2
{{< /text >}}

If clusters are on the same network, the same network name is used for those clusters.

{{< text bash >}}
$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network1
{{< /text >}}

## Deployment

### Main cluster

Create the main cluster's configuration. Replace the variables below with the cluster
and network names chosen earlier. Pick **one** of the two options for cross-cluster
control plane access and delete the configuration for the other option.

{{< text yaml >}}
cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    # required if istiod is disabled
    security:
      selfSigned: false
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted if all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          # Always use Kubernetes as the registry name for the main cluster in the mesh network configuration
          - fromRegistry: Kubernetes
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

      # Configure cross-cluster control plane access. Choose one of the two
      # options below and delete the other option's configuration.

      # Option(1) - Use the existing istio-ingressgateway.
      meshExpansion:
        enabled: true

  # Option(2) - Use a cloud provider's internal load balancer.
  # Change the Istio service `type=LoadBalancer` and add the cloud provider specific annotations. See
  # https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer for more
  # information. The example below shows the configuration for GCP/GKE.
  components:
    pilot:
      k8s:
        service:
          type: LoadBalancer
        service_annotations:
          cloud.google.com/load-balancer-type: Internal
EOF
{{< /text >}}

Apply the main cluster's configuration.
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl --context=${MAIN_CLUSTER_CTX} manifest apply -f istio-main-cluster.yaml
|
||||
{{< /text >}}
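
One way to check that the control plane has finished rolling out is to wait on the `istiod`
deployment. This is a minimal readiness check, assuming the default `istio-system`
installation namespace:

{{< text bash >}}
$ kubectl --context=${MAIN_CLUSTER_CTX} -n istio-system wait --for=condition=available deployment/istiod --timeout=300s
{{< /text >}}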

Wait for the control plane to be ready before proceeding, then set the `ISTIOD_REMOTE_EP`
environment variable based on which cross-cluster control plane access option was selected earlier:

* Option (1) - Use the existing `istio-ingressgateway`, shared with data traffic.

    {{< text bash >}}
    $ export ISTIOD_REMOTE_EP=$(kubectl --context=${MAIN_CLUSTER_CTX} -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    {{< /text >}}

* Option (2) - Use a cloud provider’s internal load balancer on the Istiod service.

    {{< text bash >}}
    $ export ISTIOD_REMOTE_EP=$(kubectl --context=${MAIN_CLUSTER_CTX} -n istio-system get svc istiod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    {{< /text >}}
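
Whichever option you chose, confirm that the variable now contains an address. If the
load balancer is still being provisioned, the command below prints an empty string;
wait a little longer and retry:

{{< text bash >}}
$ echo "${ISTIOD_REMOTE_EP}"
{{< /text >}}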

### Remote cluster

Create the remote cluster's configuration.

{{< text yaml >}}
cat <<EOF> istio-remote0-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      # The remote cluster's name and network name must match the values specified in the
      # mesh network configuration of the main cluster.
      multiCluster:
        clusterName: ${REMOTE_CLUSTER_NAME}
      network: ${REMOTE_CLUSTER_NETWORK}

      # Replace ISTIOD_REMOTE_EP with the value of ISTIOD_REMOTE_EP set earlier.
      remotePilotAddress: ${ISTIOD_REMOTE_EP}
EOF
{{< /text >}}

Apply the remote cluster configuration.

{{< text bash >}}
$ istioctl --context ${REMOTE_CLUSTER_CTX} manifest apply -f istio-remote0-cluster.yaml
{{< /text >}}
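
You can check that the remote cluster's Istio components are starting up before moving on.
The exact set of pods depends on the configuration in use, so treat this as a rough sanity
check rather than a definitive list:

{{< text bash >}}
$ kubectl --context=${REMOTE_CLUSTER_CTX} get pods -n istio-system
{{< /text >}}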

## Cross-cluster load balancing

### Configure ingress gateways

{{< tip >}}
Skip this step and move on to configuring the service registries if both clusters are on the same network.
{{< /tip >}}

Cross-network traffic is securely routed through each destination cluster's ingress gateway. When clusters in a mesh are
on different networks, you need to configure port 443 on the ingress gateway to pass incoming traffic through to the
target service specified in a request's SNI header, for SNI values of the _local_
top-level domain (i.e., the [Kubernetes DNS domain](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)).
Mutual TLS connections will be used all the way from the source to the destination sidecar.

Apply the following configuration to each cluster.

{{< text yaml >}}
cat <<EOF> cluster-aware-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl --context=${MAIN_CLUSTER_CTX} apply -f cluster-aware-gateway.yaml
$ kubectl --context=${REMOTE_CLUSTER_CTX} apply -f cluster-aware-gateway.yaml
{{< /text >}}
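
To confirm the gateway configuration was accepted in both clusters, you can list the
`Gateway` resource in each cluster's `istio-system` namespace:

{{< text bash >}}
$ kubectl --context=${MAIN_CLUSTER_CTX} -n istio-system get gateway cluster-aware-gateway
$ kubectl --context=${REMOTE_CLUSTER_CTX} -n istio-system get gateway cluster-aware-gateway
{{< /text >}}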

### Configure cross-cluster service registries

To enable cross-cluster load balancing, the Istio control plane requires
access to all clusters in the mesh to discover services, endpoints, and
pod attributes. To configure access, create a secret for each remote
cluster with credentials to access the remote cluster's `kube-apiserver` and
install it in the main cluster. This secret uses the credentials of the
`istio-reader-service-account` in the remote cluster. `--name` specifies the
remote cluster's name. It must match the cluster name in the main cluster's
IstioOperator configuration.

{{< text bash >}}
$ istioctl x create-remote-secret --context=${REMOTE_CLUSTER_CTX} --name ${REMOTE_CLUSTER_NAME} | \
    kubectl apply -f - --context=${MAIN_CLUSTER_CTX}
{{< /text >}}

{{< warning >}}
Do not create a remote secret for the local cluster running the Istio control plane. Istio is always
aware of the local cluster's Kubernetes credentials.
{{< /warning >}}
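
To verify that the secret was installed, list the secrets in the main cluster's `istio-system`
namespace. At the time of writing, `create-remote-secret` labels the secret with
`istio/multiCluster=true`; if your version labels it differently, simply look for the newly
created secret:

{{< text bash >}}
$ kubectl --context=${MAIN_CLUSTER_CTX} get secrets -n istio-system -l istio/multiCluster=true
{{< /text >}}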

## Deploy an example service

Deploy two instances of the `helloworld` service, one in each cluster. The difference
between the two instances is the version of their `helloworld` image.

### Deploy helloworld v2 in the remote cluster

1. Create a `sample` namespace with a sidecar auto-injection label:

    {{< text bash >}}
    $ kubectl create --context=${REMOTE_CLUSTER_CTX} namespace sample
    $ kubectl label --context=${REMOTE_CLUSTER_CTX} namespace sample istio-injection=enabled
    {{< /text >}}

1. Deploy `helloworld v2`:

    {{< text bash >}}
    $ kubectl create --context=${REMOTE_CLUSTER_CTX} -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample
    $ kubectl create --context=${REMOTE_CLUSTER_CTX} -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample
    {{< /text >}}

1. Confirm `helloworld v2` is running:

    {{< text bash >}}
    $ kubectl get pod --context=${REMOTE_CLUSTER_CTX} -n sample
    NAME                             READY   STATUS    RESTARTS   AGE
    helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s
    {{< /text >}}

### Deploy helloworld v1 in the main cluster

1. Create a `sample` namespace with a sidecar auto-injection label:

    {{< text bash >}}
    $ kubectl create --context=${MAIN_CLUSTER_CTX} namespace sample
    $ kubectl label --context=${MAIN_CLUSTER_CTX} namespace sample istio-injection=enabled
    {{< /text >}}

1. Deploy `helloworld v1`:

    {{< text bash >}}
    $ kubectl create --context=${MAIN_CLUSTER_CTX} -f @samples/helloworld/helloworld.yaml@ -l app=helloworld -n sample
    $ kubectl create --context=${MAIN_CLUSTER_CTX} -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample
    {{< /text >}}

1. Confirm `helloworld v1` is running:

    {{< text bash >}}
    $ kubectl get pod --context=${MAIN_CLUSTER_CTX} -n sample
    NAME                            READY   STATUS    RESTARTS   AGE
    helloworld-v1-d4557d97b-pv2hr   2/2     Running   0          40s
    {{< /text >}}

### Cross-cluster routing in action

To demonstrate how traffic to the `helloworld` service is distributed across the two clusters,
call the `helloworld` service from another in-mesh `sleep` service.

1. Deploy the `sleep` service in both clusters:

    {{< text bash >}}
    $ kubectl apply --context=${MAIN_CLUSTER_CTX} -f @samples/sleep/sleep.yaml@ -n sample
    $ kubectl apply --context=${REMOTE_CLUSTER_CTX} -f @samples/sleep/sleep.yaml@ -n sample
    {{< /text >}}

1. Wait for the `sleep` service to start in each cluster:

    {{< text bash >}}
    $ kubectl get pod --context=${MAIN_CLUSTER_CTX} -n sample -l app=sleep
    sleep-754684654f-n6bzf           2/2     Running   0          5s
    {{< /text >}}

    {{< text bash >}}
    $ kubectl get pod --context=${REMOTE_CLUSTER_CTX} -n sample -l app=sleep
    sleep-754684654f-dzl9j           2/2     Running   0          5s
    {{< /text >}}

1. Call the `helloworld.sample` service several times from the main cluster (a scripted variant of these calls is sketched after this list):

    {{< text bash >}}
    $ kubectl exec --context=${MAIN_CLUSTER_CTX} -it -n sample -c sleep $(kubectl get pod --context=${MAIN_CLUSTER_CTX} -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    {{< /text >}}

1. Call the `helloworld.sample` service several times from the remote cluster:

    {{< text bash >}}
    $ kubectl exec --context=${REMOTE_CLUSTER_CTX} -it -n sample -c sleep $(kubectl get pod --context=${REMOTE_CLUSTER_CTX} -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
    {{< /text >}}
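
If you prefer not to repeat the command by hand, the same call can be wrapped in a small
loop. This sketch runs ten requests from the main cluster; adjust the context to run it
against the remote cluster:

{{< text bash >}}
$ for i in $(seq 1 10); do kubectl exec --context=${MAIN_CLUSTER_CTX} -n sample -c sleep $(kubectl get pod --context=${MAIN_CLUSTER_CTX} -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- curl -s helloworld.sample:5000/hello; done
{{< /text >}}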

If set up correctly, the traffic to the `helloworld.sample` service is distributed between instances
on the main and remote clusters, resulting in responses with either `v1` or `v2` in the body:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
{{< /text >}}

You can also verify the IP addresses used to access the endpoints by printing the log of the `sleep` pod's `istio-proxy` container.

{{< text bash >}}
$ kubectl logs --context=${MAIN_CLUSTER_CTX} -n sample $(kubectl get pod --context=${MAIN_CLUSTER_CTX} -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
[2018-11-25T12:37:52.077Z] "GET /hello HTTP/1.1" 200 - 0 60 190 189 "-" "curl/7.60.0" "6e096efe-f550-4dfa-8c8c-ba164baf4679" "helloworld.sample:5000" "192.23.120.32:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59496 -
[2018-11-25T12:38:06.745Z] "GET /hello HTTP/1.1" 200 - 0 60 171 170 "-" "curl/7.60.0" "6f93c9cc-d32a-4878-b56a-086a740045d2" "helloworld.sample:5000" "10.10.0.90:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.20.194.146:5000 10.10.0.89:59646 -
{{< /text >}}

In the main cluster, the gateway IP of the remote cluster (`192.23.120.32:15443`) was logged when v2 was called and
the instance IP in the main cluster (`10.10.0.90:5000`) was logged when v1 was called.

{{< text bash >}}
$ kubectl logs --context=${REMOTE_CLUSTER_CTX} -n sample $(kubectl get pod --context=${REMOTE_CLUSTER_CTX} -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}') istio-proxy
[2019-05-25T08:06:11.468Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 177 176 "-" "curl/7.60.0" "58cfb92b-b217-4602-af67-7de8f63543d8" "helloworld.sample:5000" "192.168.1.246:15443" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36840 -
[2019-05-25T08:06:12.834Z] "GET /hello HTTP/1.1" 200 - "-" 0 60 181 180 "-" "curl/7.60.0" "ce480b56-fafd-468b-9996-9fea5257cb1e" "helloworld.sample:5000" "10.32.0.9:5000" outbound|5000||helloworld.sample.svc.cluster.local - 10.107.117.235:5000 10.32.0.10:36886 -
{{< /text >}}

In the remote cluster, the gateway IP of the main cluster (`192.168.1.246:15443`) was logged when v1 was called and
the instance IP in the remote cluster (`10.32.0.9:5000`) was logged when v2 was called.

**Congratulations!**

You have configured a multicluster Istio mesh, installed the samples, and verified cross-cluster traffic routing.

## Additional considerations

### Automatic injection

The Istiod service in each cluster provides automatic sidecar injection for proxies in its own cluster.
Namespaces must be labeled in each cluster following the
[automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection) guide.

### Access services from different clusters

Kubernetes resolves DNS on a cluster basis. Because the DNS resolution is tied
to the cluster, you must define the service object in every cluster where a
client runs, regardless of the location of the service's endpoints. To ensure
this is the case, duplicate the service object to every cluster using
`kubectl`. Duplication ensures Kubernetes can resolve the service name in any
cluster. Since the service objects are defined in a namespace, you must define
the namespace if it doesn't exist, and include it in the service definitions in
all clusters.
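
For example, suppose a service `foo` in namespace `bar` (hypothetical names, used only for
illustration) has endpoints only in the remote cluster. Clients in the main cluster can only
resolve `foo.bar` if the namespace and the service object also exist there, so you would create
both in each cluster:

{{< text yaml >}}
cat <<EOF> foo-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: foo
  namespace: bar
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: foo
EOF
{{< /text >}}

{{< text bash >}}
$ kubectl --context=${MAIN_CLUSTER_CTX} create namespace bar
$ kubectl --context=${MAIN_CLUSTER_CTX} apply -f foo-service.yaml
$ kubectl --context=${REMOTE_CLUSTER_CTX} create namespace bar
$ kubectl --context=${REMOTE_CLUSTER_CTX} apply -f foo-service.yaml
{{< /text >}}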

### Security

The Istiod service in each cluster provides CA functionality to proxies in its own
cluster. The CA setup earlier ensures proxies across clusters in the mesh have the
same root of trust.
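
If you plugged certificates into the clusters as part of that CA setup (Istio's plug-in CA
instructions store them in a `cacerts` secret), you can spot-check that both clusters carry
the same root certificate; the two commands below should print identical hashes. This is a
sketch that assumes the `cacerts` secret layout used by the plug-in CA instructions:

{{< text bash >}}
$ kubectl --context=${MAIN_CLUSTER_CTX} -n istio-system get secret cacerts -o jsonpath='{.data.root-cert\.pem}' | sha256sum
$ kubectl --context=${REMOTE_CLUSTER_CTX} -n istio-system get secret cacerts -o jsonpath='{.data.root-cert\.pem}' | sha256sum
{{< /text >}}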

## Uninstalling the remote cluster

To uninstall the remote cluster, run the following commands:

{{< text bash >}}
$ istioctl --context=${REMOTE_CLUSTER_CTX} manifest generate <your original remote configuration> | \
    kubectl --context=${REMOTE_CLUSTER_CTX} delete -f -

$ istioctl x create-remote-secret --context=${REMOTE_CLUSTER_CTX} --name ${REMOTE_CLUSTER_NAME} | \
    kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
{{< /text >}}

@@ -37,9 +37,9 @@ concise list of things you should know before upgrading your deployment to Istio

 - **Improved Multicluster Integration**. Consolidated the 1.0 `istio-remote`
   chart previously used for
-  [multicluster VPN](/docs/setup/install/multicluster/shared-vpn/) and
-  [multicluster split horizon](/docs/setup/install/multicluster/shared-gateways/) remote cluster installation
-  into the Istio Helm chart simplifying the operational experience.
+  [multicluster VPN](https://archive.istio.io/v1.1/docs/setup/kubernetes/install/multicluster/vpn/) and
+  [multicluster split horizon](https://archive.istio.io/v1.1/docs/examples/multicluster/split-horizon-eds/)
+  remote cluster installation into the Istio Helm chart simplifying the operational experience.

 ## Traffic management

@@ -21,9 +21,9 @@ when installing both `istio-init` and Istio charts with either `template` or `ti

 - Many installation options have been added, removed, or changed. Refer to [Installation Options Changes](/news/releases/1.1.x/announcing-1.1/helm-changes/) for a detailed
   summary of the changes.

-- The 1.0 `istio-remote` chart used for [multicluster VPN](/docs/setup/install/multicluster/shared-vpn/) and
-  [multicluster shared gateways](/docs/setup/install/multicluster/shared-gateways/) remote cluster installation has been consolidated into the Istio chart. To generate
-  an equivalent `istio-remote` chart, use the `--set global.istioRemote=true` flag.
+- The 1.0 `istio-remote` chart used for [multicluster VPN](https://archive.istio.io/v1.1/docs/setup/kubernetes/install/multicluster/vpn/) and
+  [multicluster split horizon](https://archive.istio.io/v1.1/docs/examples/multicluster/split-horizon-eds/) remote cluster
+  installation has been consolidated into the Istio chart. To generate an equivalent `istio-remote` chart, use the `--set global.istioRemote=true` flag.

 - Addons are no longer exposed via separate load balancers. Instead addons can now be optionally exposed via the Ingress Gateway. To expose an addon via the
   Ingress Gateway, please follow the [Remotely Accessing Telemetry Addons](/docs/tasks/observability/gateways/) guide.