Install guide for ambient multi network (#16709)

* Install guide for ambient multi network

Signed-off-by: Jackie Elliott <jaellio@microsoft.com>

* Adopt Mitch's changes

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Fix helm cleanup

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Stash

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Address the rest of the feedback

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Change some of the tips

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Make gen

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Spellcheck

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Use latest for news link

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Add prev and next designations to frontmatter

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Fixup

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

* Put the breadcrumb on the right pages

Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>

---------

Signed-off-by: Jackie Elliott <jaellio@microsoft.com>
Signed-off-by: Keith Mattix II <keithmattix@microsoft.com>
Co-authored-by: Keith Mattix II <keithmattix@microsoft.com>
This commit is contained in:
Jackie Maertens (Elliott) 2025-08-12 18:40:29 -04:00 committed by GitHub
parent 9dca650cc3
commit dd96a148ba
GPG Key ID: B5690EEEBB952194
17 changed files with 1665 additions and 39 deletions

.gitignore vendored

@ -41,3 +41,9 @@ archived_version
# Local Netlify folder
.netlify
# Local artifacts when running tests
artifacts
# Certs generated during tests
certs


@ -8,6 +8,7 @@ aliases:
owner: istio/wg-environments-maintainers
test: n/a
list_below: yes
keywords: [kubernetes,ambient,install]
---
{{< tip >}}


@ -0,0 +1,72 @@
---
title: Install Multicluster
description: Install an Istio mesh in ambient mode across multiple Kubernetes clusters.
weight: 40
keywords: [kubernetes,multicluster,ambient]
simple_list: true
content_above: true
test: table-of-contents
owner: istio/wg-environments-maintainers
next: /docs/ambient/install/multicluster/before-you-begin
---
Follow this guide to install an Istio {{< gloss "ambient" >}}ambient service mesh{{< /gloss >}}
that spans multiple {{< gloss "cluster" >}}clusters{{< /gloss >}}.
## Current Status and Limitations
{{< warning >}}
**Ambient multicluster is currently in alpha status** and has significant limitations.
This feature is under active development and should not be used in production environments.
{{< /warning >}}
Before proceeding with an ambient multicluster installation, it is critical to understand
the current state and limitations of this feature.
### Supported Configurations
Currently, ambient multicluster only supports multi-primary deployments across multiple networks.
### Critical Limitations
#### Network Topology Restrictions
**Multi-cluster single-network configurations are untested and may be broken**
- Use caution when deploying ambient across clusters that share the same network
- Only multi-network configurations are supported
#### Control Plane Limitations
**Primary-remote configurations are not currently supported**
- All clusters in the mesh must be primary clusters
- Configurations with one or more remote clusters will not work correctly
#### Waypoint Requirements
**Universal waypoint deployments are assumed across clusters**
- All clusters must have identically named waypoint deployments
- Waypoint configurations must be synchronized manually across clusters (e.g. using Flux, ArgoCD, or similar tools)
- Traffic routing relies on consistent waypoint naming conventions
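Because waypoint configuration is not synchronized automatically, the same manifest must be applied to every cluster. As an illustrative sketch (the waypoint name and namespace here are hypothetical), each cluster would receive an identical `Gateway` such as:

{{< text syntax=yaml snip_id=none >}}
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: sample-waypoint
  namespace: sample
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
{{< /text >}}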
#### Service Visibility and Scoping
**Service scope configurations are not read from across clusters**
- Only the local cluster's service scope configuration is used as the source of truth
- Remote cluster service scopes are not respected, which can lead to unexpected traffic behavior
- Cross-cluster service discovery may not respect intended service boundaries
**If a service's waypoint is marked as global, that service will also be global**
- This can lead to unintended cross-cluster traffic if not managed carefully
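For example, a service is marked as global with a label on the `Service` itself (the service and namespace names here are illustrative); be aware that a global waypoint makes its services global even without this label:

{{< text syntax=bash snip_id=none >}}
$ kubectl --context="${CTX_CLUSTER1}" label service helloworld -n sample istio.io/global="true"
{{< /text >}}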
#### Gateway Limitations
**Ambient east-west gateways currently only support meshed mTLS traffic**
- `istiod` cannot currently be exposed across networks using an ambient east-west gateway; a classic (non-ambient) east-west gateway can still be used for this
{{< tip >}}
As ambient multicluster matures, many of these limitations will be addressed.
Check the [Istio release notes](https://istio.io/latest/news/) for updates on
ambient multicluster capabilities.
{{< /tip >}}


@ -0,0 +1,87 @@
---
title: Before you begin
description: Initial steps before installing Istio on multiple clusters.
weight: 1
keywords: [kubernetes,multicluster,ambient]
test: n/a
owner: istio/wg-environments-maintainers
next: /docs/ambient/install/multicluster/multi-primary_multi-network
prev: /docs/ambient/install/multicluster
---
{{< boilerplate alpha >}}
Before you begin a multicluster installation, review the
[deployment models guide](/docs/ops/deployment/deployment-models)
which describes the foundational concepts used throughout this guide.
In addition, review the requirements and perform the initial steps below.
## Requirements
### Cluster
This guide requires two Kubernetes clusters with support for LoadBalancer `Services`, running any of the
[supported Kubernetes versions](/docs/releases/supported-releases#support-status-of-istio-releases): {{< supported_kubernetes_versions >}}.
### API Server Access
The API Server in each cluster must be accessible to the other clusters in the
mesh. Many cloud providers make API Servers publicly accessible via network
load balancers (NLB). The ambient east-west gateway cannot be used to expose
the API server as it only supports double HBONE traffic. A non-ambient
[east-west](https://en.wikipedia.org/wiki/East-west_traffic) gateway could be
used to enable access to the API Server.
## Environment Variables
This guide will refer to two clusters: `cluster1` and `cluster2`. The following
environment variables will be used throughout to simplify the instructions:
Variable | Description
-------- | -----------
`CTX_CLUSTER1` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `cluster1` cluster.
`CTX_CLUSTER2` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `cluster2` cluster.
Set the two variables before proceeding:
{{< text syntax=bash snip_id=none >}}
$ export CTX_CLUSTER1=<your cluster1 context>
$ export CTX_CLUSTER2=<your cluster2 context>
{{< /text >}}
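Before proceeding, you can sanity-check that both contexts actually exist in your kubeconfig:

{{< text syntax=bash snip_id=none >}}
$ kubectl config get-contexts -o name | grep -E "^(${CTX_CLUSTER1}|${CTX_CLUSTER2})$"
{{< /text >}}

If either context is missing from the output, verify the variable values before continuing.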
## Configure Trust
A multicluster service mesh deployment requires that you establish trust
between all clusters in the mesh. Depending on the requirements for your
system, there may be multiple options available for establishing trust.
See [certificate management](/docs/tasks/security/cert-management/) for
detailed descriptions and instructions for all available options.
Depending on which option you choose, the installation instructions for
Istio may change slightly.
This guide will assume that you use a common root to generate intermediate
certificates for each primary cluster.
Follow the [instructions](/docs/tasks/security/cert-management/plugin-ca-cert/)
to generate and push a CA certificate secret to both the `cluster1` and `cluster2`
clusters.
{{< tip >}}
If you currently have a single cluster with a self-signed CA (as described
in [Getting Started](/docs/setup/getting-started/)), you need to
change the CA using one of the methods described in
[certificate management](/docs/tasks/security/cert-management/). Changing the
CA typically requires reinstalling Istio. The installation instructions
below may have to be altered based on your choice of CA.
{{< /tip >}}
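As a condensed sketch of those instructions (run from the root of an Istio release, which provides `tools/certs/Makefile.selfsigned.mk`; see the linked guide for the authoritative steps):

{{< text syntax=bash snip_id=none >}}
$ mkdir -p certs
$ pushd certs
$ make -f ../tools/certs/Makefile.selfsigned.mk root-ca
$ make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
$ make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
$ kubectl --context="${CTX_CLUSTER1}" create namespace istio-system
$ kubectl --context="${CTX_CLUSTER1}" create secret generic cacerts -n istio-system \
    --from-file=cluster1/ca-cert.pem \
    --from-file=cluster1/ca-key.pem \
    --from-file=cluster1/root-cert.pem \
    --from-file=cluster1/cert-chain.pem
$ popd
{{< /text >}}

Repeat the namespace and secret creation for `cluster2`, using the `cluster2/` certificates and `${CTX_CLUSTER2}`.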
## Next steps
You're now ready to install an Istio ambient mesh across multiple clusters.
- [Install Multi-Primary on Different Networks](/docs/ambient/install/multicluster/multi-primary_multi-network)
{{< tip >}}
If you plan on installing Istio multi-cluster using Helm, follow the
[Helm prerequisites](/docs/setup/install/helm/#prerequisites) in the Helm install guide first.
{{< /tip >}}


@ -0,0 +1,197 @@
#!/usr/bin/env bash
# shellcheck disable=SC1090,SC2034,SC2154
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Initialize KUBECONFIG_FILES and KUBE_CONTEXTS
_set_kube_vars
source content/en/docs/ambient/install/multicluster/verify/snips.sh
# set_single_network_vars initializes all variables for a single network config.
function set_single_network_vars
{
export KUBECONFIG_CLUSTER1="${KUBECONFIG_FILES[0]}"
export KUBECONFIG_CLUSTER2="${KUBECONFIG_FILES[1]}"
export CTX_CLUSTER1="${KUBE_CONTEXTS[0]}"
export CTX_CLUSTER2="${KUBE_CONTEXTS[1]}"
}
# set_multi_network_vars initializes all variables for a multi-network config.
function set_multi_network_vars
{
export KUBECONFIG_CLUSTER1="${KUBECONFIG_FILES[0]}"
export KUBECONFIG_CLUSTER2="${KUBECONFIG_FILES[2]}"
export CTX_CLUSTER1="${KUBE_CONTEXTS[0]}"
export CTX_CLUSTER2="${KUBE_CONTEXTS[2]}"
}
# configure_trust creates a hierarchy of certificates: a common self-signed
# root CA plus intermediate CA certs for cluster1 and cluster2.
function configure_trust
{
# Keeps the certs under a separate directory.
mkdir -p certs
pushd certs || exit
# Create the root cert.
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
# Create and deploy intermediate certs for cluster1 and cluster2.
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
# Create the istio-system namespace in each cluster so that we can create the secrets.
kubectl --context="$CTX_CLUSTER1" create namespace istio-system
kubectl --context="$CTX_CLUSTER2" create namespace istio-system
# Deploy secret to each cluster
kubectl --context="$CTX_CLUSTER1" create secret generic cacerts -n istio-system \
--from-file=cluster1/ca-cert.pem \
--from-file=cluster1/ca-key.pem \
--from-file=cluster1/root-cert.pem \
--from-file=cluster1/cert-chain.pem
kubectl --context="$CTX_CLUSTER2" create secret generic cacerts -n istio-system \
--from-file=cluster2/ca-cert.pem \
--from-file=cluster2/ca-key.pem \
--from-file=cluster2/root-cert.pem \
--from-file=cluster2/cert-chain.pem
popd || exit # Return to the previous directory.
}
# cleanup_istioctl removes all resources created by the tests with istioctl.
function cleanup_istioctl
{
# Remove temp files and generated certs (certs is a directory, so -r is required).
rm -f cluster1.yaml cluster2.yaml
rm -rf certs
# Cleanup both clusters concurrently
cleanup_cluster1_istioctl &
cleanup_cluster2_istioctl &
wait
snip_delete_crds
}
# cleanup_cluster1_istioctl removes the istio-system and sample namespaces on CLUSTER1 with istioctl.
function cleanup_cluster1_istioctl
{
echo y | istioctl uninstall --revision=default --context="${CTX_CLUSTER1}"
kubectl delete ns istio-system sample --context="${CTX_CLUSTER1}" --ignore-not-found
}
# cleanup_cluster2_istioctl removes the istio-system and sample namespaces on CLUSTER2 with istioctl.
function cleanup_cluster2_istioctl
{
echo y | istioctl uninstall --revision=default --context="${CTX_CLUSTER2}"
kubectl delete ns istio-system sample --context="${CTX_CLUSTER2}" --ignore-not-found
}
# verify_load_balancing verifies that traffic is load balanced properly
# between CLUSTER1 and CLUSTER2.
function verify_load_balancing
{
# Verify istiod is synced
echo "Verifying istiod is synced to remote cluster."
_verify_like snip_verify_multicluster_1 "$snip_verify_multicluster_1_out"
# Deploy the HelloWorld service.
snip_deploy_the_helloworld_service_1
snip_deploy_the_helloworld_service_2
snip_deploy_the_helloworld_service_3
# Deploy HelloWorld v1 and v2
snip_deploy_helloworld_v1_1
snip_deploy_helloworld_v2_1
# Deploy curl
snip_deploy_curl_1
# Wait for all the deployments.
_wait_for_deployment sample helloworld-v1 "${CTX_CLUSTER1}"
_wait_for_deployment sample curl "${CTX_CLUSTER1}"
_wait_for_deployment sample helloworld-v2 "${CTX_CLUSTER2}"
_wait_for_deployment sample curl "${CTX_CLUSTER2}"
# Expose the helloworld service in both clusters.
echo "Exposing helloworld in cluster1"
kubectl --context="${CTX_CLUSTER1}" label svc helloworld -n sample istio.io/global="true"
echo "Exposing helloworld in cluster2"
kubectl --context="${CTX_CLUSTER2}" label svc helloworld -n sample istio.io/global="true"
# Verify everything is deployed as expected.
VERIFY_TIMEOUT=0 # Don't retry.
echo "Verifying helloworld v1 deployment"
_verify_like snip_deploy_helloworld_v1_2 "$snip_deploy_helloworld_v1_2_out"
echo "Verifying helloworld v2 deployment"
_verify_like snip_deploy_helloworld_v2_2 "$snip_deploy_helloworld_v2_2_out"
echo "Verifying curl deployment in ${CTX_CLUSTER1}"
_verify_like snip_deploy_curl_2 "$snip_deploy_curl_2_out"
echo "Verifying curl deployment in ${CTX_CLUSTER2}"
_verify_like snip_deploy_curl_3 "$snip_deploy_curl_3_out"
unset VERIFY_TIMEOUT # Restore default
local EXPECTED_RESPONSE_FROM_CLUSTER1="Hello version: v1, instance:"
local EXPECTED_RESPONSE_FROM_CLUSTER2="Hello version: v2, instance:"
# Verify we hit both clusters from CLUSTER1
echo "Verifying load balancing from ${CTX_CLUSTER1}"
_verify_contains snip_verifying_crosscluster_traffic_1 "$EXPECTED_RESPONSE_FROM_CLUSTER1"
_verify_contains snip_verifying_crosscluster_traffic_1 "$EXPECTED_RESPONSE_FROM_CLUSTER2"
# Verify we hit both clusters from CLUSTER2
echo "Verifying load balancing from ${CTX_CLUSTER2}"
_verify_contains snip_verifying_crosscluster_traffic_3 "$EXPECTED_RESPONSE_FROM_CLUSTER1"
_verify_contains snip_verifying_crosscluster_traffic_3 "$EXPECTED_RESPONSE_FROM_CLUSTER2"
}
# For Helm multi-cluster installation steps
function create_istio_system_ns
{
snip_create_istio_system_namespace_cluster_1
snip_create_istio_system_namespace_cluster_2
}
function setup_helm_repo
{
snip_setup_helm_repo_cluster_1
snip_setup_helm_repo_cluster_2
}
snip_create_istio_system_namespace_cluster_1() {
kubectl create namespace istio-system --context "${CTX_CLUSTER1}"
}
snip_create_istio_system_namespace_cluster_2() {
kubectl create namespace istio-system --context "${CTX_CLUSTER2}"
}
snip_setup_helm_repo_cluster_1() {
helm repo add istio https://istio-release.storage.googleapis.com/charts --kube-context "${CTX_CLUSTER1}"
helm repo update --kube-context "${CTX_CLUSTER1}"
}
snip_setup_helm_repo_cluster_2() {
helm repo add istio https://istio-release.storage.googleapis.com/charts --kube-context "${CTX_CLUSTER2}"
helm repo update --kube-context "${CTX_CLUSTER2}"
}
snip_delete_sample_ns_cluster_1() {
kubectl delete namespace sample --context "${CTX_CLUSTER1}"
}
snip_delete_sample_ns_cluster_2() {
kubectl delete namespace sample --context "${CTX_CLUSTER2}"
}

File diff suppressed because one or more lines are too long



@ -0,0 +1,118 @@
#!/usr/bin/env bash
# shellcheck disable=SC1090,SC2154
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# @setup multicluster
set -e
set -u
set -o pipefail
source content/en/docs/ambient/install/multicluster/common.sh
source "tests/util/gateway-api.sh"
set_multi_network_vars
setup_helm_repo
function install_istio_on_cluster1_helm {
echo "Installing Gateway API CRDs on Primary cluster: ${CTX_CLUSTER1}"
install_gateway_api_crds "${CTX_CLUSTER1}"
echo "Installing Istio on Primary cluster: ${CTX_CLUSTER1}"
snip_set_the_default_network_for_cluster1_1
_rewrite_helm_repo snip_configure_cluster1_as_a_primary_3
_rewrite_helm_repo snip_configure_cluster1_as_a_primary_4
_rewrite_helm_repo snip_install_cni_cluster1
_rewrite_helm_repo snip_install_ztunnel_cluster1
echo "Creating the east-west gateway"
snip_install_an_ambient_eastwest_gateway_in_cluster1_2
snip_install_an_ambient_eastwest_gateway_in_cluster1_3
echo "Waiting for the east-west gateway to have an external IP"
_verify_like snip_install_an_ambient_eastwest_gateway_in_cluster1_4 "$snip_install_an_ambient_eastwest_gateway_in_cluster1_4_out"
}
function install_istio_on_cluster2_helm {
echo "Installing Gateway API CRDs on Primary cluster: ${CTX_CLUSTER2}"
install_gateway_api_crds "${CTX_CLUSTER2}"
echo "Installing Istio on Primary cluster: ${CTX_CLUSTER2}"
snip_set_the_default_network_for_cluster2_1
_rewrite_helm_repo snip_configure_cluster2_as_a_primary_3
_rewrite_helm_repo snip_configure_cluster2_as_a_primary_4
_rewrite_helm_repo snip_install_cni_cluster2
_rewrite_helm_repo snip_install_ztunnel_cluster2
echo "Creating the east-west gateway"
snip_install_an_ambient_eastwest_gateway_in_cluster2_2
snip_install_an_ambient_eastwest_gateway_in_cluster2_3
echo "Waiting for the east-west gateway to have an external IP"
_verify_like snip_install_an_ambient_eastwest_gateway_in_cluster2_4 "$snip_install_an_ambient_eastwest_gateway_in_cluster2_4_out"
}
function install_istio_helm {
# Install Istio on the 2 clusters. Executing in
# parallel to reduce test time.
install_istio_on_cluster1_helm &
install_istio_on_cluster2_helm &
wait
}
function enable_endpoint_discovery {
snip_enable_endpoint_discovery_1
snip_enable_endpoint_discovery_2
}
time configure_trust
time install_istio_helm
time enable_endpoint_discovery
time verify_load_balancing
# @cleanup
source content/en/docs/ambient/install/multicluster/common.sh
set_multi_network_vars
function cleanup_cluster1_helm {
snip_cleanup_3
snip_cleanup_4
snip_delete_sample_ns_cluster_1
}
function cleanup_cluster2_helm {
snip_cleanup_5
snip_cleanup_6
snip_delete_sample_ns_cluster_2
}
function cleanup_helm {
cleanup_cluster1_helm
cleanup_cluster2_helm
snip_delete_crds
snip_delete_gateway_crds
}
time cleanup_helm
# Everything should be removed once cleanup completes. Use a small
# timeout for comparing cluster snapshots before/after the test.
export VERIFY_TIMEOUT=20


@ -0,0 +1,440 @@
---
title: Install ambient multi-primary on different networks
description: Install an Istio ambient mesh across multiple primary clusters on different networks.
weight: 30
keywords: [kubernetes,multicluster,ambient]
test: yes
owner: istio/wg-environments-maintainers
next: /docs/ambient/install/multicluster/verify
prev: /docs/ambient/install/multicluster/before-you-begin
---
{{< boilerplate alpha >}}
{{< tip >}}
This guide requires installation of the Gateway API CRDs.
{{< boilerplate gateway-api-install-crds >}}
{{< /tip >}}
Follow this guide to install the Istio control plane on both `cluster1` and
`cluster2`, making each a {{< gloss >}}primary cluster{{< /gloss >}} (this is currently the only supported configuration in ambient mode). Cluster
`cluster1` is on the `network1` network, while `cluster2` is on the
`network2` network. This means there is no direct connectivity between pods
across cluster boundaries.
Before proceeding, be sure to complete the steps under
[before you begin](/docs/ambient/install/multicluster/before-you-begin).
{{< boilerplate multi-cluster-with-metallb >}}
In this configuration, both `cluster1` and `cluster2` observe the API Servers
in each cluster for endpoints.
Service workloads across cluster boundaries communicate indirectly, via
dedicated gateways for [east-west](https://en.wikipedia.org/wiki/East-west_traffic)
traffic. The gateway in each cluster must be reachable from the other cluster.
{{< image width="75%"
link="arch.svg"
caption="Multiple primary clusters on separate networks"
>}}
## Set the default network for `cluster1`
If the `istio-system` namespace has already been created, set the cluster's network label on it:
{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
{{< /text >}}
## Configure `cluster1` as a primary
Create the `istioctl` configuration for `cluster1`:
{{< tabset category-name="multicluster-install-type-cluster-1" >}}
{{< tab name="IstioOperator" category-value="iop" >}}
Install Istio as primary in `cluster1` using istioctl and the `IstioOperator` API.
{{< text bash >}}
$ cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: ambient
components:
pilot:
k8s:
env:
- name: AMBIENT_ENABLE_MULTI_NETWORK
value: "true"
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster1
network: network1
EOF
{{< /text >}}
Apply the configuration to `cluster1`:
{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
{{< /text >}}
{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}
Install Istio as primary in `cluster1` using the following Helm commands:
Install the `base` chart in `cluster1`:
{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}
Then, install the `istiod` chart in `cluster1` with the following multi-cluster settings:
{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER1}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster1 --set global.network=network1 --set profile=ambient --set env.AMBIENT_ENABLE_MULTI_NETWORK="true"
{{< /text >}}
Next, install the CNI node agent in ambient mode:
{{< text syntax=bash snip_id=install_cni_cluster1 >}}
$ helm install istio-cni istio/cni -n istio-system --kube-context "${CTX_CLUSTER1}" --set profile=ambient
{{< /text >}}
Finally, install the ztunnel data plane:
{{< text syntax=bash snip_id=install_ztunnel_cluster1 >}}
$ helm install ztunnel istio/ztunnel -n istio-system --kube-context "${CTX_CLUSTER1}" --set multiCluster.clusterName=cluster1 --set global.network=network1
{{< /text >}}
{{< /tab >}}
{{< /tabset >}}
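Before installing the gateway, you can optionally confirm that the control plane components are running in `cluster1`:

{{< text syntax=bash snip_id=none >}}
$ kubectl --context="${CTX_CLUSTER1}" get pods -n istio-system
{{< /text >}}

You should see `istiod`, `istio-cni-node`, and `ztunnel` pods in the `Running` state.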
## Install an ambient east-west gateway in `cluster1`
Install a gateway in `cluster1` that is dedicated to ambient
[east-west](https://en.wikipedia.org/wiki/East-west_traffic) traffic. Be
aware that, depending on your Kubernetes environment, this gateway may be
deployed on the public Internet by default. Production systems may
require additional access restrictions (e.g. via firewall rules) to prevent
external attacks. Check with your cloud vendor to see what options are
available.
{{< tabset category-name="east-west-gateway-install-type-cluster-1" >}}
{{< tab name="IstioOperator" category-value="iop" >}}
{{< text bash >}}
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
--network network1 \
--ambient | \
kubectl --context="${CTX_CLUSTER1}" apply -f -
{{< /text >}}
{{< warning >}}
If the control-plane was installed with a revision, add the `--revision rev` flag to the `gen-eastwest-gateway.sh` command.
{{< /warning >}}
{{< /tab >}}
{{< tab name="Kubectl apply" category-value="helm" >}}
Install the east-west gateway in `cluster1` using the following Gateway definition:
{{< text bash >}}
$ cat <<EOF > cluster1-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: istio-eastwestgateway
namespace: istio-system
labels:
topology.istio.io/network: "network1"
spec:
gatewayClassName: istio-east-west
listeners:
- name: mesh
port: 15008
protocol: HBONE
tls:
mode: Terminate # represents double-HBONE
options:
gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
{{< /text >}}
{{< warning >}}
If you are running a revisioned instance of istiod and you don't have a default revision or tag set, you may need to add the `istio.io/rev` label to this `Gateway` manifest.
{{< /warning >}}
Apply the configuration to `cluster1`:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" -f cluster1-ewgateway.yaml
{{< /text >}}
{{< /tab >}}
{{< /tabset >}}
Wait for the east-west gateway to be assigned an external IP address:
{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.80.6.124 34.75.71.237 ... 51s
{{< /text >}}
## Set the default network for `cluster2`
If the `istio-system` namespace has already been created, set the cluster's network label on it:
{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" get namespace istio-system && \
kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
{{< /text >}}
## Configure `cluster2` as a primary
Create the `istioctl` configuration for `cluster2`:
{{< tabset category-name="multicluster-install-type-cluster-2" >}}
{{< tab name="IstioOperator" category-value="iop" >}}
Install Istio as primary in `cluster2` using istioctl and the `IstioOperator` API.
{{< text bash >}}
$ cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: ambient
components:
pilot:
k8s:
env:
- name: AMBIENT_ENABLE_MULTI_NETWORK
value: "true"
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster2
network: network2
EOF
{{< /text >}}
Apply the configuration to `cluster2`:
{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
{{< /text >}}
{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}
Install Istio as primary in `cluster2` using the following Helm commands:
Install the `base` chart in `cluster2`:
{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}
Then, install the `istiod` chart in `cluster2` with the following multi-cluster settings:
{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER2}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster2 --set global.network=network2 --set profile=ambient --set env.AMBIENT_ENABLE_MULTI_NETWORK="true"
{{< /text >}}
Next, install the CNI node agent in ambient mode:
{{< text syntax=bash snip_id=install_cni_cluster2 >}}
$ helm install istio-cni istio/cni -n istio-system --kube-context "${CTX_CLUSTER2}" --set profile=ambient
{{< /text >}}
Finally, install the ztunnel data plane:
{{< text syntax=bash snip_id=install_ztunnel_cluster2 >}}
$ helm install ztunnel istio/ztunnel -n istio-system --kube-context "${CTX_CLUSTER2}" --set multiCluster.clusterName=cluster2 --set global.network=network2
{{< /text >}}
{{< /tab >}}
{{< /tabset >}}
## Install an ambient east-west gateway in `cluster2`
As we did with `cluster1` above, install a gateway in `cluster2` that is dedicated
to east-west traffic.
{{< tabset category-name="east-west-gateway-install-type-cluster-2" >}}
{{< tab name="IstioOperator" category-value="iop" >}}
{{< text bash >}}
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
--network network2 \
--ambient | \
kubectl apply --context="${CTX_CLUSTER2}" -f -
{{< /text >}}
{{< /tab >}}
{{< tab name="Kubectl apply" category-value="helm" >}}
Install the east-west gateway in `cluster2` using the following Gateway definition:
{{< text bash >}}
$ cat <<EOF > cluster2-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: istio-eastwestgateway
namespace: istio-system
labels:
topology.istio.io/network: "network2"
spec:
gatewayClassName: istio-east-west
listeners:
- name: mesh
port: 15008
protocol: HBONE
tls:
mode: Terminate # represents double-HBONE
options:
gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
{{< /text >}}
{{< warning >}}
If you are running a revisioned instance of istiod and you don't have a default revision or tag set, you may need to add the `istio.io/rev` label to this `Gateway` manifest.
{{< /warning >}}
Apply the configuration to `cluster2`:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER2}" -f cluster2-ewgateway.yaml
{{< /text >}}
{{< /tab >}}
{{< /tabset >}}
Wait for the east-west gateway to be assigned an external IP address:
{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.0.12.121 34.122.91.98 ... 51s
{{< /text >}}
## Enable Endpoint Discovery
Install a remote secret in `cluster2` that provides access to `cluster1`'s API server.
{{< text bash >}}
$ istioctl create-remote-secret \
--context="${CTX_CLUSTER1}" \
--name=cluster1 | \
kubectl apply -f - --context="${CTX_CLUSTER2}"
{{< /text >}}
Install a remote secret in `cluster1` that provides access to `cluster2`'s API server.
{{< text bash >}}
$ istioctl create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
{{< /text >}}
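To optionally confirm that the secrets were created, list the remote secrets in each cluster (`istioctl` labels them with `istio/multiCluster=remote`):

{{< text syntax=bash snip_id=none >}}
$ kubectl get secrets -n istio-system -l istio/multiCluster=remote --context="${CTX_CLUSTER1}"
$ kubectl get secrets -n istio-system -l istio/multiCluster=remote --context="${CTX_CLUSTER2}"
{{< /text >}}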
**Congratulations!** You have successfully installed an Istio ambient mesh across multiple
primary clusters on different networks!
## Next Steps
You can now [verify the installation](/docs/ambient/install/multicluster/verify).
## Cleanup
Uninstall Istio from both `cluster1` and `cluster2` using the same mechanism you installed Istio with (istioctl or Helm).
{{< tabset category-name="multicluster-uninstall-type-cluster-1" >}}
{{< tab name="IstioOperator" category-value="iop" >}}
Uninstall Istio in `cluster1`:
{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER1}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}
Uninstall Istio in `cluster2`:
{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER2}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}
{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}
Delete Istio Helm installation from `cluster1`:
{{< text syntax=bash >}}
$ helm delete ztunnel -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-cni -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}
Delete the `istio-system` namespace from `cluster1`:
{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}
Delete Istio Helm installation from `cluster2`:
{{< text syntax=bash >}}
$ helm delete ztunnel -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-cni -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}
Delete the `istio-system` namespace from `cluster2`:
{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}
(Optional) Delete CRDs installed by Istio:
Deleting CRDs permanently removes any Istio resources you have created in your clusters.
To delete Istio CRDs installed in your clusters:
{{< text syntax=bash snip_id=delete_crds >}}
$ kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
$ kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
{{< /text >}}
And finally, clean up the Gateway API CRDs:
{{< text syntax=bash snip_id=delete_gateway_crds >}}
$ kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'gateway.networking.k8s.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
$ kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'gateway.networking.k8s.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
{{< /text >}}
{{< /tab >}}
{{< /tabset >}}


@@ -0,0 +1,243 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164
# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
# docs/ambient/install/multicluster/multi-primary_multi-network/index.md
####################################################################################################
source "content/en/boilerplates/snips/gateway-api-install-crds.sh"
snip_set_the_default_network_for_cluster1_1() {
kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
}
snip_configure_cluster1_as_a_primary_1() {
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: ambient
components:
pilot:
k8s:
env:
- name: AMBIENT_ENABLE_MULTI_NETWORK
value: "true"
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster1
network: network1
EOF
}
snip_configure_cluster1_as_a_primary_2() {
istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
}
snip_configure_cluster1_as_a_primary_3() {
helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER1}"
}
snip_configure_cluster1_as_a_primary_4() {
helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER1}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster1 --set global.network=network1 --set profile=ambient --set env.AMBIENT_ENABLE_MULTI_NETWORK="true"
}
snip_install_cni_cluster1() {
helm install istio-cni istio/cni -n istio-system --kube-context "${CTX_CLUSTER1}" --set profile=ambient
}
snip_install_ztunnel_cluster1() {
helm install ztunnel istio/ztunnel -n istio-system --kube-context "${CTX_CLUSTER1}" --set multiCluster.clusterName=cluster1 --set global.network=network1
}
snip_install_an_ambient_eastwest_gateway_in_cluster1_1() {
samples/multicluster/gen-eastwest-gateway.sh \
--network network1 \
--ambient | \
kubectl --context="${CTX_CLUSTER1}" apply -f -
}
snip_install_an_ambient_eastwest_gateway_in_cluster1_2() {
cat <<EOF > cluster1-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: istio-eastwestgateway
namespace: istio-system
labels:
topology.istio.io/network: "network1"
spec:
gatewayClassName: istio-east-west
listeners:
- name: mesh
port: 15008
protocol: HBONE
tls:
mode: Terminate # represents double-HBONE
options:
gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
}
snip_install_an_ambient_eastwest_gateway_in_cluster1_3() {
kubectl apply --context="${CTX_CLUSTER1}" -f cluster1-ewgateway.yaml
}
snip_install_an_ambient_eastwest_gateway_in_cluster1_4() {
kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
}
! IFS=$'\n' read -r -d '' snip_install_an_ambient_eastwest_gateway_in_cluster1_4_out <<\ENDSNIP
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.80.6.124 34.75.71.237 ... 51s
ENDSNIP
snip_set_the_default_network_for_cluster2_1() {
kubectl --context="${CTX_CLUSTER2}" get namespace istio-system && \
kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
}
snip_configure_cluster2_as_a_primary_1() {
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: ambient
components:
pilot:
k8s:
env:
- name: AMBIENT_ENABLE_MULTI_NETWORK
value: "true"
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster2
network: network2
EOF
}
snip_configure_cluster2_as_a_primary_2() {
istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
}
snip_configure_cluster2_as_a_primary_3() {
helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER2}"
}
snip_configure_cluster2_as_a_primary_4() {
helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER2}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster2 --set global.network=network2 --set profile=ambient --set env.AMBIENT_ENABLE_MULTI_NETWORK="true"
}
snip_install_cni_cluster2() {
helm install istio-cni istio/cni -n istio-system --kube-context "${CTX_CLUSTER2}" --set profile=ambient
}
snip_install_ztunnel_cluster2() {
helm install ztunnel istio/ztunnel -n istio-system --kube-context "${CTX_CLUSTER2}" --set multiCluster.clusterName=cluster2 --set global.network=network2
}
snip_install_an_ambient_eastwest_gateway_in_cluster2_1() {
samples/multicluster/gen-eastwest-gateway.sh \
--network network2 \
--ambient | \
kubectl apply --context="${CTX_CLUSTER2}" -f -
}
snip_install_an_ambient_eastwest_gateway_in_cluster2_2() {
cat <<EOF > cluster2-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: istio-eastwestgateway
namespace: istio-system
labels:
topology.istio.io/network: "network2"
spec:
gatewayClassName: istio-east-west
listeners:
- name: mesh
port: 15008
protocol: HBONE
tls:
mode: Terminate # represents double-HBONE
options:
gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
}
snip_install_an_ambient_eastwest_gateway_in_cluster2_3() {
kubectl apply --context="${CTX_CLUSTER2}" -f cluster2-ewgateway.yaml
}
snip_install_an_ambient_eastwest_gateway_in_cluster2_4() {
kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
}
! IFS=$'\n' read -r -d '' snip_install_an_ambient_eastwest_gateway_in_cluster2_4_out <<\ENDSNIP
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 10.0.12.121 34.122.91.98 ... 51s
ENDSNIP
snip_enable_endpoint_discovery_1() {
istioctl create-remote-secret \
--context="${CTX_CLUSTER1}" \
--name=cluster1 | \
kubectl apply -f - --context="${CTX_CLUSTER2}"
}
snip_enable_endpoint_discovery_2() {
istioctl create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
}
snip_cleanup_3() {
helm delete ztunnel -n istio-system --kube-context "${CTX_CLUSTER1}"
helm delete istio-cni -n istio-system --kube-context "${CTX_CLUSTER1}"
helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER1}"
helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER1}"
}
snip_cleanup_4() {
kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
}
snip_cleanup_5() {
helm delete ztunnel -n istio-system --kube-context "${CTX_CLUSTER2}"
helm delete istio-cni -n istio-system --kube-context "${CTX_CLUSTER2}"
helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER2}"
helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER2}"
}
snip_cleanup_6() {
kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
}
snip_delete_crds() {
kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
}
snip_delete_gateway_crds() {
kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'gateway.networking.k8s.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'gateway.networking.k8s.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
}


@@ -0,0 +1,90 @@
#!/usr/bin/env bash
# shellcheck disable=SC1090,SC2154
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# @setup multicluster
set -e
set -u
set -o pipefail
source content/en/docs/ambient/install/multicluster/common.sh
source "tests/util/gateway-api.sh"
set_multi_network_vars
function install_istio_on_cluster1_istioctl {
echo "Installing Gateway API CRDs on Primary cluster: ${CTX_CLUSTER1}"
install_gateway_api_crds "${CTX_CLUSTER1}"
echo "Installing Istio on Primary cluster: ${CTX_CLUSTER1}"
snip_set_the_default_network_for_cluster1_1
snip_configure_cluster1_as_a_primary_1
echo y | snip_configure_cluster1_as_a_primary_2
echo "Creating the east-west gateway"
snip_install_an_ambient_eastwest_gateway_in_cluster1_1
echo "Waiting for the east-west gateway to have an external IP"
_verify_like snip_install_an_ambient_eastwest_gateway_in_cluster1_4 "$snip_install_an_ambient_eastwest_gateway_in_cluster1_4_out"
}
function install_istio_on_cluster2_istioctl {
echo "Installing Gateway API CRDs on Primary cluster: ${CTX_CLUSTER2}"
install_gateway_api_crds "${CTX_CLUSTER2}"
echo "Installing Istio on Primary cluster: ${CTX_CLUSTER2}"
snip_set_the_default_network_for_cluster2_1
snip_configure_cluster2_as_a_primary_1
echo y | snip_configure_cluster2_as_a_primary_2
echo "Creating the east-west gateway"
snip_install_an_ambient_eastwest_gateway_in_cluster2_1
echo "Waiting for the east-west gateway to have an external IP"
_verify_like snip_install_an_ambient_eastwest_gateway_in_cluster2_4 "$snip_install_an_ambient_eastwest_gateway_in_cluster2_4_out"
}
function install_istio_istioctl {
# Install Istio on the 2 clusters. Executing in
# parallel to reduce test time.
install_istio_on_cluster1_istioctl &
install_istio_on_cluster2_istioctl &
wait
}
function enable_endpoint_discovery {
snip_enable_endpoint_discovery_1
snip_enable_endpoint_discovery_2
}
time configure_trust
time install_istio_istioctl
time enable_endpoint_discovery
time verify_load_balancing
# @cleanup
source content/en/docs/ambient/install/multicluster/common.sh
set_multi_network_vars
time cleanup_istioctl
time snip_delete_gateway_crds
# Everything should be removed once cleanup completes. Use a small
# timeout for comparing cluster snapshots before/after the test.
export VERIFY_TIMEOUT=20


@@ -0,0 +1,220 @@
---
title: Verify the ambient installation
description: Verify that Istio ambient mesh has been installed properly on multiple clusters.
weight: 50
keywords: [kubernetes,multicluster,ambient]
test: yes
owner: istio/wg-environments-maintainers
prev: /docs/ambient/install/multicluster/multi-primary_multi-network
---
Follow this guide to verify that your ambient multicluster Istio installation is working
properly.
Before proceeding, be sure to complete the steps under
[before you begin](/docs/ambient/install/multicluster/before-you-begin), and to
choose and follow one of the [multicluster installation guides](/docs/ambient/install/multicluster).
In this guide, we will verify that the multicluster installation is functional by
deploying `v1` of the `HelloWorld` application to `cluster1` and `v2` to `cluster2`.
When called on the `/hello` path, `HelloWorld` includes its version in its response.
We will also deploy the `curl` container to both clusters. We will use these
pods as the source of requests to the `HelloWorld` service,
simulating in-mesh traffic. Finally, after generating traffic, we will observe
which cluster received the requests.
## Verify Multicluster
Verify that Istiod is now able to communicate with the Kubernetes control plane
of the remote cluster:
{{< text bash >}}
$ istioctl remote-clusters --context="${CTX_CLUSTER1}"
NAME SECRET STATUS ISTIOD
cluster1 synced istiod-7b74b769db-kb4kj
cluster2 istio-system/istio-remote-secret-cluster2 synced istiod-7b74b769db-kb4kj
{{< /text >}}
All clusters should report a `STATUS` of `synced`. If a cluster is listed with
a `STATUS` of `timeout`, Istiod in the primary cluster is unable to
communicate with the remote cluster. Check the Istiod logs for detailed error
messages.
Note: if you do see `timeout` issues and there is an intermediary host (such as the [Rancher auth proxy](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint#two-authentication-methods-for-rke-clusters))
sitting between Istiod in the primary cluster and the Kubernetes control plane in
the remote cluster, you may need to update the `certificate-authority-data` field
of the kubeconfig that `istioctl create-remote-secret` generates so that it
matches the certificate used by the intermediary host.
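One way to handle this is to rewrite the `certificate-authority-data` field in the generated secret before applying it. The following is a minimal sketch rather than an official procedure; the CA bundle file name `intermediate-ca.pem` is an assumption:

```shell
# Hypothetical sketch: substitute the intermediary host's CA bundle into the
# remote secret generated by istioctl before applying it to cluster1.
CA_B64="$(base64 < intermediate-ca.pem | tr -d '\n')"  # assumed CA bundle path
istioctl create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 |
  sed "s|certificate-authority-data: .*|certificate-authority-data: ${CA_B64}|" |
  kubectl apply -f - --context="${CTX_CLUSTER1}"
```

Afterwards, run `istioctl remote-clusters` again and confirm the cluster reports `synced`.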
## Deploy the `HelloWorld` Service
In order to make the `HelloWorld` service callable from any cluster, the DNS
lookup must succeed in each cluster (see
[deployment models](/docs/ops/deployment/deployment-models#dns-with-multiple-clusters)
for details). We will address this by deploying the `HelloWorld` Service to
each cluster in the mesh.
{{< tip >}}
Before proceeding, ensure that the `istio-system` namespaces in both clusters have the `topology.istio.io/network` label set to the appropriate value (e.g., `network1` for `cluster1` and `network2` for `cluster2`).
{{< /tip >}}
To begin, create the `sample` namespace in each cluster:
{{< text bash >}}
$ kubectl create --context="${CTX_CLUSTER1}" namespace sample
$ kubectl create --context="${CTX_CLUSTER2}" namespace sample
{{< /text >}}
Enroll the `sample` namespace in the mesh:
{{< text bash >}}
$ kubectl label --context="${CTX_CLUSTER1}" namespace sample \
istio.io/dataplane-mode=ambient
$ kubectl label --context="${CTX_CLUSTER2}" namespace sample \
istio.io/dataplane-mode=ambient
{{< /text >}}
Create the `HelloWorld` service in both clusters:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f @samples/helloworld/helloworld.yaml@ \
-l service=helloworld -n sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f @samples/helloworld/helloworld.yaml@ \
-l service=helloworld -n sample
{{< /text >}}
## Deploy `HelloWorld` `V1`
Deploy the `helloworld-v1` application to `cluster1`:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f @samples/helloworld/helloworld.yaml@ \
-l version=v1 -n sample
{{< /text >}}
Confirm the `helloworld-v1` pod status:
{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=helloworld
NAME READY STATUS RESTARTS AGE
helloworld-v1-86f77cd7bd-cpxhv 1/1 Running 0 40s
{{< /text >}}
Wait until the status of `helloworld-v1` is `Running`.
Now, mark the `helloworld` service in `cluster1` as global so that it can be accessed from other clusters in the mesh:
{{< text bash >}}
$ kubectl label --context="${CTX_CLUSTER1}" svc helloworld -n sample \
istio.io/global="true"
{{< /text >}}
## Deploy `HelloWorld` `V2`
Deploy the `helloworld-v2` application to `cluster2`:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f @samples/helloworld/helloworld.yaml@ \
-l version=v2 -n sample
{{< /text >}}
Confirm the status of the `helloworld-v2` pod:
{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld
NAME READY STATUS RESTARTS AGE
helloworld-v2-758dd55874-6x4t8 1/1 Running 0 40s
{{< /text >}}
Wait until the status of `helloworld-v2` is `Running`.
Now, mark the `helloworld` service in `cluster2` as global so that it can be accessed from other clusters in the mesh:
{{< text bash >}}
$ kubectl label --context="${CTX_CLUSTER2}" svc helloworld -n sample \
istio.io/global="true"
{{< /text >}}
## Deploy `curl`
Deploy the `curl` application to both clusters:
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" \
-f @samples/curl/curl.yaml@ -n sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
-f @samples/curl/curl.yaml@ -n sample
{{< /text >}}
Confirm the status of the `curl` pod on `cluster1`:
{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=curl
NAME READY STATUS RESTARTS AGE
curl-754684654f-n6bzf 1/1 Running 0 5s
{{< /text >}}
Wait until the status of the `curl` pod is `Running`.
Confirm the status of the `curl` pod on `cluster2`:
{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=curl
NAME READY STATUS RESTARTS AGE
curl-754684654f-dzl9j 1/1 Running 0 5s
{{< /text >}}
Wait until the status of the `curl` pod is `Running`.
## Verify Cross-Cluster Traffic
To verify that cross-cluster load balancing works as expected, call the
`HelloWorld` service several times using the `curl` pod. To ensure load
balancing is working properly, call the `HelloWorld` service from all
clusters in your deployment.
Send one request from the `curl` pod on `cluster1` to the `HelloWorld` service:
{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
{{< /text >}}
Repeat this request several times and verify that the `HelloWorld` version
alternates between `v1` and `v2`, indicating that endpoints in both
clusters are being used:
{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}
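Rather than eyeballing the alternation, you can also loop the request and tally responses per version. This is a sketch, not part of the official guide; the `sed`/`sort`/`uniq` pipeline assumes the response format shown above:

```shell
# Send 10 requests from the curl pod on cluster1 and count how many
# responses each HelloWorld version served.
for _ in $(seq 1 10); do
  kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
done | sed -n 's/^Hello version: \(v[0-9]*\),.*/\1/p' | sort | uniq -c
```

A working cross-network setup should show non-zero counts for both `v1` and `v2`.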
Now repeat this process from the `curl` pod on `cluster2`:
{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER2}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
{{< /text >}}
Repeat this request several times and verify that the `HelloWorld` version
toggles between `v1` and `v2`:
{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}
**Congratulations!** You successfully installed and verified Istio on multiple
clusters!
<!-- TODO: Link to guide for locality load balancing once we add waypoint instructions -->


@@ -0,0 +1,143 @@
#!/bin/bash
# shellcheck disable=SC2034,SC2153,SC2155,SC2164
# Copyright Istio Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################################################################
# WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL MARKDOWN FILE:
# docs/ambient/install/multicluster/verify/index.md
####################################################################################################
snip_verify_multicluster_1() {
istioctl remote-clusters --context="${CTX_CLUSTER1}"
}
! IFS=$'\n' read -r -d '' snip_verify_multicluster_1_out <<\ENDSNIP
NAME SECRET STATUS ISTIOD
cluster1 synced istiod-7b74b769db-kb4kj
cluster2 istio-system/istio-remote-secret-cluster2 synced istiod-7b74b769db-kb4kj
ENDSNIP
snip_deploy_the_helloworld_service_1() {
kubectl create --context="${CTX_CLUSTER1}" namespace sample
kubectl create --context="${CTX_CLUSTER2}" namespace sample
}
snip_deploy_the_helloworld_service_2() {
kubectl label --context="${CTX_CLUSTER1}" namespace sample \
istio.io/dataplane-mode=ambient
kubectl label --context="${CTX_CLUSTER2}" namespace sample \
istio.io/dataplane-mode=ambient
}
snip_deploy_the_helloworld_service_3() {
kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/helloworld/helloworld.yaml \
-l service=helloworld -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l service=helloworld -n sample
}
snip_deploy_helloworld_v1_1() {
kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/helloworld/helloworld.yaml \
-l version=v1 -n sample
}
snip_deploy_helloworld_v1_2() {
kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=helloworld
}
! IFS=$'\n' read -r -d '' snip_deploy_helloworld_v1_2_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
helloworld-v1-86f77cd7bd-cpxhv 1/1 Running 0 40s
ENDSNIP
snip_deploy_helloworld_v1_3() {
kubectl label --context="${CTX_CLUSTER1}" svc helloworld -n sample \
istio.io/global="true"
}
snip_deploy_helloworld_v2_1() {
kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/helloworld/helloworld.yaml \
-l version=v2 -n sample
}
snip_deploy_helloworld_v2_2() {
kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld
}
! IFS=$'\n' read -r -d '' snip_deploy_helloworld_v2_2_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
helloworld-v2-758dd55874-6x4t8 1/1 Running 0 40s
ENDSNIP
snip_deploy_helloworld_v2_3() {
kubectl label --context="${CTX_CLUSTER2}" svc helloworld -n sample \
istio.io/global="true"
}
snip_deploy_curl_1() {
kubectl apply --context="${CTX_CLUSTER1}" \
-f samples/curl/curl.yaml -n sample
kubectl apply --context="${CTX_CLUSTER2}" \
-f samples/curl/curl.yaml -n sample
}
snip_deploy_curl_2() {
kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=curl
}
! IFS=$'\n' read -r -d '' snip_deploy_curl_2_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
curl-754684654f-n6bzf 1/1 Running 0 5s
ENDSNIP
snip_deploy_curl_3() {
kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=curl
}
! IFS=$'\n' read -r -d '' snip_deploy_curl_3_out <<\ENDSNIP
NAME READY STATUS RESTARTS AGE
curl-754684654f-dzl9j 1/1 Running 0 5s
ENDSNIP
snip_verifying_crosscluster_traffic_1() {
kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
}
! IFS=$'\n' read -r -d '' snip_verifying_crosscluster_traffic_2 <<\ENDSNIP
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
ENDSNIP
snip_verifying_crosscluster_traffic_3() {
kubectl exec --context="${CTX_CLUSTER2}" -n sample -c curl \
"$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
app=curl -o jsonpath='{.items[0].metadata.name}')" \
-- curl -sS helloworld.sample:5000/hello
}
! IFS=$'\n' read -r -d '' snip_verifying_crosscluster_traffic_4 <<\ENDSNIP
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
ENDSNIP

go.mod

@@ -17,7 +17,7 @@ require (
require (
cel.dev/expr v0.24.0 // indirect
cloud.google.com/go/compute/metadata v0.6.0 // indirect
cloud.google.com/go/compute/metadata v0.7.0 // indirect
dario.cat/mergo v1.0.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
@@ -61,7 +61,7 @@ require (
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.1 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/go-viper/mapstructure/v2 v2.3.0 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
@@ -153,15 +153,15 @@ require (
github.com/yl2chen/cidranger v1.0.2 // indirect
github.com/zeebo/errs v1.4.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel v1.36.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.57.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.35.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.35.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/otel/sdk v1.36.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/automaxprocs v1.6.0 // indirect
@@ -173,7 +173,7 @@ require (
golang.org/x/crypto v0.40.0 // indirect
golang.org/x/exp v0.0.0-20250506013437-ce4c2cf36ca6 // indirect
golang.org/x/mod v0.25.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/net v0.42.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/term v0.33.0 // indirect
@@ -181,9 +181,9 @@ require (
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.34.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.5.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/grpc v1.73.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250715232539-7130f93afb79 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250715232539-7130f93afb79 // indirect
google.golang.org/grpc v1.74.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
@@ -191,7 +191,7 @@ require (
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
helm.sh/helm/v3 v3.18.4 // indirect
istio.io/api v1.27.0-beta.0.0.20250731082105-36763529c462 // indirect
istio.io/api v1.27.0-rc.0 // indirect
istio.io/client-go v1.27.0-beta.0.0.20250731082605-b098a6e566f4 // indirect
k8s.io/api v0.33.2 // indirect
k8s.io/apiextensions-apiserver v0.33.2 // indirect

go.sum

@@ -1,8 +1,8 @@
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I=
cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
@@ -128,8 +128,8 @@ github.com/go-openapi/swag v0.23.1/go.mod h1:STZs8TbRvEQQKUA+JZNAm3EWlgaOBGpyFDq
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-viper/mapstructure/v2 v2.3.0 h1:27XbWsHIqhbdR5TIC911OfYvgSaW93HM+dX7970Q7jk=
github.com/go-viper/mapstructure/v2 v2.3.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
@@ -382,8 +382,8 @@ go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJyS
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 h1:sbiXRNDSWJOTobXh5HyQKjq6wUC5tNybqjIqDpAY4CU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0/go.mod h1:69uWxva0WgAA/4bu2Yy70SLDBwZXuQ6PbBpbsa5iZrQ=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0 h1:1fTNlAIJZGWLP5FVu0fikVry1IsiUnXjf7QFvoNN3Xw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0/go.mod h1:zjPK58DtkqQFn+YUMbx0M2XV3QgKU0gS9LeGohREyK4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0 h1:m639+BofXTvcY1q8CGs4ItwQarYtJPOWmVobfM1HpVI=
@@ -392,14 +392,14 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0 h1:xJ2qH
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0/go.mod h1:u5BF1xyjstDowA1R5QAO9JHzqK+ublenEW/dyqTjBVk=
go.opentelemetry.io/otel/exporters/prometheus v0.57.0 h1:AHh/lAP1BHrY5gBwk8ncc25FXWm/gmmY3BX258z5nuk=
go.opentelemetry.io/otel/exporters/prometheus v0.57.0/go.mod h1:QpFWz1QxqevfjwzYdbMb4Y1NnlJvqSGwyuU0B4iuc9c=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.35.0 h1:iPctf8iprVySXSKJffSS79eOjl9pvxV9ZqOWT0QejKY=
go.opentelemetry.io/otel/sdk v1.35.0/go.mod h1:+ga1bZliga3DxJ3CQGg3updiaAJoNECOgJREo9KHGQg=
go.opentelemetry.io/otel/sdk/metric v1.35.0 h1:1RriWBmCKgkeHEhM7a2uMjMUfP7MsOF5JpUCaEqEI9o=
go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/proto/otlp v1.7.0 h1:jX1VolD6nHuFzOYso2E73H85i92Mv8JQYk0K9vz09os=
go.opentelemetry.io/proto/otlp v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
@@ -446,8 +446,8 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
@@ -499,17 +499,17 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/genproto/googleapis/api v0.0.0-20250715232539-7130f93afb79 h1:iOye66xuaAK0WnkPuhQPUFy8eJcmwUXqGGP3om6IxX8=
google.golang.org/genproto/googleapis/api v0.0.0-20250715232539-7130f93afb79/go.mod h1:HKJDgKsFUnv5VAGeQjz8kxcgDP0HoE0iZNp0OdZNlhE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250715232539-7130f93afb79 h1:1ZwqphdOdWYXsUHgMpU/101nCtf/kSp9hOrcvFsnl10=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250715232539-7130f93afb79/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
google.golang.org/grpc v1.74.0 h1:sxRSkyLxlceWQiqDofxDot3d4u7DyoHPc7SBXMj8gGY=
google.golang.org/grpc v1.74.0/go.mod h1:NZUaK8dAMUfzhK6uxZ+9511LtOrk73UGWOFoNvz7z+s=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -536,8 +536,8 @@ helm.sh/helm/v3 v3.18.4 h1:pNhnHM3nAmDrxz6/UC+hfjDY4yeDATQCka2/87hkZXQ=
helm.sh/helm/v3 v3.18.4/go.mod h1:WVnwKARAw01iEdjpEkP7Ii1tT1pTPYfM1HsakFKM3LI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
istio.io/api v1.27.0-beta.0.0.20250731082105-36763529c462 h1:rmeRAAxlNrCj96Zlaf8r6YqU5kcU8Y7/96i29KPDHaY=
istio.io/api v1.27.0-beta.0.0.20250731082105-36763529c462/go.mod h1:DTVGH6CLXj5W8FF9JUD3Tis78iRgT1WeuAnxfTz21Wg=
istio.io/api v1.27.0-rc.0 h1:XGgmuCj2eW3jPjvi6Q5sE+Gu/cp8IOLFYKuhVBDQZb8=
istio.io/api v1.27.0-rc.0/go.mod h1:DTVGH6CLXj5W8FF9JUD3Tis78iRgT1WeuAnxfTz21Wg=
istio.io/client-go v1.27.0-beta.0.0.20250731082605-b098a6e566f4 h1:mwO5Wx0H+xbEsyGHYRdWG8fjRw9Cr0EExleNW7SXQhM=
istio.io/client-go v1.27.0-beta.0.0.20250731082605-b098a6e566f4/go.mod h1:oUPY27HFv9fW32NtjxlgrRaa0dPIN6jYj/xGcjorLA0=
istio.io/istio v0.0.0-20250809000025-fd9608b4a51f h1:MvH1qexWTQPL5nynYrxFPiXygksGj8Y+zcqg7Tt1jhg=
@@ -46,6 +46,13 @@ func (b *Builder) Defer(steps ...Step) *Builder {
return b
}
func (b *Builder) DeferIf(condition bool, steps ...Step) *Builder {
if condition {
b.Defer(steps...)
}
return b
}
// Build a run function for the test
func (b *Builder) Build() func(ctx framework.TestContext) {
return func(ctx framework.TestContext) {

@@ -215,6 +215,7 @@ func NewTestDocsFunc(config string) func(framework.TestContext) {
KubeConfig: kubeConfig,
}
noCleanup := ctx.Settings().NoCleanup
ctx.NewSubTest(path).
Run(NewBuilder().
Add(beforeSnapshotter).
@@ -224,13 +225,13 @@ func NewTestDocsFunc(config string) func(framework.TestContext) {
Value: testCase.testScript,
},
}).
Defer(Script{
DeferIf(!noCleanup, Script{
Input: Inline{
FileName: cleanupScriptName,
Value: testCase.cleanupScript,
},
}).
Defer(SnapshotValidator{
DeferIf(!noCleanup, SnapshotValidator{
Before: beforeSnapshotter,
After: afterSnapshotter,
}).

@@ -95,7 +95,7 @@ check_content() {
FAILED=1
fi
if grep -nrP --include "*.md" -e "\(https://istio.io/(?!v[0-9]\.[0-9]/|archive/)" .; then
if grep -nrP --include "*.md" -e "\(https://istio.io/(?!v[0-9]\.[0-9]/|archive|latest\/news)" .; then
error "Ensure markdown content uses relative references to istio.io"
FAILED=1
fi