---
title: Replicated control planes
description: Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.
weight: 2
aliases:
    - /docs/setup/kubernetes/multicluster-install/gateways/
    - /docs/examples/multicluster/gateways/
    - /docs/tasks/multicluster/gateways/
    - /docs/setup/kubernetes/install/multicluster/gateways/
    - /zh/docs/setup/kubernetes/multicluster-install/gateways/
    - /zh/docs/examples/multicluster/gateways/
    - /zh/docs/tasks/multicluster/gateways/
    - /zh/docs/setup/kubernetes/install/multicluster/gateways/
keywords: [kubernetes,multicluster,gateway]
---

Follow this guide to install an Istio
[multicluster deployment](/docs/setup/deployment-models/#multiple-clusters)
with replicated [control plane](/docs/setup/deployment-models/#control-plane-models) instances
in every cluster, using gateways to connect services across clusters.

Instead of using a shared Istio control plane to manage the mesh,
in this configuration each cluster has its own Istio control plane
installation, each managing its own endpoints.
All of the clusters are under a shared administrative control for the purposes of
policy enforcement and security.

A single Istio service mesh across the clusters is achieved by replicating
shared services and namespaces and using a common root CA in all of the clusters.
Cross-cluster communication occurs over Istio gateways of the respective clusters.

{{< image width="80%" link="./multicluster-with-gateways.svg" caption="Istio mesh spanning multiple Kubernetes clusters using Istio Gateway to reach remote pods" >}}

## Prerequisites

* Two or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.

* Authority to [deploy the Istio control plane](/docs/setup/install/istioctl/)
on **each** Kubernetes cluster.

* The IP address of the `istio-ingressgateway` service in each cluster must be accessible
from every other cluster, ideally using L4 network load balancers (NLB).
Not all cloud providers support NLBs and some require special annotations to use them,
so please consult your cloud provider's documentation for enabling NLBs for
service object type load balancers. When deploying on platforms without
NLB support, it may be necessary to modify the health checks for the load
balancer to register the ingress gateway. A quick way to confirm that each
gateway is externally reachable is shown after this list.

* A **Root CA**. Cross-cluster communication requires a mutual TLS connection
between services. To enable mutual TLS communication across clusters, each
cluster's Citadel will be configured with intermediate CA credentials
generated by a shared root CA. For illustration purposes, you can use the
sample root CA certificate available in the Istio installation
under the `samples/certs` directory.
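
For example, you can confirm that each cluster's `istio-ingressgateway` service has been
assigned an externally reachable address with a check along these lines (this assumes Istio
is installed in the `istio-system` namespace):

{{< text bash >}}
$ kubectl get svc istio-ingressgateway -n istio-system
{{< /text >}}

The `EXTERNAL-IP` column should show an address or hostname that the other clusters can reach.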

## Deploy the Istio control plane in each cluster

1. Generate intermediate CA certificates for each cluster's Citadel from your
organization's root CA. The shared root CA enables mutual TLS communication
across different clusters.

For illustration purposes, the following instructions use the certificates
from the Istio samples directory for both clusters. In real world deployments,
you would likely use a different CA certificate for each cluster, all signed
by a common root CA.
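
For example, assuming your organization's root certificate and key are available locally as
`root-cert.pem` and `root-key.pem`, an intermediate CA for one cluster could be minted with
standard `openssl` commands similar to the following sketch (the file names, subject, and
lifetime are illustrative assumptions, not the procedure used to produce the Istio samples):

{{< text bash >}}
$ openssl genrsa -out ca-key.pem 4096
$ openssl req -new -key ca-key.pem -subj "/O=example Inc./CN=Intermediate CA" -out ca-csr.pem
$ echo "basicConstraints=critical,CA:true" > ca.ext
$ openssl x509 -req -days 365 -in ca-csr.pem \
    -CA root-cert.pem -CAkey root-key.pem -CAcreateserial \
    -extfile ca.ext -out ca-cert.pem
$ cat ca-cert.pem root-cert.pem > cert-chain.pem
{{< /text >}}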

1. Run the following commands in **every cluster** to deploy an identical Istio control plane
configuration in all of them.

{{< tip >}}
Make sure that the current user has cluster administrator (`cluster-admin`) permissions and grant them if not.
On the GKE platform, for example, the following command can be used:

{{< text bash >}}
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"
{{< /text >}}

{{< /tip >}}

* Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See [Certificate Authority (CA) certificates](/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key) for more details.

{{< warning >}}
The root and intermediate certificates from the samples directory are widely
distributed and known. Do **not** use these certificates in production as
your clusters would then be open to security vulnerabilities and compromise.
{{< /warning >}}

{{< text bash >}}
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=@samples/certs/ca-cert.pem@ \
    --from-file=@samples/certs/ca-key.pem@ \
    --from-file=@samples/certs/root-cert.pem@ \
    --from-file=@samples/certs/cert-chain.pem@
{{< /text >}}
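
To confirm that the secret is in place in a cluster before installing, a quick check can be run:

{{< text bash >}}
$ kubectl get secret cacerts -n istio-system
{{< /text >}}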

* Install Istio:

{{< text bash >}}
$ istioctl manifest apply \
    -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-gateways.yaml
{{< /text >}}

For further details and customization options, refer to the
[Installation with Istioctl](/docs/setup/install/kubernetes/) instructions.
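
Once the installation completes in a cluster, it can be useful to verify that the control plane
pods, including the `istiocoredns` workload referenced in the next section, are up and running;
for example:

{{< text bash >}}
$ kubectl get pods -n istio-system
{{< /text >}}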

## Setup DNS

Providing DNS resolution for services in remote clusters will allow
existing applications to function unmodified, as applications typically
expect to resolve services by their DNS names and access the resulting
IP. Istio itself does not use the DNS for routing requests between
services. Services local to a cluster share a common DNS suffix
(e.g., `svc.cluster.local`). Kubernetes DNS provides DNS resolution for these
services.

To provide a similar setup for services from remote clusters, you name
services from remote clusters in the format
`<name>.<namespace>.global`. Istio also ships with a CoreDNS server that
provides DNS resolution for these services. In order to use this
DNS, Kubernetes' DNS must be configured to stub a domain for `.global`.
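
The stub domain needs to point at the cluster IP of the `istiocoredns` service that Istio
installed in its own namespace; one way to look it up is:

{{< text bash >}}
$ kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}
{{< /text >}}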

{{< warning >}}
Some cloud providers have different, provider-specific DNS domain stub capabilities
and procedures for their Kubernetes services. Reference the cloud provider's
documentation to determine how to stub DNS domains for each unique
environment. The objective of the following configuration is to stub a domain for `.global` on
port `53` to reference or proxy the `istiocoredns` service in Istio's service
namespace.
{{< /warning >}}

Create one of the following ConfigMaps, or update an existing one, in each
cluster that will be calling services in remote clusters
(every cluster in the general case):

{{< tabset cookie-name="platform" >}}
{{< tab name="KubeDNS" cookie-value="kube-dns" >}}
{{< /tab >}}
{{< /tabset >}}
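
For a cluster running kube-dns, a minimal sketch of such a stub-domain configuration looks like
the following; it assumes the default `kube-dns` ConfigMap in the `kube-system` namespace and
inlines the `istiocoredns` cluster IP lookup shown above (clusters running CoreDNS would use an
equivalent `forward` block in the CoreDNS `Corefile` instead):

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
{{< /text >}}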

## Configure application services

Every service in a given cluster that needs to be accessed from a different remote
cluster requires a `ServiceEntry` configuration in the remote cluster.
The host used in the service entry should be of the form `<name>.<namespace>.global`
where name and namespace correspond to the service's name and namespace respectively.

To demonstrate cross-cluster access, configure the
[sleep service]({{<github_tree>}}/samples/sleep)
running in one cluster to call the [httpbin service]({{<github_tree>}}/samples/httpbin)
running in a second cluster. Before you begin:

* Choose two of your Istio clusters, to be referred to as `cluster1` and `cluster2`.

{{< boilerplate kubectl-multicluster-contexts >}}

### Configure the example services

1. Deploy the `sleep` service in `cluster1`.

{{< text bash >}}
$ kubectl create --context=$CTX_CLUSTER1 namespace foo
$ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
$ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
{{< /text >}}

1. Deploy the `httpbin` service in `cluster2`.

{{< text bash >}}
$ kubectl create --context=$CTX_CLUSTER2 namespace bar
$ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
{{< /text >}}

1. Export the `cluster2` gateway address:

{{< text bash >}}
$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
{{< /text >}}

This command sets the value to the gateway's public IP, but note that you can set it to
a DNS name instead, if you have one.
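
If your environment's load balancer publishes a hostname rather than an IP address (as some
cloud providers do), a variant of the same command can be used; the only change is the
`.hostname` field in the jsonpath expression:

{{< text bash >}}
$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
{{< /text >}}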

{{< tip >}}
If `cluster2` is running in an environment that does not
support external load balancers, you will need to use a nodePort to access the gateway.
Instructions for obtaining the IP to use can be found in the
[Control Ingress Traffic](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
guide. You will also need to change the service entry endpoint port in the following step from 15443
to its corresponding nodePort
(i.e., `kubectl --context=$CTX_CLUSTER2 get svc -n istio-system istio-ingressgateway -o=jsonpath='{.spec.ports[?(@.port==15443)].nodePort}'`).
{{< /tip >}}

1. Create a service entry for the `httpbin` service in `cluster1`.

To allow `sleep` in `cluster1` to access `httpbin` in `cluster2`, we need to create
a service entry for it. The host name of the service entry should be of the form
`<name>.<namespace>.global` where name and namespace correspond to the
remote service's name and namespace respectively.

For DNS resolution for services under the `*.global` domain, you need to assign these
services an IP address.

{{< tip >}}
Each service (in the `.global` DNS domain) must have a unique IP within the cluster.
{{< /tip >}}

If the global services have actual VIPs, you can use those, but otherwise we suggest
using IPs from the class E address range `240.0.0.0/4`.
Application traffic for these IPs will be captured by the sidecar and routed to the
appropriate remote service.

{{< warning >}}
Multicast addresses (224.0.0.0 ~ 239.255.255.255) should not be used because there is no route to them by default.
Loopback addresses (127.0.0.0/8) should also not be used because traffic sent to them may be redirected to the sidecar inbound listener.
{{< /warning >}}

{{< text bash >}}
EOF
{{< /text >}}
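
As a minimal sketch, a service entry of this shape could look like the following; the host and
the 15443 endpoint port follow the rules above, while the address `240.0.0.2` and the port name
`http1` are illustrative choices:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of the form <name>.<namespace>.global
  - httpbin.bar.global
  # treat the remote service as part of the mesh, since all clusters share the root of trust
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # a cluster-unique IP from 240.0.0.0/4 that httpbin.bar.global resolves to;
  # traffic for it is captured by the sidecar and routed to the remote gateway
  - 240.0.0.2
  endpoints:
  # the routable address of the ingress gateway in cluster2
  - address: ${CLUSTER2_GW_ADDR}
    ports:
      http1: 15443
EOF
{{< /text >}}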

The configurations above will result in all traffic in `cluster1` for
`httpbin.bar.global` on *any port* to be routed to the endpoint
`<IPofCluster2IngressGateway>:15443` over a mutual TLS connection.

The gateway for port 15443 is a special SNI-aware Envoy
preconfigured and installed when you deployed the Istio control plane in the cluster.
Traffic entering port 15443 will be
load balanced among pods of the appropriate internal service of the target
cluster (in this case, `httpbin.bar` in `cluster2`).
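
To confirm that the `cluster2` ingress gateway is exposing port 15443, a check along these
lines can be used (the jsonpath filter is the same one shown in the nodePort tip above):

{{< text bash >}}
$ kubectl get --context=$CTX_CLUSTER2 svc istio-ingressgateway -n istio-system \
    -o jsonpath='{.spec.ports[?(@.port==15443)]}'
{{< /text >}}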

{{< warning >}}
Do not create a `Gateway` configuration for port 15443.
{{< /warning >}}

1. Verify that `httpbin` is accessible from the `sleep` service.

{{< text bash >}}
$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
{{< /text >}}

### Send remote traffic via an egress gateway

If you want to route traffic from `cluster1` via a dedicated egress gateway, instead of directly from the sidecars,
use the following service entry for `httpbin.bar` instead of the one in the previous section.

{{< tip >}}
The egress gateway used in this configuration cannot also be used for other, non inter-cluster, egress traffic.
{{< /tip >}}

If `$CLUSTER2_GW_ADDR` is an IP address, use the `$CLUSTER2_GW_ADDR - IP address` option. If `$CLUSTER2_GW_ADDR` is a hostname, use the `$CLUSTER2_GW_ADDR - hostname` option.

{{< tabset cookie-name="profile" >}}

{{< tab name="$CLUSTER2_GW_ADDR - IP address" cookie-value="option1" >}}
* Export the `cluster1` egress gateway address:

{{< text bash >}}
$ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
    -n istio-system -o jsonpath='{.items[0].spec.clusterIP}')
{{< /text >}}

* Apply the httpbin-bar service entry:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
EOF
{{< /text >}}
{{< /tab >}}

{{< tab name="$CLUSTER2_GW_ADDR - hostname" cookie-value="option2" >}}
If the `${CLUSTER2_GW_ADDR}` is a hostname, you can use `resolution: DNS` for the endpoint resolution:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
EOF
{{< /text >}}
{{< /tab >}}
{{< /tabset >}}

### Cleanup the example

Execute the following commands to clean up the example services.

* Cleanup `cluster1`:

{{< text bash >}}
$ kubectl delete --context=$CTX_CLUSTER1 -n foo -f @samples/sleep/sleep.yaml@
$ kubectl delete --context=$CTX_CLUSTER1 ns foo
{{< /text >}}

* Cleanup `cluster2`:

{{< text bash >}}
$ kubectl delete --context=$CTX_CLUSTER2 -n bar -f @samples/httpbin/httpbin.yaml@
$ kubectl delete --context=$CTX_CLUSTER2 ns bar
{{< /text >}}

## Version-aware routing to remote services

If the remote service has multiple versions, you can add
labels to the service entry endpoints.
For example:

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
EOF
{{< /text >}}

You can then create virtual services and destination rules
to define subsets of the `httpbin.bar.global` service using the appropriate gateway label selectors.
The instructions are the same as those used for routing to a local service.
See [multicluster version routing](/blog/2019/multicluster-version-routing/)
for a complete example.
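
For instance, assuming the endpoints carry a `version` label as described above, a destination
rule along these lines could carve `httpbin.bar.global` into subsets (the subset names, label
values, and the `ISTIO_MUTUAL` TLS setting are illustrative):

{{< text bash >}}
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar-global
spec:
  host: httpbin.bar.global
  trafficPolicy:
    tls:
      # keep mutual TLS for the cross-cluster connection
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF
{{< /text >}}

A virtual service can then route to these subsets exactly as it would for a local service.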

## Uninstalling

Uninstall Istio by running the following commands on **every cluster**:

{{< text bash >}}
$ istioctl manifest generate \
    -f install/kubernetes/operator/examples/multicluster/values-istio-multicluster-gateways.yaml \
    | kubectl delete -f -
$ kubectl delete ns istio-system
{{< /text >}}

## Summary

Using Istio gateways, a common root CA, and service entries, you can configure
a single Istio service mesh across multiple Kubernetes clusters.
Once configured this way, traffic can be transparently routed to remote clusters
without any application involvement.
Although this approach requires a certain amount of manual configuration for
remote service access, the service entry creation process could be automated.