diff --git a/content/zh/docs/ops/prep/deployment-models/cluster-iso.svg b/content/zh/docs/ops/prep/deployment-models/cluster-iso.svg
new file mode 100644
index 0000000000..8aff13f1c5
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/cluster-iso.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/cluster-ns.svg b/content/zh/docs/ops/prep/deployment-models/cluster-ns.svg
new file mode 100644
index 0000000000..5867845764
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/cluster-ns.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/exp-ns.svg b/content/zh/docs/ops/prep/deployment-models/exp-ns.svg
new file mode 100644
index 0000000000..21021f0e75
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/exp-ns.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/failover.svg b/content/zh/docs/ops/prep/deployment-models/failover.svg
new file mode 100644
index 0000000000..242ebe6931
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/failover.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/index.md b/content/zh/docs/ops/prep/deployment-models/index.md
new file mode 100644
index 0000000000..aff1ec987d
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/index.md
@@ -0,0 +1,397 @@
+---
+title: Deployment Models
+description: Describes the system models that impact your overall Istio deployment.
+weight: 1
+keywords:
+- single-cluster
+- multiple-clusters
+- control-plane
+- tenancy
+- networks
+- identity
+- trust
+- single-mesh
+- multiple-meshes
+aliases:
+- /zh/docs/concepts/multicluster-deployments/
+- /zh/docs/concepts/deployment-models
+---
+
+Several important system models impact your overall Istio deployment. This
+page discusses the options for each of these models and describes how you can
+configure Istio to address them.
+
+## Cluster models
+
+The workload instances of your application run in one or more
+{{< gloss "cluster" >}}clusters{{< /gloss >}}. For isolation, performance, and
+high availability, you can confine clusters to availability zones and regions.
+
+Production systems, depending on their requirements, can run across multiple
+clusters spanning a number of zones or regions, leveraging cloud load balancers
+to handle things like locality and zonal or regional failover.
+
+In most cases, clusters represent boundaries for configuration and endpoint
+discovery. For example, each Kubernetes cluster has an API Server which manages
+the configuration for the cluster as well as serving
+{{< gloss >}}service endpoint{{< /gloss >}} information as pods are brought up
+or down. Since Kubernetes configures this behavior on a per-cluster basis, this
+approach helps limit the potential problems caused by incorrect configurations.
+
+In Istio, you can configure a single service mesh to span any number of
+clusters.
+
+### Single cluster
+
+In the simplest case, you can confine an Istio mesh to a single
+{{< gloss >}}cluster{{< /gloss >}}. A cluster usually operates over a
+[single network](#single-network), but this varies between infrastructure
+providers. A single cluster and single network model includes a control plane,
+which results in the simplest Istio deployment.
+
+{{< image width="50%"
+ link="single-cluster.svg"
+ alt="A service mesh with a single cluster"
+ title="Single cluster"
+ caption="A service mesh with a single cluster"
+ >}}
+
+Single cluster deployments offer simplicity, but lack features such as fault
+isolation and failover. If you need higher availability, you should use
+multiple clusters.
+
+### Multiple clusters
+
+You can configure a single mesh to include
+multiple {{< gloss "cluster" >}}clusters{{< /gloss >}}. Using a
+{{< gloss >}}multicluster{{< /gloss >}} deployment within a single mesh affords
+the following capabilities beyond those of a single cluster deployment:
+
+- Fault isolation and failover: if `cluster-1` goes down, fail over to `cluster-2`.
+- Location-aware routing and failover: Send requests to the nearest service.
+- Various [control plane models](#control-plane-models): Support different
+ levels of availability.
+- Team or project isolation: Each team runs its own set of clusters.
+
+{{< image width="75%"
+ link="multi-cluster.svg"
+ alt="A service mesh with multiple clusters"
+ title="Multicluster"
+ caption="A service mesh with multiple clusters"
+ >}}
+
+Multicluster deployments give you a greater degree of isolation and
+availability but increase complexity. If your systems have high availability
+requirements, you likely need clusters across multiple zones and regions. You
+can canary configuration changes or new binary releases in a single cluster,
+where the configuration changes only affect a small amount of user traffic.
+Additionally, if a cluster has a problem, you can temporarily route traffic to
+nearby clusters until you address the issue.
+
+You can configure inter-cluster communication based on the
+[network](#network-models) and the options supported by your cloud provider. For
+example, if two clusters reside on the same underlying network, you can enable
+cross-cluster communication by simply configuring firewall rules.
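+
+For example, on Google Cloud such a rule might look like the following sketch,
+where the network name and source ranges are hypothetical placeholders for
+your clusters' pod CIDRs:
+
+{{< text bash >}}
+$ # Allow pods in both clusters to reach each other directly over the shared network.
+$ gcloud compute firewall-rules create istio-multicluster-pods \
+    --network=my-shared-network \
+    --direction=INGRESS \
+    --allow=tcp,udp,icmp,esp,ah,sctp \
+    --source-ranges="10.60.0.0/14,10.64.0.0/14"
+{{< /text >}}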
+
+## Network models
+
+Many production systems require multiple networks or subnets for isolation
+and high availability. Istio supports spanning a service mesh over a variety of
+network topologies. This approach allows you to select the network model that
+fits your existing network topology.
+
+### Single network
+
+In the simplest case, a service mesh operates over a single fully connected
+network. In a single network model, all
+{{< gloss "workload instance" >}}workload instances{{< /gloss >}}
+can reach each other directly without an Istio gateway.
+
+A single network allows Istio to configure service consumers in a uniform
+way across the mesh with the ability to directly address workload instances.
+
+{{< image width="50%"
+ link="single-net.svg"
+ alt="A service mesh with a single network"
+ title="Single network"
+ caption="A service mesh with a single network"
+ >}}
+
+### Multiple networks
+
+You can span a single service mesh across multiple networks; such a
+configuration is known as **multi-network**.
+
+Multiple networks afford the following capabilities beyond those of a single network:
+
+- Overlapping IP or VIP ranges for **service endpoints**
+- Crossing of administrative boundaries
+- Fault tolerance
+- Scaling of network addresses
+- Compliance with standards that require network segmentation
+
+In this model, the workload instances in different networks can only reach each
+other through one or more [Istio gateways](/docs/concepts/traffic-management/#gateways).
+Istio uses **partitioned service discovery** to provide consumers a different
+view of {{< gloss >}}service endpoint{{< /gloss >}}s. The view depends on the
+network of the consumers.
+
+{{< image width="50%"
+ link="multi-net.svg"
+ alt="A service mesh with multiple networks"
+ title="Multi-network deployment"
+ caption="A service mesh with multiple networks"
+ >}}
+
+## Control plane models
+
+An Istio mesh uses the {{< gloss >}}control plane{{< /gloss >}} to configure all
+communication between workload instances within the mesh. You can replicate the
+control plane, and workload instances connect to any control plane instance to
+get their configuration.
+
+In the simplest case, you can run your mesh with a control plane on a single
+cluster.
+
+{{< image width="50%"
+ link="single-cluster.svg"
+ alt="A service mesh with a control plane"
+ title="Single control plane"
+ caption="A service mesh with a control plane"
+ >}}
+
+Multicluster deployments can also share control plane instances. In this case,
+the control plane instances can reside in one or more clusters.
+
+{{< image width="75%"
+ link="shared-control.svg"
+ alt="A service mesh with two clusters sharing a control plane"
+ title="Shared control plane"
+ caption="A service mesh with two clusters sharing a control plane"
+ >}}
+
+For high availability, you should deploy a control plane across multiple
+clusters, zones, or regions.
+
+{{< image width="75%"
+ link="multi-control.svg"
+ alt="A service mesh with control plane instances for each region"
+ title="Multiple control planes"
+ caption="A service mesh with control plane instances for each region"
+ >}}
+
+This model affords the following benefits:
+
+- Improved availability: If a control plane becomes unavailable, the scope of
+ the outage is limited to only that control plane.
+
+- Configuration isolation: You can make configuration changes in one cluster,
+ zone, or region without impacting others.
+
+You can improve control plane availability through failover. When a control
+plane instance becomes unavailable, workload instances can connect to
+another available control plane instance. Failover can happen across clusters,
+zones, or regions.
+
+{{< image width="50%"
+ link="failover.svg"
+ alt="A service mesh after a control plane instance fails"
+ title="Control plane fail over"
+ caption="A service mesh after a control plane instance fails"
+ >}}
+
+The following list ranks control plane deployment examples by availability:
+
+- One cluster per region (**lowest availability**)
+- Multiple clusters per region
+- One cluster per zone
+- Multiple clusters per zone
+- Each cluster (**highest availability**)
+
+## Identity and trust models
+
+When a workload instance is created within a service mesh, Istio assigns the
+workload an {{< gloss >}}identity{{< /gloss >}}.
+
+The Certificate Authority (CA) creates and signs the certificates used to verify
+the identities within the mesh. You can verify the identity of the message sender
+with the public key of the CA that created and signed the certificate
+for that identity. A **trust bundle** is the set of all CA public keys used by
+an Istio mesh. With a mesh's trust bundle, anyone can verify the sender of any
+message coming from that mesh.
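+
+For example, given a mesh's root certificate in a hypothetical `root-cert.pem`
+file, anyone holding that trust bundle can verify a certificate chain issued
+within the mesh:
+
+{{< text bash >}}
+$ # Verify a workload certificate chain against the mesh's trust bundle.
+$ openssl verify -CAfile root-cert.pem cert-chain.pem
+cert-chain.pem: OK
+{{< /text >}}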
+
+### Trust within a mesh
+
+Within a single Istio mesh, Istio ensures each workload instance has an
+appropriate certificate representing its own identity, and the trust bundle
+necessary to recognize all identities within the mesh and any federated meshes.
+The CA only creates and signs the certificates for those identities. This model
+allows workload instances in the mesh to authenticate each other when
+communicating.
+
+{{< image width="50%"
+ link="single-trust.svg"
+ alt="A service mesh with a certificate authority"
+ title="Trust within a mesh"
+ caption="A service mesh with a certificate authority"
+ >}}
+
+### Trust between meshes
+
+If a service in one mesh requires a service in another mesh, you must federate identity
+and trust between the two meshes. To federate identity and trust, you must
+exchange the trust bundles of the meshes. You can exchange the trust bundles
+either manually or automatically using a protocol such as [SPIFFE Trust Domain Federation](https://docs.google.com/document/d/1OC9nI2W04oghhbEDJpKdIUIw-G23YzWeHZxwGLIkB8k/edit).
+Once you import a trust bundle to a mesh, you can configure local policies for
+those identities.
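+
+A manual exchange can be as simple as distributing a combined root certificate
+to each mesh's CA. The following sketch uses hypothetical file names and the
+same `cacerts` secret layout used when plugging in an existing CA:
+
+{{< text bash >}}
+$ # Build a trust bundle containing both meshes' root certificates.
+$ cat local-root-cert.pem remote-root-cert.pem > combined-root-cert.pem
+$ kubectl create secret generic cacerts -n istio-system \
+    --from-file=ca-cert.pem --from-file=ca-key.pem \
+    --from-file=root-cert.pem=combined-root-cert.pem \
+    --from-file=cert-chain.pem
+{{< /text >}}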
+
+{{< image width="50%"
+ link="multi-trust.svg"
+ alt="Multiple service meshes with certificate authorities"
+ title="Trust between meshes"
+ caption="Multiple service meshes with certificate authorities"
+ >}}
+
+## Mesh models
+
+Istio supports having all of your services in a
+{{< gloss "service mesh" >}}mesh{{< /gloss >}}, or federating multiple meshes
+together, which is also known as {{< gloss >}}multi-mesh{{< /gloss >}}.
+
+### Single mesh
+
+The simplest Istio deployment is a single mesh. Within a mesh, service names are
+unique. For example, only one service can have the name `mysvc` in the `foo`
+namespace. Additionally, workload instances share a common identity since
+service account names are unique within a namespace, just like service names.
+
+A single mesh can span [one or more clusters](#cluster-models) and
+[one or more networks](#network-models). Within a mesh,
+[namespaces](#namespace-tenancy) are used for [tenancy](#tenancy-models).
+
+### Multiple meshes
+
+Multiple mesh deployments result from {{< gloss >}}mesh federation{{< /gloss >}}.
+
+Multiple meshes afford the following capabilities beyond that of a single mesh:
+
+- Organizational boundaries: lines of business
+- Service name or namespace reuse: multiple distinct uses of the `default`
+ namespace
+- Stronger isolation: isolating test workloads from production workloads
+
+You can enable inter-mesh communication with
+{{< gloss >}}mesh federation{{< /gloss >}}. When federating, each mesh can
+expose a set of services and identities, which all participating meshes can
+recognize.
+
+{{< image width="50%"
+ link="multi-mesh.svg"
+ alt="Multiple service meshes"
+ title="Multi-mesh"
+ caption="Multiple service meshes"
+ >}}
+
+To avoid service naming collisions, you can give each mesh a globally unique
+**mesh ID** to ensure that the fully qualified domain
+name (FQDN) for each service is distinct.
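+
+For example, assuming your installation supports the `values.global.meshID`
+setting, a sketch of assigning the hypothetical ID `mesh1` at install time:
+
+{{< text bash >}}
+$ # Give this mesh a globally unique identifier so service FQDNs stay distinct.
+$ istioctl manifest apply --set values.global.meshID=mesh1
+{{< /text >}}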
+
+When federating two meshes that do not share the same
+{{< gloss >}}trust domain{{< /gloss >}}, you must
+{{< gloss "mesh federation" >}}federate{{< /gloss >}}
+{{< gloss >}}identity{{< /gloss >}} and **trust bundles** between them. See the
+section on [trust between meshes](#trust-between-meshes) for an overview.
+
+## Tenancy models
+
+In Istio, a **tenant** is a group of users that share
+common access and privileges to a set of deployed workloads. Generally, you
+isolate the workload instances of multiple tenants from each other through
+network configuration and policies.
+
+You can configure tenancy models to satisfy the following organizational
+requirements for isolation:
+
+- Security
+- Policy
+- Capacity
+- Cost
+- Performance
+
+Istio supports two types of tenancy models:
+
+- [Namespace tenancy](#namespace-tenancy)
+- [Cluster tenancy](#cluster-tenancy)
+
+### Namespace tenancy
+
+Istio uses [namespaces](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-namespace)
+as a unit of tenancy within a mesh. Istio also works in environments that don't
+implement namespace tenancy. In environments that do, you can grant a team
+permission to deploy their workloads only to a given namespace or set of
+namespaces. By default, services from multiple tenant namespaces can communicate
+with each other.
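+
+For example, on Kubernetes you might grant a team deployment rights to its
+namespace with a role binding similar to this sketch, where `team-a` and
+`team-a-ns` are hypothetical names:
+
+{{< text bash >}}
+$ # Allow the team-a group to manage workloads only in its own namespace.
+$ kubectl create rolebinding team-a-edit --clusterrole=edit --group=team-a --namespace=team-a-ns
+{{< /text >}}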
+
+{{< image width="50%"
+ link="iso-ns.svg"
+ alt="A service mesh with two isolated namespaces"
+ title="Isolated namespaces"
+ caption="A service mesh with two isolated namespaces"
+ >}}
+
+To improve isolation, you can selectively choose which services to expose to
+other namespaces. You can configure authorization policies for exposed services
+to restrict access to only the appropriate callers.
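+
+A minimal sketch of such a policy, assuming a `mysvc` workload in the `foo`
+namespace that should only accept calls from a hypothetical
+`cluster.local/ns/bar/sa/client` service account identity:
+
+{{< text bash >}}
+$ kubectl apply -f - <<EOF
+apiVersion: security.istio.io/v1beta1
+kind: AuthorizationPolicy
+metadata:
+  name: mysvc-callers
+  namespace: foo
+spec:
+  selector:
+    matchLabels:
+      app: mysvc
+  rules:
+  - from:
+    - source:
+        principals: ["cluster.local/ns/bar/sa/client"]
+EOF
+{{< /text >}}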
+
+{{< image width="50%"
+ link="exp-ns.svg"
+ alt="A service mesh with two namespaces and an exposed service"
+ title="Namespaces with an exposed service"
+ caption="A service mesh with two namespaces and an exposed service"
+ >}}
+
+When using [multiple clusters](#multiple-clusters), namespaces with the same
+name in each cluster are considered the same namespace. For example,
+`Service B` in the `foo` namespace of `cluster-1` and `Service B` in the
+`foo` namespace of `cluster-2` refer to the same service, and Istio merges their
+endpoints for service discovery and load balancing.
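+
+One way to observe this merging, assuming your `istioctl` version provides the
+`proxy-config endpoints` command, is to list the endpoints a sidecar sees for a
+hypothetical `svc-b.foo` service from any pod in the mesh:
+
+{{< text bash >}}
+$ # Endpoints from both clusters appear under the same service FQDN.
+$ istioctl proxy-config endpoints mysvc-pod.foo | grep svc-b.foo.svc.cluster.local
+{{< /text >}}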
+
+{{< image width="50%"
+ link="cluster-ns.svg"
+ alt="A service mesh with two clusters with the same namespace"
+ title="Multicluster namespaces"
+ caption="A service mesh with clusters with the same namespace"
+ >}}
+
+### Cluster tenancy
+
+Istio supports using clusters as a unit of tenancy. In this case, you can give
+each team a dedicated cluster or set of clusters to deploy their
+workloads. Permissions for a cluster are usually limited to the members of the
+team that owns it. You can set various roles for finer-grained control, for
+example:
+
+- Cluster administrator
+- Developer
+
+To use cluster tenancy with Istio, you configure each cluster as an independent
+mesh. Alternatively, you can use Istio to implement a group of clusters as a
+single tenant. Then, each team can own one or more clusters, but you configure
+all their clusters as a single mesh. To connect the meshes of the various teams
+together, you can federate the meshes into a multi-mesh deployment.
+
+{{< image width="50%"
+ link="cluster-iso.svg"
+ alt="Two isolated service meshes with two clusters and two namespaces"
+ title="Cluster isolation"
+ caption="Two isolated service meshes with two clusters and two namespaces"
+ >}}
+
+Since a different team or organization operates each mesh, service naming
+is rarely distinct. For example, the `mysvc` service in the `foo` namespace of
+`cluster-1` and the `mysvc` service in the `foo` namespace of
+`cluster-2` do not refer to the same service. The most common example is the
+scenario in Kubernetes where many teams deploy their workloads to the `default`
+namespace.
+
+When each team has their own mesh, cross-mesh communication follows the
+concepts described in the [multiple meshes](#multiple-meshes) model.
diff --git a/content/zh/docs/ops/prep/deployment-models/iso-ns.svg b/content/zh/docs/ops/prep/deployment-models/iso-ns.svg
new file mode 100644
index 0000000000..b404bd3201
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/iso-ns.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/multi-cluster.svg b/content/zh/docs/ops/prep/deployment-models/multi-cluster.svg
new file mode 100644
index 0000000000..2b7763a6a5
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/multi-cluster.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/multi-control.svg b/content/zh/docs/ops/prep/deployment-models/multi-control.svg
new file mode 100644
index 0000000000..6a6158d9e7
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/multi-control.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/multi-mesh.svg b/content/zh/docs/ops/prep/deployment-models/multi-mesh.svg
new file mode 100644
index 0000000000..e14698b912
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/multi-mesh.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/multi-net.svg b/content/zh/docs/ops/prep/deployment-models/multi-net.svg
new file mode 100644
index 0000000000..5ac474fa83
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/multi-net.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/multi-trust.svg b/content/zh/docs/ops/prep/deployment-models/multi-trust.svg
new file mode 100644
index 0000000000..6abfaaa4dd
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/multi-trust.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/shared-control.svg b/content/zh/docs/ops/prep/deployment-models/shared-control.svg
new file mode 100644
index 0000000000..fbabd61cb2
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/shared-control.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/single-cluster.svg b/content/zh/docs/ops/prep/deployment-models/single-cluster.svg
new file mode 100644
index 0000000000..f5c4ed62d4
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/single-cluster.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/single-net.svg b/content/zh/docs/ops/prep/deployment-models/single-net.svg
new file mode 100644
index 0000000000..81fea299c4
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/single-net.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/ops/prep/deployment-models/single-trust.svg b/content/zh/docs/ops/prep/deployment-models/single-trust.svg
new file mode 100644
index 0000000000..04b3cd8de0
--- /dev/null
+++ b/content/zh/docs/ops/prep/deployment-models/single-trust.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/zh/docs/setup/install/multicluster/gateways/index.md b/content/zh/docs/setup/install/multicluster/gateways/index.md
index 39d5f111cb..b193889759 100644
--- a/content/zh/docs/setup/install/multicluster/gateways/index.md
+++ b/content/zh/docs/setup/install/multicluster/gateways/index.md
@@ -56,7 +56,7 @@ Istio [multicluster deployment](/zh/docs/setup/deployment-models/#multiple-clusters),
{{< /tip >}}
- * Use a command similar to the following to create Kubernetes secrets for the generated CA certificates. For details, see [CA certificates](/zh/docs/tasks/security/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
+ * Use a command similar to the following to create Kubernetes secrets for the generated CA certificates. For details, see [CA certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
{{< warning >}}
 The root and intermediate certificates in the samples directory are widely distributed and known.
diff --git a/content/zh/docs/setup/install/multicluster/shared-vpn/index.md b/content/zh/docs/setup/install/multicluster/shared-vpn/index.md
index ad40069fd8..2b100f5613 100644
--- a/content/zh/docs/setup/install/multicluster/shared-vpn/index.md
+++ b/content/zh/docs/setup/install/multicluster/shared-vpn/index.md
@@ -1,6 +1,6 @@
---
-title: Shared control plane (single-network)
-description: Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.
+title: Shared control plane (single network)
+description: Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.
weight: 5
keywords: [kubernetes,multicluster,federation,vpn]
aliases:
@@ -9,96 +9,40 @@ aliases:
- /zh/docs/setup/kubernetes/install/multicluster/shared-vpn/
---
-Follow this guide to install an Istio [multicluster service mesh](/docs/ops/prep/deployment-models/#multiple-clusters)
-where the Kubernetes cluster services and the applications in each cluster
-have the capability to expose their internal Kubernetes network to other
-clusters.
+Follow this guide to install an Istio [multicluster service mesh](/zh/docs/ops/prep/deployment-models/#multiple-clusters) that lets the services and applications in each Kubernetes cluster expose their internal Kubernetes network to the other clusters.
-In this configuration, multiple Kubernetes clusters running
-a remote configuration connect to a shared Istio
-[control plane](/docs/ops/prep/deployment-models/#control-plane-models).
-Once one or more remote Kubernetes clusters are connected to the
-Istio control plane, Envoy can then form a mesh network across multiple clusters.
+In this configuration, multiple Kubernetes clusters run a remote configuration that connects to a shared Istio [control plane](/zh/docs/ops/prep/deployment-models/#control-plane-models).
+Once one or more remote Kubernetes clusters connect to the Istio control plane, Envoy forms a mesh network across the clusters.
-{{< image width="80%" link="./multicluster-with-vpn.svg" caption="Istio mesh spanning multiple Kubernetes clusters with direct network access to remote pods over VPN" >}}
+{{< image width="80%" link="./multicluster-with-vpn.svg" caption="跨多 Kubernetes 集群的 Istio 网格可通过 VPN 直接访问远程 Pod" >}}
-## Prerequisites
+## Prerequisites {#prerequisites}
-* Two or more clusters running a supported Kubernetes version ({{< supported_kubernetes_versions >}}).
+* Two or more clusters running a supported Kubernetes version ({{< supported_kubernetes_versions >}}).
-* The ability to deploy the [Istio control plane](/docs/setup/getting-started/)
- on **one** of the clusters.
+* The ability to [deploy the Istio control plane](/zh/docs/setup/install/istioctl/) on **one** of the clusters.
-* A RFC1918 network, VPN, or an alternative more advanced network technique
- meeting the following requirements:
+* An RFC1918 network, VPN, or an alternative more advanced network technique meeting the following requirements:
- * Individual cluster Pod CIDR ranges and service CIDR ranges must be unique
-across the multicluster environment and may not overlap.
+ * Pod CIDR ranges and service CIDR ranges of the individual clusters must be unique across the multicluster environment and may not overlap.
- * All pod CIDRs in every cluster must be routable to each other.
+ * All pod CIDRs in every cluster must be routable to each other (a quick check is sketched after this list).
- * All Kubernetes control plane API servers must be routable to each other.
+ * All Kubernetes control plane API servers must be routable to each other.
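+
+A minimal sketch of such a routability check, assuming `kubectl` contexts named
+`cluster-1` and `cluster-2`, a hypothetical `mysvc-pod` in `cluster-2`, and a
+`test-pod` with `curl` in `cluster-1`:
+
+{{< text bash >}}
+$ # Get a pod IP in cluster-2, then reach it directly from a pod in cluster-1.
+$ REMOTE_POD_IP=$(kubectl --context=cluster-2 get pod mysvc-pod -o jsonpath='{.status.podIP}')
+$ kubectl --context=cluster-1 exec test-pod -- curl -sS "http://${REMOTE_POD_IP}:8080/"
+{{< /text >}}
+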
-* Helm **2.10 or newer**. The use of Tiller is optional.
+This guide describes how to install a multicluster Istio topology using the remote profile provided by Istio.
-This guide describes how to install a multicluster Istio topology using the
-manifests and Helm charts provided within the Istio repository.
+## Deploy the local control plane {#deploy-the-local-control-plane}
-## Deploy the local control plane
+[Install the Istio control plane](/zh/docs/setup/install/istioctl/) on **one** of the Kubernetes clusters.
-Install the [Istio control plane](/docs/setup/getting-started/)
-on **one** Kubernetes cluster.
+### Set environment variables {#environment-var}
-## Install the Istio remote
+Wait for the Istio control plane to finish initializing before following the steps in this section.
-You must deploy the `istio-remote` component to each remote Kubernetes
-cluster. You can install the component in one of two ways:
+You must run these operations on the Istio control plane cluster to capture the Istio control plane service endpoints, for example, the Pilot and Policy Pod IP endpoints.
-{{< tabset cookie-name="install-istio-remote" >}}
-
-{{< tab name="Helm+kubectl" cookie-value="Helm+kubectl" >}}
-
-1. Use the following command on the remote cluster to install
- the Istio control plane service endpoints:
-
- {{< text bash >}}
- $ istioctl manifest apply \
- --set profile=remote \
- --set values.global.remotePilotAddress=${PILOT_POD_IP} \
- --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
- --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP}
- {{< /text >}}
-
- {{< tip >}}
- All clusters must have the same namespace for the Istio
- components. It is possible to override the `istio-system` name on the main
- cluster as long as the namespace is the same for all Istio components in
- all clusters.
- {{< /tip >}}
-
-1. The following command example labels the `default` namespace. Use similar
- commands to label all the remote cluster's namespaces requiring automatic
- sidecar injection.
-
- {{< text bash >}}
- $ kubectl label namespace default istio-injection=enabled
- {{< /text >}}
-
- Repeat for all Kubernetes namespaces that need to setup automatic sidecar
- injection.
-
-{{< /tab >}}
-
-### Set environment variables {#environment-var}
-
-Wait for the Istio control plane to finish initializing before following the
-steps in this section.
-
-You must run these operations on the Istio control plane cluster to capture the
-Istio control plane service endpoints, for example, the Pilot and Policy Pod IP
-endpoints.
-
-Set the environment variables with the following commands:
+Set the environment variables with the following commands:
{{< text bash >}}
$ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
@@ -106,39 +50,65 @@ $ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=pol
$ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
{{< /text >}}
-Normally, automatic sidecar injection on the remote clusters is enabled. To
-perform a manual sidecar injection refer to the [manual sidecar example](#manual-sidecar)
+Normally, automatic sidecar injection on the remote clusters is enabled.
+To perform manual sidecar injection, refer to the [manual sidecar example](#manual-sidecar).
-### Installation configuration parameters
+## Install the Istio remote component {#install-the-Istio-remote}
-You must configure the remote cluster's sidecars interaction with the Istio
-control plane including the following endpoints in the `istio-remote` profile:
-`pilot`, `policy`, `telemetry` and tracing service. The profile
-enables automatic sidecar injection in the remote cluster by default. You can
-disable the automatic sidecar injection via a separate setting.
+You must deploy the `istio-remote` component to each remote Kubernetes cluster.
+Install the component with the following steps:
-The following table shows the `istioctl` configuration values for remote clusters:
+1. Use the following command on the remote cluster to install the Istio control plane service endpoints:
-| Install setting | Accepted Values | Default | Purpose of Value |
+ {{< text bash >}}
+ $ istioctl manifest apply \
+ --set profile=remote \
+ --set values.global.controlPlaneSecurityEnabled=false \
+ --set values.global.remotePilotCreateSvcEndpoint=true \
+ --set values.global.remotePilotAddress=${PILOT_POD_IP} \
+ --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
+ --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
+ --set gateways.enabled=false \
+ --set autoInjection.enabled=true
+ {{< /text >}}
+
+ {{< tip >}}
+ All clusters must use the same namespace for the Istio components.
+ It is possible to override the `istio-system` name on the main cluster as long as the namespace is the same for all Istio components in all clusters.
+ {{< /tip >}}
+
+1. The following example command labels the `default` namespace. Use similar commands to label all the namespaces of the remote cluster that require automatic sidecar injection.
+
+ {{< text bash >}}
+ $ kubectl label namespace default istio-injection=enabled
+ {{< /text >}}
+
+ Repeat for all Kubernetes namespaces that need automatic sidecar injection.
+
+### Installation configuration parameters {#installation-configuration-parameters}
+
+You must configure the remote cluster's sidecars to interact with the Istio control plane, including the following endpoints in the `istio-remote` profile: `pilot`, `policy`, `telemetry`, and the tracing service.
+The profile enables automatic sidecar injection in the remote cluster by default.
+You can disable the automatic sidecar injection via a separate setting.
+
+The following table shows the `istioctl` configuration values for remote clusters:
+
+| Install setting | Accepted values | Default | Purpose of value |
| --- | --- | --- | --- |
-| `values.global.remotePilotAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's pilot Pod IP address or remote cluster DNS resolvable hostname |
-| `values.global.remotePolicyAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's policy Pod IP address or remote cluster DNS resolvable hostname |
-| `values.global.remoteTelemetryAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's telemetry Pod IP address or remote cluster DNS resolvable hostname |
-| `values.sidecarInjectorWebhook.enabled` | true, false | true | Specifies whether to enable automatic sidecar injection on the remote cluster |
-| `values.global.remotePilotCreateSvcEndpoint` | true, false | false | If set, a selector-less service and endpoint for `istio-pilot` are created with the `remotePilotAddress` IP, which ensures the `istio-pilot.` is DNS resolvable in the remote cluster. |
+| `values.global.remotePilotAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's Pilot Pod IP address or a hostname resolvable by DNS in the remote cluster |
+| `values.global.remotePolicyAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's Policy Pod IP address or a hostname resolvable by DNS in the remote cluster |
+| `values.global.remoteTelemetryAddress` | A valid IP address or hostname | None | Specifies the Istio control plane's Telemetry Pod IP address or a hostname resolvable by DNS in the remote cluster |
+| `values.sidecarInjectorWebhook.enabled` | true, false | true | Specifies whether to enable automatic sidecar injection on the remote cluster |
+| `values.global.remotePilotCreateSvcEndpoint` | true, false | false | If set, a selector-less service and endpoint for `istio-pilot` are created with the `remotePilotAddress` IP, which ensures that `istio-pilot.<namespace>` is DNS resolvable in the remote cluster. |
-## Generate configuration files for remote clusters {#kubeconfig}
+## Generate configuration files for remote clusters {#kubeconfig}
-The Istio control plane requires access to all clusters in the mesh to
-discover services, endpoints, and pod attributes. The following steps
-describe how to generate a `kubeconfig` configuration file for the Istio control plane to use a remote cluster.
+The Istio control plane requires access to all clusters in the mesh to discover services, endpoints, and pod attributes.
+The following steps describe how to generate a `kubeconfig` configuration file for the Istio control plane to use a remote cluster.
-Perform this procedure on each remote cluster to add the cluster to the service
-mesh. This procedure requires the `cluster-admin` user access permission to
-the remote cluster.
+Perform this procedure on each remote cluster to add the cluster to the service mesh. This procedure requires `cluster-admin` user access to the remote cluster.
-1. Set the environment variables needed to build the `kubeconfig` file for the
- `istio-multi` service account with the following commands:
+1. Set the environment variables needed to build the `kubeconfig` file for the `istio-reader-service-account` service account with the following commands:
{{< text bash >}}
$ export WORK_DIR=$(pwd)
@@ -146,18 +116,17 @@ the remote cluster.
$ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
$ SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
$ NAMESPACE=istio-system
- $ SERVICE_ACCOUNT=istio-multi
+ $ SERVICE_ACCOUNT=istio-reader-service-account
$ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
$ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['ca\.crt']}")
$ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['token']}" | base64 --decode)
{{< /text >}}
{{< tip >}}
- An alternative to `base64 --decode` is `openssl enc -d -base64 -A` on many systems.
+ On many systems, `openssl enc -d -base64 -A` is an alternative to `base64 --decode`.
{{< /tip >}}
-1. Create a `kubeconfig` file in the working directory for the
- `istio-multi` service account with the following command:
+1. Create a `kubeconfig` file in the working directory for the `istio-reader-service-account` service account with the following command:
{{< text bash >}}
 $ cat <<EOF > ${KUBECFG_FILE}
@@ -182,7 +151,7 @@ the remote cluster.
EOF
{{< /text >}}
-1. _(Optional)_ Create file with environment variables to create the remote cluster's secret:
+1. _(可选)_ 创建环境变量文件以创建远程集群的 secret:
{{< text bash >}}
 $ cat <<EOF > remote_cluster_env_vars
@@ -192,34 +161,29 @@ the remote cluster.
EOF
{{< /text >}}
-At this point, you created the remote clusters' `kubeconfig` files in the
-current directory. The filename of the `kubeconfig` file is the same as the
-original cluster name.
+At this point, you have created the remote cluster's `kubeconfig` file in the current directory.
+The filename of the `kubeconfig` file is the same as the original cluster name.
-## Instantiate the credentials {#credentials}
+## Instantiate the credentials {#credentials}
-Perform this procedure on the cluster running the Istio control plane. This
-procedure uses the `WORK_DIR`, `CLUSTER_NAME`, and `NAMESPACE` environment
-values set and the file created for the remote cluster's secret from the
-[previous section](#kubeconfig).
+Perform this procedure on the cluster running the Istio control plane.
+This procedure uses the `WORK_DIR`, `CLUSTER_NAME`, and `NAMESPACE` environment variables set in the [previous section](#kubeconfig), as well as the file created for the remote cluster's secret.
-If you created the environment variables file for the remote cluster's
-secret, source the file with the following command:
+If you created the environment variable file for the remote cluster's secret, source the file with the following command:
{{< text bash >}}
$ source remote_cluster_env_vars
{{< /text >}}
-You can install Istio in a different namespace. This procedure uses the
-`istio-system` namespace.
+You can install Istio in a different namespace.
+This procedure uses the `istio-system` namespace.
{{< warning >}}
-Do not store and label the secrets for the local cluster
-running the Istio control plane. Istio is always aware of the local cluster's
-Kubernetes credentials.
+Do not store and label the secrets for the local cluster running the Istio control plane.
+Istio is always aware of the local cluster's Kubernetes credentials.
{{< /warning >}}
-Create a secret and label it properly for each remote cluster:
+Create a secret and label it properly for each remote cluster:
{{< text bash >}}
$ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
@@ -227,252 +191,217 @@ $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
{{< /text >}}
{{< warning >}}
-The Kubernetes secret data keys must conform with the
-`DNS-1123 subdomain` [format](https://tools.ietf.org/html/rfc1123#page-13). For
-example, the filename can't have underscores. Resolve any issue with the
-filename simply by changing the filename to conform with the format.
+The Kubernetes secret data keys must conform to the `DNS-1123 subdomain` [format](https://tools.ietf.org/html/rfc1123#page-13).
+For example, the filename can't have underscores.
+Resolve any issue with the filename simply by changing it to conform to the format.
{{< /warning >}}
-## Uninstalling the remote cluster
+## Uninstall the remote cluster {#uninstalling-the-remote-cluster}
-To uninstall the cluster run the following command:
+Run the following command to uninstall the remote cluster:
{{< text bash >}}
- $ istioctl manifest apply \
+ $ istioctl manifest generate \
--set profile=remote \
+ --set values.global.controlPlaneSecurityEnabled=false \
+ --set values.global.remotePilotCreateSvcEndpoint=true \
--set values.global.remotePilotAddress=${PILOT_POD_IP} \
--set values.global.remotePolicyAddress=${POLICY_POD_IP} \
- --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} | kubectl delete -f -
+ --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
+ --set gateways.enabled=false \
+ --set autoInjection.enabled=true | kubectl delete -f -
{{< /text >}}
-## Manual sidecar injection example {#manual-sidecar}
+## Manual sidecar injection example {#manual-sidecar}
-The following example shows how to use the `helm template` command to generate
-the manifest for a remote cluster with the automatic sidecar injection
-disabled. Additionally, the example shows how to use the `configmaps` of the
-remote cluster with the [`istioctl kube-inject`](/docs/reference/commands/istioctl/#istioctl-kube-inject) command to generate any
-application manifests for the remote cluster.
+The following example shows how to use the `istioctl manifest` command to generate the manifest for a remote cluster with automatic sidecar injection disabled.
+Additionally, the example shows how to use the remote cluster's `configmaps` with the [`istioctl kube-inject`](/zh/docs/reference/commands/istioctl/#istioctl-kube-inject) command to generate any application manifests for the remote cluster.
-Perform the following procedure against the remote cluster.
+Perform the following procedure against the remote cluster.
-Before you begin, set the endpoint IP environment variables as described in the
-[set the environment variables section](#environment-var)
+Before you begin, set the endpoint IP environment variables as described in the [set the environment variables section](#environment-var).
-1. Install the Istio remote profile:
+1. Install the Istio remote profile:
{{< text bash >}}
$ istioctl manifest apply \
--set profile=remote \
+ --set values.global.controlPlaneSecurityEnabled=false \
+ --set values.global.remotePilotCreateSvcEndpoint=true \
--set values.global.remotePilotAddress=${PILOT_POD_IP} \
--set values.global.remotePolicyAddress=${POLICY_POD_IP} \
--set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
- --set values.sidecarInjectorWebhook.enabled=false
+ --set gateways.enabled=false \
+ --set autoInjection.enabled=false
{{< /text >}}
-1. [Generate](#kubeconfig) the `kubeconfig` configuration file for each remote
- cluster.
+1. [Generate](#kubeconfig) the `kubeconfig` configuration file for each remote cluster.
-1. [Instantiate the credentials](#credentials) for each remote cluster.
+1. [Instantiate the credentials](#credentials) for each remote cluster.
-### Manually inject the sidecars into the application manifests
+### Manually inject the sidecars into the application manifests {#manually-inject-the-sidecars-into-the-application-manifests}
-The following example `istioctl` command injects the sidecars into the
-application manifests. Run the following commands in a shell with the
-`kubeconfig` context set up for the remote cluster.
+The following example `istioctl` command injects the sidecars into the application manifests.
+Run the following commands in a shell with the `kubeconfig` context set up for the remote cluster.
{{< text bash >}}
$ ORIGINAL_SVC_MANIFEST=mysvc-v1.yaml
$ istioctl kube-inject --injectConfigMapName istio-sidecar-injector --meshConfigMapName istio -f ${ORIGINAL_SVC_MANIFEST} | kubectl apply -f -
{{< /text >}}
-## Access services from different clusters
+## Access services from different clusters {#access-services-from-different-clusters}
-Kubernetes resolves DNS on a cluster basis. Because the DNS resolution is tied
-to the cluster, you must define the service object in every cluster where a
-client runs, regardless of the location of the service's endpoints. To ensure
-this is the case, duplicate the service object to every cluster using
-`kubectl`. Duplication ensures Kubernetes can resolve the service name in any
-cluster. Since the service objects are defined in a namespace, you must define
-the namespace if it doesn't exist, and include it in the service definitions in
-all clusters.
+Kubernetes resolves DNS on a cluster basis.
+Because the DNS resolution is tied to the cluster, you must define the service object in every cluster where a client runs, regardless of the location of the service's endpoints.
+To ensure this is the case, duplicate the service object to every cluster using `kubectl`.
+Duplication ensures Kubernetes can resolve the service name in any cluster.
+Since the service objects are defined in a namespace, you must define the namespace if it doesn't exist, and include it in the service definitions in all clusters.
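+
+A sketch of such duplication, assuming `kubectl` contexts named `cluster-1` and
+`cluster-2` and a hypothetical `mysvc.yaml` manifest that defines the service:
+
+{{< text bash >}}
+$ # Define the namespace and service object in every cluster where clients run.
+$ kubectl --context=cluster-2 create namespace foo
+$ kubectl --context=cluster-2 apply -n foo -f mysvc.yaml
+{{< /text >}}
+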
-## Deployment considerations
+## Deployment considerations {#deployment-considerations}
-The previous procedures provide a simple and step-by-step guide to deploy a
-multicluster environment. A production environment might require additional
-steps or more complex deployment options. The procedures gather the endpoint
-IPs of the Istio services and use them to invoke Helm. This process creates
-Istio services on the remote clusters. As part of creating those services and
-endpoints in the remote cluster, Kubernetes adds DNS entries to the `kube-dns`
-configuration object.
+The previous procedures provide a simple and step-by-step guide to deploy a multicluster environment.
+A production environment might require additional steps or more complex deployment options.
+The procedures gather the endpoint IPs of the Istio services and use them to invoke `istioctl`.
+This process creates Istio services on the remote clusters.
+As part of creating those services and endpoints in the remote cluster, Kubernetes adds DNS entries to the `kube-dns` configuration object.
-This allows the `kube-dns` configuration object in the remote clusters to
-resolve the Istio service names for all Envoy sidecars in those remote
-clusters. Since Kubernetes pods don't have stable IPs, restart of any Istio
-service pod in the control plane cluster causes its endpoint to change.
-Therefore, any connection made from remote clusters to that endpoint are
-broken. This behavior is documented in [Istio issue #4822](https://github.com/istio/istio/issues/4822)
+This allows the `kube-dns` configuration object in the remote clusters to resolve the Istio service names for all Envoy sidecars in those remote clusters.
+Since Kubernetes pods don't have stable IPs, a restart of any Istio service pod in the control plane cluster causes its endpoint to change.
+Therefore, any connection made from the remote clusters to that endpoint is broken.
+This behavior is documented in [Istio issue #4822](https://github.com/istio/istio/issues/4822).
-To either avoid or resolve this scenario several options are available. This
-section provides a high level overview of these options:
+Several options are available to either avoid or resolve this scenario. This section provides a high-level overview of these options:
-* Update the DNS entries
-* Use a load balancer service type
-* Expose the Istio services via a gateway
+* Update the DNS entries
+* Use a load balancer service type
+* Expose the Istio services via a gateway
-### Update the DNS entries
+### Update the DNS entries {#update-the-DNS-entries}
-Upon any failure or restart of the local Istio control plane, `kube-dns` on the remote clusters must be
-updated with the correct endpoint mappings for the Istio services. There
-are a number of ways this can be done. The most obvious is to rerun the Helm
-install in the remote cluster after the Istio services on the control plane
-cluster have restarted.
+Upon any failure or restart of the local Istio control plane, `kube-dns` on the remote clusters must be updated with the correct endpoint mappings for the Istio services.
+There are a number of ways to do this.
+The most obvious is to rerun the `istioctl` command in the remote cluster after the Istio services on the control plane cluster have restarted.
-### Use load balance service type
+### Use a load balancer service type {#use-load-balance-service-type}
-In Kubernetes, you can declare a service with a service type of `LoadBalancer`.
-See the Kubernetes documentation on [service types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)
-for more information.
+In Kubernetes, you can declare a service with a service type of `LoadBalancer`.
+See the Kubernetes documentation on [service types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
-A simple solution to the pod restart issue is to use load balancers for the
-Istio services. Then, you can use the load balancers' IPs as the Istio
-services' endpoint IPs to configure the remote clusters. You may need load
-balancer IPs for these Istio services:
+A simple solution to the pod restart issue is to use load balancers for the Istio services.
+Then, you can use the load balancers' IPs as the Istio services' endpoint IPs when you configure the remote clusters.
+You may need load balancer IPs for these Istio services:
* `istio-pilot`
* `istio-telemetry`
* `istio-policy`
-Currently, the Istio installation doesn't provide an option to specify service
-types for the Istio services. You can manually specify the service types in the
-Istio Helm charts or the Istio manifests.
+Currently, the Istio installation doesn't provide an option to specify service types for the Istio services.
+You can manually specify the service types in the Istio manifests.
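+
+For example, a minimal sketch that switches one of these services to a load
+balancer after installation; repeat for the other services:
+
+{{< text bash >}}
+$ # Expose istio-pilot through a cloud load balancer and read back its external IP.
+$ kubectl -n istio-system patch svc istio-pilot -p '{"spec": {"type": "LoadBalancer"}}'
+$ kubectl -n istio-system get svc istio-pilot
+{{< /text >}}
+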
-### Expose the Istio services via a gateway
+### Expose the Istio services via a gateway {#expose-the-Istio-services-via-a-gateway}
-This method uses the Istio ingress gateway functionality. The remote clusters
-have the `istio-pilot`, `istio-telemetry` and `istio-policy` services
-pointing to the load balanced IP of the Istio ingress gateway. Then, all the
-services point to the same IP.
-You must then create the destination rules to reach the proper Istio service in
-the main cluster in the ingress gateway.
+This method uses the Istio ingress gateway functionality.
+The remote clusters have the `istio-pilot`, `istio-telemetry`, and `istio-policy` services pointing to the load balanced IP of the Istio ingress gateway.
+Then, all the services point to the same IP.
+You must then create destination rules in the main cluster to reach the proper Istio service behind the ingress gateway.
-This method provides two alternatives:
+This method provides two alternatives:
-* Re-use the default Istio ingress gateway installed with the provided
- manifests or Helm charts. You only need to add the correct destination rules.
+* Reuse the default Istio ingress gateway installed with the provided manifests. You only need to add the correct destination rules.
-* Create another Istio ingress gateway specifically for the multicluster.
+* Create another Istio ingress gateway specifically for the multicluster setup.
-## Security
+## Security {#security}
-Istio supports deployment of mutual TLS between the control plane components as
-well as between sidecar injected application pods.
+Istio supports deploying mutual TLS between the control plane components as well as between sidecar-injected application pods.
-### Control plane security
+### Control plane security {#control-plane-security}
-To enable control plane security follow these general steps:
+To enable control plane security, follow these general steps:
-1. Deploy the Istio control plane cluster with:
+1. Deploy the Istio control plane cluster with:
- * The control plane security enabled.
+ * Control plane security enabled.
- * The `citadel` certificate self signing disabled.
+ * `citadel` certificate self-signing disabled.
- * A secret named `cacerts` in the Istio control plane namespace with the
- [Certificate Authority (CA) certificates](/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
+ * A secret named `cacerts` in the Istio control plane namespace with the [Certificate Authority (CA) certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
-1. Deploy the Istio remote clusters with:
+1. Deploy the Istio remote clusters with:
- * The control plane security enabled.
+ * Control plane security enabled.
- * The `citadel` certificate self signing disabled.
+ * `citadel` certificate self-signing disabled.
- * A secret named `cacerts` in the Istio control plane namespace with the
- [CA certificates](/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
- The Certificate Authority (CA) of the main cluster or a root CA must sign
- the CA certificate for the remote clusters too.
+ * A secret named `cacerts` in the Istio control plane namespace with the [CA certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
+ The Certificate Authority (CA) of the main cluster or a root CA must also sign the CA certificate for the remote clusters.
- * The Istio pilot service hostname must be resolvable via DNS. DNS
- resolution is required because Istio configures the sidecar to verify the
- certificate subject names using the `istio-pilot.` subject
- name format.
+ * The Istio Pilot service hostname resolvable via DNS.
+ DNS resolution is required because Istio configures the sidecar to verify the certificate subject names using the `istio-pilot.<namespace>` subject name format.
- * Set control plane IPs or resolvable host names.
+ * Control plane IPs or resolvable hostnames set.
-### Mutual TLS between application pods
+### Mutual TLS between application pods {#mutual-TLS-between-application-pods}
-To enable mutual TLS for all application pods, follow these general steps:
+To enable mutual TLS for all application pods, follow these general steps:
-1. Deploy the Istio control plane cluster with:
+1. Deploy the Istio control plane cluster with:
- * Mutual TLS globally enabled.
+ * Mutual TLS globally enabled.
- * The Citadel certificate self-signing disabled.
+ * Citadel certificate self-signing disabled.
- * A secret named `cacerts` in the Istio control plane namespace with the
- [CA certificates](/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key)
+ * A secret named `cacerts` in the Istio control plane namespace with the [CA certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
-1. Deploy the Istio remote clusters with:
+1. Deploy the Istio remote clusters with:
- * Mutual TLS globally enabled.
+ * Mutual TLS globally enabled.
- * The Citadel certificate self-signing disabled.
+ * Citadel certificate self-signing disabled.
- * A secret named `cacerts` in the Istio control plane namespace with the
- [CA certificates](/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key)
- The CA of the main cluster or a root CA must sign the CA certificate for
- the remote clusters too.
+ * A secret named `cacerts` in the Istio control plane namespace with the [CA certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key).
+ The CA of the main cluster or a root CA must also sign the CA certificate for the remote clusters.
{{< tip >}}
-The CA certificate steps are identical for both control plane security and
-application pod security steps.
+The CA certificate steps are identical for both the control plane security and application pod security steps.
{{< /tip >}}
-### Example deployment
+### Example deployment {#example-deployment}
-This example procedure installs Istio with both the control plane mutual TLS
-and the application pod mutual TLS enabled. The procedure sets up a remote
-cluster with a selector-less service and endpoint. Istio Pilot uses the service
-and endpoint to allow the remote sidecars to resolve the
-`istio-pilot.istio-system` hostname via Istio's local Kubernetes DNS.
+This example procedure installs Istio with both control plane mutual TLS and application pod mutual TLS enabled.
+The procedure sets up the remote cluster with a selector-less service and endpoint.
+Istio Pilot uses the service and endpoint to allow the remote sidecars to resolve the `istio-pilot.istio-system` hostname via Istio's local Kubernetes DNS.
-#### Primary Cluster: Deploy the control plane cluster
+#### Primary cluster: deploy the control plane cluster {#primary-cluster-deploy-the-control-plane-cluster}
-1. Create the `cacerts` secret using the Istio certificate samples in the
- `istio-system` namespace:
+1. Create the `cacerts` secret in the `istio-system` namespace using the Istio certificate samples:
{{< text bash >}}
$ kubectl create ns istio-system
$ kubectl create secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
{{< /text >}}
-1. Deploy the Istio control plane with security enabled for the control plane
- and the application pod:
+1. Deploy the Istio control plane with security enabled for both the control plane and the application pods:
{{< text bash >}}
- $ istioctl manifest apply
+ $ istioctl manifest apply \
--set values.global.mtls.enabled=true \
--set values.security.selfSigned=false \
--set values.global.controlPlaneSecurityEnabled=true
{{< /text >}}
-#### Remote Cluster: Deploy Istio components
+#### Remote cluster: deploy Istio components {#remote-cluster-deploy-Istio-components}
-1. Create the `cacerts` secret using the Istio certificate samples in the
- `istio-system` namespace:
+1. Create the `cacerts` secret in the `istio-system` namespace using the Istio certificate samples:
{{< text bash >}}
$ kubectl create ns istio-system
$ kubectl create secret generic cacerts -n istio-system --from-file=samples/certs/ca-cert.pem --from-file=samples/certs/ca-key.pem --from-file=samples/certs/root-cert.pem --from-file=samples/certs/cert-chain.pem
{{< /text >}}
-1. Set the environment variables for the IP addresses of the pods as described
- in the [setting environment variables section](#environment-var).
+1. Set the endpoint IP environment variables as described in the [set the environment variables section](#environment-var).
-1. The following command deploys the remote cluster's components with security
- enabled for the control plane and the application pod and enables the
- creation of the an Istio Pilot selector-less service and endpoint to get a
- DNS entry in the remote cluster.
+1. The following command deploys the remote cluster's components with security enabled for the control plane and the application pods, and enables the creation of the Istio Pilot selector-less service and endpoint to get a DNS entry in the remote cluster:
{{< text bash >}}
$ istioctl manifest apply \
@@ -483,20 +412,17 @@ and endpoint to allow the remote sidecars to resolve the
--set values.global.remotePilotCreateSvcEndpoint=true \
--set values.global.remotePilotAddress=${PILOT_POD_IP} \
--set values.global.remotePolicyAddress=${POLICY_POD_IP} \
- --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP}
+ --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
+ --set gateways.enabled=false \
+ --set autoInjection.enabled=true
{{< /text >}}
-1. To generate the `kubeconfig` configuration file for the remote cluster,
- follow the steps in the [Kubernetes configuration section](#kubeconfig)
+1. To generate the `kubeconfig` configuration file for the remote cluster, follow the steps in the [Kubernetes configuration section](#kubeconfig).
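+
+Once the remote components are deployed, you can confirm that the selector-less
+`istio-pilot` service resolves to the control plane; this sketch assumes the
+default `istio-system` namespace:
+
+{{< text bash >}}
+$ # The endpoint should show the PILOT_POD_IP captured earlier.
+$ kubectl -n istio-system get endpoints istio-pilot
+{{< /text >}}
+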
-### Primary Cluster: Instantiate credentials
+### Primary cluster: instantiate credentials {#primary-cluster-instantiate-credentials}
-You must instantiate credentials for each remote cluster. Follow the
-[instantiate credentials procedure](#credentials)
-to complete the deployment.
+You must instantiate credentials for each remote cluster. Follow the [instantiate credentials procedure](#credentials) to complete the deployment.
-**Congratulations!**
+### Congratulations! {#congratulations}
-You have configured all the Istio components in both clusters to use mutual TLS
-between application sidecars, the control plane components, and other
-application sidecars.
+You have configured all the Istio components in all the clusters to use mutual TLS between application sidecars, the control plane components, and other application sidecars.
diff --git a/content/zh/docs/tasks/security/plugin-ca-cert/index.md b/content/zh/docs/tasks/security/citadel-config/plugin-ca-cert/index.md
similarity index 100%
rename from content/zh/docs/tasks/security/plugin-ca-cert/index.md
rename to content/zh/docs/tasks/security/citadel-config/plugin-ca-cert/index.md