mirror of https://github.com/istio/istio.io.git
add zh in /zh, update lintcheck ci, fix conflicts (#5823)
parent 0aa60f89f2
commit f24e8d41f5
@@ -1,10 +1,8 @@
---
title: My Title
subtitle: My optional one-line subtitle
description: My one-line description for the page
description: My one-line description for the page.
publishdate: 2017-05-24
attribution: My Name
attribution: My Name (My Company Name)
keywords: [keyword1,keyword2]
---

---
@@ -1,10 +1,7 @@
---
title: My Title
subtitle: My optional one-line subtitle
description: My one-line description for the page
description: My one-line description for the page.
publishdate: 2017-05-24
attribution: My Name
keywords: [keyword1,keyword2]
---

---
@@ -54,6 +54,6 @@ consider participating in our

{{< community_item logo="./servicemesher.svg" alt="ServiceMesher" >}}
Our Chinese-language documentation is maintained by the
[ServiceMesher community](http://www.servicemesher.com), join us and get involved!
[ServiceMesher community](https://www.servicemesher.com), join us and get involved!
{{< /community_item >}}
@@ -2,6 +2,32 @@
title: Istio
description: 用于连接、保护、控制和观测服务。
---
<!-- these script blocks are only for the primary English home page -->
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "Organization",
"url": "https://istio.io",
"logo": "https://istio.io/img/logo.png",
"sameAs": [
"https://twitter.com/IstioMesh",
"https://discuss.istio.io/"
]
}
</script>
<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "WebSite",
"url": "https://istio.io/",
"potentialAction": {
"@type": "SearchAction",
"target": "https://istio.io/search?q={search_term_string}",
"query-input": "required name=search_term_string"
}
}
</script>

<main class="landing">
<div id="banner">
{{< inline_image "landing/istio-logo.svg" >}}
@@ -16,7 +42,7 @@ description: 用于连接、保护、控制和观测服务。
<a href="/zh/docs/concepts/traffic-management/">
<div class="panel-img-top">
{{< inline_image "landing/routing-and-load-balancing.svg" >}}
</div>
</div>
<div class="panel-body">
<hr class="panel-line">
<h5 class="panel-title">连接</h5>
@@ -51,7 +77,7 @@ description: 用于连接、保护、控制和观测服务。
</div>
<div class="panel-body">
<hr class="panel-line">
<h5 class="panel-title">控制</h5>
<h5 class="panel-title">Control</h5>
<hr class="panel-line">
<p class="panel-text">
应用策略并确保其执行,使得资源在消费者之间公平分配。
@@ -80,6 +106,5 @@ description: 用于连接、保护、控制和观测服务。
<div id="buttons">
<a title="在 Kubernetes 上安装 Istio。" class="btn" href="/zh/docs/setup/getting-started/">开始吧</a>
<a title="深入了解 Istio 是什么以及它是如何工作的。" class="btn" href="/zh/docs/concepts/what-is-istio/">了解更多</a>
<a title="下载最新版本。" class="btn" href="/docs/setup/getting-started/#download">下载 {{< istio_release_name >}}</a>
</div>
</main>
<a title="下载最新版本。" class="btn" href="/zh/docs/setup/getting-started/#download">下载 {{< istio_release_name >}}</a>
</div>
@@ -15,7 +15,7 @@ icon: bugs

搜索我们的 [问题数据库](https://github.com/istio/istio/issues/) 来查看是否我们已经知道您的问题,并了解何时可以解决它。如果您在该数据库中没有找到你的问题,请打开一个 [新问题](https://github.com/istio/istio/issues/new/choose) 让我们知道出现了什么错误。

如果您认为错误实际上是一个安全漏洞,请访问 [报告安全漏洞](/about/security-vulnerabilities/) 了解如何处理。
如果您认为错误实际上是一个安全漏洞,请访问 [报告安全漏洞](/zh/about/security-vulnerabilities/) 了解如何处理。

### Kubernetes 集群状态档案{#Kubernetes-cluster-state-archives}
@@ -17,7 +17,7 @@ skip_seealso: true
{{< company_logo link="https://www.daocloud.io" logo="./daocloud.svg" alt="DaoCloud" >}}
{{< company_logo link="https://www.descarteslabs.com" logo="./descarteslabs.png" alt="Descartes Labs" >}}
{{< company_logo link="https://www.ebay.com" logo="./ebay.png" alt="eBay" >}}
{{< company_logo link="https://www.flexe.com/" logo="/about/community/customers/flexe.svg" alt="FLEXE" >}}
{{< company_logo link="https://www.flexe.com/" logo="./flexe.svg" alt="FLEXE" >}}
{{< company_logo link="https://www.fitstation.com" logo="./fitstation.png" alt="FitStation" >}}
{{< company_logo link="https://www.getyourguide.com/" logo="./getyourguide.svg" alt="GetYourGuide" >}}
{{< company_logo link="https://juspay.in" logo="./juspay.png" alt="JUSPAY" >}}
@@ -49,5 +49,5 @@ Istio 是一个开源项目,拥有一个支持其使用和持续开发的活

{{< community_item logo="./servicemesher.svg" alt="ServiceMesher" >}}
中文内容由
[ServiceMesher 社区](https://www.servicemesher.com) 维护,加入我们并参与进来吧!
[ServiceMesher community](https://www.servicemesher.com) 维护,加入我们并参与进来吧!
{{< /community_item >}}
@@ -15,7 +15,7 @@ This page shows how to create, test, and maintain Istio documentation topics.
## Before you begin

Before you can work on Istio documentation, you first need to create a fork of the Istio documentation repository as described in
[Working with GitHub](/about/contribute/github/).
[Working with GitHub](/zh/about/contribute/github/).

## Choosing a page type
@@ -229,7 +229,7 @@ the hierarchy of the site:
current hierarchy:

{{< text markdown >}}
[see here](/docs/adir/afile/)
[see here](/zh/docs/adir/afile/)
{{< /text >}}

### GitHub
@@ -27,7 +27,7 @@ To create your diagrams, follow these steps:
1. Connect the shapes with the appropriate style of line.
1. Label the shapes and lines with descriptive yet short text.
1. Add a legend for any labels that apply multiple times.
1. [Contribute](/about/contribute/github/#add) your diagram to our
1. [Contribute](/zh/about/contribute/github/#add) your diagram to our
   documentation.

If you create the diagram in Google Draw, follow these steps:
@@ -215,6 +215,6 @@ be considered new in a few months.

### Minimize use of callouts

[Callouts](/about/contribute/creating-and-editing-pages/#callouts) let you highlight some particular content in your pages, but
[Callouts](/zh/about/contribute/creating-and-editing-pages/#callouts) let you highlight some particular content in your pages, but
they need to be used sparingly. Callouts are intended for special notes to the user and over-using them
throughout the site neutralizes their special attention-grabbing nature.
@@ -27,13 +27,13 @@ within the project, not to the project as a whole. Here is a high level descript
| **API** | No guarantees on backward compatibility | APIs are versioned | Dependable, production-worthy. APIs are versioned, with automated version conversion for backward compatibility
| **Performance** | Not quantified or guaranteed | Not quantified or guaranteed | Performance (latency/scale) is quantified, documented, with guarantees against regression
| **Deprecation Policy** | None | Weak - 3 months | Dependable, Firm. 1 year notice will be provided before changes
| **Security** | Security vulnerabilities will be handled publicly as simple bug fixes | Security vulnerabilities will be handled according to our [security vulnerability policy](/about/security-vulnerabilities/) | Security vulnerabilities will be handled according to our [security vulnerability policy](/about/security-vulnerabilities/)
| **Security** | Security vulnerabilities will be handled publicly as simple bug fixes | Security vulnerabilities will be handled according to our [security vulnerability policy](/zh/about/security-vulnerabilities/) | Security vulnerabilities will be handled according to our [security vulnerability policy](/zh/about/security-vulnerabilities/)

## Istio features

Below is our list of existing features and their current phases. This information will be updated after every monthly release.

### Traffic Management
### Traffic management

| Feature | Phase
|-------------------|-------------------
@@ -44,69 +44,66 @@ Below is our list of existing features and their current phases. This informatio
| Gateway: Ingress, Egress for all protocols | Stable
| TLS termination and SNI Support in Gateways | Stable
| SNI (multiple certs) at ingress | Stable
| [Locality load balancing](/docs/ops/traffic-management/locality-load-balancing/) | Beta
| [Locality load balancing](/zh/docs/ops/traffic-management/locality-load-balancing/) | Beta
| Enabling custom filters in Envoy | Alpha
| CNI container interface | Alpha
| [Sidecar API](/docs/reference/config/networking/sidecar/) | Beta
| [Sidecar API](/zh/docs/reference/config/networking/sidecar/) | Beta

### Observability

| Feature | Phase
|-------------------|-------------------
| [Prometheus Integration](/docs/tasks/observability/metrics/querying-metrics/) | Stable
| [Local Logging (STDIO)](/docs/tasks/observability/logs/collecting-logs/) | Stable
| [Statsd Integration](/docs/reference/config/policy-and-telemetry/adapters/statsd/) | Stable
| [Client and Server Telemetry Reporting](/docs/reference/config/policy-and-telemetry/) | Stable
| [Service Dashboard in Grafana](/docs/tasks/observability/metrics/using-istio-dashboard/) | Stable
| [Istio Component Dashboard in Grafana](/docs/tasks/observability/metrics/using-istio-dashboard/) | Stable
| [Distributed Tracing](/docs/tasks/observability/distributed-tracing/) | Stable
| [Stackdriver Integration](/docs/reference/config/policy-and-telemetry/adapters/stackdriver/) | Beta
| [Distributed Tracing to Zipkin / Jaeger](/docs/tasks/observability/distributed-tracing/) | Beta
| [Logging with Fluentd](/docs/tasks/observability/logs/fluentd/) | Beta
| [Trace Sampling](/docs/tasks/observability/distributed-tracing/overview/#trace-sampling) | Beta
| [Prometheus Integration](/zh/docs/tasks/observability/metrics/querying-metrics/) | Stable
| [Local Logging (STDIO)](/zh/docs/tasks/observability/logs/collecting-logs/) | Stable
| [Statsd Integration](/zh/docs/reference/config/policy-and-telemetry/adapters/statsd/) | Stable
| [Client and Server Telemetry Reporting](/zh/docs/reference/config/policy-and-telemetry/) | Stable
| [Service Dashboard in Grafana](/zh/docs/tasks/observability/metrics/using-istio-dashboard/) | Stable
| [Istio Component Dashboard in Grafana](/zh/docs/tasks/observability/metrics/using-istio-dashboard/) | Stable
| [Distributed Tracing](/zh/docs/tasks/observability/distributed-tracing/) | Stable
| [Stackdriver Integration](/zh/docs/reference/config/policy-and-telemetry/adapters/stackdriver/) | Beta
| [Distributed Tracing to Zipkin / Jaeger](/zh/docs/tasks/observability/distributed-tracing/) | Beta
| [Logging with Fluentd](/zh/docs/tasks/observability/logs/fluentd/) | Beta
| [Trace Sampling](/zh/docs/tasks/observability/distributed-tracing/overview/#trace-sampling) | Beta

### Security and Policy Enforcement
### Security and policy enforcement

| Feature | Phase
|-------------------|-------------------
| [Deny Checker](/docs/reference/config/policy-and-telemetry/adapters/denier/) | Stable
| [List Checker](/docs/reference/config/policy-and-telemetry/adapters/list/) | Stable
| [Pluggable Key/Cert Support for Istio CA](/docs/tasks/security/citadel-config/plugin-ca-cert/) | Stable
| [Service-to-service mutual TLS](/docs/concepts/security/#mutual-tls-authentication) | Stable
| [Kubernetes: Service Credential Distribution](/docs/concepts/security/#pki) | Stable
| [VM: Service Credential Distribution](/docs/concepts/security/#pki) | Beta
| [Mutual TLS Migration](/docs/tasks/security/authentication/mtls-migration) | Beta
| [Cert management on Ingress Gateway](/docs/tasks/traffic-management/ingress/secure-ingress-sds) | Beta
| [Authorization (RBAC)](/docs/concepts/security/#authorization) | Alpha
| [End User (JWT) Authentication](/docs/concepts/security/#authentication) | Alpha
| [OPA Checker](/docs/reference/config/policy-and-telemetry/adapters/opa/) | Alpha
| [TCP Authorization (RBAC)](/docs/tasks/security/authorization/authz-tcp) | Alpha
| [SDS Integration](/docs/tasks/security/citadel-config/auth-sds/) | Alpha

The 'Authorization (RBAC)' runtime is considered Beta. However, its API is still subject to a backwards incompatible change. Due to this, we advertise it as Alpha.
| [Deny Checker](/zh/docs/reference/config/policy-and-telemetry/adapters/denier/) | Stable
| [List Checker](/zh/docs/reference/config/policy-and-telemetry/adapters/list/) | Stable
| [Pluggable Key/Cert Support for Istio CA](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/) | Stable
| [Service-to-service mutual TLS](/zh/docs/concepts/security/#mutual-TLS-authentication) | Stable
| [Kubernetes: Service Credential Distribution](/zh/docs/concepts/security/#PKI) | Stable
| [VM: Service Credential Distribution](/zh/docs/concepts/security/#PKI) | Beta
| [Mutual TLS Migration](/zh/docs/tasks/security/authentication/mtls-migration) | Beta
| [Cert management on Ingress Gateway](/zh/docs/tasks/traffic-management/ingress/secure-ingress-sds) | Beta
| [Authorization](/zh/docs/concepts/security/#authorization) | Beta
| [End User (JWT) Authentication](/zh/docs/concepts/security/#authentication) | Alpha
| [OPA Checker](/zh/docs/reference/config/policy-and-telemetry/adapters/opa/) | Alpha
| [SDS Integration](/zh/docs/tasks/security/citadel-config/auth-sds/) | Alpha

### Core

| Feature | Phase
|-------------------|-------------------
| [Standalone Operator](/docs/setup/install/standalone-operator/) | Alpha
| [Kubernetes: Envoy Installation and Traffic Interception](/docs/setup/) | Stable
| [Kubernetes: Istio Control Plane Installation](/docs/setup/) | Stable
| [Attribute Expression Language](/docs/reference/config/policy-and-telemetry/expression-language/) | Stable
| [Standalone Operator](/zh/docs/setup/install/standalone-operator/) | Alpha
| [Kubernetes: Envoy Installation and Traffic Interception](/zh/docs/setup/) | Stable
| [Kubernetes: Istio Control Plane Installation](/zh/docs/setup/) | Stable
| [Attribute Expression Language](/zh/docs/reference/config/policy-and-telemetry/expression-language/) | Stable
| Mixer Out-of-Process Adapter Authoring Model | Beta
| [Helm](/docs/setup/install/helm/) | Beta
| [Multicluster Mesh over VPN](/docs/setup/install/multicluster/) | Alpha
| [Kubernetes: Istio Control Plane Upgrade](/docs/setup/) | Beta
| [Helm](/zh/docs/setup/install/helm/) | Beta
| [Multicluster Mesh over VPN](/zh/docs/setup/install/multicluster/) | Alpha
| [Kubernetes: Istio Control Plane Upgrade](/zh/docs/setup/) | Beta
| Consul Integration | Alpha
| Basic Configuration Resource Validation | Beta
| Configuration Processing with Galley | Beta
| [Mixer Self Monitoring](/faq/mixer/#mixer-self-monitoring) | Beta
| [Mixer Self Monitoring](/zh/faq/mixer/#mixer-self-monitoring) | Beta
| [Custom Mixer Build Model](https://github.com/istio/istio/wiki/Mixer-Compiled-In-Adapter-Dev-Guide) | deprecated
| [Out of Process Mixer Adapters (gRPC Adapters)](https://github.com/istio/istio/wiki/Mixer-Out-Of-Process-Adapter-Dev-Guide) | Beta
| [Istio CNI plugin](/docs/setup/additional-setup/cni/) | Alpha
| [Istio CNI plugin](/zh/docs/setup/additional-setup/cni/) | Alpha
| IPv6 support for Kubernetes | Alpha
| [Distroless base images for Istio](/docs/ops/security/harden-docker-images/) | Alpha
| [Distroless base images for Istio](/zh/docs/ops/security/harden-docker-images/) | Alpha

{{< idea >}}
Please get in touch by joining our [community](/about/community/) if there are features you'd like to see in our future releases!
Please get in touch by joining our [community](/zh/about/community/) if there are features you'd like to see in our future releases!
{{< /idea >}}
@@ -14,7 +14,7 @@ To make a report, send an email to the private
[istio-security-vulnerability-reports@googlegroups.com](mailto:istio-security-vulnerability-reports@googlegroups.com)
mailing list with the vulnerability details. For normal product bugs
unrelated to latent security vulnerabilities, please head to
our [Reporting Bugs](/about/bugs/) page to learn what to do.
our [Reporting Bugs](/zh/about/bugs/) page to learn what to do.

### When to report a security vulnerability?
@@ -69,7 +69,7 @@ branches.

- Once the binaries are available, an announcement is sent out on the following channels:

    - The [Istio blog](/blog)
    - The [Istio blog](/zh/blog)
    - The [Announcements](https://discuss.istio.io/c/announcements) category on discuss.istio.io
    - The [Istio Twitter feed](https://twitter.com/IstioMesh)
    - The [#announcements channel on Slack](https://istio.slack.com/messages/CFXS256EQ/)
@@ -34,7 +34,7 @@ Whether we use one deployment or two, canary management using deployment feature

With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.

Istio’s [routing rules](/docs/concepts/traffic-management/#routing-rules) also provide other important advantages; you can easily control
Istio’s [routing rules](/zh/docs/concepts/traffic-management/#routing-rules) also provide other important advantages; you can easily control
fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let’s look at deploying the **helloworld** service and see how simple the problem becomes.

We begin by defining the **helloworld** Service, just like any other Kubernetes service, something like this:
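The Service manifest itself is elided from this hunk. A minimal sketch of the kind of definition the sentence above introduces (the port number and labels here are illustrative assumptions, not taken from the diff):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  selector:
    app: helloworld   # selects the pods of every helloworld Deployment, regardless of version
  ports:
  - port: 5000
    name: http
```

Because the selector matches only on `app` and not on a version label, a single Service fronts both the current and the canary Deployments.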
@@ -90,7 +90,7 @@ spec:

Note that this is exactly the same way we would do a [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) using plain Kubernetes, but in that case we would need to adjust the number of replicas of each Deployment to control the distribution of traffic. For example, to send 10% of the traffic to the canary version (**v2**), the replicas for **v1** and **v2** could be set to 9 and 1, respectively.

However, since we are going to deploy the service in an [Istio enabled](/docs/setup/) cluster, all we need to do is set a routing
However, since we are going to deploy the service in an [Istio enabled](/zh/docs/setup/) cluster, all we need to do is set a routing
rule to control the traffic distribution. For example if we want to send 10% of the traffic to the canary, we could use `kubectl`
to set a routing rule something like this:
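The routing rule is elided from this hunk. A sketch of a `VirtualService` that achieves the 90/10 split described above (the subset names are assumed to be defined by destination rules elsewhere):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 90   # 90% of requests stay on the current version
    - destination:
        host: helloworld
        subset: v2
      weight: 10   # 10% of requests go to the canary
```

Shifting more traffic to the canary is then just a matter of editing the weights, independent of replica counts.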
@@ -104,7 +104,7 @@ spec:
Here is the service graph for the Bookinfo application.

{{< image width="80%"
    link="/docs/examples/bookinfo/withistio.svg"
    link="/zh/docs/examples/bookinfo/withistio.svg"
    caption="Bookinfo Service Graph"
    >}}
@@ -27,10 +27,10 @@ Adapters are Go packages that are directly linked into the Mixer binary. It’s

## Philosophy

Mixer is essentially an attribute processing and routing machine. The proxy sends it [attributes](/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes) as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.
Mixer is essentially an attribute processing and routing machine. The proxy sends it [attributes](/zh/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes) as part of doing precondition checks and telemetry reports, which it turns into a series of calls into adapters. The operator supplies configuration which describes how to map incoming attributes to inputs for the adapters.

{{< image width="60%"
    link="/docs/reference/config/policy-and-telemetry/mixer-overview/machine.svg"
    link="/zh/docs/reference/config/policy-and-telemetry/mixer-overview/machine.svg"
    caption="Attribute Machine"
    >}}
@@ -40,26 +40,26 @@ Configuration is a complex task. In fact, evidence shows that the overwhelming m

Each adapter that Mixer uses requires some configuration to operate. Typically, adapters need things like the URL to their backend, credentials, caching options, and so forth. Each adapter defines the exact configuration data it needs via a [protobuf](https://developers.google.com/protocol-buffers/) message.

You configure each adapter by creating [*handlers*](/docs/reference/config/policy-and-telemetry/mixer-overview/#handlers) for them. A handler is a
You configure each adapter by creating [*handlers*](/zh/docs/reference/config/policy-and-telemetry/mixer-overview/#handlers) for them. A handler is a
configuration resource which represents a fully configured adapter ready for use. There can be any number of handlers for a single adapter, making it possible to reuse an adapter in different scenarios.

## Templates: adapter input schema

Mixer is typically invoked twice for every incoming request to a mesh service, once for precondition checks and once for telemetry reporting. For every such call, Mixer invokes one or more adapters. Different adapters need different pieces of data as input in order to do their work. A logging adapter needs a log entry, a metric adapter needs a metric, an authorization adapter needs credentials, etc.
Mixer [*templates*](/docs/reference/config/policy-and-telemetry/templates/) are used to describe the exact data that an adapter consumes at request time.
Mixer [*templates*](/zh/docs/reference/config/policy-and-telemetry/templates/) are used to describe the exact data that an adapter consumes at request time.

Each template is specified as a [protobuf](https://developers.google.com/protocol-buffers/) message. A single template describes a bundle of data that is delivered to one or more adapters at runtime. Any given adapter can be designed to support any number of templates, the specific templates the adapter supports is determined by the adapter developer.

[`metric`](/docs/reference/config/policy-and-telemetry/templates/metric/) and [`logentry`](/docs/reference/config/policy-and-telemetry/templates/logentry/) are two of the most essential templates used within Istio. They represent respectively the payload to report a single metric and a single log entry to appropriate backends.
[`metric`](/zh/docs/reference/config/policy-and-telemetry/templates/metric/) and [`logentry`](/zh/docs/reference/config/policy-and-telemetry/templates/logentry/) are two of the most essential templates used within Istio. They represent respectively the payload to report a single metric and a single log entry to appropriate backends.

## Instances: attribute mapping

You control which data is delivered to individual adapters by creating
[*instances*](/docs/reference/config/policy-and-telemetry/mixer-overview/#instances).
Instances control how Mixer uses the [attributes](/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes) delivered
[*instances*](/zh/docs/reference/config/policy-and-telemetry/mixer-overview/#instances).
Instances control how Mixer uses the [attributes](/zh/docs/reference/config/policy-and-telemetry/mixer-overview/#attributes) delivered
by the proxy into individual bundles of data that can be routed to different adapters.

Creating instances generally requires using [attribute expressions](/docs/reference/config/policy-and-telemetry/expression-language/). The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance’s field.
Creating instances generally requires using [attribute expressions](/zh/docs/reference/config/policy-and-telemetry/expression-language/). The point of these expressions is to use any attribute or literal value in order to produce a result that can be assigned to an instance’s field.

Every instance field has a type, as defined in the template, every attribute has a
[type](https://github.com/istio/api/blob/{{< source_branch_name >}}/policy/v1beta1/value_type.proto), and every attribute expression has a type.
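As a hedged illustration of such attribute expressions, an instance of the `metric` template might look roughly like this (the instance name and dimensions are hypothetical, not from this diff):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcount
spec:
  compiledTemplate: metric
  params:
    value: "1"   # a literal: count each request once
    dimensions:
      # attribute expressions; `|` supplies a fallback literal when the attribute is absent
      source: source.workload.name | "unknown"
      destination: destination.service.host | "unknown"
```

Each field here is populated by evaluating an expression against the attributes the proxy delivered for the request.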
@@ -69,7 +69,7 @@ to a string field. This kind of strong typing is designed to minimize the risk
## Rules: delivering data to adapters

The last piece to the puzzle is telling Mixer which instances to send to which handler and when. This is done by
creating [*rules*](/docs/reference/config/policy-and-telemetry/mixer-overview/#rules). Each rule identifies a specific handler and the set of
creating [*rules*](/zh/docs/reference/config/policy-and-telemetry/mixer-overview/#rules). Each rule identifies a specific handler and the set of
instances to send to that handler. Whenever Mixer processes an incoming call, it invokes the indicated handler and gives it the specific set of instances for processing.

Rules contain matching predicates. A predicate is an attribute expression which returns a true/false value. A rule only takes effect if its predicate expression returns true. Otherwise, it’s like the rule didn’t exist and the indicated handler isn’t invoked.
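A sketch of such a rule, wiring one instance to one handler behind a match predicate (the handler, instance, and service names are hypothetical):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # predicate: the rule only fires for requests to this service
  match: destination.service.host == "details.default.svc.cluster.local"
  actions:
  - handler: prometheus-handler     # which configured adapter to invoke
    instances:
    - requestcount.instance        # which bundles of data to hand it
```

When the predicate evaluates to false, the handler is simply never invoked for that request.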
@@ -86,6 +86,6 @@ The refreshed Mixer adapter model is designed to provide a flexible framework to

Handlers provide configuration data for individual adapters, templates determine exactly what kind of data different adapters want to consume at runtime, instances let operators prepare this data, rules direct the data to one or more handlers.

You can learn more about Mixer's overall architecture [here](/docs/reference/config/policy-and-telemetry/mixer-overview/), and learn the specifics of templates, handlers,
and rules [here](/docs/reference/config/policy-and-telemetry). You can find many examples of Mixer configuration resources in the Bookinfo sample
You can learn more about Mixer's overall architecture [here](/zh/docs/reference/config/policy-and-telemetry/mixer-overview/), and learn the specifics of templates, handlers,
and rules [here](/zh/docs/reference/config/policy-and-telemetry). You can find many examples of Mixer configuration resources in the Bookinfo sample
[here]({{< github_tree >}}/samples/bookinfo).
@@ -11,7 +11,7 @@ aliases:
target_release: 0.3
---

As [Mixer](/docs/reference/config/policy-and-telemetry/) is in the request path, it is natural to question how it impacts
As [Mixer](/zh/docs/reference/config/policy-and-telemetry/) is in the request path, it is natural to question how it impacts
overall system availability and latency. A common refrain we hear when people first glance at Istio architecture diagrams is
"Isn't this just introducing a single point of failure?"
@@ -14,36 +14,36 @@ microservices-based applications use functionality provided by legacy systems th
to migrate these systems to the service mesh gradually. Until these systems are migrated, they must be accessed by the
applications inside the mesh. In other cases, the applications use web services provided by third parties.

In this blog post, I modify the [Istio Bookinfo Sample Application](/docs/examples/bookinfo/) to fetch book details from
In this blog post, I modify the [Istio Bookinfo Sample Application](/zh/docs/examples/bookinfo/) to fetch book details from
an external web service ([Google Books APIs](https://developers.google.com/books/docs/v1/getting_started)). I show how
to enable egress HTTPS traffic in Istio by using _mesh-external service entries_. I provide two options for egress
HTTPS traffic and describe the pros and cons of each of the options.

## Initial setting

To demonstrate the scenario of consuming an external web service, I start with a Kubernetes cluster with [Istio installed](/docs/setup/getting-started/). Then I deploy
[Istio Bookinfo Sample Application](/docs/examples/bookinfo/). This application uses the _details_ microservice to fetch
To demonstrate the scenario of consuming an external web service, I start with a Kubernetes cluster with [Istio installed](/zh/docs/setup/getting-started/). Then I deploy
[Istio Bookinfo Sample Application](/zh/docs/examples/bookinfo/). This application uses the _details_ microservice to fetch
book details, such as the number of pages and the publisher. The original _details_ microservice provides the book
details without consulting any external service.

The example commands in this blog post work with Istio 1.0+, with or without
[mutual TLS](/docs/concepts/security/#mutual-tls-authentication) enabled. The Bookinfo configuration files reside in the
[mutual TLS](/zh/docs/concepts/security/#mutual-TLS-authentication) enabled. The Bookinfo configuration files reside in the
`samples/bookinfo` directory of the Istio release archive.

Here is a copy of the end-to-end architecture of the application from the original
[Bookinfo sample application](/docs/examples/bookinfo/).
[Bookinfo sample application](/zh/docs/examples/bookinfo/).

{{< image width="80%"
    link="/docs/examples/bookinfo/withistio.svg"
    link="/zh/docs/examples/bookinfo/withistio.svg"
    caption="The Original Bookinfo Application"
    >}}

Perform the steps in the
[Deploying the application](/docs/examples/bookinfo/#deploying-the-application),
[Confirm the app is running](/docs/examples/bookinfo/#confirm-the-app-is-accessible-from-outside-the-cluster),
[Apply default destination rules](/docs/examples/bookinfo/#apply-default-destination-rules)
[Deploying the application](/zh/docs/examples/bookinfo/#deploying-the-application),
[Confirm the app is running](/zh/docs/examples/bookinfo/#confirm-the-app-is-accessible-from-outside-the-cluster),
[Apply default destination rules](/zh/docs/examples/bookinfo/#apply-default-destination-rules)
sections, and
[change Istio to the blocking-egress-by-default policy](/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy).
[change Istio to the blocking-egress-by-default policy](/zh/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy).

## Bookinfo with HTTPS access to a Google Books web service
@@ -71,10 +71,10 @@ Now direct all the traffic destined to the _details_ microservice, to _details v

{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-details-v2.yaml@
{{< /text >}}

Note that the virtual service relies on a destination rule that you created in the [Apply default destination rules](/zh/docs/examples/bookinfo/#apply-default-destination-rules) section.

Access the web page of the application, after
[determining the ingress IP and port](/zh/docs/examples/bookinfo/#determine-the-ingress-ip-and-port).

Oops... Instead of the book details you have the _Error fetching product details_ message displayed:
@@ -90,7 +90,7 @@ So what might have gone wrong? Ah... The answer is that I forgot to tell you to
an external service, in this case to the Google Books web service. By default, the Istio sidecar proxies
([Envoy proxies](https://www.envoyproxy.io)) **block all the traffic to destinations outside the cluster**. To enable
such traffic, you must define a
[mesh-external service entry](/zh/docs/reference/config/networking/service-entry/).
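The referenced task defines such a service entry; as an illustrative sketch only (the resource name is hypothetical and the exact manifest lives in the elided task steps), a mesh-external entry for the Google Books host could look like:

```yaml
# Hypothetical mesh-external ServiceEntry: lets in-mesh workloads reach
# www.googleapis.com over HTTPS; without it, the sidecars block the traffic.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
```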
### Enable HTTPS access to a Google Books web service

@@ -190,7 +190,7 @@ in this case `www.googleapis.com`.

To allow Istio to perform monitoring and policy enforcement of egress requests based on HTTP details, the microservices
must issue HTTP requests. Istio then opens an HTTPS connection to the destination (performs TLS origination). The code
of the microservices must be written differently or configured differently, according to whether the microservice runs
inside or outside an Istio service mesh. This contradicts the Istio design goal of [maximizing transparency](/zh/docs/ops/architecture/#design-goals). Sometimes you need to compromise...

The diagram below shows two options for sending HTTPS traffic to external services. On the top, a microservice sends
regular HTTPS requests, encrypted end-to-end. On the bottom, the same microservice sends unencrypted HTTP requests
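The second option (the mesh performs TLS origination) is usually expressed with three resources. The sketch below is illustrative only — resource names are hypothetical and the exact manifests are in the elided task steps: a `ServiceEntry` declaring both ports, a `VirtualService` rewriting port 80 to 443, and a `DestinationRule` that sets `tls.mode: SIMPLE`:

```yaml
# Illustrative TLS-origination sketch for www.googleapis.com.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: googleapis
spec:
  hosts:
  - www.googleapis.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
# Redirect plain HTTP requests (port 80) to port 443 of the external host.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rewrite-port-for-googleapis
spec:
  hosts:
  - www.googleapis.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: www.googleapis.com
        port:
          number: 443
---
# Originate one-way TLS for connections leaving the mesh on port 443.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-googleapis
spec:
  host: www.googleapis.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
```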
@@ -302,7 +302,7 @@ In the next section you will configure TLS origination for accessing an external

1. Access the web page of the application and verify that the book details are displayed without errors.

1. [Enable Envoy’s access logging](/zh/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging)

1. Check the log of the sidecar proxy of _details v2_ and see the HTTP request.
@@ -328,7 +328,7 @@ $ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-details-v2.yaml@

### Relation to Istio mutual TLS

Note that the TLS origination in this case is unrelated to
[the mutual TLS](/zh/docs/concepts/security/#mutual-tls-authentication) applied by Istio. The TLS origination for the
external services will work, whether the Istio mutual TLS is enabled or not. The **mutual** TLS secures
service-to-service communication **inside** the service mesh and provides each service with a strong identity. The
**external services** in this blog post were accessed using **one-way TLS**, the same mechanism used to secure communication between a
@@ -2,16 +2,16 @@
title: Consuming External MongoDB Services
description: Describes a simple scenario based on Istio's Bookinfo example.
publishdate: 2018-11-16
last_update: 2019-11-12
subtitle: Istio Egress Control Options for MongoDB traffic
attribution: Vadim Eisenberg
keywords: [traffic-management,egress,tcp,mongo]
---

In the [Consuming External TCP Services](/zh/blog/2018/egress-tcp/) blog post, I described how external services
can be consumed by in-mesh Istio applications via TCP. In this post, I demonstrate consuming external MongoDB services.
You use the [Istio Bookinfo sample application](/zh/docs/examples/bookinfo/), the version in which the book
ratings data is persisted in a MongoDB database. You deploy this database outside the cluster and configure the
_ratings_ microservice to use it. You will learn multiple options of controlling traffic to external MongoDB services and their
pros and cons.
@@ -19,7 +19,7 @@ pros and cons.

## Bookinfo with external ratings database

First, you set up a MongoDB database instance to hold book ratings data outside of your Kubernetes cluster. Then you
modify the [Bookinfo sample application](/zh/docs/examples/bookinfo/) to use your database.

### Setting up the ratings database
@@ -94,9 +94,9 @@ For this task you set up an instance of [MongoDB](https://www.mongodb.com). You

### Initial setting of Bookinfo application

To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with [Istio installed](/zh/docs/setup/getting-started/). Then you deploy the
[Istio Bookinfo sample application](/zh/docs/examples/bookinfo/), [apply the default destination rules](/zh/docs/examples/bookinfo/#apply-default-destination-rules), and
[change Istio to the blocking-egress-by-default policy](/zh/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy).

This application uses the `ratings` microservice to fetch book ratings, a number between 1 and 5. The ratings are
displayed as stars for each review. There are several versions of the `ratings` microservice. You will deploy the
@@ -105,29 +105,36 @@ version that uses [MongoDB](https://www.mongodb.com) as the ratings database in

The example commands in this blog post work with Istio 1.0.

As a reminder, here is the end-to-end architecture of the application from the
[Bookinfo sample application](/zh/docs/examples/bookinfo/).

{{< image width="80%" link="/zh/docs/examples/bookinfo/withistio.svg" caption="The original Bookinfo application" >}}

### Use the external database in Bookinfo application
1. Deploy the spec of the _ratings_ microservice that uses a MongoDB database (_ratings v2_):

    {{< text bash >}}
    $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
    serviceaccount "bookinfo-ratings-v2" created
    deployment "ratings-v2" created
    {{< /text >}}

1. Update the `MONGO_DB_URL` environment variable to the value of your MongoDB:

    {{< text bash >}}
    $ kubectl set env deployment/ratings-v2 "MONGO_DB_URL=mongodb://bookinfo:$BOOKINFO_PASSWORD@$MONGODB_HOST:$MONGODB_PORT/test?authSource=test&ssl=true"
    deployment.extensions/ratings-v2 env updated
    {{< /text >}}
1. Route all the traffic destined to the _reviews_ service to its _v3_ version. You do this to ensure that the
    _reviews_ service always calls the _ratings_ service. In addition, route all the traffic destined to the _ratings_
    service to _ratings v2_ that uses your database.

    Specify the routing for both services above by adding two
    [virtual services](/zh/docs/reference/config/networking/virtual-service/). These virtual services are
    specified in `samples/bookinfo/networking/virtual-service-ratings-mongodb.yaml` of an Istio release archive.
    ***Important:*** make sure you
    [applied the default destination rules](/zh/docs/examples/bookinfo/#apply-default-destination-rules) before running the
    following command.

    {{< text bash >}}
@@ -146,7 +153,7 @@ boundary of the service mesh is marked by a dashed line.

### Access the webpage

Access the webpage of the application, after
[determining the ingress IP and port](/zh/docs/examples/bookinfo/#determine-the-ingress-ip-and-port).

Since you did not configure the egress traffic control yet, the access to the MongoDB service is blocked by Istio.
This is why instead of the rating stars, the message _"Ratings service is currently unavailable"_ is currently
@@ -160,14 +167,14 @@ egress control in Istio.

## Egress control for TCP

Since [MongoDB Wire Protocol](https://docs.mongodb.com/manual/reference/mongodb-wire-protocol/) runs on top of TCP, you
can control the egress traffic to your MongoDB as traffic to any other [external TCP service](/zh/blog/2018/egress-tcp/). To
control TCP traffic, a block of IPs in the [CIDR](https://tools.ietf.org/html/rfc2317) notation that includes the IP
address of your MongoDB host must be specified. The caveat here is that sometimes the IP of the MongoDB host is not
stable or known in advance.

In the cases when the IP of the MongoDB host is not stable, the egress traffic can either be
[controlled as TLS traffic](#egress-control-for-tls), or the traffic can be routed
[directly](/zh/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services), bypassing the Istio sidecar
proxies.

Get the IP address of your MongoDB database instance. As an option, you can use the

@@ -180,7 +187,7 @@ $ export MONGODB_IP=$(host $MONGODB_HOST | grep " has address " | cut -d" " -f4)
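The `export MONGODB_IP=...` pipeline in the hunk above can be exercised offline against a captured line of `host` output. The hostname and address below are illustrative (RFC 5737 documentation values), not from the original post:

```shell
# A sample line of `host` output for a hypothetical MongoDB hostname.
sample_output='mongodb.example.com has address 203.0.113.42'

# Same pipeline as the export above: keep the A-record line, take the 4th field.
MONGODB_IP=$(printf '%s\n' "$sample_output" | grep " has address " | cut -d" " -f4)

# A /32 CIDR block covering exactly this host, as needed for TCP egress control.
echo "$MONGODB_IP/32"
# prints: 203.0.113.42/32
```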
### Control TCP egress traffic without a gateway

In case you do not need to direct the traffic through an
[egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#use-case), for example if you do not have a
requirement that all the traffic that exits your mesh must exit through the gateway, follow the
instructions in this section. Alternatively, if you do want to direct your traffic through an egress gateway, proceed to
[Direct TCP egress traffic through an egress gateway](#direct-tcp-egress-traffic-through-an-egress-gateway).
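The elided steps of this section create a mesh-external `ServiceEntry` keyed on the MongoDB IP. A hedged sketch, where the host name, address, and port are illustrative placeholders for `$MONGODB_IP` and `$MONGODB_PORT`:

```yaml
# Illustrative TCP ServiceEntry: TCP routing cannot use hostnames, so the
# entry matches on a /32 CIDR block containing the MongoDB host's IP.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mongo
spec:
  hosts:
  - my-mongo.tcp.svc        # an arbitrary internal name; matching is by address
  addresses:
  - 203.0.113.42/32         # $MONGODB_IP/32
  ports:
  - number: 27017           # $MONGODB_PORT
    name: mongo
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 203.0.113.42   # $MONGODB_IP
```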
@@ -234,15 +241,19 @@ instructions in this section. Alternatively, if you do want to direct your traff

### Direct TCP Egress traffic through an egress gateway

In this section you handle the case when you need to direct the traffic through an
[egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#use-case). The sidecar proxy routes TCP
connections from the MongoDB client to the egress gateway, by matching the IP of the MongoDB host (a CIDR block of
length 32). The egress gateway forwards the traffic to the MongoDB host, by its hostname.

1. [Deploy Istio egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#deploy-istio-egress-gateway).

1. If you did not perform the steps in [the previous section](#control-tcp-egress-traffic-without-a-gateway), perform them now.

1. You may want to enable {{< gloss >}}mutual TLS Authentication{{< /gloss >}} between the sidecar proxies of
    your MongoDB clients and the egress gateway to let the egress gateway monitor the identity of the source pods and to
    enable Mixer policy enforcement based on that identity. By enabling mutual TLS you also encrypt the traffic.
    If you want to enable mutual TLS, proceed to the
    [Mutual TLS between the sidecar proxies and the egress gateway](#mutual-tls-between-the-sidecar-proxies-and-the-egress-gateway) section.
    Otherwise, proceed to the following section.

#### Configure TCP traffic from sidecars to the egress gateway
@@ -258,7 +269,7 @@ connections from the MongoDB client to the egress gateway, by matching the IP of

    configured.

    {{< text bash >}}
    $ helm template install/kubernetes/helm/istio/ --name istio-egressgateway --namespace istio-system -x charts/gateways/templates/deployment.yaml -x charts/gateways/templates/service.yaml --set gateways.istio-ingressgateway.enabled=false --set gateways.istio-egressgateway.enabled=true --set gateways.istio-egressgateway.ports[0].port=80 --set gateways.istio-egressgateway.ports[0].name=http --set gateways.istio-egressgateway.ports[1].port=443 --set gateways.istio-egressgateway.ports[1].name=https --set gateways.istio-egressgateway.ports[2].port=$EGRESS_GATEWAY_MONGODB_PORT --set gateways.istio-egressgateway.ports[2].name=mongo | kubectl apply -f -
    {{< /text >}}

1. Check that the `istio-egressgateway` service indeed has the selected port:
@@ -269,6 +280,21 @@ connections from the MongoDB client to the egress gateway, by matching the IP of

    {{< text bash >}}
    istio-egressgateway   ClusterIP   172.21.202.204   <none>   80/TCP,443/TCP,7777/TCP   34d
    {{< /text >}}

1. Disable mutual TLS authentication for the `istio-egressgateway` service:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: istio-egressgateway
      namespace: istio-system
    spec:
      targets:
      - name: istio-egressgateway
    EOF
    {{< /text >}}

1. Create an egress `Gateway` for your MongoDB service, and destination rules and a virtual service to direct the
    traffic through the egress gateway and from the egress gateway to the external service.
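The elided manifests for this step follow a common two-leg pattern. As an illustrative sketch only (names, the `7777` gateway port, and the `203.0.113.42` address stand in for the real `$EGRESS_GATEWAY_MONGODB_PORT` and `$MONGODB_IP` values):

```yaml
# Illustrative Gateway: the egress gateway listens for the MongoDB TCP traffic.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 7777            # $EGRESS_GATEWAY_MONGODB_PORT
      name: tcp
      protocol: TCP
    hosts:
    - my-mongo.tcp.svc        # the host of the ServiceEntry created earlier
---
# Illustrative VirtualService with two legs: sidecars -> gateway, gateway -> MongoDB.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-mongo-through-egress-gateway
spec:
  hosts:
  - my-mongo.tcp.svc
  gateways:
  - mesh
  - istio-egressgateway
  tcp:
  - match:
    - gateways:
      - mesh                  # from sidecars, match the MongoDB IP and port
      destinationSubnets:
      - 203.0.113.42/32       # $MONGODB_IP/32
      port: 27017             # $MONGODB_PORT
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 7777
  - match:
    - gateways:
      - istio-egressgateway   # from the gateway, forward to the external host
      port: 7777
    route:
    - destination:
        host: my-mongo.tcp.svc
        port:
          number: 27017
```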
@@ -345,16 +371,30 @@ connections from the MongoDB client to the egress gateway, by matching the IP of

#### Mutual TLS between the sidecar proxies and the egress gateway

You may want to enable [mutual TLS Authentication](/zh/docs/tasks/security/authentication/mutual-tls/) between the sidecar proxies of
your MongoDB clients and the egress gateway to let the egress gateway monitor the identity of the source pods and to
enable Mixer policy enforcement based on that identity. By enabling mutual TLS you also encrypt the traffic.

1. Delete the previous configuration:
    {{< text bash >}}
    $ kubectl delete gateway istio-egressgateway --ignore-not-found=true
    $ kubectl delete virtualservice direct-mongo-through-egress-gateway --ignore-not-found=true
    $ kubectl delete destinationrule egressgateway-for-mongo mongo --ignore-not-found=true
    $ kubectl delete policy istio-egressgateway -n istio-system --ignore-not-found=true
    {{< /text >}}

1. Enforce mutual TLS authentication for the `istio-egressgateway` service:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    apiVersion: authentication.istio.io/v1alpha1
    kind: Policy
    metadata:
      name: istio-egressgateway
      namespace: istio-system
    spec:
      targets:
      - name: istio-egressgateway
      peers:
      - mtls: {}
    EOF
    {{< /text >}}

1. Create an egress `Gateway` for your MongoDB service, and destination rules and a virtual service
@@ -449,7 +489,7 @@ enable Mixer policy enforcement based on that identity. By enabling mutual TLS y

1. Refresh the web page of the application again and verify that the ratings are still displayed correctly.

1. [Enable Envoy’s access logging](/zh/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging)

1. Check the log of the egress gateway's Envoy and see a line that corresponds to your
    requests to the MongoDB service. If Istio is deployed in the `istio-system` namespace, the command to print the
@@ -467,6 +507,7 @@ $ kubectl delete serviceentry mongo
$ kubectl delete gateway istio-egressgateway --ignore-not-found=true
$ kubectl delete virtualservice direct-mongo-through-egress-gateway --ignore-not-found=true
$ kubectl delete destinationrule egressgateway-for-mongo mongo --ignore-not-found=true
$ kubectl delete policy istio-egressgateway -n istio-system --ignore-not-found=true
{{< /text >}}

## Egress control for TLS
@@ -492,7 +533,7 @@ your MongoDB egress traffic on the TCP level, as described in the previous secti

### Control TLS egress traffic without a gateway

In case you [do not need an egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#use-case), follow the
instructions in this section. If you want to direct your traffic through an egress gateway, proceed to
[Direct TLS egress traffic through an egress gateway](#direct-tls-egress-traffic-through-an-egress-gateway).
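For TLS traffic the service entry can match on SNI instead of an IP block. A hedged sketch (the hostname is an illustrative placeholder for `$MONGODB_HOST`, the port for `$MONGODB_PORT`; the real manifest is in the elided steps):

```yaml
# Illustrative TLS ServiceEntry: protocol TLS lets Istio route the egress
# connection by the SNI value presented by the MongoDB client.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mongo
spec:
  hosts:
  - my-mongodb.example.com   # $MONGODB_HOST
  ports:
  - number: 27017            # $MONGODB_PORT
    name: tls
    protocol: TLS
  location: MESH_EXTERNAL
  resolution: DNS
```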
@@ -526,13 +567,13 @@ $ kubectl delete serviceentry mongo

### Direct TLS Egress traffic through an egress gateway

In this section you handle the case when you need to direct the traffic through an
[egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#use-case). The sidecar proxy routes TLS
connections from the MongoDB client to the egress gateway, by matching the SNI of the MongoDB host.
The egress gateway forwards the traffic to the MongoDB host. Note that the sidecar proxy rewrites the destination port
to be 443. The egress gateway accepts the MongoDB traffic on the port 443, matches the MongoDB host by SNI, and rewrites
the port again to be the port of the MongoDB server.

1. [Deploy Istio egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#deploy-istio-egress-gateway).

1. Create a `ServiceEntry` for the MongoDB service:
@@ -562,7 +603,7 @@ to be 443. The egress gateway accepts the MongoDB traffic on the port 443, match

1. Create an egress `Gateway` for your MongoDB service, and destination rules and virtual services
    to direct the traffic through the egress gateway and from the egress gateway to the external service.

    If you want to enable [mutual TLS Authentication](/zh/docs/tasks/security/authentication/mutual-tls/) between the sidecar proxies of
    your application pods and the egress gateway, use the following command. (You may want to enable mutual TLS to let
    the egress gateway monitor the identity of the source pods and to enable Mixer policy enforcement based on that
    identity.)
@@ -724,7 +765,7 @@ to be 443. The egress gateway accepts the MongoDB traffic on the port 443, match

1. [Verify that the traffic is directed through the egress gateway](#verify-that-egress-traffic-is-directed-through-the-egress-gateway)

#### Cleanup directing TLS egress traffic through an egress gateway

{{< text bash >}}
$ kubectl delete serviceentry mongo
@@ -746,7 +787,7 @@ You can pick a wildcarded domain according to your MongoDB host.

To configure egress gateway traffic for a wildcarded domain, you will first need to deploy a custom egress
gateway with
[an additional SNI proxy](/zh/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#wildcard-configuration-for-arbitrary-domains).
This is needed due to current limitations of Envoy, the proxy used by the standard Istio egress gateway.

#### Prepare a new egress gateway with an SNI proxy
@@ -1039,7 +1080,7 @@ to hold the configuration of the Nginx SNI proxy:

1. Refresh the web page of the application again and verify that the ratings are still displayed correctly.

1. [Enable Envoy’s access logging](/zh/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging)

1. Check the log of the egress gateway's Envoy proxy. If Istio is deployed in the `istio-system` namespace, the command
    to print the log is:
@@ -17,7 +17,7 @@ In this blog post, we show how to apply monitoring and access policies to HTTP e

## Use case

Consider an organization that runs applications that process content from _cnn.com_. The applications are decomposed
into microservices deployed in an Istio service mesh. The applications access pages of various topics from _cnn.com_: [edition.cnn.com/politics](https://edition.cnn.com/politics), [edition.cnn.com/sport](https://edition.cnn.com/sport) and [edition.cnn.com/health](https://edition.cnn.com/health). The organization [configures Istio to allow access to edition.cnn.com](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/) and everything works fine. However, at some
point in time, the organization decides to banish politics. Practically, it means blocking access to
[edition.cnn.com/politics](https://edition.cnn.com/politics) and allowing access to
[edition.cnn.com/sport](https://edition.cnn.com/sport) and [edition.cnn.com/health](https://edition.cnn.com/health)
@@ -33,19 +33,19 @@ will prevent any possibility for a malicious application to access the forbidden

## Related tasks and examples

* The [Control Egress Traffic](/zh/docs/tasks/traffic-management/egress/) task demonstrates how external (outside the
  Kubernetes cluster) HTTP and HTTPS services can be accessed by applications inside the mesh.
* The [Configure an Egress Gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/) example describes how to configure
  Istio to direct egress traffic through a dedicated gateway service called _egress gateway_.
* The [Egress Gateway with TLS Origination](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/) example
  demonstrates how to allow applications to send HTTP requests to external servers that require HTTPS, while directing
  traffic through egress gateway.
* The [Collecting Metrics](/zh/docs/tasks/observability/metrics/collecting-metrics/) task describes how to configure metrics for services in a mesh.
* The [Visualizing Metrics with Grafana](/zh/docs/tasks/observability/metrics/using-istio-dashboard/)
  describes the Istio Dashboard to monitor mesh traffic.
* The [Basic Access Control](/zh/docs/tasks/policy-enforcement/denial-and-list/) task shows how to control access to
  in-mesh services.
* The [Denials and White/Black Listing](/zh/docs/tasks/policy-enforcement/denial-and-list/) task shows how to configure
  access policies using black or white list checkers.

As opposed to the observability and security tasks above, this blog post describes Istio's monitoring and access policies
@@ -53,14 +53,14 @@ applied exclusively to the egress traffic.

## Before you begin

Follow the steps in the [Egress Gateway with TLS Origination](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/) example, **with mutual TLS authentication enabled**, without
the [Cleanup](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#cleanup) step.
After completing that example, you can access [edition.cnn.com/politics](https://edition.cnn.com/politics) from an in-mesh container with `curl` installed. This blog post assumes that the `SOURCE_POD` environment variable contains the source pod's name and that the container's name is `sleep`.

## Configure monitoring and access policies

Since you want to accomplish your tasks in a _secure way_, you should direct egress traffic through
_egress gateway_, as described in the [Egress Gateway with TLS Origination](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/)
task. The _secure way_ here means that you want to prevent malicious applications from bypassing Istio monitoring and
policy enforcement.
@@ -73,12 +73,12 @@ the traffic to _edition.cnn.com_.

### Logging

Configure Istio to log access to _*.cnn.com_. You create a `logentry` and two
[stdio](/zh/docs/reference/config/policy-and-telemetry/adapters/stdio/) `handlers`, one for logging forbidden access
(_error_ log level) and another one for logging all access to _*.cnn.com_ (_info_ log level). Then you create `rules` to
direct your `logentry` instances to your `handlers`. One rule directs access to _*.cnn.com/politics_ to the handler for
logging forbidden access, another rule directs log entries to the handler that outputs each access to _*.cnn.com_ as an
_info_ log entry. To understand the Istio `logentries`, `rules`, and `handlers`, see
[Istio Adapter Model](/zh/blog/2017/adapter-model/). A diagram with the involved entities and dependencies between them
appears below:
|
||||
|
||||
{{< image width="80%"
|
||||
|
|
@@ -242,7 +242,7 @@ accessing _/health_ and _/sport_ URL paths only. Such a simple policy control ca
either _/health_ or _/sport_. Also note that this condition is added to the `istio-egressgateway`
section of the `VirtualService`, since the egress gateway is a hardened component in terms of security (see
[egress gateway security considerations]
(/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations)). You don't want any tampering
(/zh/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations)). You don't want any tampering
with your policies.
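A sketch of such a path-based condition in the egress `VirtualService` might look like the fragment below. The exact route and host names come from the TLS-origination example; the fragment here is illustrative, not the complete resource:

{{< text yaml >}}
# Illustrative fragment: only requests whose URL path is /health or /sport
# are given a route through the egress gateway; everything else is rejected.
http:
- match:
  - gateways:
    - istio-egressgateway
    port: 443
    uri:
      regex: "/health|/sport"
  route:
  - destination:
      host: edition.cnn.com
      port:
        number: 443
{{< /text >}}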

1. Send the previous three HTTP requests to _cnn.com_:
@@ -280,24 +280,24 @@ accessing _/health_ and _/sport_ URL paths only. Such a simple policy control ca

While implementing access control using Istio routing worked for us in this simple case, it would not suffice for more
complex cases. For example, the organization may want to allow access to
[edition.cnn.com/politics](https://edition.cnn.com/politics) under certain conditions, so more complex policy logic than
just filtering by URL paths will be required. You may want to apply [Istio Mixer Adapters](/blog/2017/adapter-model/),
just filtering by URL paths will be required. You may want to apply [Istio Mixer Adapters](/zh/blog/2017/adapter-model/),
for example
[white lists or black lists](/docs/tasks/policy-enforcement/denial-and-list/#attribute-based-whitelists-or-blacklists)
[white lists or black lists](/zh/docs/tasks/policy-enforcement/denial-and-list/#attribute-based-whitelists-or-blacklists)
of allowed/forbidden URL paths, respectively.
[Policy Rules](/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/) allow specifying complex conditions,
written in a [rich expression language](/docs/reference/config/policy-and-telemetry/expression-language/), which
[Policy Rules](/zh/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/) allow specifying complex conditions,
written in a [rich expression language](/zh/docs/reference/config/policy-and-telemetry/expression-language/), which
includes AND and OR logical operators. The rules can be reused for both logging and policy checks. More advanced users
may want to apply [Istio Role-Based Access Control](/docs/concepts/security/#authorization).
may want to apply [Istio Role-Based Access Control](/zh/docs/concepts/security/#authorization).
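As a sketch of how such a condition looks in practice, a Mixer `rule` can combine attributes with AND and OR operators in its `match` expression. The handler and instance names below are hypothetical, not taken from the example above:

{{< text yaml >}}
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: check-cnn-paths        # hypothetical rule name
spec:
  # Trigger the list check only for requests to cnn.com on these two paths;
  # && and || are the expression language's AND and OR operators.
  match: destination.service == "edition.cnn.com" && (request.path == "/politics" || request.path == "/sport")
  actions:
  - handler: path-checker.listchecker   # hypothetical handler
    instances:
    - requested-path.listentry          # hypothetical instance
{{< /text >}}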

An additional aspect is integration with remote access policy systems. If the organization in our use case operates some
[Identity and Access Management](https://en.wikipedia.org/wiki/Identity_management) system, you may want to configure
Istio to use access policy information from such a system. You implement this integration by applying
[Istio Mixer Adapters](/blog/2017/adapter-model/).
[Istio Mixer Adapters](/zh/blog/2017/adapter-model/).

Cancel the routing-based access control you used in this section and implement access control with Mixer policy checks
in the next section.

1. Replace the `VirtualService` for _edition.cnn.com_ with your previous version from the [Configure an Egress Gateway](/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway) example:
1. Replace the `VirtualService` for _edition.cnn.com_ with your previous version from the [Configure an Egress Gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway) example:

{{< text bash >}}
$ cat <<EOF | kubectl apply -f -
@@ -354,7 +354,7 @@ gateway.

### Access control by Mixer policy checks

In this step you use a Mixer
[`Listchecker` adapter](/docs/reference/config/policy-and-telemetry/adapters/list/), its whitelist
[`Listchecker` adapter](/zh/docs/reference/config/policy-and-telemetry/adapters/list/), its whitelist
variety. You define a `listentry` with the URL path of the request and a `listchecker` to check the `listentry` using a
static list of allowed URL paths, specified by the `overrides` field. For an external [Identity and Access Management](https://en.wikipedia.org/wiki/Identity_management) system, use the `providerurl` field instead. The updated
diagram of the instances, rules, and handlers appears below. Note that you reuse the same policy rule, `handle-cnn-access`
@@ -570,7 +570,7 @@ caption="HTTPS egress traffic through an egress gateway"

End-to-end HTTPS is considered a better approach from the security point of view. However, since the traffic is
encrypted, the Istio proxies and the egress gateway can only see the source and destination IPs and the [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) of the destination. Since you configure Istio to use mutual TLS between the sidecar proxy
and the egress gateway, the [identity of the source](/docs/concepts/security/#istio-identity) is also known.
and the egress gateway, the [identity of the source](/zh/docs/concepts/security/#istio-identity) is also known.
The gateway is unable to inspect the URL path, the HTTP method, and the headers of the requests, so no monitoring or
policy enforcement based on HTTP information is possible.
In our use case, the organization would be able to allow access to _edition.cnn.com_ and to specify which applications
@@ -593,8 +593,8 @@ demonstrated a simple policy that allowed certain URL paths only. We also showed

## Cleanup

1. Perform the instructions in the [Cleanup](/docs/tasks/traffic-management/egress/egress-gateway/#cleanup) section of the
[Configure an Egress Gateway](/docs/tasks/traffic-management/egress/egress-gateway/) example.
1. Perform the instructions in the [Cleanup](/zh/docs/tasks/traffic-management/egress/egress-gateway/#cleanup) section of the
[Configure an Egress Gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/) example.

1. Delete the logging and policy checks configuration:
@@ -13,21 +13,21 @@ target_release: 1.0

{{< tip >}}
This blog post was updated on July 23, 2018 to use the new
[v1alpha3 traffic management API](/blog/2018/v1alpha3-routing/). If you need to use the old version, follow these [docs](https://archive.istio.io/v0.7/blog/2018/egress-tcp.html).
[v1alpha3 traffic management API](/zh/blog/2018/v1alpha3-routing/). If you need to use the old version, follow these [docs](https://archive.istio.io/v0.7/blog/2018/egress-tcp.html).
{{< /tip >}}

In my previous blog post, [Consuming External Web Services](/blog/2018/egress-https/), I described how external services
In my previous blog post, [Consuming External Web Services](/zh/blog/2018/egress-https/), I described how external services
can be consumed by in-mesh Istio applications via HTTPS. In this post, I demonstrate consuming external services
over TCP. You will use the [Istio Bookinfo sample application](/docs/examples/bookinfo/), the version in which the book
over TCP. You will use the [Istio Bookinfo sample application](/zh/docs/examples/bookinfo/), the version in which the book
ratings data is persisted in a MySQL database. You deploy this database outside the cluster and configure the
_ratings_ microservice to use it. You define a
[Service Entry](/docs/reference/config/networking/service-entry/) to allow the in-mesh applications to
[Service Entry](/zh/docs/reference/config/networking/service-entry/) to allow the in-mesh applications to
access the external database.

## Bookinfo sample application with external ratings database

First, you set up a MySQL database instance to hold book ratings data outside of your Kubernetes cluster. Then you
modify the [Bookinfo sample application](/docs/examples/bookinfo/) to use your database.
modify the [Bookinfo sample application](/zh/docs/examples/bookinfo/) to use your database.

### Setting up the database for ratings data
@@ -150,8 +150,8 @@ Now you are ready to deploy a version of the Bookinfo application that will use

### Initial setting of Bookinfo application

To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with [Istio installed](/docs/setup/getting-started/). Then you deploy the
[Istio Bookinfo sample application](/docs/examples/bookinfo/), [apply the default destination rules](/docs/examples/bookinfo/#apply-default-destination-rules), and [change Istio to the blocking-egress-by-default policy](/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy).
To demonstrate the scenario of using an external database, you start with a Kubernetes cluster with [Istio installed](/zh/docs/setup/getting-started/). Then you deploy the
[Istio Bookinfo sample application](/zh/docs/examples/bookinfo/), [apply the default destination rules](/zh/docs/examples/bookinfo/#apply-default-destination-rules), and [change Istio to the blocking-egress-by-default policy](/zh/docs/tasks/traffic-management/egress/egress-control/#change-to-the-blocking-by-default-policy).

This application uses the `ratings` microservice to fetch
book ratings, a number between 1 and 5. The ratings are displayed as stars for each review. There are several versions

@@ -159,13 +159,13 @@ This application uses the `ratings` microservice to fetch
as their database.

The example commands in this blog post work with Istio 0.8+, with or without
[mutual TLS](/docs/concepts/security/#mutual-tls-authentication) enabled.
[mutual TLS](/zh/docs/concepts/security/#mutual-tls-authentication) enabled.

As a reminder, here is the end-to-end architecture of the application from the
[Bookinfo sample application](/docs/examples/bookinfo/).
[Bookinfo sample application](/zh/docs/examples/bookinfo/).

{{< image width="80%"
link="/docs/examples/bookinfo/withistio.svg"
link="/zh/docs/examples/bookinfo/withistio.svg"
caption="The original Bookinfo application"
>}}
@@ -204,10 +204,10 @@ _reviews_ service always calls the _ratings_ service. In addition, route all the
service to _ratings v2-mysql_ that uses your database.

Specify the routing for both services above by adding two
[virtual services](/docs/reference/config/networking/virtual-service/). These virtual services are
[virtual services](/zh/docs/reference/config/networking/virtual-service/). These virtual services are
specified in `samples/bookinfo/networking/virtual-service-ratings-mysql.yaml` of an Istio release archive.
***Important:*** make sure you
[applied the default destination rules](/docs/examples/bookinfo/#apply-default-destination-rules) before running the
[applied the default destination rules](/zh/docs/examples/bookinfo/#apply-default-destination-rules) before running the
following command.

{{< text bash >}}
@@ -229,18 +229,18 @@ Note that the MySQL database is outside the Istio service mesh, or more precisel

### Access the webpage

Access the webpage of the application, after
[determining the ingress IP and port](/docs/examples/bookinfo/#determine-the-ingress-ip-and-port).
[determining the ingress IP and port](/zh/docs/examples/bookinfo/#determine-the-ingress-ip-and-port).

You have a problem... Instead of the rating stars, the message _"Ratings service is currently unavailable"_ is
displayed below each review:

{{< image width="80%" link="./errorFetchingBookRating.png" caption="The Ratings service error messages" >}}

As in [Consuming External Web Services](/blog/2018/egress-https/), you experience **graceful service degradation**,
As in [Consuming External Web Services](/zh/blog/2018/egress-https/), you experience **graceful service degradation**,
which is good. The application did not crash due to the error in the _ratings_ microservice. The webpage of the
application correctly displayed the book information, the details, and the reviews, just without the rating stars.

You have the same problem as in [Consuming External Web Services](/blog/2018/egress-https/), namely all the traffic
You have the same problem as in [Consuming External Web Services](/zh/blog/2018/egress-https/), namely all the traffic
outside the Kubernetes cluster, both TCP and HTTP, is blocked by default by the sidecar proxies. To enable such traffic
for TCP, a mesh-external service entry for TCP must be defined.
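Such a service entry can be sketched as follows. The host name and address below are placeholders for your actual MySQL instance, not values from the example:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: mysql-external        # illustrative name
spec:
  hosts:
  - mysqldb.example.com       # placeholder host for the external database
  addresses:
  - 192.23.5.6/32             # the instance's IP, or a CIDR block if the IP can change
  ports:
  - number: 3306              # the default MySQL port
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
{{< /text >}}

For TCP traffic the sidecars match on the destination address, so the `addresses` field is what actually routes the traffic; the `hosts` entry serves as the service's name within the mesh.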
@@ -333,23 +333,23 @@ Also note that the IPs of an external service are not always static, for example
be changed from time to time, for example due to infrastructure changes. In these cases, if the range of the possible
IPs is known, you should specify the range by CIDR blocks. If the range of the possible IPs is not known, service
entries for TCP cannot be used and
[the external services must be called directly](/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services),
[the external services must be called directly](/zh/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services),
bypassing the sidecar proxies.

## Relation to mesh expansion
## Relation to virtual machines support

Note that the scenario described in this post is different from the mesh expansion scenario, described in the
[Bookinfo with Mesh Expansion](/docs/examples/virtual-machines/bookinfo) example. In that scenario, a MySQL instance runs on an
Note that the scenario described in this post is different from the
[Bookinfo with Virtual Machines](/zh/docs/examples/virtual-machines/bookinfo/) example. In that scenario, a MySQL instance runs on an
external
(outside the cluster) machine (bare metal or a VM), integrated with the Istio service mesh. The MySQL service becomes
a first-class citizen of the mesh with all the beneficial features of Istio applicable. Among other things, the service
becomes addressable by a local cluster domain name, for example by `mysqldb.vm.svc.cluster.local`, and the communication
to it can be secured by
[mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication). There is no need to create a service
[mutual TLS authentication](/zh/docs/concepts/security/#mutual-tls-authentication). There is no need to create a service
entry to access this service; however, the service must be registered with Istio. To enable such integration, Istio
components (_Envoy proxy_, _node-agent_, `istio-agent`) must be installed on the machine and the Istio control plane
(_Pilot_, _Mixer_, _Citadel_) must be accessible from it. See the
[Istio Mesh Expansion](/docs/examples/virtual-machines/) instructions for more details.
[Istio VM-related](/zh/docs/examples/virtual-machines/) tasks for more details.

In our case, the MySQL instance can run on any machine or can be provisioned as a service by a cloud provider. There is
no requirement to integrate the machine with Istio. The Istio control plane does not have to be accessible from the
@@ -11,11 +11,11 @@ target_release: 1.0

Traffic management is one of the critical benefits provided by Istio. At the heart of Istio’s traffic management is the ability to decouple traffic flow and infrastructure scaling. This lets you control your traffic in ways that aren’t possible without a service mesh like Istio.

For example, let’s say you want to execute a [canary deployment](https://martinfowler.com/bliki/CanaryRelease.html). With Istio, you can specify that **v1** of a service receives 90% of incoming traffic, while **v2** of that service only receives 10%. With standard Kubernetes deployments, the only way to achieve this is to manually control the number of available Pods for each version, for example 9 Pods running v1 and 1 Pod running v2. This type of manual control is hard to implement, and over time may have trouble scaling. For more information, check out [Canary Deployments using Istio](/blog/2017/0.1-canary/).
For example, let’s say you want to execute a [canary deployment](https://martinfowler.com/bliki/CanaryRelease.html). With Istio, you can specify that **v1** of a service receives 90% of incoming traffic, while **v2** of that service only receives 10%. With standard Kubernetes deployments, the only way to achieve this is to manually control the number of available Pods for each version, for example 9 Pods running v1 and 1 Pod running v2. This type of manual control is hard to implement, and over time may have trouble scaling. For more information, check out [Canary Deployments using Istio](/zh/blog/2017/0.1-canary/).
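The 90/10 split described above can be sketched as a `VirtualService`. The service and subset names here are illustrative; the `v1` and `v2` subsets would be defined in a corresponding `DestinationRule`:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service            # illustrative name
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 90              # 90% of traffic goes to v1
    - destination:
        host: my-service
        subset: v2
      weight: 10              # 10% goes to the v2 canary
{{< /text >}}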

The same issue exists when deploying updates to existing services. While you can update deployments with Kubernetes, it requires replacing v1 Pods with v2 Pods. Using Istio, you can deploy v2 of your service and use built-in traffic management mechanisms to shift traffic to your updated services at a network level, then remove the v1 Pods.

In addition to canary deployments and general traffic shifting, Istio also gives you the ability to implement dynamic request routing (based on HTTP headers), failure recovery, retries, circuit breakers, and fault injection. For more information, check out the [Traffic Management documentation](/docs/concepts/traffic-management/).
In addition to canary deployments and general traffic shifting, Istio also gives you the ability to implement dynamic request routing (based on HTTP headers), failure recovery, retries, circuit breakers, and fault injection. For more information, check out the [Traffic Management documentation](/zh/docs/concepts/traffic-management/).

This post walks through a technique that highlights a particularly useful way to implement Istio incrementally -- in this case, only the traffic management features -- without having to individually update each of your Pods.
@@ -39,15 +39,15 @@ Let’s say you have two services that are part of the Istio mesh, Service A and

If Services A and B are not part of the Istio mesh, there is no sidecar proxy that knows how to route traffic to different versions of Service B. In that case you need to use another approach to get traffic from Service A to Service B, following the 50/50 rules you’ve set up.

Fortunately, a standard Istio deployment already includes a [Gateway](/docs/concepts/traffic-management/#gateways) that specifically deals with ingress traffic outside of the Istio mesh. This Gateway is used to allow ingress traffic from outside the cluster via an external load balancer, or to allow ingress traffic from within the Kubernetes cluster but outside the service mesh. It can be configured to proxy incoming ingress traffic to the appropriate Pods, even if they don’t have a sidecar proxy. While this approach allows you to leverage Istio’s traffic management features, it does mean that traffic going through the ingress gateway will incur an extra hop.
Fortunately, a standard Istio deployment already includes a [Gateway](/zh/docs/concepts/traffic-management/#gateways) that specifically deals with ingress traffic outside of the Istio mesh. This Gateway is used to allow ingress traffic from outside the cluster via an external load balancer, or to allow ingress traffic from within the Kubernetes cluster but outside the service mesh. It can be configured to proxy incoming ingress traffic to the appropriate Pods, even if they don’t have a sidecar proxy. While this approach allows you to leverage Istio’s traffic management features, it does mean that traffic going through the ingress gateway will incur an extra hop.
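A sketch of the 50/50 split applied at the gateway follows. It assumes a `Gateway` resource named `service-b-gateway` bound to the ingress gateway workload, and subsets defined in a `DestinationRule`; all names are illustrative:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b-via-gateway   # illustrative name
spec:
  hosts:
  - service-b
  gateways:
  - service-b-gateway           # hypothetical Gateway on the ingress gateway
  http:
  - route:
    - destination:
        host: service-b
        subset: v1
      weight: 50                # half the ingress traffic to v1
    - destination:
        host: service-b
        subset: v2
      weight: 50                # half to v2
{{< /text >}}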

{{< image width="60%" link="./fifty-fifty-ingress-gateway.png" caption="50/50 Traffic Split using Ingress Gateway" >}}

## In action: traffic routing with Istio

A simple way to see this type of approach in action is to first set up your Kubernetes environment using the [Platform Setup](/docs/setup/platform-setup/) instructions, and then install the **minimal** Istio profile using [Helm](/docs/setup/install/helm/), including only the traffic management components (ingress gateway, egress gateway, Pilot). The following example uses [Google Kubernetes Engine](https://cloud.google.com/gke).
A simple way to see this type of approach in action is to first set up your Kubernetes environment using the [Platform Setup](/zh/docs/setup/platform-setup/) instructions, and then install the **minimal** Istio profile using [Helm](/zh/docs/setup/install/helm/), including only the traffic management components (ingress gateway, egress gateway, Pilot). The following example uses [Google Kubernetes Engine](https://cloud.google.com/gke).

First, set up and configure [GKE](/docs/setup/platform-setup/gke/):
First, set up and configure [GKE](/zh/docs/setup/platform-setup/gke/):

{{< text bash >}}
$ gcloud container clusters create istio-inc --zone us-central1-f
@@ -57,7 +57,7 @@ $ kubectl create clusterrolebinding cluster-admin-binding \
--user=$(gcloud config get-value core/account)
{{< /text >}}

Next, [install Helm](https://helm.sh/docs/intro/install/) and [generate a minimal Istio install](/docs/setup/install/helm/) -- only traffic management components:
Next, [install Helm](https://helm.sh/docs/intro/install/) and [generate a minimal Istio install](/zh/docs/setup/install/helm/) -- only traffic management components:

{{< text bash >}}
$ helm template install/kubernetes/helm/istio \
@@ -10,7 +10,7 @@ target_release: 0.8

Micro-segmentation is a security technique that creates secure zones in cloud deployments and allows organizations to
isolate workloads from one another and secure them individually.
[Istio's authorization feature](/docs/concepts/security/#authorization), also known as Istio Role Based Access Control,
[Istio's authorization feature](/zh/docs/concepts/security/#authorization), also known as Istio Role Based Access Control,
provides micro-segmentation for services in an Istio mesh. It features:

* Authorization at different levels of granularity, including namespace level, service level, and method level.

@@ -18,7 +18,7 @@ provides micro-segmentation for services in an Istio mesh. It features:
* High performance, as it is enforced natively on Envoy.
* Role-based semantics, which makes it easy to use.
* High flexibility as it allows users to define conditions using
[combinations of attributes](/docs/reference/config/security/constraints-and-properties/).
[combinations of attributes](/zh/docs/reference/config/security/constraints-and-properties/).
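The different levels of granularity listed above can be sketched with a `ServiceRole`. The names below are illustrative: this hypothetical role limits access to `GET` requests on a subset of one service's paths:

{{< text yaml >}}
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: book-reader           # illustrative role name
  namespace: default
spec:
  rules:
  - services: ["bookstore.default.svc.cluster.local"]   # service-level granularity
    methods: ["GET"]          # method-level granularity
    paths: ["/books/*"]       # restrict to a subset of the service's paths
{{< /text >}}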

In this blog post, you'll learn about the main authorization features and how to use them in different situations.
@@ -49,7 +49,7 @@ frontend service.
duplicate configurations in multiple places and later forget to update some of them when you need to make changes.

On the other hand, Istio's authorization system is not a traditional RBAC system. It also allows users to define **conditions** using
[combinations of attributes](/docs/reference/config/security/constraints-and-properties/). This gives Istio
[combinations of attributes](/zh/docs/reference/config/security/constraints-and-properties/). This gives Istio
flexibility to express complex access control policies. In fact, **the “RBAC + conditions” model
that Istio authorization adopts has all the benefits an RBAC system has, and supports the level of flexibility that
normally an ABAC system provides.** You'll see some [examples](#examples) below.
@@ -69,7 +69,7 @@ In addition to the primary identity, you can also specify any conditions that de
you can specify the client identity as “user Alice calling from Bookstore frontend service”, in which case,
you have a combined identity of the calling service (`Bookstore frontend`) and the end user (`Alice`).

To improve security, you should enable [authentication features](/docs/concepts/security/#authentication),
To improve security, you should enable [authentication features](/zh/docs/concepts/security/#authentication),
and use authenticated identities in authorization policies. However, strongly authenticated identity is not required
for using authorization. Istio authorization works with or without identities. If you are working with a legacy system,
you may not have mutual TLS or JWT authentication set up for your mesh. In this case, the only way to identify the client is, for example,
@@ -77,9 +77,9 @@ through IP. You can still use Istio authorization to control which IP addresses

## Examples

The [authorization task](/docs/tasks/security/authorization/authz-http/) shows you how to
The [authorization task](/zh/docs/tasks/security/authorization/authz-http/) shows you how to
use Istio's authorization feature to control namespace level and service level access using the
[Bookinfo application](/docs/examples/bookinfo/). In this section, you'll see more examples on how to achieve
[Bookinfo application](/zh/docs/examples/bookinfo/). In this section, you'll see more examples on how to achieve
micro-segmentation with Istio authorization.

### Namespace level segmentation via RBAC + conditions
@@ -143,7 +143,7 @@ spec:
#### Using authenticated client identities

Suppose you want to grant this `book-reader` role to your `bookstore-frontend` service. If you have enabled
[mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication) for your mesh, you can use a
[mutual TLS authentication](/zh/docs/concepts/security/#mutual-tls-authentication) for your mesh, you can use a
service account to identify your `bookstore-frontend` service. Granting the `book-reader` role to the `bookstore-frontend`
service can be done by creating a `ServiceRoleBinding` as shown below:
@@ -163,7 +163,7 @@ spec:

You may want to restrict this further by adding a condition that “only users who belong to the `qualified-reviewer` group are
allowed to read books”. The `qualified-reviewer` group is the end user identity that is authenticated by
[JWT authentication](/docs/concepts/security/#authentication). In this case, the combination of the client service identity
[JWT authentication](/zh/docs/concepts/security/#authentication). In this case, the combination of the client service identity
(`bookstore-frontend`) and the end user identity (`qualified-reviewer`) is used in the authorization policy.

{{< text yaml >}}
@@ -210,5 +210,5 @@ Istio’s authorization feature provides authorization at namespace-level, servi
It adopts the “RBAC + conditions” model, which makes it easy to use and understand as an RBAC system, while providing the level of
flexibility that an ABAC system normally provides. Istio authorization achieves high performance as it is enforced
natively on Envoy. While it provides the best security by working together with
[Istio authentication features](/docs/concepts/security/#authentication), Istio authorization can also be used to
[Istio authentication features](/zh/docs/concepts/security/#authentication), Istio authorization can also be used to
provide access control for legacy systems that do not have authentication.
@@ -33,7 +33,7 @@ blog.

{{< tip >}}
This blog is a high-level description of how to deploy Istio in a
limited multi-tenancy environment. The [docs](/docs/) section will be updated
limited multi-tenancy environment. The [docs](/zh/docs/) section will be updated
when official multi-tenancy support is provided.
{{< /tip >}}
@@ -76,8 +76,8 @@ istio-system1 istio-mixer-7d4f7b9968-66z44 3/3 Running 0
istio-system1 istio-pilot-5bb6b7669c-779vb 2/2 Running 0 15d
{{< /text >}}

The Istio [sidecar](/docs/setup/additional-setup/sidecar-injection/)
and, if required, [addons](/docs/tasks/observability/) manifests must also be
The Istio [sidecar](/zh/docs/setup/additional-setup/sidecar-injection/)
and, if required, [addons](/zh/docs/tasks/observability/) manifests must also be
deployed to match the configured `namespace` in use by the tenant's Istio
control plane.
@@ -230,7 +230,7 @@ ratings-default RouteRule.v1alpha2.config.istio.io ns-1
reviews-default RouteRule.v1alpha2.config.istio.io ns-1
{{< /text >}}

See the [Multiple Istio control planes](/blog/2018/soft-multitenancy/#multiple-istio-control-planes) section of this document for more details on `namespace` requirements in a
See the [Multiple Istio control planes](/zh/blog/2018/soft-multitenancy/#multiple-istio-control-planes) section of this document for more details on `namespace` requirements in a
multi-tenant environment.

### Test results
@@ -268,7 +268,7 @@ Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list p
{{< /text >}}

The tenant administrator can deploy applications in the application namespace configured for
that tenant. As an example, after updating the [Bookinfo](/docs/examples/bookinfo/)
that tenant. As an example, after updating the [Bookinfo](/zh/docs/examples/bookinfo/)
manifests and then deploying them under the tenant's application namespace of *ns-0*, listing the
pods in use by this tenant's namespace is permitted:
@@ -290,8 +290,8 @@ $ kubectl get pods -n ns-1
 Error from server (Forbidden): pods is forbidden: User "dev-admin" cannot list pods in the namespace "ns-1"
 {{< /text >}}

-If the [add-on tools](/docs/tasks/observability/), example
-[Prometheus](/docs/tasks/observability/metrics/querying-metrics/), are deployed
+If the [add-on tools](/zh/docs/tasks/observability/), example
+[Prometheus](/zh/docs/tasks/observability/metrics/querying-metrics/), are deployed
 (also limited by an Istio `namespace`) the statistical results returned would represent only
 that traffic seen from that tenant's application namespace.
@@ -10,7 +10,7 @@ target_release: 0.5

 Trying to enumerate all the possible combinations of test cases for testing services in non-production/test environments can be daunting. In some cases, you'll find that all of the effort that goes into cataloging these use cases doesn't match up to real production use cases. Ideally, we could use live production use cases and traffic to help illuminate all of the feature areas of the service under test that we might miss in more contrived testing environments.

-Istio can help here. With the release of [Istio 0.5](/news/releases/0.x/announcing-0.5), Istio can mirror traffic to help test your services. You can write route rules similar to the following to enable traffic mirroring:
+Istio can help here. With the release of [Istio 0.5](/zh/news/releases/0.x/announcing-0.5), Istio can mirror traffic to help test your services. You can write route rules similar to the following to enable traffic mirroring:

 {{< text yaml >}}
 apiVersion: config.istio.io/v1alpha2
@@ -40,5 +40,5 @@ A few things to note here:
 * Responses to any mirrored traffic is ignored; traffic is mirrored as "fire-and-forget"
 * You'll need to have the 0-weighted route to hint to Istio to create the proper Envoy cluster under the covers; [this should be ironed out in future releases](https://github.com/istio/istio/issues/3270).

-Learn more about mirroring by visiting the [Mirroring Task](/docs/tasks/traffic-management/mirroring/) and see a more
+Learn more about mirroring by visiting the [Mirroring Task](/zh/docs/tasks/traffic-management/mirroring/) and see a more
 [comprehensive treatment of this scenario on my blog](https://dzone.com/articles/traffic-shadowing-with-istio-reducing-the-risk-of).
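As context for the mirroring hunks above: the `RouteRule` example this post shows is the old v1alpha2 API, and the 0-weighted-route workaround it mentions was later removed. A minimal sketch of the same idea with the current `VirtualService` API (service and subset names are illustrative, not from this post):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:            # a copy of each request is also sent here, fire-and-forget
      host: httpbin
      subset: v2
```

Responses from the mirrored subset are discarded, matching the "fire-and-forget" behavior described in the hunk above.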
@@ -77,7 +77,7 @@ resources.

 ### `Gateway`

-A [`Gateway`](/docs/reference/config/networking/gateway/)
+A [`Gateway`](/zh/docs/reference/config/networking/gateway/)
 configures a load balancer for HTTP/TCP traffic, regardless of
 where it will be running. Any number of gateways can exist within the mesh
 and multiple different gateway implementations can co-exist. In fact, a
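To make the `Gateway` description in the hunk above concrete, a minimal sketch of a `Gateway` resource that binds plain HTTP on port 80 to the default ingress gateway workload (the name and host are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway   # selects the gateway workload this config applies to
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept requests for any host on this port
```

Because the selector targets a workload rather than a fixed deployment, multiple gateway implementations can co-exist in the mesh, as the surrounding text notes.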
@@ -157,9 +157,9 @@ scalability issues with the previous model.
 In effect, what has changed is that instead of configuring routing using a set of individual configuration resources
 (rules) for a particular destination service, each containing a precedence field to control the order of evaluation, we
 now configure the (virtual) destination itself, with all of its rules in an ordered list within a corresponding
-[`VirtualService`](/docs/reference/config/networking/virtual-service/) resource.
+[`VirtualService`](/zh/docs/reference/config/networking/virtual-service/) resource.
 For example, where previously we had two `RouteRule` resources for the
-[Bookinfo](/docs/examples/bookinfo/) application’s `reviews` service, like this:
+[Bookinfo](/zh/docs/examples/bookinfo/) application’s `reviews` service, like this:

 {{< text yaml >}}
 apiVersion: config.istio.io/v1alpha2
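The hunk above is cut off before the post's own example; as a sketch of the "ordered list in one resource" idea it describes, the two `RouteRule` resources for `reviews` collapse into a single `VirtualService` whose rules are evaluated top to bottom (subset names here follow the usual Bookinfo convention and are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:                  # first rule in the list: replaces the old precedence field
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                  # default rule, evaluated last
    - destination:
        host: reviews
        subset: v1
```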
@@ -275,7 +275,7 @@ In addition to this fundamental restructuring, `VirtualService` includes several

 ### `DestinationRule`

-A [`DestinationRule`](/docs/reference/config/networking/destination-rule/)
+A [`DestinationRule`](/zh/docs/reference/config/networking/destination-rule/)
 configures the set of policies to be applied while forwarding traffic to a service. They are
 intended to be authored by service owners, describing the circuit breakers, load balancer settings, TLS settings, etc..
 `DestinationRule` is more or less the same as its predecessor, `DestinationPolicy`, with the following exceptions:
@@ -319,7 +319,7 @@ Notice that, unlike `DestinationPolicy`, multiple policies (e.g., default and v2

 ### `ServiceEntry`

-[`ServiceEntry`](/docs/reference/config/networking/service-entry/)
+[`ServiceEntry`](/zh/docs/reference/config/networking/service-entry/)
 is used to add additional entries into the service registry that Istio maintains internally.
 It is most commonly used to allow one to model traffic to external dependencies of the mesh
 such as APIs consumed from the web or traffic to services in legacy infrastructure.
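As context for the `ServiceEntry` hunk above, a minimal sketch of a `ServiceEntry` that adds an external HTTPS dependency to Istio's internal registry (the host name is illustrative, not from this post):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com        # external dependency modeled inside the mesh
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL  # endpoints live outside the mesh
```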
@@ -16,10 +16,10 @@ With the [App Identity and Access Adapter](https://github.com/ibm-cloud-security

 ## Understanding Istio and the adapter

-[Istio](/docs/concepts/what-is-istio/) is an open source service mesh that transparently layers onto distributed applications and seamlessly integrates with Kubernetes. To reduce the complexity of deployments Istio provides behavioral insights and operational control over the service mesh as a whole.
-See [Istio Architecture](/docs/ops/architecture/) for more details.
+[Istio](/zh/docs/concepts/what-is-istio/) is an open source service mesh that transparently layers onto distributed applications and seamlessly integrates with Kubernetes. To reduce the complexity of deployments Istio provides behavioral insights and operational control over the service mesh as a whole.
+See [Istio Architecture](/zh/docs/ops/architecture/) for more details.

-Istio uses [Envoy proxy sidecars](/blog/2019/data-plane-setup/) to mediate inbound and outbound traffic for all pods in the service mesh. Istio extracts telemetry from the Envoy sidecars and sends it to [Mixer](/docs/ops/architecture/#mixer), the Istio component responsible for collecting telemetry and policy enforcement.
+Istio uses [Envoy proxy sidecars](/zh/blog/2019/data-plane-setup/) to mediate inbound and outbound traffic for all pods in the service mesh. Istio extracts telemetry from the Envoy sidecars and sends it to [Mixer](/zh/docs/ops/architecture/#mixer), the Istio component responsible for collecting telemetry and policy enforcement.

 The App Identity and Access adapter extends the Mixer functionality by analyzing the telemetry (attributes) against various access control policies across the service mesh. The access control policies can be linked to a particular Kubernetes services and can be finely tuned to specific service endpoints. For more information about policies and telemetry, see the Istio documentation.
@@ -44,9 +44,9 @@ The point of a dependency graph is to know which clients depend on a particular

 ### AppSwitch model and constructs

-Now that we have a conceptual understanding of AppSwitch’s high-level approach, let’s look at the constructs involved. But first a quick summary of the usage model is in order. Even though it is written for a different context, reviewing my earlier [blog](/blog/2018/delayering-istio/) on this topic would be useful as well. For completeness, let me also note AppSwitch doesn’t bother with non-network dependencies. For example it may be possible for two services to interact using IPC mechanisms or through the shared file system. Processes with deep ties like that are typically part of the same service anyway and don’t require framework’s intervention for ordering.
+Now that we have a conceptual understanding of AppSwitch’s high-level approach, let’s look at the constructs involved. But first a quick summary of the usage model is in order. Even though it is written for a different context, reviewing my earlier [blog](/zh/blog/2018/delayering-istio/) on this topic would be useful as well. For completeness, let me also note AppSwitch doesn’t bother with non-network dependencies. For example it may be possible for two services to interact using IPC mechanisms or through the shared file system. Processes with deep ties like that are typically part of the same service anyway and don’t require framework’s intervention for ordering.

-At its core, AppSwitch is built on a mechanism that allows instrumenting the BSD socket API and other related calls like `fcntl` and `ioctl` that deal with sockets. As interesting as the details of its implementation are, it’s going to distract us from the main topic, so I’d just summarize the key properties that distinguish it from other implementations. (1) It’s fast. It uses a combination of `seccomp` filtering and binary instrumentation to aggressively limit intervening with application’s normal execution. AppSwitch is particularly suited for service mesh and application networking use cases given that it implements those features without ever having to actually touch the data. In contrast, network level approaches incur per-packet cost. Take a look at this [blog](/blog/2018/delayering-istio/) for some of the performance measurements. (2) It doesn’t require any kernel support, kernel module or a patch and works on standard distro kernels (3) It can run as regular user (no root). In fact, the mechanism can even make it possible to run [Docker daemon without root](https://linuxpiter.com/en/materials/2478) by removing root requirement to network containers (4) It doesn’t require any changes to the applications whatsoever and works for any type of application -- from WebSphere ND and SAP to custom C apps to statically linked Go apps. Only requirement at this point is Linux/x86.
+At its core, AppSwitch is built on a mechanism that allows instrumenting the BSD socket API and other related calls like `fcntl` and `ioctl` that deal with sockets. As interesting as the details of its implementation are, it’s going to distract us from the main topic, so I’d just summarize the key properties that distinguish it from other implementations. (1) It’s fast. It uses a combination of `seccomp` filtering and binary instrumentation to aggressively limit intervening with application’s normal execution. AppSwitch is particularly suited for service mesh and application networking use cases given that it implements those features without ever having to actually touch the data. In contrast, network level approaches incur per-packet cost. Take a look at this [blog](/zh/blog/2018/delayering-istio/) for some of the performance measurements. (2) It doesn’t require any kernel support, kernel module or a patch and works on standard distro kernels (3) It can run as regular user (no root). In fact, the mechanism can even make it possible to run [Docker daemon without root](https://linuxpiter.com/en/materials/2478) by removing root requirement to network containers (4) It doesn’t require any changes to the applications whatsoever and works for any type of application -- from WebSphere ND and SAP to custom C apps to statically linked Go apps. Only requirement at this point is Linux/x86.

 ### Decoupling services from their references
@@ -70,7 +70,7 @@ What if the application times out based on its own internal timer? Truth be tol

 Service references can be used to address the Istio sidecar dependency issue mentioned earlier. AppSwitch allows the IP:port specified as part of a service reference to be a wildcard. That is, the service reference IP address can be a netmask indicating the IP address range to be captured. If the label selector of the service reference points to the sidecar service, then all outgoing connections of any application for which this service reference is applied, will be transparently redirected to the sidecar. And of course, the service reference remains valid while sidecar is still coming up and the race is removed.

-Using service references for sidecar dependency ordering also implicitly redirects application’s connections to the sidecar without requiring iptables and attendant privilege issues. Essentially it works as if the application is directly making connections to the sidecar rather than the target destination, leaving the sidecar in charge of what to do. AppSwitch would interject metadata about the original destination etc. into the data stream of the connection using the proxy protocol that the sidecar could decode before passing the connection through to the application. Some of these details were discussed [here](/blog/2018/delayering-istio/). That takes care of outbound connections but what about incoming connections? With all services and their sidecars running under AppSwitch, any incoming connections that would have come from remote nodes would be redirected to their respective remote sidecars. So nothing special to do about incoming connections.
+Using service references for sidecar dependency ordering also implicitly redirects application’s connections to the sidecar without requiring iptables and attendant privilege issues. Essentially it works as if the application is directly making connections to the sidecar rather than the target destination, leaving the sidecar in charge of what to do. AppSwitch would interject metadata about the original destination etc. into the data stream of the connection using the proxy protocol that the sidecar could decode before passing the connection through to the application. Some of these details were discussed [here](/zh/blog/2018/delayering-istio/). That takes care of outbound connections but what about incoming connections? With all services and their sidecars running under AppSwitch, any incoming connections that would have come from remote nodes would be redirected to their respective remote sidecars. So nothing special to do about incoming connections.

 ## Summary
@@ -8,14 +8,14 @@ attribution: Julien Senon
 target_release: 1.0
 ---

-This post provides instructions to manually create a custom ingress [gateway](/docs/reference/config/networking/gateway/) with automatic provisioning of certificates based on cert-manager.
+This post provides instructions to manually create a custom ingress [gateway](/zh/docs/reference/config/networking/gateway/) with automatic provisioning of certificates based on cert-manager.

 The creation of custom ingress gateway could be used in order to have different `loadbalancer` in order to isolate traffic.

 ## Before you begin

 * Setup Istio by following the instructions in the
-  [Installation guide](/docs/setup/).
+  [Installation guide](/zh/docs/setup/).
 * Setup `cert-manager` with helm [chart](https://github.com/helm/charts/tree/master/stable/cert-manager#installing-the-chart)
 * We will use `demo.mydemo.com` for our example,
   it must be resolved with your DNS
@@ -129,7 +129,7 @@ The creation of custom ingress gateway could be used in order to have different
 desiredReplicas: 1
 {{< /text >}}

-1. Apply your deployment with declaration provided in the [yaml definition](/blog/2019/custom-ingress-gateway/deployment-custom-ingress.yaml)
+1. Apply your deployment with declaration provided in the [yaml definition](/zh/blog/2019/custom-ingress-gateway/deployment-custom-ingress.yaml)

 {{< tip >}}
 The annotations used, for example `aws-load-balancer-type`, only apply for AWS.
@@ -231,4 +231,4 @@ The creation of custom ingress gateway could be used in order to have different
 SSL certificate verify ok.
 {{< /text >}}

-**Congratulations!** You can now use your custom `istio-custom-gateway` [gateway](/docs/reference/config/networking/gateway/) configuration object.
+**Congratulations!** You can now use your custom `istio-custom-gateway` [gateway](/zh/docs/reference/config/networking/gateway/) configuration object.
@@ -10,7 +10,7 @@ target_release: 1.0
 ---
 A simple overview of an Istio service-mesh architecture always starts with describing the control-plane and data-plane.

-From [Istio’s documentation](/docs/ops/architecture/):
+From [Istio’s documentation](/zh/docs/ops/architecture/):

 {{< quote >}}
 An Istio service mesh is logically split into a data plane and a control plane.
@@ -45,7 +45,7 @@ This is the actual sidecar proxy (based on Envoy).

 ### Manual injection

-In the manual injection method, you can use [`istioctl`](/docs/reference/commands/istioctl) to modify the pod template and add the configuration of the two containers previously mentioned. For both manual as well as automatic injection, Istio takes the configuration from the `istio-sidecar-injector` configuration map (configmap) and the mesh's `istio` configmap.
+In the manual injection method, you can use [`istioctl`](/zh/docs/reference/commands/istioctl) to modify the pod template and add the configuration of the two containers previously mentioned. For both manual as well as automatic injection, Istio takes the configuration from the `istio-sidecar-injector` configuration map (configmap) and the mesh's `istio` configmap.

 Let’s look at the configuration of the `istio-sidecar-injector` configmap, to get an idea of what actually is going on.
@@ -189,7 +189,7 @@ As seen in the output, the `State` of the `istio-init` container is `Terminated`

 ### Automatic injection

-Most of the times, you don’t want to manually inject a sidecar every time you deploy an application, using the [`istioctl`](/docs/reference/commands/istioctl) command, but would prefer that Istio automatically inject the sidecar to your pod. This is the recommended approach and for it to work, all you need to do is to label the namespace where you are deploying the app with `istio-injection=enabled`.
+Most of the times, you don’t want to manually inject a sidecar every time you deploy an application, using the [`istioctl`](/zh/docs/reference/commands/istioctl) command, but would prefer that Istio automatically inject the sidecar to your pod. This is the recommended approach and for it to work, all you need to do is to label the namespace where you are deploying the app with `istio-injection=enabled`.

 Once labeled, Istio injects the sidecar automatically for any pod you deploy in that namespace. In the following example, the sidecar gets automatically injected in the deployed pods in the `istio-dev` namespace.
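The automatic-injection hunk above hinges on the `istio-injection=enabled` namespace label; a minimal sketch of a namespace manifest carrying that label (using the `istio-dev` name from the post's own example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: istio-dev
  labels:
    istio-injection: enabled   # sidecar is injected into every pod deployed here
```

The same effect can be achieved on an existing namespace by labeling it in place rather than re-applying the manifest.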
@@ -306,7 +306,7 @@ This example shows there are many variables, based on whether the automatic side
 - default policy (Configured in the ConfigMap `istio-sidecar-injector`)
 - per-pod override annotation (`sidecar.istio.io/inject`)

-The [injection status table](/docs/ops/common-problems/injection/) shows a clear picture of the final injection status based on the value of the above variables.
+The [injection status table](/zh/docs/ops/common-problems/injection/) shows a clear picture of the final injection status based on the value of the above variables.

 ## Traffic flow from application container to sidecar proxy
@@ -28,4 +28,4 @@ Chiron is the component provisioning and managing DNS certificates in Istio.
 caption="The architecture of provisioning and managing DNS certificates in Istio"
 >}}

-To try this new feature, refer to the [DNS certificate management task](/docs/tasks/security/dns-cert).
+To try this new feature, refer to the [DNS certificate management task](/zh/docs/tasks/security/dns-cert).
@@ -8,7 +8,7 @@ keywords: [performance,traffic-management,egress,mongo]
 target_release: 1.0
 ---

-The main objective of this investigation was to determine the impact on performance and resource utilization when an egress gateway is added in the service mesh to access an external service (MongoDB, in this case). The steps to configure an egress gateway for an external MongoDB are described in the blog [Consuming External MongoDB Services](/blog/2018/egress-mongo/).
+The main objective of this investigation was to determine the impact on performance and resource utilization when an egress gateway is added in the service mesh to access an external service (MongoDB, in this case). The steps to configure an egress gateway for an external MongoDB are described in the blog [Consuming External MongoDB Services](/zh/blog/2018/egress-mongo/).

 The application used for this investigation was the Java version of Acmeair, which simulates an airline reservation system. This application is used in the Performance Regression Patrol of Istio daily builds, but on that setup the microservices have been accessing the external MongoDB directly via their sidecars, without an egress gateway.
@@ -15,7 +15,7 @@ Once you agree that you should control the egress traffic coming from your clust
 What is required from a system for secure control of egress traffic? Which is the best solution to fulfill
 these requirements? (spoiler: Istio in my opinion)
 Future installments will describe
-[the implementation of the secure control of egress traffic in Istio](/blog/2019/egress-traffic-control-in-istio-part-2/)
+[the implementation of the secure control of egress traffic in Istio](/zh/blog/2019/egress-traffic-control-in-istio-part-2/)
 and compare it with other solutions.

 The most important security aspect for a service mesh is probably ingress traffic. You definitely must prevent attackers
@@ -80,7 +80,7 @@ combined them with the
 Istio 1.1 satisfies all gathered requirements:

 1. Support for [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) with
-   [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) or for [TLS origination](/docs/reference/glossary/#tls-origination) by Istio.
+   [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) or for [TLS origination](/zh/docs/reference/glossary/#tls-origination) by Istio.

 1. **Monitor** SNI and the source workload of every egress access.
@@ -170,9 +170,9 @@ all of these requirements, in particular it is transparent, DNS-aware, and Kuber
 ## Summary

 I hope that you are convinced that controlling egress traffic is important for the security of your cluster. In [the
-part 2 of this series](/blog/2019/egress-traffic-control-in-istio-part-2/) I describe the Istio way to perform secure
+part 2 of this series](/zh/blog/2019/egress-traffic-control-in-istio-part-2/) I describe the Istio way to perform secure
 control of egress traffic. In
 [the
-part 3 of this series](/blog/2019/egress-traffic-control-in-istio-part-3/) I compare it with alternative solutions such as
+part 3 of this series](/zh/blog/2019/egress-traffic-control-in-istio-part-3/) I compare it with alternative solutions such as
 [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and legacy
 egress proxies/firewalls.
@@ -9,7 +9,7 @@ target_release: 1.2
 ---

 Welcome to part 2 in our new series about secure control of egress traffic in Istio.
-In [the first part in the series](/blog/2019/egress-traffic-control-in-istio-part-1/), I presented the attacks involving
+In [the first part in the series](/zh/blog/2019/egress-traffic-control-in-istio-part-1/), I presented the attacks involving
 egress traffic and the requirements we collected for a secure control system for egress traffic.
 In this installment, I describe the Istio way to securely control the egress traffic, and show how Istio can help you
 prevent the attacks.
@@ -17,10 +17,10 @@ prevent the attacks.
 ## Secure control of egress traffic in Istio

 To implement secure control of egress traffic in Istio, you must
-[direct TLS traffic to external services through an egress gateway](/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-https-traffic).
+[direct TLS traffic to external services through an egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-https-traffic).
 Alternatively, you
-can [direct HTTP traffic through an egress gateway](/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-http-traffic)
-and [let the egress gateway perform TLS origination](/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-tls-origination-with-an-egress-gateway).
+can [direct HTTP traffic through an egress gateway](/zh/docs/tasks/traffic-management/egress/egress-gateway/#egress-gateway-for-http-traffic)
+and [let the egress gateway perform TLS origination](/zh/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-TLS-origination-with-an-egress-gateway).

 Both alternatives have their pros and cons, you should choose between them according to your circumstances.
 The choice mainly depends on whether your application can send unencrypted HTTP requests and whether your
@@ -32,18 +32,18 @@ The same in the case your organization's security policies do not allow sending

 If the application sends HTTP requests and the egress gateway performs TLS origination, you can monitor HTTP
 information like HTTP methods, headers, and URL paths. You can also
-[define policies](/blog/2018/egress-monitoring-access-control) based on said HTTP information. If the application
+[define policies](/zh/blog/2018/egress-monitoring-access-control) based on said HTTP information. If the application
 performs TLS origination, you can
-[monitor SNI and the service account](/docs/tasks/traffic-management/egress/egress_sni_monitoring_and_policies/) of the
+[monitor SNI and the service account](/zh/docs/tasks/traffic-management/egress/egress_sni_monitoring_and_policies/) of the
 source pod's TLS traffic, and define policies based on SNI and service accounts.

 You must ensure that traffic from your cluster to the outside cannot bypass the egress gateway. Istio cannot enforce it
 for you, so you must apply some
-[additional security mechanisms](/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations),
+[additional security mechanisms](/zh/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations),
 for example,
 the [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) or an L3
 firewall. See an example of the
-[Kubernetes network policies configuration](/docs/tasks/traffic-management/egress/egress-gateway/#apply-kubernetes-network-policies).
+[Kubernetes network policies configuration](/zh/docs/tasks/traffic-management/egress/egress-gateway/#apply-Kubernetes-network-policies).
 According to the [Defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)) concept, the more
 security mechanisms you apply for the same goal, the better.
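As context for the bypass-prevention point in the hunk above, a minimal sketch of a Kubernetes `NetworkPolicy` that restricts application pods to in-cluster egress only, so external traffic has no path except through the egress gateway (the policy name and namespace are illustrative, not the ones from the linked task):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-cluster-only
  namespace: default
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}  # permit traffic to any in-cluster namespace;
                             # external IPs match no rule and are dropped
```

This is exactly the defense-in-depth layering the text describes: Istio controls what the gateway forwards, while the network policy ensures nothing skips the gateway.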
@@ -65,7 +65,7 @@ Once you direct egress traffic through an egress gateway and apply the additiona
 you can securely monitor and enforce security policies for the traffic.

 The following diagram shows Istio's security architecture, augmented with an L3 firewall which is part of the
-[additional security mechanisms](/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations)
+[additional security mechanisms](/zh/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations)
 that should be provided outside of Istio.

 {{< image width="80%" link="./SecurityArchitectureWithL3Firewalls.svg" caption="Istio Security Architecture with Egress Gateway and L3 Firewall" >}}
@@ -114,7 +114,7 @@ Having failed to achieve their goals in a straightforward way, the malicious act
 disable enforcement of the security policies. This attack is prevented by applying the special security measures to
 the egress gateway pods.
 - **Impersonate as application B** since application **B** is allowed to access `mongo1.composedb.com`. This attack,
-fortunately, is prevented by Istio's [strong identity support](/docs/concepts/security/#istio-identity).
+fortunately, is prevented by Istio's [strong identity support](/zh/docs/concepts/security/#istio-identity).

 As far as we can see, all the forbidden access is prevented, or at least is monitored and can be prevented later.
 If you see other attacks that involve egress traffic or security holes in the current design, we would be happy
@@ -123,7 +123,7 @@ If you see other attacks that involve egress traffic or security holes in the cu
 ## Summary

 Hopefully, I managed to convince you that Istio is an effective tool to prevent attacks involving egress
-traffic. In [the next part of this series](/blog/2019/egress-traffic-control-in-istio-part-3/), I compare secure control of egress traffic in Istio with alternative
+traffic. In [the next part of this series](/zh/blog/2019/egress-traffic-control-in-istio-part-3/), I compare secure control of egress traffic in Istio with alternative
 solutions such as
 [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and legacy
 egress proxies/firewalls.
@@ -9,9 +9,9 @@ target_release: 1.2
 ---

 Welcome to part 3 in our series about secure control of egress traffic in Istio.
-In [the first part in the series](/blog/2019/egress-traffic-control-in-istio-part-1/), I presented the attacks involving
+In [the first part in the series](/zh/blog/2019/egress-traffic-control-in-istio-part-1/), I presented the attacks involving
 egress traffic and the requirements we collected for a secure control system for egress traffic.
-In [the second part in the series](/blog/2019/egress-traffic-control-in-istio-part-2/), I presented the Istio way of
+In [the second part in the series](/zh/blog/2019/egress-traffic-control-in-istio-part-2/), I presented the Istio way of
 securing egress traffic and showed how you can prevent the attacks using Istio.

 In this installment, I compare secure control of egress traffic in Istio with alternative solutions such as using Kubernetes
@@ -20,10 +20,10 @@ secure control of egress traffic in Istio.

 ## Alternative solutions for egress traffic control

-First, let's remember the [requirements for egress traffic control](/blog/2019/egress-traffic-control-in-istio-part-1/#requirements-for-egress-traffic-control) we previously collected:
+First, let's remember the [requirements for egress traffic control](/zh/blog/2019/egress-traffic-control-in-istio-part-1/#requirements-for-egress-traffic-control) we previously collected:

 1. Support of [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) with
-   [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) or of [TLS origination](/docs/reference/glossary/#tls-origination).
+   [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) or of [TLS origination](/zh/docs/reference/glossary/#tls-origination).
 1. **Monitor** SNI and the source workload of every egress access.
 1. Define and enforce **policies per cluster**.
 1. Define and enforce **policies per source**, _Kubernetes-aware_.
@@ -69,7 +69,7 @@ are not transparent and not Kubernetes-aware.

 Istio egress traffic control is **secure**: it is based on the strong identity of Istio and, when you
 apply
-[additional security measures](/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations),
+[additional security measures](/zh/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations),
 Istio's traffic control is resilient to tampering.

 Additionally, Istio's egress traffic control provides the following advantages:
@@ -79,7 +79,7 @@ Additionally, Istio's egress traffic control provides the following advantages:
 - Out-of-the-Box integration of Istio's egress traffic control with Istio's policy and observability adapters.
 - Write the adapters to use external monitoring or access control systems with Istio only once and
   apply them for all types of traffic: ingress, egress, and in-cluster.
-- Use Istio's [traffic management features](/docs/concepts/traffic-management/) for egress traffic:
+- Use Istio's [traffic management features](/zh/docs/concepts/traffic-management/) for egress traffic:
   load balancing, passive and active health checking, circuit breaker, timeouts, retries, fault injection, and others.

 We refer to a system with the advantages above as **Istio-aware**.
@@ -102,14 +102,14 @@ Traffic passes through two proxies:
 - The application's sidecar proxy
 - The egress gateway's proxy

-If you use [TLS egress traffic to wildcard domains](/docs/tasks/traffic-management/egress/wildcard-egress-hosts/),
+If you use [TLS egress traffic to wildcard domains](/zh/docs/tasks/traffic-management/egress/wildcard-egress-hosts/),
 you must add
-[an additional proxy](/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#wildcard-configuration-for-arbitrary-domains)
+[an additional proxy](/zh/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#wildcard-configuration-for-arbitrary-domains)
 between the application and the external service. Since the traffic between the egress gateway's proxy and
 the proxy needed for the configuration of arbitrary domains using wildcards is on the pod's local
 network, that traffic shouldn't have a significant impact on latency.

-See a [performance evaluation](/blog/2019/egress-performance/) of different Istio configurations set to control egress
+See a [performance evaluation](/zh/blog/2019/egress-performance/) of different Istio configurations set to control egress
 traffic. I would encourage you to carefully measure different configurations with your own applications and your own
 external services, before you decide whether you can afford the performance overhead for your use cases. You should weigh the
 required level of security versus your performance requirements and compare the performance overhead of all
@@ -143,10 +143,10 @@ Istio is the only solution I'm aware of that lets you:

 In my opinion, secure control of egress traffic is a great choice if you are looking for your first Istio use case.
 In this case, Istio already provides you some benefits even before you start using all other Istio features:
-[traffic management](/docs/tasks/traffic-management/), [security](/docs/tasks/security/),
-[policies](/docs/tasks/policy-enforcement/) and [observability](/docs/tasks/observability/), applied to traffic between
+[traffic management](/zh/docs/tasks/traffic-management/), [security](/zh/docs/tasks/security/),
+[policies](/zh/docs/tasks/policy-enforcement/) and [observability](/zh/docs/tasks/observability/), applied to traffic between
 microservices inside the cluster.

-So, if you haven't had the chance to work with Istio yet, [install Istio](/docs/setup/install/) on your cluster
-and check our [egress traffic control tasks](/docs/tasks/traffic-management/egress/) and the tasks for the other
-[Istio features](/docs/tasks/). We also want to hear from you, please join us at [discuss.istio.io](https://discuss.istio.io).
+So, if you haven't had the chance to work with Istio yet, [install Istio](/zh/docs/setup/install/) on your cluster
+and check our [egress traffic control tasks](/zh/docs/tasks/traffic-management/egress/) and the tasks for the other
+[Istio features](/zh/docs/tasks/). We also want to hear from you, please join us at [discuss.istio.io](https://discuss.istio.io).
@@ -7,7 +7,7 @@ keywords: [apis,composability,evolution]
 target_release: 1.2
 ---

-One of Istio’s main goals has always been, and continues to be, enabling teams to develop abstractions that work best for their specific organization and workloads. Istio provides robust and powerful building blocks for service-to-service networking. Since [Istio 0.1](/news/releases/0.x/announcing-0.1), the Istio team has been learning from production users about how they map their own architectures, workloads, and constraints to Istio’s capabilities, and we’ve been evolving Istio’s APIs to make them work better for you.
+One of Istio’s main goals has always been, and continues to be, enabling teams to develop abstractions that work best for their specific organization and workloads. Istio provides robust and powerful building blocks for service-to-service networking. Since [Istio 0.1](/zh/news/releases/0.x/announcing-0.1), the Istio team has been learning from production users about how they map their own architectures, workloads, and constraints to Istio’s capabilities, and we’ve been evolving Istio’s APIs to make them work better for you.

 ## Evolving Istio’s APIs

@@ -40,7 +40,7 @@ higher-level APIs. [Knative Serving](https://knative.dev/docs/serving/), a compo
 serving serverless applications and functions, provides an opinionated workflow for application developers to manage routes and revisions of their services.
 Thanks to that opinionated approach, Knative Serving exposes a subset of Istio’s networking APIs that are most relevant to application developers via a simplified
 [Routes](https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#route) object that supports revisions and traffic routing,
-abstracting Istio’s [`VirtualService`](/docs/reference/config/networking/virtual-service/) and [`DestinationRule`](/docs/reference/config/networking/destination-rule/)
+abstracting Istio’s [`VirtualService`](/zh/docs/reference/config/networking/virtual-service/) and [`DestinationRule`](/zh/docs/reference/config/networking/destination-rule/)
 resources.

 As Istio has matured, we’ve also seen production users develop workload- and organization-specific abstractions on top of Istio’s infrastructure APIs.
@@ -56,8 +56,8 @@ Some areas of improvement that we’re working on for upcoming releases include:
 - Support for routing all traffic by default to constrain routing incrementally
 - Add a single global flag to enable mutual TLS and encrypt all inter-pod traffic

-Oh, and if for some reason you judge a toolbox by the list of CRDs it installs, in Istio 1.2 we cut the number from 54 down to 23. Why? It turns out that if you have a bunch of features, you need to have a way to configure them all. With the improvements we’ve made to our installer, you can now install Istio using a [configuration](/docs/setup/additional-setup/config-profiles/) that works with your adapters.
+Oh, and if for some reason you judge a toolbox by the list of CRDs it installs, in Istio 1.2 we cut the number from 54 down to 23. Why? It turns out that if you have a bunch of features, you need to have a way to configure them all. With the improvements we’ve made to our installer, you can now install Istio using a [configuration](/zh/docs/setup/additional-setup/config-profiles/) that works with your adapters.

 All service meshes and, by extension, Istio seeks to automate complex infrastructure operations, like networking and security. That means there will always be complexity in its APIs, but Istio will always aim to solve the needs of operators, while continuing to evolve the API to provide robust building blocks and prioritize flexibility through role-centric abstractions.

-We can't wait for you to join our [community](/about/community/join/) to see what you build with Istio next!
+We can't wait for you to join our [community](/zh/about/community/join/) to see what you build with Istio next!
@@ -8,7 +8,7 @@ keywords: [mixer,adapter,knative,scale-from-zero]
 target_release: 1.3
 ---

-This post demonstrates how you can use [Mixer](/faq/mixer/) to push application logic
+This post demonstrates how you can use [Mixer](/zh/faq/mixer/) to push application logic
 into Istio. It describes a Mixer adapter which implements the [Knative](https://knative.dev/) scale-from-zero logic
 with simple code and similar performance to the original implementation.

@@ -34,7 +34,7 @@ Once the application is up and running again, Knative restores the routing from

 ## Mixer adapter

-[Mixer](/faq/mixer/) provides a rich intermediation layer between the Istio components and infrastructure backends.
+[Mixer](/zh/faq/mixer/) provides a rich intermediation layer between the Istio components and infrastructure backends.
 It is designed as a stand-alone component, separate from [Envoy](https://www.envoyproxy.io/), and has a simple extensibility model
 to enable Istio to interoperate with a wide breadth of backends. Mixer is inherently easier to extend
 than Envoy is.
@@ -22,7 +22,7 @@ requested feature by production users of Istio and we are excited that the
 support for this was added in release 1.3.

 To implement this, the Istio [default
-metrics](/docs/reference/config/policy-and-telemetry/metrics) are augmented with
+metrics](/zh/docs/reference/config/policy-and-telemetry/metrics) are augmented with
 explicit labels to capture blocked and passthrough external service traffic.
 This blog will cover how you can use these augmented metrics to monitor all
 external service traffic.
@@ -48,7 +48,7 @@ options, first to block all external service access (enabled by setting
 second to allow all access to external service (enabled by setting
 `global.outboundTrafficPolicy.mode` to `ALLOW_ANY`). The default option for this
 setting (as of Istio 1.3) is to allow all external service access. This
-option can be configured via [mesh configuration](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode).
+option can be configured via [mesh configuration](/zh/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-OutboundTrafficPolicy-Mode).

 This is where the BlackHole and Passthrough clusters are used.

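As a sketch of the setting being discussed, the relevant excerpt of the mesh configuration (the field documented in the `MeshConfig` reference linked above) looks roughly like this:

```yaml
# MeshConfig excerpt: REGISTRY_ONLY blocks any external host that has
# no matching ServiceEntry; ALLOW_ANY passes all external traffic through
outboundTrafficPolicy:
  mode: REGISTRY_ONLY
```

In Helm-based installs of this era the same knob is exposed as the `global.outboundTrafficPolicy.mode` value mentioned in the text.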
@@ -57,7 +57,7 @@ This is where the BlackHole and Passthrough clusters are used.
 * **BlackHoleCluster** - The BlackHoleCluster is a virtual cluster created
 in the Envoy configuration when `global.outboundTrafficPolicy.mode` is set to
 `REGISTRY_ONLY`. In this mode, all traffic to external service is blocked unless
-[service entries](/docs/reference/config/networking/service-entry)
+[service entries](/zh/docs/reference/config/networking/service-entry)
 are explicitly added for each service. To implement this, the default virtual
 outbound listener at `0.0.0.0:15001` which uses
 [original destination](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#original-destination)
@@ -9,7 +9,7 @@ target_release: 1.0
 ---

 If you've spent any time looking at Istio, you've probably noticed that it includes a lot of features that
-can be demonstrated with simple [tasks](/docs/tasks/) and [examples](/docs/examples/)
+can be demonstrated with simple [tasks](/zh/docs/tasks/) and [examples](/zh/docs/examples/)
 running on a single Kubernetes cluster.
 Because most, if not all, real-world cloud and microservices-based applications are not that simple
 and will need to have the services distributed and running in more than one location, you may be
@@ -18,18 +18,18 @@ wondering if all these things will be just as simple in your real production env
 Fortunately, Istio provides several ways to configure a service mesh so that applications
 can, more-or-less transparently, be part of a mesh where the services are running
 in more than one cluster, i.e., in a
-[multicluster deployment](/docs/ops/prep/deployment-models/#multiple-clusters).
+[multicluster deployment](/zh/docs/ops/prep/deployment-models/#multiple-clusters).
 The simplest way to set up a multicluster mesh, because it has no special networking requirements,
 is using a replicated
-[control plane model](/docs/ops/prep/deployment-models/#control-plane-models).
+[control plane model](/zh/docs/ops/prep/deployment-models/#control-plane-models).
 In this configuration, each Kubernetes cluster contributing to the mesh has its own control plane,
 but each control plane is synchronized and running under a single administrative control.

 In this article we'll look at how one of the features of Istio,
-[traffic management](/docs/concepts/traffic-management/), works in a multicluster mesh with
+[traffic management](/zh/docs/concepts/traffic-management/), works in a multicluster mesh with
 a dedicated control plane topology.
 We'll show how to configure Istio route rules to call remote services in a multicluster service mesh
-by deploying the [Bookinfo sample]({{<github_tree>}}/samples/bookinfo) with version `v1` of the `reviews` service
+by deploying the [Bookinfo sample]({{< github_tree >}}/samples/bookinfo) with version `v1` of the `reviews` service
 running in one cluster, versions `v2` and `v3` running in a second cluster.

 ## Set up clusters
@@ -37,7 +37,7 @@ running in one cluster, versions `v2` and `v3` running in a second cluster.
 To start, you'll need two Kubernetes clusters, both running a slightly customized configuration of Istio.

 * Set up a multicluster environment with two Istio clusters by following the
-  [replicated control planes](/docs/setup/install/multicluster/gateways/) instructions.
+  [replicated control planes](/zh/docs/setup/install/multicluster/gateways/) instructions.

 * The `kubectl` command is used to access both clusters with the `--context` flag.
   Use the following command to list your contexts:
@@ -263,7 +263,7 @@ Just like any application, we'll use an Istio gateway to access the `bookinfo` a
 $ kubectl apply --context=$CTX_CLUSTER1 -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
 {{< /text >}}

-* Follow the [Bookinfo sample instructions](/docs/examples/bookinfo/#determine-the-ingress-ip-and-port)
+* Follow the [Bookinfo sample instructions](/zh/docs/examples/bookinfo/#determine-the-ingress-IP-and-port)
   to determine the ingress IP and port and then point your browser to `http://$GATEWAY_URL/productpage`.

 You should see the `productpage` with reviews, but without ratings, because only `v1` of the `reviews` service
@@ -271,7 +271,7 @@ is running on `cluster1` and we have not yet configured access to `cluster2`.

 ## Create a service entry and destination rule on `cluster1` for the remote reviews service

-As described in the [setup instructions](/docs/setup/install/multicluster/gateways/#setup-dns),
+As described in the [setup instructions](/zh/docs/setup/install/multicluster/gateways/#setup-DNS),
 remote services are accessed with a `.global` DNS name. In our case, it's `reviews.default.global`,
 so we need to create a service entry and destination rule for that host.
 The service entry will use the `cluster2` gateway as the endpoint address to access the service.
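The shape of such a service entry, sketched under the assumption that `CLUSTER2_GW_ADDR` holds the address of the `cluster2` ingress gateway, is roughly:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: reviews-default
spec:
  hosts:
  - reviews.default.global   # the .global name of the remote service
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 9080
    protocol: http
  resolution: DNS
  addresses:
  - 240.0.0.3                # arbitrary unallocated (class E) IP for the host
  endpoints:
  - address: ${CLUSTER2_GW_ADDR}
    ports:
      http1: 15443           # the cluster2 gateway port for cross-cluster traffic
```

Requests to `reviews.default.global` on `cluster1` then resolve to the placeholder address and are routed by the sidecar to the `cluster2` gateway endpoint.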
@@ -330,7 +330,7 @@ EOF
 The address `240.0.0.3` of the service entry can be any arbitrary unallocated IP.
 Using an IP from the class E addresses range 240.0.0.0/4 is a good choice.
 Check out the
-[gateway-connected multicluster example](/docs/setup/install/multicluster/gateways/#configure-the-example-services)
+[gateway-connected multicluster example](/zh/docs/setup/install/multicluster/gateways/#configure-the-example-services)
 for more details.

 Note that the labels of the subsets in the destination rule map to the service entry
@@ -8,11 +8,11 @@ attribution: Megan O'Keefe (Google), John Howard (Google), Mandar Jog (Google)
 keywords: [performance,scalability,scale,benchmarks]
 ---

-Service meshes add a lot of functionality to application deployments, including [traffic policies](/docs/concepts/what-is-istio/#traffic-management), [observability](/docs/concepts/what-is-istio/#observability), and [secure communication](/docs/concepts/what-is-istio/#security). But adding a service mesh to your environment comes at a cost, whether that's time (added latency) or resources (CPU cycles). To make an informed decision on whether a service mesh is right for your use case, it's important to evaluate how your application performs when deployed with a service mesh.
+Service meshes add a lot of functionality to application deployments, including [traffic policies](/zh/docs/concepts/what-is-istio/#traffic-management), [observability](/zh/docs/concepts/what-is-istio/#observability), and [secure communication](/zh/docs/concepts/what-is-istio/#security). But adding a service mesh to your environment comes at a cost, whether that's time (added latency) or resources (CPU cycles). To make an informed decision on whether a service mesh is right for your use case, it's important to evaluate how your application performs when deployed with a service mesh.

-Earlier this year, we published a [blog post](/blog/2019/istio1.1_perf/) on Istio's performance improvements in version 1.1. Following the release of [Istio 1.2](/news/releases/1.2.x/announcing-1.2), we want to provide guidance and tools to help you benchmark Istio's data plane performance in a production-ready Kubernetes environment.
+Earlier this year, we published a [blog post](/zh/blog/2019/istio1.1_perf/) on Istio's performance improvements in version 1.1. Following the release of [Istio 1.2](/zh/news/releases/1.2.x/announcing-1.2), we want to provide guidance and tools to help you benchmark Istio's data plane performance in a production-ready Kubernetes environment.

-Overall, we found that Istio's [sidecar proxy](/docs/ops/architecture/#envoy) latency scales with the number of concurrent connections. At 1000 requests per second (RPS), across 16 connections, Istio adds **3 milliseconds** per request in the 50th percentile, and **10 milliseconds** in the 99th percentile.
+Overall, we found that Istio's [sidecar proxy](/zh/docs/ops/architecture/#envoy) latency scales with the number of concurrent connections. At 1000 requests per second (RPS), across 16 connections, Istio adds **3 milliseconds** per request in the 50th percentile, and **10 milliseconds** in the 99th percentile.

 In the [Istio Tools repository](https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark), you’ll find scripts and instructions for measuring Istio's data plane performance, with additional instructions on how to run the scripts with [Linkerd](https://linkerd.io), another service mesh implementation. [Follow along](https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark#setup) as we detail some best practices for each step of the performance test framework.

@@ -20,9 +20,9 @@ In the [Istio Tools repository](https://github.com/istio/tools/tree/3ac7ab40db8a

 To accurately measure the performance of a service mesh at scale, it's important to use an [adequately-sized](https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/istio-install#istio-setup) Kubernetes cluster. We test using three worker nodes, each with at least 4 vCPUs and 15 GB of memory.

-Then, it's important to use a production-ready Istio **installation profile** on that cluster. This lets us achieve performance-oriented settings such as control plane pod autoscaling, and ensures that resource limits are appropriate for heavy traffic load. The [default](/docs/setup/install/helm/#option-1-install-with-helm-via-helm-template) Istio installation is suitable for most benchmarking use cases. For extensive performance benchmarking, with thousands of proxy-injected services, we also provide [a tuned Istio install](https://github.com/istio/tools/blob/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/istio-install/values.yaml) that allocates extra memory and CPU to the Istio control plane.
+Then, it's important to use a production-ready Istio **installation profile** on that cluster. This lets us achieve performance-oriented settings such as control plane pod autoscaling, and ensures that resource limits are appropriate for heavy traffic load. The [default](/zh/docs/setup/install/helm/#option-1-install-with-helm-via-helm-template) Istio installation is suitable for most benchmarking use cases. For extensive performance benchmarking, with thousands of proxy-injected services, we also provide [a tuned Istio install](https://github.com/istio/tools/blob/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/istio-install/values.yaml) that allocates extra memory and CPU to the Istio control plane.

-{{< warning_icon >}} Istio's [demo installation](/docs/setup/getting-started/) is not suitable for performance testing, because it is designed to be deployed on a small trial cluster, and has full tracing and access logs enabled to showcase Istio's features.
+{{< warning_icon >}} Istio's [demo installation](/zh/docs/setup/getting-started/) is not suitable for performance testing, because it is designed to be deployed on a small trial cluster, and has full tracing and access logs enabled to showcase Istio's features.

 ## 2. Focus on the data plane

@@ -33,7 +33,7 @@ Say you run 2,000 Envoy-injected pods, each handling 1,000 requests per second.
 It is also important to focus on data plane performance for **latency** reasons. This is because most application requests move through the Istio data plane, not the control plane. There are two exceptions:

 1. **Telemetry reporting:** Each proxy sends raw telemetry data to {{<gloss>}}Mixer{{</gloss>}}, which Mixer processes into metrics, traces, and other telemetry. The raw telemetry data is similar to access logs, and therefore comes at a cost. Access log processing consumes CPU and keeps a worker thread from picking up the next unit of work. At higher throughput, it is more likely that the next unit of work is waiting in the queue to be picked up by the worker. This can lead to long-tail (99th percentile) latency for Envoy.
-1. **Custom policy checks:** When using [custom Istio policy adapters](/docs/concepts/observability/), policy checks are on the request path. This means that request headers and metadata on the data path will be sent to the control plane (Mixer), resulting in higher request latency. **Note:** These policy checks are [disabled by default](/docs/reference/config/installation-options/#global-options), as the most common policy use case ([RBAC](/docs/reference/config/security/istio.rbac.v1alpha1)) is performed entirely by the Envoy proxies.
+1. **Custom policy checks:** When using [custom Istio policy adapters](/zh/docs/concepts/observability/), policy checks are on the request path. This means that request headers and metadata on the data path will be sent to the control plane (Mixer), resulting in higher request latency. **Note:** These policy checks are [disabled by default](/zh/docs/reference/config/installation-options/#global-options), as the most common policy use case ([RBAC](/zh/docs/reference/config/security/istio.rbac.v1alpha1)) is performed entirely by the Envoy proxies.

 Both of these exceptions will go away in a future Istio release, when [Mixer V2](https://docs.google.com/document/d/1QKmtem5jU_2F3Lh5SqLp0IuPb80_70J7aJEYu4_gS-s) moves all policy and telemetry features directly into the proxies.

@@ -41,15 +41,15 @@ Next, when testing Istio's data plane performance at scale, it's important to te

 Lastly, our test environment measures requests between two pods, not many. The client pod is [Fortio](http://fortio.org/), which sends traffic to the server pod.

-Why test with only two pods? Because scaling up throughput (RPS) and connections (threads) has a greater effect on Envoy's performance than increasing the total size of the service registry — or, the total number of pods and services in the Kubernetes cluster. When the size of the service registry grows, Envoy does have to keep track of more endpoints, and lookup time per request does increase, but by a tiny constant. If you have many services, and this constant becomes a latency concern, Istio provides a [Sidecar resource](/docs/reference/config/networking/sidecar/), which allows you to limit which services each Envoy knows about.
+Why test with only two pods? Because scaling up throughput (RPS) and connections (threads) has a greater effect on Envoy's performance than increasing the total size of the service registry — or, the total number of pods and services in the Kubernetes cluster. When the size of the service registry grows, Envoy does have to keep track of more endpoints, and lookup time per request does increase, but by a tiny constant. If you have many services, and this constant becomes a latency concern, Istio provides a [Sidecar resource](/zh/docs/reference/config/networking/sidecar/), which allows you to limit which services each Envoy knows about.

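The `Sidecar` resource mentioned above can be sketched as follows; the namespace name is illustrative, and this example limits each Envoy in that namespace to configuration for its own namespace plus the Istio control plane:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: myapp
spec:
  egress:
  - hosts:
    - "./*"              # services in the same namespace
    - "istio-system/*"   # plus the Istio control plane namespace
```

Scoping the proxy's view this way keeps per-request lookup work, and Envoy memory, roughly constant as the rest of the registry grows.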
 ## 3. Measure with and without proxies

-While many Istio features, such as [mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication), rely on an Envoy proxy next to an application pod, you can [selectively disable](/docs/setup/additional-setup/sidecar-injection/#disabling-or-updating-the-webhook) sidecar proxy injection for some of your mesh services. As you scale up Istio for production, you may want to incrementally add the sidecar proxy to your workloads.
+While many Istio features, such as [mutual TLS authentication](/zh/docs/concepts/security/#mutual-TLS-authentication), rely on an Envoy proxy next to an application pod, you can [selectively disable](/zh/docs/setup/additional-setup/sidecar-injection/#disabling-or-updating-the-webhook) sidecar proxy injection for some of your mesh services. As you scale up Istio for production, you may want to incrementally add the sidecar proxy to your workloads.

 To that end, the test scripts provide [three different modes](https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark#run-performance-tests). These modes analyze Istio's performance when a request goes through both the client and server proxies (`both`), just the server proxy (`serveronly`), and neither proxy (`baseline`).

-You can also disable [Mixer](/docs/concepts/observability/) to stop Istio's telemetry during the performance tests, which provides results in line with the performance we expect when the Mixer V2 work is completed. Istio also supports [Envoy native telemetry](https://github.com/istio/istio/wiki/Envoy-native-telemetry), which performs similarly to having Istio's telemetry disabled.
+You can also disable [Mixer](/zh/docs/concepts/observability/) to stop Istio's telemetry during the performance tests, which provides results in line with the performance we expect when the Mixer V2 work is completed. Istio also supports [Envoy native telemetry](https://github.com/istio/istio/wiki/Envoy-native-telemetry), which performs similarly to having Istio's telemetry disabled.

 ## Istio 1.2 Performance

@@ -108,6 +108,6 @@ For a mesh with 1000 RPS across 16 connections, Istio 1.2 adds just **3 millisec
 Istio's performance depends on your specific setup and traffic load. Because of this variance, make sure your test setup accurately reflects your production workloads. To try out the benchmarking scripts, head over [to the Istio Tools repository](https://github.com/istio/tools/tree/3ac7ab40db8a0d595b71f47b8ba246763ecd6213/perf/benchmark).
 {{< /tip >}}

-Also check out the [Istio Performance and Scalability guide](/docs/ops/performance-and-scalability) for the most up-to-date performance data.
+Also check out the [Istio Performance and Scalability guide](/zh/docs/ops/performance-and-scalability) for the most up-to-date performance data.

-Thank you for reading, and happy benchmarking!
+Thank you for reading, and happy benchmarking!
@@ -7,8 +7,8 @@ attribution: Vadim Eisenberg (IBM)
 keywords: [traffic-management,ingress,https,http]
 ---

-The [Control Ingress Traffic](/docs/tasks/traffic-management/ingress/ingress-control/) and the
-[Ingress Gateway without TLS Termination](/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/) tasks describe
+The [Control Ingress Traffic](/zh/docs/tasks/traffic-management/ingress/ingress-control/) and the
+[Ingress Gateway without TLS Termination](/zh/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/) tasks describe
 how to configure an ingress gateway to expose services inside the mesh to external traffic. The services can be HTTP or
 HTTPS. In the case of HTTPS, the gateway passes the traffic through, without terminating TLS.

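As a reminder of what the pass-through case looks like, an ingress `Gateway` that forwards TLS without terminating it uses `mode: PASSTHROUGH`. A minimal sketch, with an illustrative host name:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH    # forward TLS as-is; no termination at the gateway
    hosts:
    - nginx.example.com
```

The gateway then routes on the SNI value of the incoming connection, and the backing service keeps its own certificates.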
@@ -203,10 +203,10 @@ The blog post shows configuring access to an HTTP and an HTTPS external service,
 EOF
 {{< /text >}}

-1. [Enable Envoy's access logging](/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging).
+1. [Enable Envoy's access logging](/zh/docs/tasks/observability/logs/access-log/#enable-envoy-s-access-logging).

 1. Follow the instructions in
-   [Determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
+   [Determining the ingress IP and ports](/zh/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-i-p-and-ports)
    to define the `SECURE_INGRESS_PORT` and `INGRESS_HOST` environment variables.

 1. Access the `httbin.org` service through your ingress IP and port which you stored in the
@@ -13,7 +13,7 @@ you need to schedule regular root transitions before they expire.
 An expiration of a root certificate may lead to an unexpected cluster-wide outage.
 The issue affects new clusters created with versions up to 1.0.7 and 1.1.7.

-See [Extending Self-Signed Certificate Lifetime](/docs/ops/security/root-transition/) for
+See [Extending Self-Signed Certificate Lifetime](/zh/docs/ops/security/root-transition/) for
 information on how to gauge the age of your certificates and how to perform rotation.

 {{< tip >}}
@@ -22,6 +22,6 @@ target_release: 1.3
|
|||
- 文章中着重介绍了 Istio 的功能。
|
||||
- 文章中详细讲解了使用 Istio 完成一个任务或者满足特定用例需求的相关内容。
|
||||
|
||||
仅需[提交一个 PR](/zh/about/contribute/github/#how-to-contribute) 就可以发表您的博客,如果需要,可以[申请评审](/zh/about/contribute/github/#review).
|
||||
仅需[提交一个 PR](/zh/about/contribute/github/#how-to-contribute) 就可以发表您的博客,如果需要,可以[申请评审](/zh/about/contribute/github/#review)。
|
||||
|
||||
我们期待能够很快就能在博客上看到你的 Istio 体验!
|
||||
|
|
|
|||
|
|
@@ -9,7 +9,7 @@ target_release: 1.4
|
|||
---
|
||||
|
||||
Istio 1.4 introduces the
|
||||
[`v1beta1` authorization policy](/docs/reference/config/security/authorization-policy/),
|
||||
[`v1beta1` authorization policy](/zh/docs/reference/config/security/authorization-policy/),
|
||||
which is a major update to the previous `v1alpha1` role-based access control
|
||||
(RBAC) policy. The new policy provides these improvements:
|
||||
|
||||
|
|
@@ -25,7 +25,7 @@ configuration resources `ClusterRbacConfig`, `ServiceRole`, and
|
|||
|
||||
This post describes the new `v1beta1` authorization policy model, its
|
||||
design goals and the migration from `v1alpha1` RBAC policies. See the
|
||||
[authorization concept page](/docs/concepts/security/#authorization)
|
||||
[authorization concept page](/zh/docs/concepts/security/#authorization)
|
||||
for a detailed in-depth explanation of the `v1beta1` authorization policy.
|
||||
|
||||
We welcome your feedback about the `v1beta1` authorization policy at
|
||||
|
|
@@ -74,7 +74,7 @@ The new `v1beta1` authorization policy had several design goals:
|
|||
|
||||
## `AuthorizationPolicy`
|
||||
|
||||
An [`AuthorizationPolicy` custom resource](/docs/reference/config/security/authorization-policy/)
|
||||
An [`AuthorizationPolicy` custom resource](/zh/docs/reference/config/security/authorization-policy/)
|
||||
enables access control on workloads. This section gives an overview of the
|
||||
changes in the `v1beta1` authorization policy.
|
||||
|
||||
|
|
@@ -210,7 +210,7 @@ spec:
|
|||
|
||||
A policy in the root namespace applies to all workloads in the mesh in every
|
||||
namespace. The root namespace is configurable in the
|
||||
[`MeshConfig`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig)
|
||||
[`MeshConfig`](/zh/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig)
|
||||
and has the default value of `istio-system`.
|
||||
|
||||
For example, you installed Istio in the `istio-system` namespace and deployed
|
||||
|
|
|
|||
|
|
@@ -37,4 +37,4 @@ will not be able to alter the webhook configurations.
|
|||
and that the certificate chain used by the webhook server is valid. This reduces the errors
|
||||
that can occur before a server is ready or if a server has invalid certificates.
|
||||
|
||||
To try this new feature, refer to the [Istio webhook management task](/docs/tasks/security/webhook).
|
||||
To try this new feature, refer to the [Istio webhook management task](/zh/docs/tasks/security/webhook).
|
||||
|
|
|
|||
|
|
@@ -0,0 +1,7 @@
|
|||
---
|
||||
---
|
||||
|
||||
## Reporting vulnerabilities
|
||||
|
||||
We’d like to remind our community to follow the [vulnerability reporting process](/zh/about/security-vulnerabilities/) to report any bug that can result in a
|
||||
security vulnerability.
|
||||
|
|
@@ -113,7 +113,7 @@ Istio 支持很多追踪系统,包括 [Zipkin](/zh/docs/tasks/observability/di
|
|||
|
||||
Istio 为一个请求生成的分布式追踪数据:
|
||||
|
||||
{{< image link="/docs/tasks/observability/distributed-tracing/zipkin/istio-tracing-details-zipkin.png" caption="Distributed Trace for a single request" >}}
|
||||
{{< image link="/zh/docs/tasks/observability/distributed-tracing/zipkin/istio-tracing-details-zipkin.png" caption="Distributed Trace for a single request" >}}
|
||||
|
||||
## 访问日志 {#access-logs}
|
||||
|
||||
|
|
|
|||
|
|
@@ -21,9 +21,7 @@ Istio Security 尝试提供全面的安全解决方案来解决所有这些问
|
|||
|
||||
本页概述了如何使用 Istio 的安全功能来保护您的服务,无论您在何处运行它们。特别是 Istio 安全性可以缓解针对您的数据、端点、通信和平台的内部和外部威胁。
|
||||
|
||||
{{< image width="80%" link="overview.svg"
|
||||
caption="Istio 安全概述"
|
||||
>}}
|
||||
{{< image width="80%" link="overview.svg" caption="Istio 安全概述" >}}
|
||||
|
||||
Istio 安全功能提供强大的身份,强大的策略,透明的 TLS 加密以及用于保护您的服务和数据的身份验证,授权和审计(AAA)工具。 Istio 安全的目标是:
|
||||
|
||||
|
|
@@ -33,7 +31,7 @@ Istio 安全功能提供强大的身份,强大的策略,透明的 TLS 加密
|
|||
|
||||
- **零信任网络**: 在不受信任的网络上构建安全解决方案
|
||||
|
||||
请访问我们的[双向 TLS 迁移](/zh/docs/tasks/security/mtls-migration/)相关文章,开始在部署的服务中使用 Istio 安全功能。
|
||||
请访问我们的[双向 TLS 迁移](/zh/docs/tasks/security/authentication/mtls-migration/)相关文章,开始在部署的服务中使用 Istio 安全功能。
|
||||
请访问我们的[安全任务](/zh/docs/tasks/security/),以获取有关使用安全功能的详细说明。
|
||||
|
||||
## 高级架构{#high-level-architecture}
|
||||
|
|
@@ -230,7 +228,7 @@ Istio 隧道通过客户端和服务器端进行服务到服务通信 [Envoy 代
|
|||
|
||||
1. Istio 将出站流量从客户端重新路由到客户端的本地 sidecar Envoy。
|
||||
|
||||
1. 客户端 Envoy 与服务器端 Envoy 开始双向 TLS 握手。在握手期间,客户端 Envoy 还做了[安全命名](/docs/concepts/security/#secure-naming)检查,以验证服务器证书中显示的服务帐户是否被授权运行到目标服务。
|
||||
1. 客户端 Envoy 与服务器端 Envoy 开始双向 TLS 握手。在握手期间,客户端 Envoy 还做了[安全命名](/zh/docs/concepts/security/#secure-naming)检查,以验证服务器证书中显示的服务帐户是否被授权运行目标服务。
|
||||
|
||||
1. 客户端 Envoy 和服务器端 Envoy 建立了一个双向的 TLS 连接,Istio 将流量从客户端 Envoy 转发到服务器端 Envoy。
|
||||
|
||||
|
|
@@ -242,7 +240,7 @@ Istio 双向 TLS 具有一个宽容模式(permissive mode),允许 service
|
|||
|
||||
在运维人员希望将服务移植到启用了双向 TLS 的 Istio 上时,许多非 Istio 客户端和非 Istio 服务端通信时会产生问题。通常情况下,运维人员无法同时为所有客户端安装 Istio sidecar,甚至没有这样做的权限。即使在服务端上安装了 Istio sidecar,运维人员也无法在不中断现有连接的情况下启用双向 TLS。
|
||||
|
||||
启用宽容模式后,服务同时接受纯文本和双向 TLS 流量。这个模式为入门提供了极大的灵活性。服务中安装的 Istio sidecar 立即接受双向 TLS 流量而不会打断现有的纯文本流量。因此,运维人员可以逐步安装和配置客户端 Istio sidecars 发送双向 TLS 流量。一旦客户端配置完成,运维人员便可以将服务端配置为仅 TLS 模式。更多信息请访问[双向 TLS 迁移向导](/zh/docs/tasks/security/mtls-migration)。
|
||||
启用宽容模式后,服务同时接受纯文本和双向 TLS 流量。这个模式为入门提供了极大的灵活性。服务中安装的 Istio sidecar 立即接受双向 TLS 流量而不会打断现有的纯文本流量。因此,运维人员可以逐步安装和配置客户端 Istio sidecars 发送双向 TLS 流量。一旦客户端配置完成,运维人员便可以将服务端配置为仅 TLS 模式。更多信息请访问[双向 TLS 迁移向导](/zh/docs/tasks/security/authentication/mtls-migration)。
|
||||
|
||||
#### 安全命名{#secure-naming}
|
||||
|
||||
|
|
@@ -390,7 +388,7 @@ principalBinding: USE_ORIGIN
|
|||
|
||||
您可以随时更改身份认证策略,Istio 几乎实时地将更改推送到端点。但是,Istio 无法保证所有端点同时收到新策略。以下是在更新身份认证策略时避免中断的建议:
|
||||
|
||||
- 启用或禁用双向 TLS:使用带有 `mode:` 键和 `PERMISSIVE` 值的临时策略。这会将接收服务配置为接受两种类型的流量:纯文本和 TLS。因此,不会丢弃任何请求。一旦所有客户端切换到预期协议,无论是否有双向 TLS,您都可以将 `PERMISSIVE` 策略替换为最终策略。有关更多信息,请访问[双向 TLS 的迁移](/zh/docs/tasks/security/mtls-migration)。
|
||||
- 启用或禁用双向 TLS:使用带有 `mode:` 键和 `PERMISSIVE` 值的临时策略。这会将接收服务配置为接受两种类型的流量:纯文本和 TLS。因此,不会丢弃任何请求。一旦所有客户端切换到预期协议,无论是否有双向 TLS,您都可以将 `PERMISSIVE` 策略替换为最终策略。有关更多信息,请访问[双向 TLS 的迁移](/zh/docs/tasks/security/authentication/mtls-migration)。
|
||||
|
||||
{{< text yaml >}}
|
||||
peers:
|
||||
|
|
@@ -420,264 +418,276 @@ Pilot 监督 Istio 授权策略的变更。如果发现任何更改,它将获
|
|||
|
||||
每个 Envoy 代理都运行一个授权引擎,该引擎在运行时授权请求。当请求到达代理时,授权引擎根据当前授权策略评估请求上下文,并返回授权结果 `ALLOW` 或 `DENY`。
|
||||
|
||||
### 启用授权{#enabling-authorization}
|
||||
### Implicit enablement{#implicit-enablement}
|
||||
|
||||
您可以使用 `RbacConfig` 对象启用 Istio Authorization。`RbacConfig` 对象是一个网格范围的单例,其固定名称值为 `default`。您只能在网格中使用一个 `RbacConfig` 实例。与其他 Istio 配置对象一样,`RbacConfig` 被定义为 Kubernetes `CustomResourceDefinition` [(CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 对象。
|
||||
There is no need to explicitly enable Istio's authorization feature; you just apply
|
||||
the `AuthorizationPolicy` on **workloads** to enforce access control.
|
||||
|
||||
在 `RbacConfig` 对象中,运算符可以指定 `mode` 值,它可以是:
|
||||
If no `AuthorizationPolicy` applies to a workload, no access control will be enforced.
|
||||
In other words, all requests will be allowed.
|
||||
|
||||
- **`OFF`**:禁用 Istio 授权。
|
||||
- **`ON`**:为网格中的所有服务启用了 Istio 授权。
|
||||
- **`ON_WITH_INCLUSION`**:仅对`包含`字段中指定的服务和命名空间启用 Istio 授权。
|
||||
- **`ON_WITH_EXCLUSION`**:除了`排除`字段中指定的服务和命名空间外,网格中的所有服务都启用了 Istio 授权。
|
||||
If any `AuthorizationPolicy` applies to a workload, access to that workload is
|
||||
denied by default, unless explicitly allowed by a rule declared in the policy.
|
||||
|
||||
在以下示例中,为 `default` 命名空间启用了 Istio 授权。
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ClusterRbacConfig
|
||||
metadata:
|
||||
name: default
|
||||
spec:
|
||||
mode: 'ON_WITH_INCLUSION'
|
||||
inclusion:
|
||||
namespaces: ["default"]
|
||||
{{< /text >}}
|
||||
Currently `AuthorizationPolicy` only supports `ALLOW` action. This means that if
|
||||
multiple authorization policies apply to the same workload, the effect is additive.
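For illustration (the policy names, namespace, and labels below are assumptions, not from this page), two `ALLOW` policies selecting the same workload combine additively, so a request permitted by either policy is permitted:

{{< text yaml >}}
# Sketch: first policy allows GET requests to the httpbin workload
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-get
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - to:
    - operation:
        methods: ["GET"]
---
# Sketch: second policy allows POST requests to the same workload
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-post
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  rules:
  - to:
    - operation:
        methods: ["POST"]
{{< /text >}}

With both applied, `GET` and `POST` requests reach the workload, while all other methods are rejected by the deny-by-default behavior.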
|
||||
|
||||
### 授权策略{#authorization-policy}
|
||||
|
||||
要配置 Istio 授权策略,请指定 `ServiceRole` 和 `ServiceRoleBinding`。与其他 Istio 配置对象一样,它们被定义为 Kubernetes `CustomResourceDefinition`([CRD](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/))对象。
|
||||
To configure an Istio authorization policy, you create an
|
||||
[`AuthorizationPolicy` resource](/zh/docs/reference/config/security/authorization-policy/).
|
||||
|
||||
- **`ServiceRole`** 定义了一组访问服务的权限。
|
||||
- **`ServiceRoleBinding`** 向特定主题授予 `ServiceRole`,例如用户、组或服务。
|
||||
An authorization policy includes a selector and a list of rules. The selector
|
||||
specifies the **target** that the policy applies to, while the rules specify **who**
|
||||
is allowed to do **what** under which **conditions**. Specifically:
|
||||
|
||||
`ServiceRole` 和 `ServiceRoleBinding` 的组合规定:允许**谁**在**哪些条件**下**做什么** 。明确地说:
|
||||
- **target** refers to the `selector` section in the `AuthorizationPolicy`.
|
||||
- **who** refers to the `from` section in the `rule` of the `AuthorizationPolicy`.
|
||||
- **what** refers to the `to` section in the `rule` of the `AuthorizationPolicy`.
|
||||
- **conditions** refers to the `when` section in the `rule` of the `AuthorizationPolicy`.
|
||||
|
||||
- **谁**指的是 `ServiceRoleBinding` 中的 `subject` 部分。
|
||||
- **做什么**指的是 `ServiceRole` 中的 `permissions` 部分。
|
||||
- **哪些条件**指的是你可以在 `ServiceRole` 或 `ServiceRoleBinding` 中使用 [Istio 属性](/zh/docs/reference/config/policy-and-telemetry/attribute-vocabulary/)指定的 `conditions` 部分。
|
||||
Each rule has the following standard fields:
|
||||
|
||||
#### `ServiceRole`{#service-role}
|
||||
- **`from`**: A list of sources.
|
||||
- **`to`**: A list of operations.
|
||||
- **`when`**: A list of custom conditions.
|
||||
|
||||
`ServiceRole` 规范包括`规则`、所谓的权限列表。每条规则都有以下标准字段:
|
||||
|
||||
- **`services`**:服务名称列表。您可以将值设置为 `*` 以包括指定命名空间中的所有服务。
|
||||
|
||||
- **`methods`**:HTTP 方法名称列表,对于 gRPC 请求的权限,HTTP 动词始终是 `POST`。您可以将值设置为 `*` 以包含所有 HTTP 方法。
|
||||
|
||||
- **`paths`**:HTTP 路径或 gRPC 方法。gRPC 方法必须采用 `/packageName.serviceName/methodName` 的形式,并且区分大小写。
|
||||
|
||||
`ServiceRole` 规范仅适用于 `metadata` 部分中指定的命名空间。规则中需要 `services` 和 `methods` 字段。`paths` 是可选的。如果未指定规则或将其设置为 `*`,则它适用于任何实例。
|
||||
|
||||
下面的示例显示了一个简单的角色:`service-admin`,它可以完全访问 `default` 命名空间中的所有服务。
|
||||
The following example shows an `AuthorizationPolicy` that allows two sources
|
||||
(service account `cluster.local/ns/default/sa/sleep` and namespace `dev`) to access the
|
||||
workloads with labels `app: httpbin` and `version: v1` in namespace `foo` when the request
|
||||
is sent with a valid JWT token.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRole
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: service-admin
|
||||
namespace: default
|
||||
name: httpbin
|
||||
namespace: foo
|
||||
spec:
|
||||
rules:
|
||||
- services: ["*"]
|
||||
selector:
|
||||
matchLabels:
|
||||
app: httpbin
|
||||
version: v1
|
||||
rules:
|
||||
- from:
|
||||
- source:
|
||||
principals: ["cluster.local/ns/default/sa/sleep"]
|
||||
- source:
|
||||
namespaces: ["dev"]
|
||||
to:
|
||||
- operation:
|
||||
methods: ["GET"]
|
||||
when:
|
||||
- key: request.auth.claims[iss]
|
||||
values: ["https://accounts.google.com"]
|
||||
{{< /text >}}
|
||||
|
||||
这是另一个角色:`products-viewer`,它有读取权限,包括 `GET` 和 `HEAD`,能够访问 `default` 命名空间中的 `products.default.svc.cluster.local` 服务。
|
||||
#### Policy target
|
||||
|
||||
Policy scope (target) is determined by `metadata/namespace` and an optional `selector`.
|
||||
|
||||
The `metadata/namespace` tells which namespace the policy applies to. If set to the
|
||||
root namespace, the policy applies to all namespaces in a mesh. The value of
|
||||
root namespace is configurable, and the default is `istio-system`. If set to a
|
||||
normal namespace, the policy will only apply to the specified namespace.
|
||||
|
||||
A workload `selector` can be used to further restrict where a policy applies.
|
||||
The `selector` uses pod labels to select the target workload. The workload
|
||||
selector contains a list of `{key: value}` pairs, where the `key` is the name of the label.
|
||||
If not set, the authorization policy will be applied to all workloads in the same namespace
|
||||
as the authorization policy.
|
||||
|
||||
The following example policy `allow-read` allows `"GET"` and `"HEAD"` access to
|
||||
the workload with label `app: products` in the `default` namespace.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRole
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: products-viewer
|
||||
name: allow-read
|
||||
namespace: default
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: products
|
||||
rules:
|
||||
- services: ["products.default.svc.cluster.local"]
|
||||
methods: ["GET", "HEAD"]
|
||||
- to:
|
||||
- operation:
|
||||
methods: ["GET", "HEAD"]
|
||||
{{< /text >}}
|
||||
|
||||
此外,我们支持规则中所有字段的前缀匹配和后缀匹配。例如,您可以在 `default`命名空间中定义具有以下权限的 `tester` 角色:
|
||||
#### Value matching
|
||||
|
||||
- 完全访问前缀为 `test-*` 的所有服务,例如:`test-bookstore`、`test-performance`、`test-api.default.svc.cluster.local`。
|
||||
- 读取(`GET`)使用 `*/reviews` 后缀访问的所有路径,例如:在 `bookstore .default.svc.cluster.local` 服务中的 `/books/reviews`、`/events/booksale/reviews`、`/reviews`。
|
||||
Exact match, prefix match, suffix match, and presence match are supported for most
|
||||
of the fields, with a few exceptions (e.g., the `key` field under the `when` section,
|
||||
the `ipBlocks` under the `source` section and the `ports` field under the `to` section only support exact match).
|
||||
|
||||
- **Exact match**. The string must match exactly.
|
||||
- **Prefix match**. A string with an ending `"*"`. For example, `"test.abc.*"` matches `"test.abc.com"`, `"test.abc.com.cn"`, `"test.abc.org"`, etc.
|
||||
- **Suffix match**. A string with a starting `"*"`. For example, `"*.abc.com"` matches `"eng.abc.com"`, `"test.eng.abc.com"`, etc.
|
||||
- **Presence match**. `*` matches anything non-empty. You can require that a field be present using the format `fieldname: ["*"]`.
|
||||
This means the field can hold any value, but it cannot be empty. Note that this differs from leaving a field unspecified, which matches anything, including empty.
|
||||
|
||||
The following example policy allows access at paths with prefix `"/test/"` or suffix `"/info"`.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRole
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: tester
|
||||
namespace: default
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: products
|
||||
rules:
|
||||
- services: ["test-*"]
|
||||
methods: ["*"]
|
||||
- services: ["bookstore.default.svc.cluster.local"]
|
||||
paths: ["*/reviews"]
|
||||
methods: ["GET"]
|
||||
- to:
|
||||
- operation:
|
||||
paths: ["/test/*", "*/info"]
|
||||
{{< /text >}}
|
||||
|
||||
在 `ServiceRole` 中,`namespace` + `services` + `paths` + `methods` 的组合定义了**如何访问服务**。在某些情况下,您可能需要为规则指定其他条件。例如,规则可能仅适用于服务的某个**版本**,或仅适用于具有特定**标签**的服务,如 `foo`。您可以使用 `constraints` 轻松指定这些条件。
|
||||
#### Allow-all and deny-all
|
||||
|
||||
例如,下面的 `ServiceRole` 定义在以前的 `products-viewer` 角色基础之上添加了一个约束:`request.headers[version]` 为 `v1` 或 `v2`。在[约束和属性页面](/zh/docs/reference/config/authorization/constraints-and-properties/)中列出了约束支持的 `key` 值。在属性值是 `map` 类型的情况下,例如 `request.headers`,`key` 是 map 中的一个条目,例如 `request.headers[version]`。
|
||||
The example below shows a simple policy `allow-all` which allows full access to all
|
||||
workloads in the `default` namespace.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRole
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: products-viewer-version
|
||||
name: allow-all
|
||||
namespace: default
|
||||
spec:
|
||||
rules:
|
||||
- services: ["products.default.svc.cluster.local"]
|
||||
methods: ["GET", "HEAD"]
|
||||
constraints:
|
||||
- key: request.headers[version]
|
||||
values: ["v1", "v2"]
|
||||
- {}
|
||||
{{< /text >}}
|
||||
|
||||
#### `ServiceRoleBinding`{#service-role-binding}
|
||||
|
||||
`ServiceRoleBinding` 规范包括两部分:
|
||||
|
||||
- **`roleRef`** 指的是同一命名空间中的 `ServiceRole` 资源。
|
||||
- **`subjects`** 分配给角色的列表。
|
||||
|
||||
您可以使用 `user` 或一组 `properties` 显式指定 *subject*。`ServiceRoleBinding` *subject* 中的 *property* 类似于 `ServiceRole` 规范中的 *constraint*。 *property* 还允许您使用条件指定分配给此角色的一组帐户。它包含一个 `key` 及其允许的*值*。约束支持的 `key` 值列在[约束和属性页面](/zh/docs/reference/config/authorization/constraints-and-properties/)中。
|
||||
|
||||
下面的例子显示了一个名为 `test-binding-products` 的 `ServiceRoleBinding`,它将两个 `subject` 绑定到名为 `product-viewer` 的 `ServiceRole` 并具有以下 `subject`
|
||||
|
||||
- 代表服务 **a** 的服务帐户,`service-account-a`。
|
||||
- 代表 Ingress 服务的服务帐户 `istio-ingress-service-account` **并且** 它的 JWT 中的 `mail` 项声明为 `a@foo.com`。
|
||||
The example below shows a simple policy `deny-all` which denies access to all workloads
|
||||
in the `admin` namespace.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRoleBinding
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: test-binding-products
|
||||
namespace: default
|
||||
name: deny-all
|
||||
namespace: admin
|
||||
spec:
|
||||
subjects:
|
||||
- user: "service-account-a"
|
||||
- user: "istio-ingress-service-account"
|
||||
properties:
|
||||
request.auth.claims[email]: "a@foo.com"
|
||||
roleRef:
|
||||
kind: ServiceRole
|
||||
name: "products-viewer"
|
||||
{}
|
||||
{{< /text >}}
|
||||
|
||||
如果您想要公开访问服务,可以将 `subject` 设置为 `user:"*"` 。此值将 `ServiceRole` 分配给**所有(经过身份验证和未经身份验证的)**用户和服务,例如:
|
||||
#### Custom conditions
|
||||
|
||||
You can also use the `when` section to specify additional conditions. For example, the following
|
||||
`AuthorizationPolicy` definition includes a condition that `request.headers[version]` is either `"v1"` or `"v2"`.
|
||||
In this case, the key is `request.headers[version]`, an entry in the Istio attribute `request.headers`,
|
||||
which is a map.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRoleBinding
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: binding-products-allusers
|
||||
namespace: default
|
||||
name: httpbin
|
||||
namespace: foo
|
||||
spec:
|
||||
subjects:
|
||||
- user: "*"
|
||||
roleRef:
|
||||
kind: ServiceRole
|
||||
name: "products-viewer"
|
||||
selector:
|
||||
matchLabels:
|
||||
app: httpbin
|
||||
version: v1
|
||||
rules:
|
||||
- from:
|
||||
- source:
|
||||
principals: ["cluster.local/ns/default/sa/sleep"]
|
||||
to:
|
||||
- operation:
|
||||
methods: ["GET"]
|
||||
when:
|
||||
- key: request.headers[version]
|
||||
values: ["v1", "v2"]
|
||||
{{< /text >}}
|
||||
|
||||
要将 `ServiceRole` 分配给**经过身份验证的**用户和服务,请使用 `source.principal:"*"` 代替,例如:
|
||||
The supported `key` values of a condition are listed in the
|
||||
[conditions page](/zh/docs/reference/config/security/conditions/).
|
||||
|
||||
#### Authenticated and unauthenticated identity
|
||||
|
||||
If you want to make a workload publicly accessible, you need to leave the
|
||||
`source` section empty. This allows sources from **all (both authenticated and
|
||||
unauthenticated)** users and workloads, for example:
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRoleBinding
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: binding-products-all-authenticated-users
|
||||
namespace: default
|
||||
name: httpbin
|
||||
namespace: foo
|
||||
spec:
|
||||
subjects:
|
||||
- properties:
|
||||
source.principal: "*"
|
||||
roleRef:
|
||||
kind: ServiceRole
|
||||
name: "products-viewer"
|
||||
selector:
|
||||
matchLabels:
|
||||
app: httpbin
|
||||
version: v1
|
||||
rules:
|
||||
- to:
|
||||
- operation:
|
||||
methods: ["GET", "POST"]
|
||||
{{< /text >}}
|
||||
|
||||
### 在普通 TCP 协议上使用 Istio 认证{#using-Istio-authorization-on-plain-TCP-protocols}
|
||||
|
||||
[Service role](#service-role) 和 [Service role binding](#service-role-binding) 中的例子展示了在使用 HTTP 协议的 service 上使用 Istio 认证的典型方法。在那些例子中,service role 和 service role binding 里的所有字段都可以支持。
|
||||
|
||||
Istio 授权支持使用任何普通 TCP 协议的 service,例如 MongoDB。在这种情况下,您可以像配置 HTTP 服务一样配置 service role 和 service role binding。不同之处在于某些字段,约束和属性仅适用于 HTTP 服务。这些字段包括:
|
||||
|
||||
- service role 配置对象中的 `paths` 和 `methods` 字段。
|
||||
- service role binding 配置对象中的 `group` 字段。
|
||||
|
||||
支持的约束和属性在[约束和属性页面](
|
||||
/zh/docs/reference/config/authorization/constraints-and-properties/)中列出。
|
||||
|
||||
如果您在 TCP service 中使用了任意 HTTP 独有的字段,Istio 将会完全忽略 service role 或 service role binding 自定义资源,以及里面设置的策略。
|
||||
|
||||
假设您有一个 MongoDB service 在 27017 端口上监听,下面的示例配置了一个 service role 和一个 service role binding,仅允许 Istio 网格中的 `bookinfo-ratings-v2` 访问 MongoDB service。
|
||||
To allow only **authenticated** users, set `principal` to `"*"` instead, for example:
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRole
|
||||
apiVersion: security.istio.io/v1beta1
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: mongodb-viewer
|
||||
namespace: default
|
||||
name: httpbin
|
||||
namespace: foo
|
||||
spec:
|
||||
rules:
|
||||
- services: ["mongodb.default.svc.cluster.local"]
|
||||
constraints:
|
||||
- key: "destination.port"
|
||||
values: ["27017"]
|
||||
---
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRoleBinding
|
||||
metadata:
|
||||
name: bind-mongodb-viewer
|
||||
namespace: default
|
||||
spec:
|
||||
subjects:
|
||||
- user: "cluster.local/ns/default/sa/bookinfo-ratings-v2"
|
||||
roleRef:
|
||||
kind: ServiceRole
|
||||
name: "mongodb-viewer"
|
||||
selector:
|
||||
matchLabels:
|
||||
app: httpbin
|
||||
version: v1
|
||||
rules:
|
||||
- from:
|
||||
- source:
|
||||
principals: ["*"]
|
||||
to:
|
||||
- operation:
|
||||
methods: ["GET", "POST"]
|
||||
{{< /text >}}
|
||||
|
||||
### 授权宽容模式{#authorization-permissive-mode}
|
||||
### Using Istio authorization on plain TCP protocols
|
||||
|
||||
授权宽容模式(authorization permissive mode)是 Istio 1.1 发布版中的实验特性。其接口可能在未来的发布中发生变化。
|
||||
Istio authorization supports workloads using any plain TCP protocol, such as MongoDB. In this case,
|
||||
you configure the authorization policy in the same way you did for the HTTP workloads.
|
||||
The difference is that certain fields and conditions are only applicable to HTTP workloads.
|
||||
These fields include:
|
||||
|
||||
授权宽容模式允许您在将授权策略提交到生产环境部署之前对其进行验证。
|
||||
- The `request_principals` field in the source section of the authorization policy object
|
||||
- The `hosts`, `methods` and `paths` fields in the operation section of the authorization policy object
|
||||
|
||||
您可以在全局授权配置和单个独立策略中启用授权宽容模式。如果在全局授权配置中设置,所有策略都将切换至授权宽容模式,不管其本身的模式。如果您设置全局授权模式为 ENFORCED,单个策略设置的强制模式将起作用。如果您没有设置任何模式,全局授权配置和单个策略都将默认被设置为 ENFORCED。
|
||||
The supported conditions are listed in the [conditions page](/zh/docs/reference/config/security/conditions/).
|
||||
|
||||
要全局启用宽容模式,请将全局 Istio RBAC 授权配置中的 `enforcement_mode:` 设置为 PERMISSIVE,如下面的示例所示。
|
||||
If you use any HTTP-only fields for a TCP workload, Istio will ignore them in the
|
||||
authorization policy.
|
||||
|
||||
Assuming you have a MongoDB service on port 27017, the following example configures an authorization
|
||||
policy to only allow the `bookinfo-ratings-v2` service in the Istio mesh to access the MongoDB workload.
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ClusterRbacConfig
|
||||
apiVersion: "security.istio.io/v1beta1"
|
||||
kind: AuthorizationPolicy
|
||||
metadata:
|
||||
name: default
|
||||
spec:
|
||||
mode: 'ON_WITH_INCLUSION'
|
||||
inclusion:
|
||||
namespaces: ["default"]
|
||||
enforcement_mode: PERMISSIVE
|
||||
{{< /text >}}
|
||||
|
||||
如要为特定策略启用宽容模式,请将策略配置文件中的 `mode:` 设置为 `PERMISSIVE`,如下面的示例所示。
|
||||
|
||||
{{< text yaml >}}
|
||||
apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRoleBinding
|
||||
metadata:
|
||||
name: bind-details-reviews
|
||||
name: mongodb-policy
|
||||
namespace: default
|
||||
spec:
|
||||
subjects:
|
||||
- user: "cluster.local/ns/default/sa/bookinfo-productpage"
|
||||
roleRef:
|
||||
kind: ServiceRole
|
||||
name: "details-reviews-viewer"
|
||||
mode: PERMISSIVE
|
||||
selector:
|
||||
matchLabels:
|
||||
app: mongodb
|
||||
rules:
|
||||
- from:
|
||||
- source:
|
||||
principals: ["cluster.local/ns/default/sa/bookinfo-ratings-v2"]
|
||||
to:
|
||||
- operation:
|
||||
ports: ["27017"]
|
||||
{{< /text >}}
|
||||
|
||||
### 使用其他授权机制{#using-other-authorization-mechanisms}
|
||||
|
|
|
|||
|
|
@@ -29,10 +29,10 @@ changes to your services.
|
|||
|
||||
If you’re interested in the details of how the features described in this guide
|
||||
work, you can find out more about Istio’s traffic management implementation in the
|
||||
[architecture overview](/docs/ops/architecture/). The rest of
|
||||
[architecture overview](/zh/docs/ops/architecture/). The rest of
|
||||
this guide introduces Istio’s traffic management features.
|
||||
|
||||
## Introducing Istio Traffic Management
|
||||
## Introducing Istio traffic management
|
||||
|
||||
In order to direct traffic within your mesh, Istio needs to know where all your
|
||||
endpoints are, and which services they belong to. To populate its own
|
||||
|
|
@@ -78,7 +78,7 @@ are built in to the API resources.
|
|||
|
||||
## Virtual services {#virtual-services}
|
||||
|
||||
[Virtual services](/docs/reference/config/networking/virtual-service/#VirtualService),
|
||||
[Virtual services](/zh/docs/reference/config/networking/virtual-service/#VirtualService),
|
||||
along with [destination rules](#destination-rules), are the key building blocks of Istio’s traffic
|
||||
routing functionality. A virtual service lets you configure how requests are
|
||||
routed to a service within an Istio service mesh, building on the basic
|
||||
|
|
@@ -121,7 +121,7 @@ instances implementing the new service version can scale up and down based on
|
|||
traffic load without referring to traffic routing at all. By contrast, container
|
||||
orchestration platforms like Kubernetes only support traffic distribution based
|
||||
on instance scaling, which quickly becomes complex. You can read more about how
|
||||
virtual services help with canary deployments in [Canary Deployments using Istio](/blog/2017/0.1-canary/).
|
||||
virtual services help with canary deployments in [Canary Deployments using Istio](/zh/blog/2017/0.1-canary/).
|
||||
|
||||
Virtual services also let you:
|
||||
|
||||
|
|
@@ -135,7 +135,7 @@ Virtual services also let you:
|
|||
`monolith.com` go to `microservice A`", and so on. You can see how this works
|
||||
in [one of our examples below](#more-about-routing-rules).
|
||||
- Configure traffic rules in combination with
|
||||
[gateways](/docs/concepts/traffic-management/#gateways) to control ingress
|
||||
[gateways](/zh/docs/concepts/traffic-management/#gateways) to control ingress
|
||||
and egress traffic.
|
||||
|
||||
In some cases you also need to configure destination rules to use these
|
||||
|
|
@@ -198,9 +198,9 @@ The `http` section contains the virtual service’s routing rules, describing
|
|||
match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent
|
||||
to the destination(s) specified in the hosts field (you can also use `tcp` and
|
||||
`tls` sections to configure routing rules for
|
||||
[TCP](/docs/reference/config/networking/virtual-service/#TCPRoute) and
|
||||
[TCP](/zh/docs/reference/config/networking/virtual-service/#TCPRoute) and
|
||||
unterminated
|
||||
[TLS](/docs/reference/config/networking/virtual-service/#TLSRoute)
|
||||
[TLS](/zh/docs/reference/config/networking/virtual-service/#TLSRoute)
|
||||
traffic). A routing rule consists of the destination where you want the traffic
|
||||
to go and zero or more match conditions, depending on your use case.
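As a minimal sketch of such a rule (the `reviews` host and subset names here are illustrative, not taken from this page), a virtual service with one match condition and a default route might look like:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews
  http:
  - match:                 # match condition: a specific header value
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                 # default rule: everything else goes to v1
    - destination:
        host: reviews
        subset: v1
{{< /text >}}

Rules are evaluated in order, so the last, condition-free route acts as the catch-all.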
|
||||
|
||||
|
|
@@ -322,7 +322,7 @@ You can also have multiple routing rules for any given virtual service. This
|
|||
lets you make your routing conditions as complex or simple as you like within a
|
||||
single virtual service. A full list of match condition fields and their possible
|
||||
values can be found in the
|
||||
[`HTTPMatchRequest` reference](/docs/reference/config/networking/virtual-service/#HTTPMatchRequest).
|
||||
[`HTTPMatchRequest` reference](/zh/docs/reference/config/networking/virtual-service/#HTTPMatchRequest).
|
||||
|
||||
In addition to using match conditions, you can distribute traffic
|
||||
by percentage "weight". This is useful for A/B testing and canary rollouts:
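A weighted rule might look like the following sketch (the host and subset names are illustrative assumptions):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-canary
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75           # 75% of traffic stays on the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 25           # 25% goes to the canary
{{< /text >}}

Adjusting the weights shifts traffic between versions without touching the deployments themselves.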
|
||||
|
|
@@ -351,12 +351,12 @@ example:
|
|||
- Set a [retry policy](#retries) for calls to this destination.
|
||||
|
||||
To learn more about the actions available, see the
|
||||
[`HTTPRoute` reference](/docs/reference/config/networking/virtual-service/#HTTPRoute).
|
||||
[`HTTPRoute` reference](/zh/docs/reference/config/networking/virtual-service/#HTTPRoute).
|
||||
|
||||
## Destination rules {#destination-rules}
|
||||
|
||||
Along with [virtual services](#virtual-services),
|
||||
[destination rules](/docs/reference/config/networking/destination-rule/#DestinationRule)
|
||||
[destination rules](/zh/docs/reference/config/networking/destination-rule/#DestinationRule)
|
||||
are a key part of Istio’s traffic routing functionality. You can think of
|
||||
virtual services as how you route your traffic **to** a given destination, and
|
||||
then you use destination rules to configure what happens to traffic **for** that
|
||||
|
|
@@ -372,7 +372,7 @@ Destination rules also let you customize Envoy’s traffic policies when calling
|
|||
the entire destination service or a particular service subset, such as your
|
||||
preferred load balancing model, TLS security mode, or circuit breaker settings.
|
||||
You can see a complete list of destination rule options in the
|
||||
[Destination Rule reference](/docs/reference/config/networking/destination-rule/).
|
||||
[Destination Rule reference](/zh/docs/reference/config/networking/destination-rule/).
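For instance, a destination rule that defines subsets plus a default and a per-subset load balancing policy could look like this sketch (the service and subset names are assumptions):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:           # default policy for all subsets
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:         # subset-specific override
      loadBalancer:
        simple: ROUND_ROBIN
{{< /text >}}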
|
||||
|
||||
### Load balancing options
|
||||
|
||||
|
|
@@ -435,7 +435,7 @@ subset’s field.
|
|||
|
||||
## Gateways {#gateways}
|
||||
|
||||
You use a [gateway](/docs/reference/config/networking/gateway/#Gateway) to
|
||||
You use a [gateway](/zh/docs/reference/config/networking/gateway/#Gateway) to
|
||||
manage inbound and outbound traffic for your mesh, letting you specify which
|
||||
traffic you want to enter or leave the mesh. Gateway configurations are applied
|
||||
to standalone Envoy proxies that are running at the edge of the mesh, rather
|
||||
|
|
@@ -455,15 +455,15 @@ Gateways are primarily used to manage ingress traffic, but you can also
|
|||
configure egress gateways. An egress gateway lets you configure a dedicated exit
|
||||
node for the traffic leaving the mesh, letting you limit which services can or
|
||||
should access external networks, or enable
|
||||
[secure control of egress traffic](/blog/2019/egress-traffic-control-in-istio-part-1/)
|
||||
[secure control of egress traffic](/zh/blog/2019/egress-traffic-control-in-istio-part-1/)
|
||||
to add security to your mesh, for example. You can also use a gateway to
|
||||
configure a purely internal proxy.
|
||||
|
||||
Istio provides some preconfigured gateway proxy deployments
|
||||
(`istio-ingressgateway` and `istio-egressgateway`) that you can use - both are
|
||||
deployed if you use our [demo installation](/docs/setup/getting-started/),
|
||||
deployed if you use our [demo installation](/zh/docs/setup/getting-started/),
|
||||
while just the ingress gateway is deployed with our
|
||||
[default or sds profiles.](/docs/setup/additional-setup/config-profiles/) You
|
||||
[default or sds profiles.](/zh/docs/setup/additional-setup/config-profiles/) You
|
||||
can apply your own gateway configurations to these deployments or deploy and
|
||||
configure your own gateway proxies.
|
||||
|
||||
|
|
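As a sketch of the kind of gateway configuration described above (the host name and certificate paths are illustrative, not part of this diff):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      serverCertificate: /tmp/tls.crt
      privateKey: /tmp/tls.key
{{< /text >}}

To actually route matching traffic, a virtual service would then bind to this gateway through its `gateways` field.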
@@ -518,7 +518,7 @@ traffic.
 ## Service entries {#service-entries}

 You use a
-[service entry](/docs/reference/config/networking/service-entry/#ServiceEntry) to add
+[service entry](/zh/docs/reference/config/networking/service-entry/#ServiceEntry) to add
 an entry to the service registry that Istio maintains internally. After you add
 the service entry, the Envoy proxies can send traffic to the service as if it
 was a service in your mesh. Configuring service entries allows you to manage
@@ -528,10 +528,10 @@ traffic for services running outside of the mesh, including the following tasks:
 consumed from the web, or traffic to services in legacy infrastructure.
 - Define [retry](#retries), [timeout](#timeouts), and
 [fault injection](#fault-injection) policies for external destinations.
-- Add a service running in a Virtual Machine (VM) to the mesh to
-[expand your mesh](/docs/examples/virtual-machines/single-network/#running-services-on-the-added-vm).
+- Run a mesh service in a Virtual Machine (VM) by
+[adding VMs to your mesh](/zh/docs/examples/virtual-machines/).
 - Logically add services from a different cluster to the mesh to configure a
-[multicluster Istio mesh](/docs/setup/install/multicluster/gateways/#configure-the-example-services)
+[multicluster Istio mesh](/zh/docs/setup/install/multicluster/gateways/#configure-the-example-services)
 on Kubernetes.

 You don’t need to add a service entry for every external service that you want
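For context, a service entry for an external dependency of the kind listed above might look like this sketch (the `ext-svc.example.com` host is illustrative, not part of this diff):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-entry
spec:
  hosts:
  - ext-svc.example.com
  ports:
  - number: 443
    name: https
    protocol: TLS
  location: MESH_EXTERNAL
  resolution: DNS
{{< /text >}}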
@@ -585,14 +585,14 @@ spec:
 {{< /text >}}

 See the
-[Service Entry reference](/docs/reference/config/networking/service-entry)
+[Service Entry reference](/zh/docs/reference/config/networking/service-entry)
 for more possible configuration options.

 ## Sidecars {#sidecars}

 By default, Istio configures every Envoy proxy to accept traffic on all the
 ports of its associated workload, and to reach every workload in the mesh when
-forwarding traffic. You can use a [sidecar](/docs/reference/config/networking/sidecar/#Sidecar) configuration to do the following:
+forwarding traffic. You can use a [sidecar](/zh/docs/reference/config/networking/sidecar/#Sidecar) configuration to do the following:

 - Fine-tune the set of ports and protocols that an Envoy proxy accepts.
 - Limit the set of services that the Envoy proxy can reach.
@@ -621,7 +621,7 @@ spec:
 - "istio-system/*"
 {{< /text >}}

-See the [Sidecar reference](/docs/reference/config/networking/sidecar/)
+See the [Sidecar reference](/zh/docs/reference/config/networking/sidecar/)
 for more details.

 ## Network resilience and testing {#network-resilience-and-testing}
@@ -740,7 +740,7 @@ spec:
 {{< /text >}}

 You can find out more about creating circuit breakers in
-[Circuit Breaking](/docs/tasks/traffic-management/circuit-breaking/).
+[Circuit Breaking](/zh/docs/tasks/traffic-management/circuit-breaking/).

 ### Fault injection {#fault-injection}
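As a rough sketch of the circuit breaker settings this hunk refers to, a destination rule can cap connections and eject failing hosts (the host and thresholds are illustrative, not part of this diff):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-cb-policy
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
    outlierDetection:
      consecutiveErrors: 7
      interval: 5m
      baseEjectionTime: 15m
{{< /text >}}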
@@ -790,7 +790,7 @@ spec:
 {{< /text >}}

 For detailed instructions on how to configure delays and aborts, see
-[Fault Injection](/docs/tasks/traffic-management/fault-injection/).
+[Fault Injection](/zh/docs/tasks/traffic-management/fault-injection/).

 ### Working with your applications {#working-with-your-applications}
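For context, the delay and abort faults mentioned here are configured on a virtual service, roughly like this sketch (the `ratings` host, subset, and percentage are illustrative, not part of this diff):

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 0.1
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
        subset: v1
{{< /text >}}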
@@ -47,7 +47,7 @@ Istio's simple rule configuration and traffic routing let you control traffic between services

 With better traffic visibility and out-of-the-box failure recovery features, you can catch problems before they arise, making calls more reliable and the network more robust no matter the conditions.

-Refer to the [traffic management documentation](/docs/concepts/traffic-management/) for more details.
+Refer to the [traffic management documentation](/zh/docs/concepts/traffic-management/) for more details.

 ### Security {#security}
@@ -55,7 +55,7 @@ Istio's security features free developers to focus on application-level security

 Istio is platform independent and can be used together with Kubernetes (or infrastructure) network policies. It is more powerful, however, securing {{<gloss>}}pod{{</gloss>}}-to-pod and service-to-service communication at both the network and application layers.

-Refer to the [security documentation](/docs/concepts/security/) for more details.
+Refer to the [security documentation](/zh/docs/concepts/security/) for more details.

 ### Policies {#policies}
@@ -65,9 +65,9 @@ Istio lets you configure custom policies for your application and enforce rules at run
 * Denials, whitelists, and blacklists to restrict access to services
 * Header rewrites and redirects

-Istio also lets you create your own [policy adapters](/docs/tasks/policy-enforcement/control-headers) to add, for example, custom authorization behavior.
+Istio also lets you create your own [policy adapters](/zh/docs/tasks/policy-enforcement/control-headers) to add, for example, custom authorization behavior.

-Refer to the [policies documentation](/docs/concepts/policies/) for more details.
+Refer to the [policies documentation](/zh/docs/concepts/policies/) for more details.

 ### Observability {#observability}
@@ -77,7 +77,7 @@ Istio's Mixer component is responsible for policy control and telemetry collection. It provides back

 All these features let you set, monitor, and enforce SLOs for services more effectively. And the bottom line: you can detect and fix problems quickly and efficiently.

-Refer to the [observability documentation](/docs/concepts/observability/) for more details.
+Refer to the [observability documentation](/zh/docs/concepts/observability/) for more details.

 ## Platform support {#platform-support}

@@ -114,7 +114,7 @@ Several of the microservices in the Bookinfo application are written in different languages.
 <title>Simple Bookstore App</title>
 {{< /text >}}

-### Determine the ingress IP and port {#determine-the-ingress-i-p-and-port}
+### Determine the ingress IP and port {#determine-the-ingress-IP-and-port}

 Now that the Bookinfo services are up and running, you need to make the application accessible from outside your Kubernetes cluster, for example from a browser. An [Istio Gateway](/zh/docs/concepts/traffic-management/#gateways) can be used for this purpose.

@@ -1,97 +0,0 @@
----
-title: Install Istio for Google Cloud Endpoints Services
-description: Explains how to manually integrate Google Cloud Endpoints services with Istio.
-weight: 42
-aliases:
-- /zh/docs/guides/endpoints/index.html
----
-
-This document shows how to manually integrate Istio with existing Google Cloud Endpoints services.
-
-## Before you begin {#before-you-begin}
-
-If you don't have an Endpoints service and want to try one out, you can follow
-the [instructions](https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine) to set up an Endpoints service on GKE.
-After setup, you should have an API key stored in the `ENDPOINTS_KEY` environment variable and the external IP address in `EXTERNAL_IP`.
-You can test the service with the following command:
-
-{{< text bash >}}
-$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${EXTERNAL_IP}/echo?key=${ENDPOINTS_KEY}"
-{{< /text >}}
-
-To install Istio on GKE, refer to the [Quick Start with Google Kubernetes Engine](/zh/docs/setup/platform-setup/gke).
-
-## HTTP Endpoints service {#http-endpoints-service}
-
-1. Inject the service and the deployment into the mesh using `--includeIPRanges` by following the
-    [instructions](/zh/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
-    so that Egress can call external services directly.
-    Otherwise, ESP won't be able to access Google cloud service control.
-
-1. After injection, issue the same test command as above to ensure that calling ESP continues to work.
-
-1. If you want to access the service through Istio ingress, create the following networking definitions:
-
-    {{< text bash >}}
-    $ kubectl apply -f - <<EOF
-    apiVersion: networking.istio.io/v1alpha3
-    kind: Gateway
-    metadata:
-      name: echo-gateway
-    spec:
-      selector:
-        istio: ingressgateway # use Istio default gateway implementation
-      servers:
-      - port:
-          number: 80
-          name: http
-          protocol: HTTP
-        hosts:
-        - "*"
-    ---
-    apiVersion: networking.istio.io/v1alpha3
-    kind: VirtualService
-    metadata:
-      name: echo
-    spec:
-      hosts:
-      - "*"
-      gateways:
-      - echo-gateway
-      http:
-      - match:
-        - uri:
-            prefix: /echo
-        route:
-        - destination:
-            port:
-              number: 80
-            host: esp-echo
-    ---
-    EOF
-    {{< /text >}}
-
-1. Get the ingress gateway IP and port by following the [instructions](/zh/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-i-p-and-ports).
-    You can verify access to the Endpoints service through the Istio ingress:
-
-    {{< text bash >}}
-    $ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${INGRESS_HOST}:${INGRESS_PORT}/echo?key=${ENDPOINTS_KEY}"
-    {{< /text >}}
-
-## HTTPS Endpoints service using secured ingress {#https-endpoints-service-using-secured-ingress}
-
-The recommended way to securely access a mesh Endpoints service is through an ingress configured with TLS.
-
-1. Install Istio with strict mutual TLS enabled. Confirm that the output of the following command is either `STRICT` or empty:
-
-    {{< text bash >}}
-    $ kubectl get meshpolicy default -n istio-system -o=jsonpath='{.spec.peers[0].mtls.mode}'
-    {{< /text >}}
-
-1. Re-inject the service and the deployment into the mesh using `--includeIPRanges` by following the
-    [instructions](/zh/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
-    so that Egress can call external services directly.
-    Otherwise, ESP won't be able to access Google cloud service control.
-
-1. You will then find that access to `ENDPOINTS_IP` no longer works because the Istio proxy only accepts secure mesh connections.
-    Accessing through the Istio ingress should still work because the in-mesh ingress proxy initiates mutual TLS connections.
-
-1. To secure access at the ingress, follow these [instructions](/zh/docs/tasks/traffic-management/ingress/secure-ingress-mount/).

@@ -1,9 +0,0 @@
----
-title: Mesh Expansion
-description: Configure an Istio mesh that spans Kubernetes clusters, VMs, and bare metal.
-weight: 90
-aliases:
-- /zh/docs/examples/mesh-expansion/
-keywords: [kubernetes,mesh expansion]
----
-

@@ -47,4 +47,4 @@ In this module you prepare your local computer for the tutorial.

 Congratulations, you configured your local computer!

-You are ready to [run a single service locally](/docs/examples/microservices-istio/single/).
+You are ready to [run a single service locally](/zh/docs/examples/microservices-istio/single/).

@@ -1,7 +0,0 @@
----
-title: Multicluster Service Meshes
-description: Istio multicluster service mesh examples that you can try out.
-weight: 100
-keywords: [multicluster]
----
-For more information, see the [multicluster service mesh](/zh/docs/setup/deployment-models/) concept documentation.
@@ -0,0 +1,10 @@
+---
+title: Platform-specific Examples (Deprecated)
+description: Examples for specific platform installations of Istio.
+weight: 110
+keywords: [multicluster]
+---
+
+{{< warning >}}
+These examples are platform-specific and deprecated. They will be removed in the next release.
+{{< /warning >}}

@@ -0,0 +1,102 @@
+---
+title: Install Istio for Google Cloud Endpoints Services
+description: Explains how to manually integrate Google Cloud Endpoints services with Istio.
+weight: 10
+aliases:
+- /zh/docs/guides/endpoints/index.html
+- /zh/docs/examples/endpoints/
+---
+
+This document shows how to manually integrate Istio with existing
+Google Cloud Endpoints services.
+
+## Before you begin
+
+If you don't have an Endpoints service and want to try it out, you can follow
+the [instructions](https://cloud.google.com/endpoints/docs/openapi/get-started-kubernetes-engine)
+to set up an Endpoints service on GKE.
+After setup, you should be able to get an API key and store it in the `ENDPOINTS_KEY` environment variable and the external IP address `EXTERNAL_IP`.
+You may test the service using the following command:
+
+{{< text bash >}}
+$ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${EXTERNAL_IP}/echo?key=${ENDPOINTS_KEY}"
+{{< /text >}}
+
+To install Istio for GKE, follow our [Quick Start with Google Kubernetes Engine](/zh/docs/setup/platform-setup/gke).
+
+## HTTP endpoints service
+
+1. Inject the service and the deployment into the mesh using `--includeIPRanges` by following the
+    [instructions](/zh/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
+    so that Egress is allowed to call external services directly.
+    Otherwise, ESP will not be able to access Google cloud service control.
+
+1. After injection, issue the same test command as above to ensure that calling ESP continues to work.
+
+1. If you want to access the service through Istio ingress, create the following networking definitions:
+
+    {{< text bash >}}
+    $ kubectl apply -f - <<EOF
+    apiVersion: networking.istio.io/v1alpha3
+    kind: Gateway
+    metadata:
+      name: echo-gateway
+    spec:
+      selector:
+        istio: ingressgateway # use Istio default gateway implementation
+      servers:
+      - port:
+          number: 80
+          name: http
+          protocol: HTTP
+        hosts:
+        - "*"
+    ---
+    apiVersion: networking.istio.io/v1alpha3
+    kind: VirtualService
+    metadata:
+      name: echo
+    spec:
+      hosts:
+      - "*"
+      gateways:
+      - echo-gateway
+      http:
+      - match:
+        - uri:
+            prefix: /echo
+        route:
+        - destination:
+            port:
+              number: 80
+            host: esp-echo
+    ---
+    EOF
+    {{< /text >}}
+
+1. Get the ingress gateway IP and port by following the [instructions](/zh/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-i-p-and-ports).
+    You can verify accessing the Endpoints service through Istio ingress:
+
+    {{< text bash >}}
+    $ curl --request POST --header "content-type:application/json" --data '{"message":"hello world"}' "http://${INGRESS_HOST}:${INGRESS_PORT}/echo?key=${ENDPOINTS_KEY}"
+    {{< /text >}}
+
+## HTTPS endpoints service using secured Ingress
+
+The recommended way to securely access a mesh Endpoints service is through an ingress configured with TLS.
+
+1. Install Istio with strict mutual TLS enabled. Confirm that the following command outputs either `STRICT` or empty:
+
+    {{< text bash >}}
+    $ kubectl get meshpolicy default -n istio-system -o=jsonpath='{.spec.peers[0].mtls.mode}'
+    {{< /text >}}
+
+1. Re-inject the service and the deployment into the mesh using `--includeIPRanges` by following the
+    [instructions](/zh/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services)
+    so that Egress is allowed to call external services directly.
+    Otherwise, ESP will not be able to access Google cloud service control.
+
+1. After this, you will find access to `ENDPOINTS_IP` no longer works because the Istio proxy only accepts secure mesh connections.
+    Accessing through Istio ingress should continue to work since the ingress proxy initiates mutual TLS connections within the mesh.
+
+1. To secure the access at the ingress, follow the [instructions](/zh/docs/tasks/traffic-management/ingress/secure-ingress-mount/).

@@ -5,10 +5,11 @@ weight: 65
 keywords: [kubernetes,multicluster]
 aliases:
 - /zh/docs/tasks/multicluster/gke/
+- /zh/docs/examples/multicluster/gke/
 ---

 This example shows how to configure a multicluster mesh with a
-[single-network deployment](/docs/ops/prep/deployment-models/#single-network)
+[single-network deployment](/zh/docs/ops/prep/deployment-models/#single-network)
 over 2 [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) clusters.

 ## Before you begin
@@ -23,7 +24,7 @@ In addition to the prerequisites for installing Istio the following setup is required

 * Install and initialize the [Google Cloud SDK](https://cloud.google.com/sdk/install)

-## Create the GKE Clusters
+## Create the GKE clusters

 1. Set the default project for `gcloud` to perform actions on:
@@ -220,7 +221,7 @@ $ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n $
 $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
 {{< /text >}}

-## Deploy Bookinfo Example Across Clusters
+## Deploy the Bookinfo example across clusters

 1. Install Bookinfo on the first cluster. Remove the `reviews-v3` deployment to deploy on remote:
@@ -262,7 +263,7 @@ $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}
 ## Uninstalling

 The following should be done in addition to the uninstall of Istio as described in the
-[VPN-based multicluster uninstall section](/docs/setup/install/multicluster/shared-vpn/):
+[VPN-based multicluster uninstall section](/zh/docs/setup/install/multicluster/shared-vpn/):

 1. Delete the Google Cloud firewall rule:
@@ -5,14 +5,15 @@ weight: 70
 keywords: [kubernetes,multicluster]
 aliases:
 - /zh/docs/tasks/multicluster/icp/
+- /zh/docs/examples/multicluster/icp/
 ---

 This example demonstrates how to setup network connectivity between two
 [IBM Cloud Private](https://www.ibm.com/cloud/private) clusters
 and then compose them into a multicluster mesh using a
-[single-network deployment](/docs/ops/prep/deployment-models/#single-network).
+[single-network deployment](/zh/docs/ops/prep/deployment-models/#single-network).

-## Create the IBM Cloud Private Clusters
+## Create the IBM Cloud Private clusters

 1. [Install two IBM Cloud Private clusters](https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.2.0/installing/install.html).
@@ -47,7 +48,7 @@ and then compose them into a multicluster mesh using a

 1. Repeat above two steps to validate `cluster-2`.

-## Configure Pod Communication Across IBM Cloud Private Clusters
+## Configure pod communication across IBM Cloud Private clusters

 IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client
 on each node distributes the IP router information to all nodes.
@@ -147,14 +148,14 @@ across all nodes in the two IBM Cloud Private Clusters.

 ## Install Istio for multicluster

-Follow the [single-network shared control plane instructions](/docs/setup/install/multicluster/shared-vpn/) to install and configure
+Follow the [single-network shared control plane instructions](/zh/docs/setup/install/multicluster/shared-vpn/) to install and configure
 local Istio control plane and Istio remote on `cluster-1` and `cluster-2`.

 In this guide, it is assumed that the local Istio control plane is deployed in `cluster-1`, while the Istio remote is deployed in `cluster-2`.

 ## Deploy the Bookinfo example across clusters

-The following example enables [automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection).
+The following example enables [automatic sidecar injection](/zh/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection).

 1. Install `bookinfo` on the first cluster `cluster-1`. Remove the `reviews-v3` deployment which will be deployed on cluster `cluster-2` in the following step:
@@ -236,7 +237,7 @@ The following example enables [automatic sidecar injection](/docs/setup/addition
 service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as
 in a federated Kubernetes environment.

-1. [Determine the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)
+1. [Determine the ingress IP and ports](/zh/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-i-p-and-ports)
 for `istio-ingressgateway`'s `INGRESS_HOST` and `INGRESS_PORT` variables to access the gateway.

 Access `http://<INGRESS_HOST>:<INGRESS_PORT>/productpage` repeatedly and each version of `reviews` should be equally load balanced,
@@ -0,0 +1,13 @@
+---
+title: Virtual Machines
+description: Examples that add workloads running on virtual machines to an Istio mesh.
+weight: 30
+aliases:
+- /zh/docs/examples/mesh-expansion/
+- /zh/docs/examples/mesh-expansion
+- /zh/docs/tasks/virtual-machines
+keywords:
+- kubernetes
+- vms
+- virtual-machine
+---

@@ -1,14 +1,19 @@
 ---
-title: Bookinfo with Mesh Expansion
-description: Illustrates how to expand the Bookinfo application's mesh with a raw VM service.
+title: Bookinfo with a Virtual Machine
+description: Run the Bookinfo application with a MySQL service running on a virtual
+  machine within your mesh.
 weight: 60
-keywords: [vms]
+keywords:
+- virtual-machine
+- vms
 aliases:
 - /zh/docs/examples/integrating-vms/
+- /zh/docs/examples/mesh-expansion/bookinfo-expanded
+- /zh/docs/examples/vm-bookinfo
 ---

-This example deploys the Bookinfo services across Kubernetes and a set of
-Virtual Machines, and illustrates how to use Istio service mesh to control
+This example deploys the Bookinfo application across Kubernetes with one
+service running on a virtual machine (VM), and illustrates how to control
 this infrastructure as a single mesh.

 {{< warning >}}
@@ -19,7 +24,7 @@ VMs cannot initiate any direct communication to Kubernetes Pods even when using

 ## Overview

-{{< image width="80%" link="./mesh-expansion.svg" caption="Bookinfo Application with Istio Mesh Expansion" >}}
+{{< image width="80%" link="./vm-bookinfo.svg" caption="Bookinfo running on VMs" >}}

 <!-- source of the drawing
 https://docs.google.com/drawings/d/1G1592HlOVgtbsIqxJnmMzvy6ejIdhajCosxF1LbvspI/edit
|
@ -27,12 +32,12 @@ https://docs.google.com/drawings/d/1G1592HlOVgtbsIqxJnmMzvy6ejIdhajCosxF1LbvspI/
|
|||
|
||||
## Before you begin
|
||||
|
||||
* Setup Istio by following the instructions in the
|
||||
[Installation guide](/docs/setup/getting-started/).
|
||||
- Setup Istio by following the instructions in the
|
||||
[Installation guide](/zh/docs/setup/getting-started/).
|
||||
|
||||
* Deploy the [Bookinfo](/docs/examples/bookinfo/) sample application (in the `bookinfo` namespace).
|
||||
- Deploy the [Bookinfo](/zh/docs/examples/bookinfo/) sample application (in the `bookinfo` namespace).
|
||||
|
||||
* Create a VM named 'vm-1' in the same project as Istio cluster, and [Join the Mesh](/docs/examples/virtual-machines/single-network/).
|
||||
- Create a VM named 'vm-1' in the same project as the Istio cluster, and [join the mesh](/zh/docs/examples/virtual-machines/single-network/).
|
||||
|
||||
## Running MySQL on the VM
|
||||
|
||||
|
|
@@ -64,7 +69,7 @@ To make it easy to visually inspect the difference in the output of the Bookinfo
 following commands to inspect the ratings:

 {{< text bash >}}
-$ mysql -u root -ppassword test -e "select * from ratings;"
+$ mysql -u root -password test -e "select * from ratings;"
 +----------+--------+
 | ReviewID | Rating |
 +----------+--------+
@@ -95,7 +100,7 @@ $ hostname -I

 ## Registering the mysql service with the mesh

-On a host with access to [`istioctl`](/docs/reference/commands/istioctl) commands, register the VM and mysql db service
+On a host with access to [`istioctl`](/zh/docs/reference/commands/istioctl) commands, register the VM and mysql db service

 {{< text bash >}}
 $ istioctl register -n vm mysqldb <ip-address-of-vm> 3306

Before Width: | Height: | Size: 218 KiB After Width: | Height: | Size: 218 KiB
@@ -1,55 +1,67 @@
 ---
-title: Multi-network Mesh Expansion
-description: Integrate VMs and bare metal hosts into an Istio mesh deployed on Kubernetes with gateways.
+title: Virtual Machines in Multi-Network Meshes
+description: Learn how to add a service running on a virtual machine to your multi-network
+  Istio mesh.
 weight: 30
-keywords: [kubernetes,vms,gateways]
+keywords:
+- kubernetes
+- virtual-machine
+- gateways
+- vms
 aliases:
 - /zh/docs/examples/mesh-expansion/multi-network
+- /zh/docs/tasks/virtual-machines/multi-network
 ---

-This example provides instructions to integrate VMs and bare metal hosts into
-an Istio mesh deployed on Kubernetes with gateways. No VPN connectivity nor direct network access between workloads in
-VMs, bare metals and clusters is required.
+This example provides instructions to integrate a VM or a bare metal host into a
+multi-network Istio mesh deployed on Kubernetes using gateways. This approach
+doesn't require VPN connectivity or direct network access between the VM, the
+bare metal and the clusters.

 ## Prerequisites

-* One or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.
+- One or more Kubernetes clusters with versions: {{< supported_kubernetes_versions >}}.

-* Mesh expansion machines must have IP connectivity to the Ingress gateways in the mesh.
-
-* Install the [Helm client](https://docs.helm.sh/using_helm/). Helm is needed to enable mesh expansion.
+- Virtual machines (VMs) must have IP connectivity to the Ingress gateways in the mesh.

 ## Installation steps

 Setup consists of preparing the mesh for expansion and installing and configuring each VM.

-### Customized installation of Istio on the Cluster
+### Customized installation of Istio on the cluster

-The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and
-generate the configuration files that let mesh expansion VMs connect to the mesh. To prepare the
-cluster for mesh expansion, run the following commands on a machine with cluster admin privileges:
+The first step when adding non-Kubernetes services to an Istio mesh is to
+configure the Istio installation itself, and generate the configuration files
+that let VMs connect to the mesh. Prepare the cluster for the VM with the
+following commands on a machine with cluster admin privileges:

-1. Generate a `meshexpansion-gateways` Istio configuration file using `helm`:
+1. Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See [Certificate Authority (CA) certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key) for more details.

-    {{< text bash >}}
-    $ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
-    -f https://github.com/irisdingbj/meshExpansion/blob/master/values-istio-meshexpansion-gateways.yaml \ > $HOME/istio-mesh-expansion-gatways.yaml
-    {{< /text >}}
-
-    For further details and customization options, refer to the
-    [Installation with Helm](/docs/setup/install/helm/) instructions.
-
-1. Deploy Istio control plane into the cluster
-
-    {{< text bash >}}
-    $ kubectl create namespace istio-system
-    $ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
-    $ kubectl apply -f $HOME/istio-mesh-expansion-gatways.yaml
-    {{< /text >}}
-
-1. Verify Istio is installed successfully
-
-    {{< text bash >}}
-    $ istioctl verify-install -f $HOME/istio-mesh-expansion-gatways.yaml
-    {{< /text >}}
+    {{< warning >}}
+    The root and intermediate certificate from the samples directory are widely
+    distributed and known. Do **not** use these certificates in production as
+    your clusters would then be open to security vulnerabilities and compromise.
+    {{< /warning >}}
+
+    {{< text bash >}}
+    $ kubectl create secret generic cacerts -n istio-system \
+        --from-file=@samples/certs/ca-cert.pem@ \
+        --from-file=@samples/certs/ca-key.pem@ \
+        --from-file=@samples/certs/root-cert.pem@ \
+        --from-file=@samples/certs/cert-chain.pem@
+    {{< /text >}}
+
+1. Deploy Istio control plane into the cluster
+
+    {{< text bash >}}
+    $ istioctl manifest apply \
+        -f install/kubernetes/operator/examples/vm/values-istio-meshexpansion-gateways.yaml \
+        --set coreDNS.enabled=true
+    {{< /text >}}
+
+    For further details and customization options, refer to the
+    [installation instructions](/zh/docs/setup/install/istioctl/).

 1. Create `vm` namespace for the VM services.
@@ -74,7 +86,10 @@ cluster for mesh expansion, run the following commands on a machine with cluster
     -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
     {{< /text >}}

-1. Determine and store the IP address of the Istio ingress gateway since the mesh expansion machines access [Citadel](/docs/concepts/security/) and [Pilot](/docs/ops/architecture/#pilot) and workloads on cluster through this IP address.
+1. Determine and store the IP address of the Istio ingress gateway since the
+   VMs access [Citadel](/zh/docs/concepts/security/) and
+   [Pilot](/zh/docs/ops/architecture/#pilot) and workloads on cluster through
+   this IP address.

     {{< text bash >}}
     $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
@ -97,81 +112,16 @@ cluster for mesh expansion, run the following commands on a machine with cluster
|
|||
ISTIO_SERVICE_CIDR=172.21.0.0/16
|
||||
{{< /text >}}
|
||||
|
||||
1. If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes
|
||||
to the `cluster.env` file with the following command. You can change the ports later if necessary.
|
||||
|
||||
{{< text bash >}}
|
||||
$ echo "ISTIO_INBOUND_PORTS=8888" >> cluster.env
|
||||
{{< /text >}}
|
||||
|
||||
### Setup DNS

Provide DNS resolution so that services running on the VM can access the
services running in the cluster. Istio itself does not use DNS for
routing requests between services. Services local to a cluster share a
common DNS suffix (e.g., `svc.cluster.local`), and Kubernetes DNS provides
DNS resolution for these services.

To provide a similar setup for services accessed from VMs, you name
services in the clusters in the format
`<name>.<namespace>.global`. Istio also ships with a CoreDNS server that
provides DNS resolution for these services. To use this
DNS, Kubernetes' DNS must be configured to stub a domain for `.global`.

{{< warning >}}
Some cloud providers have different specific DNS domain stub capabilities
and procedures for their Kubernetes services. Reference the cloud provider's
documentation to determine how to stub DNS domains for each unique
environment. The objective of these commands is to stub a domain for `.global` on
port `53` to reference or proxy the `istiocoredns` service in Istio's service
namespace.
{{< /warning >}}

Create one of the following ConfigMaps, or update an existing one, in each
cluster that will be calling services in remote clusters
(every cluster in the general case).

For clusters that use `kube-dns`:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
{{< /text >}}

For clusters that use CoreDNS:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
{{< /text >}}

For more details, see [Setup DNS](/zh/docs/setup/install/multicluster/gateways/#setup-DNS).
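
Once the stub domain is in place, you can sanity-check it by resolving a `.global` name from inside the cluster. This is a sketch, not part of the official procedure: the hostname matches the `httpbin.bar.global` example later in this guide, and it assumes you run it from a pod whose image provides `nslookup`:

{{< text bash >}}
$ kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') -c sleep -- nslookup httpbin.bar.global
{{< /text >}}

If the stub domain is configured correctly, the lookup is answered by the `istiocoredns` service rather than failing with `NXDOMAIN`.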
### Setting up the VM

{{< text bash >}}
$ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
{{< /text >}}

1. Verify the node agent works:

    {{< text bash >}}
    $ sudo node_agent
    ....
    CSR is approved successfully. Will renew cert in 1079h59m59.84568493s
    {{< /text >}}

1. Start Istio using `systemctl`.

    {{< text bash >}}
    $ sudo systemctl start istio
    {{< /text >}}
## Added Istio resources

The Istio resources below are added to support adding VMs to the mesh with
gateways. These resources remove the flat network requirement between the VM and
the cluster.

| Resource Kind | Resource Name | Function |
| ------------- | ------------- | -------- |
| `configmap` | `coredns` | Send `*.global` requests to the `istiocoredns` service |
| `service` | `istiocoredns` | Resolve `*.global` to the Istio ingress gateway |
| `gateway.networking.istio.io` | `meshexpansion-gateway` | Open port for Pilot, Citadel and Mixer |
| `gateway.networking.istio.io` | `istio-multicluster-egressgateway` | Open port 15443 for outbound `*.global` traffic |
| `gateway.networking.istio.io` | `istio-multicluster-ingressgateway` | Open port 15443 for inbound `*.global` traffic |
| `envoyfilter.networking.istio.io` | `istio-multicluster-ingressgateway` | Transform `*.global` to `*.svc.cluster.local` |
| `destinationrule.networking.istio.io` | `istio-multicluster-destinationrule` | Set traffic policy for 15443 traffic |
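
You can confirm these resources were created after installation. The namespaces below assume the default layout used in this guide (`istio-system` for the gateways, `kube-system` for the DNS ConfigMap):

{{< text bash >}}
$ kubectl get gateways.networking.istio.io -n istio-system
$ kubectl get configmap coredns -n kube-system
{{< /text >}}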
Every service in the cluster that needs to be accessed from the VM requires a service entry configuration in the cluster. The host used in the service entry should be of the form `<name>.<namespace>.global`, where name and namespace correspond to the service's name and namespace respectively.
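
As a sketch, a service entry of this form for an `httpbin` service in the `bar` namespace might look like the following. The VIP `127.255.0.3` matches the `/etc/hosts` example later in this guide, while `${CLUSTER_GW_ADDR}` and the port `15443` endpoint are assumptions that must match your own multicluster gateway setup:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # The host must be of the form <name>.<namespace>.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - number: 8000
    name: http
    protocol: HTTP
  resolution: DNS
  addresses:
  # An arbitrary, non-routable VIP that the mesh DNS resolves for this host
  - 127.255.0.3
  endpoints:
  # Placeholder: the address of the gateway in front of the cluster
  - address: ${CLUSTER_GW_ADDR}
    ports:
      http: 15443
{{< /text >}}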
To demonstrate access from the VM to cluster services, configure
the [httpbin service]({{< github_tree >}}/samples/httpbin)
in the cluster.

1. Deploy the `httpbin` service in the cluster:
After setup, the machine can access services running in the Kubernetes cluster.

The following example shows accessing a service running in the Kubernetes
cluster from a VM using `/etc/hosts`, in this case using a
service from the [httpbin service]({{< github_tree >}}/samples/httpbin).

1. On the added VM, add the service name and address to its `/etc/hosts` file.
   You can then connect to the cluster service from the VM, as in the example
   below:

    {{< text bash >}}
    $ echo "127.255.0.3 httpbin.bar.global" | sudo tee -a /etc/hosts
    {{< /text >}}
    {{< text bash >}}
    $ curl -v httpbin.bar.global:8000
    {{< /text >}}

    The `server: envoy` header indicates that the sidecar intercepted the traffic.

## Running services on the added VM

1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8888:
1. Determine the VM instance's IP address.

1. Add the VM services to the mesh. Service entries let you manually add
   additional services to Pilot's abstract model of the mesh. Once VM services
   are part of the mesh's abstract model, other services can find and direct
   traffic to them. The `istioctl experimental add-to-mesh` command creates the
   required [service entry](/zh/docs/reference/config/networking/service-entry/)
   for you:

    {{< text bash >}}
    $ istioctl experimental add-to-mesh external-service vmhttp ${VM_IP} http:8888 -n ${SERVICE_NAMESPACE}
    {{< /text >}}

    {{< tip >}}
    Ensure you have added the `istioctl` client to your path, as described in the [download page](/zh/docs/setup/getting-started/#download).
    {{< /tip >}}

1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:
    {{< text bash >}}
    $ kubectl apply -f @samples/sleep/sleep.yaml@
    $ kubectl get pod
    NAME                             READY   STATUS    RESTARTS   AGE
    productpage-v1-8fcdcb496-xgkwg   2/2     Running   0          1d
    sleep-88ddbcfdd-rm42k            2/2     Running   0          1s
    ...
    {{< /text >}}
Run the following commands to remove the expansion VM from the mesh's abstract
model:

{{< text bash >}}
$ istioctl experimental remove-from-mesh -n ${SERVICE_NAMESPACE} vmhttp
Kubernetes Service "vmhttp.vm" has been deleted for external service "vmhttp"
Service Entry "mesh-expansion-vmhttp" has been deleted for external service "vmhttp"
{{< /text >}}
---
title: Virtual Machines in Single-Network Meshes
description: Learn how to add a service running on a virtual machine
  to your single network Istio mesh.
weight: 20
keywords:
- kubernetes
- vms
- virtual-machines
aliases:
- /zh/docs/setup/kubernetes/additional-setup/mesh-expansion/
- /zh/docs/examples/mesh-expansion/single-network
- /zh/docs/tasks/virtual-machines/single-network
---
This example shows how to integrate a VM or a bare metal host into a single-network
Istio mesh deployed on Kubernetes.

## Prerequisites

- You have already set up Istio on Kubernetes. If you haven't done so, you can
  find out how in the [Installation guide](/zh/docs/setup/getting-started/).

- Virtual machines (VMs) must have IP connectivity to the endpoints in the mesh.
  This typically requires a VPC or a VPN, as well as a container network that
  provides direct (without NAT or firewall deny) routing to the endpoints. The
  machine is not required to have access to the cluster IP addresses assigned by
  Kubernetes.

- VMs must have access to a DNS server that resolves names to cluster IP
  addresses. Options include exposing the Kubernetes DNS server through an
  internal load balancer, using a [Core DNS](https://coredns.io/) server, or
  configuring the IPs in any other DNS server accessible from the VM.

The following instructions:

- Assume the expansion VM is running on GCE.
- Use Google platform-specific commands for some steps.
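
Before continuing, it can save debugging time to check the connectivity and DNS prerequisites from the VM itself. This is an illustrative sketch: the pod IP is a placeholder for an address from your own pod CIDR, and the service name assumes the Bookinfo sample used later in this guide:

{{< text bash >}}
$ ping -c 3 <POD_IP>                             # direct routing to a pod IP, no NAT
$ host productpage.default.svc.cluster.local     # DNS resolution of a cluster service
{{< /text >}}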
## Installation steps

Setup consists of preparing the mesh for expansion and installing and configuring each VM.

### Preparing the Kubernetes cluster for VMs

The first step when adding non-Kubernetes services to an Istio mesh is to
configure the Istio installation itself, and generate the configuration files
that let VMs connect to the mesh. Prepare the cluster for the VM with the
following commands on a machine with cluster admin privileges:
1. Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See [Certificate Authority (CA) certificates](/zh/docs/tasks/security/citadel-config/plugin-ca-cert/#plugging-in-the-existing-certificate-and-key) for more details.

    {{< warning >}}
    The root and intermediate certificate from the samples directory are widely
    distributed and known. Do **not** use these certificates in production as
    your clusters would then be open to security vulnerabilities and compromise.
    {{< /warning >}}

    {{< text bash >}}
    $ kubectl create namespace istio-system
    $ kubectl create secret generic cacerts -n istio-system \
        --from-file=@samples/certs/ca-cert.pem@ \
        --from-file=@samples/certs/ca-key.pem@ \
        --from-file=@samples/certs/root-cert.pem@ \
        --from-file=@samples/certs/cert-chain.pem@
    {{< /text >}}

1. Deploy the Istio control plane into the cluster:

    {{< text bash >}}
    $ istioctl manifest apply \
        -f install/kubernetes/operator/examples/vm/values-istio-meshexpansion.yaml
    {{< /text >}}

    For further details and customization options, refer to the
    [installation instructions](/zh/docs/setup/install/istioctl/).

1. Define the namespace the VM joins. This example uses the `SERVICE_NAMESPACE`
   environment variable to store the namespace. The value of this variable must
   match the namespace you use in the configuration files later on.

    {{< text bash >}}
    $ export SERVICE_NAMESPACE="default"
    {{< /text >}}
1. Determine and store the IP address of the Istio ingress gateway since the VMs
   access [Citadel](/zh/docs/concepts/security/) and
   [Pilot](/zh/docs/ops/architecture/#pilot) through this IP address.

    {{< text bash >}}
    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    {{< /text >}}
Next, run the following commands on each machine that you want to add to the mesh:

{{< text bash >}}
$ sudo dpkg -i istio-sidecar.deb
{{< /text >}}

1. Add the IP address of the Istio gateway to `/etc/hosts`. Revisit the [preparing the cluster](#preparing-the-kubernetes-cluster-for-vms) section to learn how to obtain the IP address.
   The following example updates the `/etc/hosts` file with the Istio gateway address:

    {{< text bash >}}
    $ echo "${GWIP} istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
    {{< /text >}}
## Send requests from VM workloads to Kubernetes services

After setup, the machine can access services running in the Kubernetes cluster
or on other VMs.

The following example shows accessing a service running in the Kubernetes cluster from a VM using
`/etc/hosts`, in this case using a service from the [Bookinfo example](/zh/docs/examples/bookinfo/).

1. First, on the cluster admin machine get the virtual IP address (`clusterIP`) for the service:
    {{< text bash >}}
    $ kubectl get svc productpage -o jsonpath='{.spec.clusterIP}'
    10.55.246.247
    {{< /text >}}

1. Then on the added VM, add the service name and address to its `/etc/hosts`
   file. You can then connect to the cluster service from the VM, as in the
   example below:

    {{< text bash >}}
    $ echo "10.55.246.247 productpage.default.svc.cluster.local" | sudo tee -a /etc/hosts
    {{< /text >}}
    {{< text bash >}}
    $ curl -v productpage.default.svc.cluster.local:9080
    {{< /text >}}

    The `server: envoy` header indicates that the sidecar intercepted the traffic.

## Running services on the added VM

1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8080:
    {{< text bash >}}
    $ echo ${GCE_IP}
    {{< /text >}}

1. Add the VM services to the mesh. Service entries let you manually add
   additional services to Pilot's abstract model of the mesh. Once VM services
   are part of the mesh's abstract model, other services can find and direct
   traffic to them. The `istioctl experimental add-to-mesh` command creates the
   required [service entry](/zh/docs/reference/config/networking/service-entry/)
   for you:

    {{< text bash >}}
    $ istioctl experimental add-to-mesh external-service vmhttp ${GCE_IP} http:8080 -n ${SERVICE_NAMESPACE}
    {{< /text >}}

    {{< tip >}}
    Ensure you have added the `istioctl` client to your path, as described in the [download page](/zh/docs/setup/getting-started/#download).
    {{< /tip >}}

1. Deploy a pod running the `sleep` service in the Kubernetes cluster, and wait until it is ready:
    {{< text bash >}}
    $ kubectl apply -f @samples/sleep/sleep.yaml@
    $ kubectl get pod
    NAME                             READY   STATUS    RESTARTS   AGE
    productpage-v1-8fcdcb496-xgkwg   2/2     Running   0          1d
    sleep-88ddbcfdd-rm42k            2/2     Running   0          1s
    ...
    {{< /text >}}
Run the following commands to remove the expansion VM from the mesh's abstract
model:

{{< text bash >}}
$ istioctl experimental remove-from-mesh -n ${SERVICE_NAMESPACE} vmhttp
Kubernetes Service "vmhttp.vm" has been deleted for external service "vmhttp"
Service Entry "mesh-expansion-vmhttp" has been deleted for external service "vmhttp"
{{< /text >}}
## Troubleshooting

The following are some basic troubleshooting steps for common VM-related issues.

- When making requests from a VM to the cluster, ensure you don't run the requests as the `root` or
  `istio-proxy` user. By default, Istio excludes both users from interception.

- Verify the machine can reach the IPs of all the workloads running in the cluster. For example:

    {{< text bash >}}
    $ kubectl get endpoints productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
    html output
    {{< /text >}}

- Check the status of the node agent and sidecar:

    {{< text bash >}}
    $ sudo systemctl status istio-auth-node-agent
    $ sudo systemctl status istio
    {{< /text >}}

- Check that the processes are running. The following is an example of the processes you should see on the VM if you run
  `ps`, filtered for `istio`:

    {{< text bash >}}
    istio-p+  7094  4.0  0.3  69540 24800 ?  Sl  21:32  0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.default~default.svc.cluster.local
    {{< /text >}}

- Check the Envoy access and error logs:

    {{< text bash >}}
    $ tail /var/log/istio/istio.log
    {{< /text >}}
In the current version of the Envoy sidecar implementation, at least 100 requests may be needed before the weighted version distribution can be observed.

If route rules work perfectly for the [Bookinfo](/zh/docs/examples/bookinfo/) sample, but similar version routing rules have no effect on your own application, it may be that your Kubernetes services need to be changed slightly. To take advantage of Istio's L7 routing features, Kubernetes services must strictly conform to certain restrictions. Refer to the [Requirements for Pods and Services](/zh/docs/ops/prep/requirements/) for details.

Another potential issue is that route rules may simply be slow to take effect. The Istio implementation on Kubernetes utilizes an eventually consistent algorithm to ensure all Envoy sidecars have the correct configuration, including all route rules. A configuration change takes some time to propagate to all sidecars. With large deployments the propagation takes longer, and there may be a lag of several seconds.
### Verify the Istio CNI pods are running (if used) {#verify-Istio-CNI-pods-are-running}

The Istio CNI plugin performs traffic redirection for pods in the Istio mesh during the network-setup phase of the Kubernetes pod lifecycle, removing the [`NET_ADMIN` capability requirement](/zh/docs/ops/prep/requirements/) for users deploying pods into the Istio mesh. The Istio CNI plugin replaces the functionality provided by the `istio-init` container.

1. Verify the `istio-cni-node` pods are running:

    {{< text bash >}}
    $ kubectl -n kube-system get pod -l k8s-app=istio-cni-node
    {{< /text >}}

1. If `PodSecurityPolicy` is enabled in your cluster, make sure the `istio-cni` service account can use a `PodSecurityPolicy` with the [`NET_ADMIN` capability requirement](/zh/docs/ops/prep/requirements/).
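
One way to sketch this check (assuming RBAC is enabled, the `istio-cni` service account lives in `kube-system`, and `<your-psp-name>` is a placeholder for your policy) is with `kubectl auth can-i`:

{{< text bash >}}
$ kubectl auth can-i use podsecuritypolicy/<your-psp-name> --as=system:serviceaccount:kube-system:istio-cni
{{< /text >}}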
### Verify Mixer is receiving report calls {#verify-mixer-is-receiving-report-calls}

If a value exists for a metric, confirm that the maximum config ID for the metric is 0. This verifies that Mixer processed the most recently supplied configuration without any errors.

### Verify Mixer is sending metric instances to the Prometheus adapter {#verify-Mixer-is-sending-Metric-instances-to-the-Prometheus-adapter}

1. Establish a connection to the `istio-telemetry` self-monitoring endpoint: set up a forwarding to the `istio-telemetry` self-monitoring port as described in [Verify Mixer is receiving report calls](#verify-mixer-is-receiving-report-calls) above.
If you suspect a problem with mutual TLS, first make sure that [Citadel is healthy](#repairing-citadel), and next check that [keys and certificates are being delivered](#keys-and-certificates-errors) to the sidecar correctly.

If everything checks out so far, the next step is to verify that the [authentication policy](/zh/docs/tasks/security/authentication/authn-policy/) was created and the corresponding destination rules are applied correctly.

## Repairing Citadel {#repairing-citadel}
- `default`
- `grpcAdapter`

Pilot, Citadel, and Galley have their own scopes which you can discover by looking at their [reference documentation](/zh/docs/reference/commands/).

Each scope has a unique output level which is one of:
{{< text bash >}}
$ mixs server --log_output_level attributes=debug,adapters=warning
{{< /text >}}

In addition to controlling the output level from the command-line, you can also control the output level of a running component
by using its [ControlZ](/zh/docs/ops/diagnostic-tools/controlz) interface.

## Controlling output
- **Where can I find out how to fix the errors I'm getting?**

    The set of [configuration analysis messages](/zh/docs/reference/config/analysis/) contains descriptions of each message along with suggested fixes.

## Enabling validation messages for resource status
{{< tip >}}
$ istioctl experimental describe <pod-name>[.<namespace>]
{{< /tip >}}

This guide assumes you have already deployed the [Bookinfo](/zh/docs/examples/bookinfo/) sample in your mesh.
If you haven't, first refer to [start the application services](/zh/docs/examples/bookinfo/#start-the-application-services) and [determine the ingress IP and port](/zh/docs/examples/bookinfo/#determine-the-ingress-IP-and-port).

## Verify a pod is in the mesh {#verify-a-pod-is-in-the-mesh}

The `istioctl describe` command returns a warning if the pod does not contain an {{< gloss >}}Envoy{{< /gloss >}} proxy, or if the proxy has not started.
The command also warns if some of the [Istio requirements for pods](/zh/docs/ops/prep/requirements/) are not fully met.

For example, the following command produces a warning indicating that a `kubernetes-dashboard` pod is not in the service mesh because it has no sidecar:
## Verifying strict mutual TLS {#verifying-strict-mutual-TLS}

Following the [mutual TLS migration](/zh/docs/tasks/security/authentication/mtls-migration/) instructions, you can enable strict mutual TLS for the `ratings` service:

{{< text bash >}}
$ kubectl apply -f - <<EOF
---

Istio provides two very valuable commands to help diagnose traffic management configuration problems,
the [`proxy-status`](/zh/docs/reference/commands/istioctl/#istioctl-proxy-status)
and [`proxy-config`](/zh/docs/reference/commands/istioctl/#istioctl-proxy-config) commands. The `proxy-status` command
allows you to get an overview of your mesh and identify the proxy causing the problem. Then `proxy-config` can be used
to inspect Envoy configuration and diagnose the issue.
|
||||
|
||||
If you want to try the commands described below, you can either:
|
||||
|
||||
* Have a Kubernetes cluster with Istio and Bookinfo installed (e.g., use `istio.yaml` as described in
|
||||
[installation steps](/docs/setup/getting-started/) and
|
||||
[Bookinfo installation steps](/docs/examples/bookinfo/#deploying-the-application)).
|
||||
[installation steps](/zh/docs/setup/getting-started/) and
|
||||
[Bookinfo installation steps](/zh/docs/examples/bookinfo/#deploying-the-application)).
|
||||
|
||||
OR
|
||||
|
||||
|
|
|
|||
|
|
@@ -59,7 +59,7 @@ The CPU consumption scales with the following factors:
|
|||
|
||||
however this part is inherently horizontally scalable.
|
||||
|
||||
When [namespace isolation](/docs/reference/config/networking/sidecar/) is enabled,
|
||||
When [namespace isolation](/zh/docs/reference/config/networking/sidecar/) is enabled,
|
||||
a single Pilot instance can support 1000 services, 2000 sidecars with 1 vCPU and 1.5 GB of memory.
|
||||
You can increase the number of Pilot instances to reduce the amount of time it takes for the configuration
|
||||
to reach all proxies.
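Namespace isolation is configured with a `Sidecar` resource. As a minimal sketch (the namespace name below is illustrative), the following restricts the sidecars in a namespace to only seeing services in their own namespace and in `istio-system`:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
{{< /text >}}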
|
||||
|
|
|
|||
|
|
@@ -0,0 +1,13 @@
|
|||
---
|
||||
title: Prepare Your Deployment
|
||||
description: Learn how to prepare an Istio deployment, including the
|
||||
requirements for your pods and best practices.
|
||||
weight: 6
|
||||
keywords:
|
||||
- deployment-models
|
||||
- best-practices
|
||||
- pods
|
||||
- requirements
|
||||
- installation
|
||||
- configuration
|
||||
---
|
||||
|
|
@@ -131,7 +131,7 @@ Multiple networks afford the following capabilities beyond that of single networ
|
|||
- Compliance with standards that require network segmentation
|
||||
|
||||
In this model, the workload instances in different networks can only reach each
|
||||
other through one or more [Istio gateways](/docs/concepts/traffic-management/#gateways).
|
||||
other through one or more [Istio gateways](/zh/docs/concepts/traffic-management/#gateways).
|
||||
Istio uses **partitioned service discovery** to provide consumers a different
|
||||
view of {{< gloss >}}service endpoint{{< /gloss >}}s. The view depends on the
|
||||
network of the consumers.
|
||||
|
|
|
|||
|
|
@@ -0,0 +1,32 @@
|
|||
---
|
||||
title: Deployment Best Practices
|
||||
description: General best practices for your Istio deployments.
|
||||
weight: 2
|
||||
icon: best-practices
|
||||
keywords: [deployment-models, cluster, availability-zones, control-plane]
|
||||
---
|
||||
|
||||
We have identified the following general principles to help you get the most
|
||||
out of your Istio deployments. These best practices aim to limit the impact of
|
||||
bad configuration changes and make managing your deployments easier.
|
||||
|
||||
## Deploy fewer clusters
|
||||
|
||||
Deploy Istio across a small number of large clusters, rather than a large number
|
||||
of small clusters. Instead of adding clusters to your deployment, the best
|
||||
practice is to use [namespace tenancy](/zh/docs/ops/prep/deployment-models/#namespace-tenancy)
|
||||
to manage large clusters. Following this approach, you can deploy Istio across
|
||||
one or two clusters per zone or region. You can then deploy a control plane on
|
||||
one cluster per region or zone for added reliability.
|
||||
|
||||
## Deploy clusters near your users
|
||||
|
||||
Include clusters in your deployment across the globe for **geographic
|
||||
proximity to end-users**. Proximity helps your deployment have low latency.
|
||||
|
||||
## Deploy across multiple availability zones
|
||||
|
||||
Include clusters in your deployment **across multiple availability regions
|
||||
and zones** within each geographic region. This approach limits the size of the
|
||||
{{< gloss "failure domain" >}}failure domains{{< /gloss >}} of your deployment,
|
||||
and helps you avoid global failures.
|
||||
|
|
@@ -1,13 +1,22 @@
|
|||
---
|
||||
title: Pod 和 Service
|
||||
description: 在启用了 Istio 的集群中运行 Kubernetes 的 Pod 和 Service,您需要做些准备。
|
||||
weight: 5
|
||||
weight: 3
|
||||
aliases:
|
||||
- /zh/docs/setup/kubernetes/spec-requirements/
|
||||
- /zh/docs/setup/kubernetes/prepare/spec-requirements/
|
||||
- /zh/docs/setup/kubernetes/prepare/requirements/
|
||||
- /zh/docs/setup/kubernetes/additional-setup/requirements/
|
||||
keywords: [kubernetes,sidecar,sidecar-injection]
|
||||
- /zh/docs/setup/kubernetes/spec-requirements/
|
||||
- /zh/docs/setup/kubernetes/prepare/spec-requirements/
|
||||
- /zh/docs/setup/kubernetes/prepare/requirements/
|
||||
- /zh/docs/setup/kubernetes/additional-setup/requirements/
|
||||
- /zh/docs/setup/additional-setup/requirements
|
||||
- /zh/docs/ops/setup/required-pod-capabilities
|
||||
- /help/ops/setup/required-pod-capabilities
|
||||
keywords:
|
||||
- kubernetes
|
||||
- sidecar
|
||||
- sidecar-injection
|
||||
- deployment-models
|
||||
- pods
|
||||
- setup
|
||||
---
|
||||
|
||||
作为 Istio 服务网格中的一部分,Kubernetes 集群中的 Pod 和 Service 必须满足以下要求:
|
||||
|
|
@@ -27,7 +36,7 @@ keywords: [kubernetes,sidecar,sidecar-injection]
|
|||
- **应用 UID**: 确保你的 Pod 不会以用户 ID(UID)为 1337 的用户运行应用。
|
||||
|
||||
- **`NET_ADMIN` 功能**: 如果你的集群执行 Pod 安全策略,必须给 Pod 配置 `NET_ADMIN` 功能。如果你使用 [Istio CNI 插件](/zh/docs/setup/additional-setup/cni/)
|
||||
可以不配置。要了解更多 `NET_ADMIN` 功能的知识,请查看[需要的 Pod Capabilities](/zh/docs/ops/setup/required-pod-capabilities/)。
|
||||
可以不配置。要了解更多 `NET_ADMIN` 功能的知识,请查看[需要的 Pod Capabilities](#required-pod-capabilities)。
|
||||
|
||||
## Istio 使用的端口{#ports-used-by-Istio}
|
||||
|
||||
|
|
@@ -56,3 +65,34 @@ Istio 使用了如下的端口和协议。请确保没有 TCP Headless Service
|
|||
| 15443 | TLS | Ingress and Egress Gateways | SNI |
|
||||
| 15090 | HTTP | Mixer | Proxy |
|
||||
| 42422 | TCP | Mixer | 遥测 - Prometheus |
|
||||
|
||||
## Required pod capabilities{#required-pod-capabilities}
|
||||
|
||||
If [pod security policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
|
||||
are [enforced](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies)
|
||||
in your cluster and unless you use the Istio CNI Plugin, your pods must have the
|
||||
`NET_ADMIN` capability allowed. The initialization containers of the Envoy
|
||||
proxies require this capability.
|
||||
|
||||
To check whether the `NET_ADMIN` capability is allowed for your pods, verify whether their
|
||||
[service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
|
||||
can use a pod security policy that allows the `NET_ADMIN` capability.
|
||||
If you haven't specified a service account in your pods' deployment, the pods run using
|
||||
the `default` service account in their deployment's namespace.
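For example (the pod name is a placeholder), you can print the service account a running pod actually uses with:

{{< text bash >}}
$ kubectl get pod <your-pod> -o jsonpath='{.spec.serviceAccountName}'
{{< /text >}}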
|
||||
|
||||
To list the capabilities for a service account, replace `<your namespace>` and `<your service account>`
|
||||
with your values in the following command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:<your namespace>:<your service account>) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
|
||||
{{< /text >}}
|
||||
|
||||
For example, to check for the `default` service account in the `default` namespace, run the following command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:default:default) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
|
||||
{{< /text >}}
|
||||
|
||||
If you see `NET_ADMIN` or `*` in the list of capabilities of one of the allowed
|
||||
policies for your service account, your pods have permission to run the Istio init containers.
|
||||
Otherwise, you will need to [provide the permission](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies).
|
||||
|
|
@@ -21,13 +21,13 @@ keywords: [security,health-check]
|
|||
|
||||
注意,无论是否启用了双向 TLS 认证,命令方式和 TCP 请求方式都可以与 Istio 一起使用。而 HTTP 请求方式在 Istio 启用双向 TLS 时需要不同的配置。
|
||||
|
||||
## 在学习本节之前
|
||||
## 在学习本节之前{#before-you-begin}
|
||||
|
||||
* 理解 Kubernetes 的 [Liveness 和 Readiness 探针](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/),Istio 的 [认证策略](/docs/concepts/security/#authentication-policies) 和 [双向 TLS 认证](/docs/concepts/security/#mutual-tls-authentication) 概念。
|
||||
* 理解 Kubernetes 的 [Liveness 和 Readiness 探针](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/),Istio 的 [认证策略](/zh/docs/concepts/security/#authentication-policies) 和 [双向 TLS 认证](/zh/docs/concepts/security/#mutual-TLS-authentication) 概念。
|
||||
|
||||
* 有一个安装了 Istio 的 Kubernetes 集群,并且未开启全局双向 TLS 认证。
|
||||
|
||||
## Liveness 和 Readiness 探针之命令方式
|
||||
## Liveness 和 Readiness 探针之命令方式{#liveness-and-readiness-probes-with-command-option}
|
||||
|
||||
首先,您需要配置健康检查并开启双向 TLS 认证。
|
||||
|
||||
|
|
@@ -80,7 +80,7 @@ NAME READY STATUS RESTARTS AGE
|
|||
liveness-6857c8775f-zdv9r 2/2 Running 0 4m
|
||||
{{< /text >}}
|
||||
|
||||
## Liveness 和 Readiness 探针之 HTTP 请求方式
|
||||
## Liveness 和 Readiness 探针之 HTTP 请求方式{#liveness-and-readiness-probes-with-http-request-option}
|
||||
|
||||
本部分介绍,当双向 TLS 认证开启的时候,如何使用 HTTP 请求方式来做健康检查。
|
||||
|
||||
|
|
@@ -88,14 +88,14 @@ Kubernetes 的 HTTP 健康检查是由 Kubelet 来发送的, 但是 Istio 并
|
|||
|
||||
有两种方式来解决此问题:探针重写和端口分离。
|
||||
|
||||
### 探针重写
|
||||
### 探针重写{#probe-rewrite}
|
||||
|
||||
这种方式重写了应用程序的 `PodSpec` Readiness 和 Liveness 探针,以便将探针请求发送给
|
||||
[Pilot agent](/zh/docs/reference/commands/pilot-agent/)。Pilot agent 将请求重定向到应用程序,剥离 response body,只返回 response code。
|
||||
|
||||
有两种方式来让 Istio 重写 Liveness 探针。
|
||||
|
||||
#### 通过安装参数,全局启用
|
||||
#### 通过安装参数,全局启用{#enable-globally-via-install-option}
|
||||
|
||||
[安装 Istio](/zh/docs/setup/install/istioctl/) 的时候使用 `--set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=true`。
|
||||
|
||||
|
|
@@ -111,7 +111,7 @@ $ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e 's/"rew
|
|||
上面更改的配置 (通过安装参数或注入的 map )会影响到所有 Istio 应用程序部署。
|
||||
{{< /warning >}}
|
||||
|
||||
#### 对 pod 使用 annotation
|
||||
#### 对 pod 使用 annotation{#use-annotations-on-pod}
|
||||
|
||||
<!-- Add samples YAML or kubectl patch? -->
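下面是一个示意片段(仅展示 pod 模板的 `metadata` 中需要添加的 annotation,Deployment 的其余字段省略),用于只为单个工作负载启用探针重写:

{{< text yaml >}}
metadata:
  annotations:
    sidecar.istio.io/rewriteAppHTTPProbers: "true"
{{< /text >}}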
|
||||
|
||||
|
|
@@ -146,7 +146,7 @@ spec:
|
|||
|
||||
这种方式可以使得在每个部署的应用上逐个启用健康检查并重写探针,而无需重新安装 Istio。
|
||||
|
||||
#### 重新部署需要 Liveness 健康检查的应用程序
|
||||
#### 重新部署需要 Liveness 健康检查的应用程序{#re-deploy-the-liveness-health-check-app}
|
||||
|
||||
以下的说明假定您通过安装选项全局启用了该功能,Annotation 同样奏效。
|
||||
|
||||
|
|
@@ -164,7 +164,7 @@ liveness-http-975595bb6-5b2z7c 2/2 Running 0 1m
|
|||
默认情况下未启用此功能。 我们希望 [收到您的反馈](https://github.com/istio/istio/issues/10357),
|
||||
是否应将其更改为 Istio 安装过程中的默认行为。
|
||||
|
||||
### 端口分离
|
||||
### 端口分离{#separate-port}
|
||||
|
||||
另一种方式是使用单独的端口来进行运行状态检查和常规流量检查。
|
||||
|
||||
|
|
@@ -185,7 +185,7 @@ liveness-http-67d5db65f5-765bb 2/2 Running 0 1m
|
|||
|
||||
请注意,[liveness-http]({{< github_file >}}/samples/health-check/liveness-http.yaml) 的镜像公开了两个端口:8001 和 8002 ([源码]({{< github_file >}}/samples/health-check/server.go))。在这个部署方式里面,端口 8001 用于常规流量,而端口 8002 给 Liveness 探针使用。
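配置示意如下(探针路径仅为假设,请以实际应用为准):将 Liveness 探针指向独立的 8002 端口,常规流量仍走 8001:

{{< text yaml >}}
livenessProbe:
  httpGet:
    path: /foo
    port: 8002
{{< /text >}}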
|
||||
|
||||
### 清除
|
||||
### 清除{#cleanup}
|
||||
|
||||
请按照如下操作删除上述步骤中添加的双向 TLS 策略和相应的目标规则:
|
||||
|
||||
|
|
|
|||
|
|
@@ -1,35 +0,0 @@
|
|||
---
|
||||
title: Required Pod Capabilities
|
||||
description: Describes how to check which capabilities are allowed for your pods.
|
||||
weight: 9
|
||||
aliases:
|
||||
- /zh/help/ops/setup/required-pod-capabilities
|
||||
---
|
||||
|
||||
If [pod security policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) are [enforced](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies) in your
|
||||
cluster and unless you use Istio CNI Plugin, your pods must have the `NET_ADMIN` capability allowed.
|
||||
The initialization containers of the Envoy proxies require this capability. To check which capabilities are allowed for
|
||||
your pods, check if their
|
||||
[service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) can use a
|
||||
pod security policy that allows the `NET_ADMIN` capability.
|
||||
|
||||
If you don't specify a service account in your pods' deployment, the pods run as the `default` service account in
|
||||
their deployment's namespace.
|
||||
|
||||
To check which capabilities are allowed for the service account of your pods, run the
|
||||
following command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:<your namespace>:<your service account>) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
|
||||
{{< /text >}}
|
||||
|
||||
For example, to check which capabilities are allowed for the `default` service account in the `default` namespace,
|
||||
run the following command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:default:default) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
|
||||
{{< /text >}}
|
||||
|
||||
If you see `NET_ADMIN` or `*` in the list of capabilities of one of the allowed policies for your service account,
|
||||
your pods have permission to run the Istio init containers. Otherwise, you must
|
||||
[provide such permission](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies).
|
||||
|
|
@@ -1,196 +0,0 @@
|
|||
---
|
||||
title: Standalone Operator Quick Start Evaluation Install [Experimental]
|
||||
description: Instructions to install Istio in a Kubernetes cluster for evaluation.
|
||||
weight: 11
|
||||
keywords: [kubernetes, operator]
|
||||
aliases:
|
||||
---
|
||||
|
||||
This guide installs Istio using the standalone Istio operator. The only dependencies
|
||||
required are a supported Kubernetes cluster and the `kubectl` command. This
|
||||
installation method lets you quickly evaluate Istio in a Kubernetes cluster on
|
||||
any platform using a variety of profiles.
|
||||
|
||||
To install Istio for production use, we recommend using the [Helm Installation guide](/docs/setup/install/helm/)
|
||||
instead, which is a stable feature.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. Perform any necessary [platform-specific setup](/docs/setup/platform-setup/).
|
||||
|
||||
1. Check the [Requirements for Pods and Services](/docs/ops/prep/requirements/).
|
||||
|
||||
## Installation steps
|
||||
|
||||
1. Install Istio using the operator with the demo profile:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f https://preliminary.istio.io/operator.yaml
|
||||
{{< /text >}}
|
||||
|
||||
{{< warning >}}
|
||||
This profile is only for demo usage and should not be used in production.
|
||||
{{< /warning >}}
|
||||
|
||||
1. (Optional) Switch from the demo profile to one of the following profiles:
|
||||
|
||||
{{< tabset cookie-name="profile" >}}
|
||||
|
||||
{{< tab name="demo" cookie-value="permissive" >}}
|
||||
When using the [permissive mutual TLS mode](/docs/concepts/security/#permissive-mode), all services accept both plaintext and
|
||||
mutual TLS traffic. Clients send plaintext traffic unless configured for
|
||||
[mutual TLS migration](/docs/tasks/security/authentication/mtls-migration/). This profile is installed during the first step.
|
||||
|
||||
Choose this profile for:
|
||||
|
||||
* Clusters with existing applications, or
|
||||
* Applications where services with an Istio sidecar need to be able to
|
||||
communicate with other non-Istio Kubernetes services
|
||||
|
||||
Run the following command to switch to this profile:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f https://preliminary.istio.io/operator-profile-demo.yaml
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="SDS" cookie-value="sds" >}}
|
||||
This profile enables
|
||||
[Secret Discovery Service](/docs/tasks/security/citadel-config/auth-sds) between all clients and servers.
|
||||
|
||||
Use this profile to improve the startup performance of services in the Kubernetes cluster. It also
|
||||
improves security, because Kubernetes secrets that carry known
|
||||
[risks](https://kubernetes.io/docs/concepts/configuration/secret/#risks) are not used.
|
||||
|
||||
Run the following command to switch to this profile:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f https://preliminary.istio.io/operator-profile-sds.yaml
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="default" cookie-value="default" >}}
|
||||
This profile enables Istio's default settings, which are recommended for
|
||||
production. Run the following command to switch to this profile:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f https://preliminary.istio.io/operator-profile-default.yaml
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< tab name="minimal" cookie-value="minimal" >}}
|
||||
This profile deploys the minimal set of components required for Istio to function.
|
||||
|
||||
Run the following command to switch to this profile:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl apply -f https://preliminary.istio.io/operator-profile-minimal.yaml
|
||||
{{< /text >}}
|
||||
|
||||
{{< /tab >}}
|
||||
|
||||
{{< /tabset >}}
|
||||
|
||||
## Verifying the installation
|
||||
|
||||
{{< warning >}}
|
||||
This document is a work in progress. The verification steps for each of the profiles may
|
||||
differ from the steps shown here. These inconsistencies will be resolved prior to the publishing of
|
||||
Istio 1.4. Until that time, these verification steps only apply to the `profile-istio-demo.yaml` profile.
|
||||
{{< /warning >}}
|
||||
|
||||
1. Ensure the following Kubernetes services are deployed and verify they all have an appropriate `CLUSTER-IP` except the `jaeger-agent` service:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get svc -n istio-system
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
grafana ClusterIP 172.21.211.123 <none> 3000/TCP 2m
|
||||
istio-citadel ClusterIP 172.21.177.222 <none> 8060/TCP,15014/TCP 2m
|
||||
istio-egressgateway ClusterIP 172.21.113.24 <none> 80/TCP,443/TCP,15443/TCP 2m
|
||||
istio-galley ClusterIP 172.21.132.247 <none> 443/TCP,15014/TCP,9901/TCP 2m
|
||||
istio-ingressgateway LoadBalancer 172.21.144.254 52.116.22.242 15020:31831/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30318/TCP,15030:32645/TCP,15031:31933/TCP,15032:31188/TCP,15443:30838/TCP 2m
|
||||
istio-pilot ClusterIP 172.21.105.205 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 2m
|
||||
istio-policy ClusterIP 172.21.14.236 <none> 9091/TCP,15004/TCP,15014/TCP 2m
|
||||
istio-sidecar-injector ClusterIP 172.21.155.47 <none> 443/TCP,15014/TCP 2m
|
||||
istio-telemetry ClusterIP 172.21.196.79 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 2m
|
||||
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 2m
|
||||
jaeger-collector ClusterIP 172.21.135.51 <none> 14267/TCP,14268/TCP 2m
|
||||
jaeger-query ClusterIP 172.21.26.187 <none> 16686/TCP 2m
|
||||
kiali ClusterIP 172.21.155.201 <none> 20001/TCP 2m
|
||||
prometheus ClusterIP 172.21.63.159 <none> 9090/TCP 2m
|
||||
tracing ClusterIP 172.21.2.245 <none> 80/TCP 2m
|
||||
zipkin ClusterIP 172.21.182.245 <none> 9411/TCP 2m
|
||||
{{< /text >}}
|
||||
|
||||
{{< tip >}}
|
||||
If your cluster is running in an environment that does not
|
||||
support an external load balancer (e.g., minikube), the
|
||||
`EXTERNAL-IP` of `istio-ingressgateway` will say
|
||||
`<pending>`. To access the gateway, use the service's
|
||||
`NodePort`, or use port-forwarding instead.
|
||||
{{< /tip >}}
|
||||
|
||||
1. Ensure corresponding Kubernetes pods are deployed and have a `STATUS` of `Running`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl get pods -n istio-system
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
grafana-f8467cc6-rbjlg 1/1 Running 0 1m
|
||||
istio-citadel-78df5b548f-g5cpw 1/1 Running 0 1m
|
||||
istio-cleanup-secrets-release-1.1-20190308-09-16-8s2mp 0/1 Completed 0 2m
|
||||
istio-egressgateway-78569df5c4-zwtb5 1/1 Running 0 1m
|
||||
istio-galley-74d5f764fc-q7nrk 1/1 Running 0 1m
|
||||
istio-grafana-post-install-release-1.1-20190308-09-16-2p7m5 0/1 Completed 0 2m
|
||||
istio-ingressgateway-7ddcfd665c-dmtqz 1/1 Running 0 1m
|
||||
istio-pilot-f479bbf5c-qwr28 2/2 Running 0 1m
|
||||
istio-policy-6fccc5c868-xhblv 2/2 Running 2 1m
|
||||
istio-security-post-install-release-1.1-20190308-09-16-bmfs4 0/1 Completed 0 2m
|
||||
istio-sidecar-injector-78499d85b8-x44m6 1/1 Running 0 1m
|
||||
istio-telemetry-78b96c6cb6-ldm9q 2/2 Running 2 1m
|
||||
istio-tracing-69b5f778b7-s2zvw 1/1 Running 0 1m
|
||||
kiali-99f7467dc-6rvwp 1/1 Running 0 1m
|
||||
prometheus-67cdb66cbb-9w2hm 1/1 Running 0 1m
|
||||
{{< /text >}}
|
||||
|
||||
## Deploy your application
|
||||
|
||||
You can now deploy your own application or one of the sample applications
|
||||
provided with the installation like [Bookinfo](/docs/examples/bookinfo/).
|
||||
|
||||
{{< warning >}}
|
||||
The application must use either the HTTP/1.1 or HTTP/2.0 protocols for all its HTTP
|
||||
traffic; HTTP/1.0 is not supported.
|
||||
{{< /warning >}}
|
||||
|
||||
When you deploy your application using `kubectl apply`,
|
||||
the [Istio sidecar injector](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection)
|
||||
will automatically inject Envoy containers into your
|
||||
application pods if they are started in namespaces labeled with `istio-injection=enabled`:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl label namespace <namespace> istio-injection=enabled
|
||||
$ kubectl create -n <namespace> -f <your-app-spec>.yaml
|
||||
{{< /text >}}
|
||||
|
||||
In namespaces without the `istio-injection` label, you can use
|
||||
[`istioctl kube-inject`](/docs/reference/commands/istioctl/#istioctl-kube-inject)
|
||||
to manually inject Envoy containers in your application pods before deploying
|
||||
them:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl kube-inject -f <your-app-spec>.yaml | kubectl apply -f -
|
||||
{{< /text >}}
|
||||
|
||||
## Uninstall
|
||||
|
||||
Delete the Istio Operator and Istio deployment:
|
||||
|
||||
{{< text bash >}}
|
||||
$ kubectl -n istio-operator get IstioControlPlane example-istiocontrolplane -o=json | jq '.metadata.finalizers = null' | kubectl delete -f -
|
||||
$ kubectl delete ns istio-operator --grace-period=0 --force
|
||||
$ kubectl delete ns istio-system --grace-period=0 --force
|
||||
{{< /text >}}
|
||||
|
||||
|
|
@@ -27,7 +27,7 @@ detailed documentation of the mutating and validating webhook configuration.
|
|||
|
||||
## Verify dynamic admission webhook prerequisites
|
||||
|
||||
See the [platform setup instructions](/docs/setup/platform-setup/)
|
||||
See the [platform setup instructions](/zh/docs/setup/platform-setup/)
|
||||
for Kubernetes provider-specific setup instructions. Webhooks will not
|
||||
function properly if the cluster is misconfigured. You can follow
|
||||
these steps once the cluster has been configured and dynamic
|
||||
|
|
@@ -51,7 +51,7 @@ webhooks and dependent features are not functioning properly.
|
|||
|
||||
1. Verify `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` plugins are
|
||||
listed in the `kube-apiserver --enable-admission-plugins` flag. Access
|
||||
to this flag is [provider specific](/docs/setup/platform-setup/).
|
||||
to this flag is [provider specific](/zh/docs/setup/platform-setup/).
|
||||
|
||||
1. Verify the Kubernetes api-server has network connectivity to the
|
||||
webhook pod. For example, incorrect `http_proxy` settings can interfere
|
||||
|
|
|
|||
|
|
@@ -220,7 +220,7 @@ spec:
|
|||
The downside of this kind of configuration is that other configuration (e.g., route rules) for any of the
|
||||
underlying microservices will also need to be included in this single configuration file, instead of
|
||||
in separate resources associated with, and potentially owned by, the individual service teams.
|
||||
See [Route rules have no effect on ingress gateway requests](/docs/ops/common-problems/network-issues/#route-rules-have-no-effect-on-ingress-gateway-requests)
|
||||
See [Route rules have no effect on ingress gateway requests](/zh/docs/ops/common-problems/network-issues/#route-rules-have-no-effect-on-ingress-gateway-requests)
|
||||
for details.
|
||||
|
||||
To avoid this problem, it may be preferable to break up the configuration of `myapp.com` into several
|
||||
|
|
|
|||
|
|
@@ -19,7 +19,7 @@ other content.
|
|||
When attempting to understand, monitor or troubleshoot the networking within
|
||||
an Istio deployment, it is critical to understand the fundamental Istio
|
||||
concepts starting with the service mesh. The service mesh is described
|
||||
in [Architecture](/docs/ops/architecture/). As noted
|
||||
in [Architecture](/zh/docs/ops/architecture/). As noted
|
||||
in the architecture section, Istio has a distinct control plane and a data
|
||||
plane, and operationally it is important to be able to monitor the
|
||||
network state of both. The service mesh is a fully interconnected set of
|
||||
|
|
@@ -27,19 +27,19 @@ proxies that are utilized in both the control and data plane to provide
|
|||
the Istio features.
|
||||
|
||||
Another key concept to understand is how Istio performs traffic management.
|
||||
This is described in [Traffic Management Explained](/docs/concepts/traffic-management).
|
||||
This is described in [Traffic Management Explained](/zh/docs/concepts/traffic-management).
|
||||
Traffic management allows fine-grained control over what external
|
||||
traffic can enter or exit the mesh and how those requests are routed. The
|
||||
traffic management configuration also dictates how requests between
|
||||
microservices within the mesh are handled. Full details on how to
|
||||
configure traffic management are available
|
||||
here: [Traffic Management Configuration](/docs/tasks/traffic-management).
|
||||
here: [Traffic Management Configuration](/zh/docs/tasks/traffic-management).
|
||||
|
||||
The final concept that is essential for the operator to understand is how
|
||||
Istio uses gateways to allow traffic into the mesh or control how requests originating
|
||||
in the mesh access external services. This is described with a
|
||||
configuration example here:
|
||||
[Istio Gateways](/docs/concepts/traffic-management/#gateways)
|
||||
[Istio Gateways](/zh/docs/concepts/traffic-management/#gateways)
|
||||
|
||||
## Network Layers Beneath the Mesh
|
||||
|
||||
|
|
|
|||
|
|
@@ -65,7 +65,7 @@ _地域优先负载均衡_ 是 _地域负载均衡_ 的默认行为。
|
|||
有时,当同一 region 中没有足够正常的 endpoints 时,您需要限制流量故障转移来避免跨全局的流量转发。
|
||||
当跨 region 的发送故障转移流量而不能改善服务运行状况或其他诸如监管政策等原因时,该行为是很有用的。
|
||||
为了将流量限制到某一个 region,请在安装时配置 `values.localityLbSetting` 选项。
|
||||
参考[地域负载均衡参考指南](/docs/reference/config/networking/destination-rule#LocalityLoadBalancerSetting)来获取更多选项。
|
||||
参考[地域负载均衡参考指南](/zh/docs/reference/config/networking/destination-rule#LocalityLoadBalancerSetting)来获取更多选项。
|
||||
|
||||
配置示例:
|
||||
|
||||
|
|
|
|||
|
|
@@ -1,5 +0,0 @@
|
|||
---
|
||||
title: 授权
|
||||
description: 关于如何配置 Istio 授权特性的描述。
|
||||
weight: 30
|
||||
---
|
||||
|
|
@@ -1,48 +0,0 @@
|
|||
---
|
||||
title: 约束和属性
|
||||
description: 描述所支持的约束和属性。
|
||||
weight: 10
|
||||
---
|
||||
|
||||
本节包含所支持的键和值格式,您可以用作服务角色和服务角色绑定配置对象中的约束和属性。约束和属性是可以在配置对象中作为字段添加的额外条件,通过 `ServiceRole` 和 `ServiceRoleBinding` 中的 `kind:` 值来指定详细的访问控制要求。
|
||||
|
||||
具体来说,您可以使用约束在服务角色的访问规则字段中指定额外条件。您可以使用属性在服务角色绑定的主题字段中指定额外条件。对于 HTTP 协议,Istio 支持此页面列出的所有键,但对于普通 TCP 协议仅支持其中一部分。
|
||||
|
||||
{{< warning >}}
|
||||
不支持的键和值将被默默忽略。
|
||||
{{< /warning >}}
|
||||
|
||||
了解更多信息,请参阅[授权概念页面](/zh/docs/concepts/security/#authorization).
|
||||
|
||||
## 支持的约束{#supported-constraints}
|
||||
|
||||
下表列出了 `constraints` 字段当前所支持的键:
|
||||
|
||||
| 名称 | 描述 | 是否支持 TCP 服务 | 键示例 | 值示例 |
|
||||
|------|-------------|----------------------------|-------------|----------------|
|
||||
| `destination.ip` | 目标工作负载实例 IP 地址,支持单个 IP 或 CIDR | YES | `destination.ip` | `["10.1.2.3", "10.2.0.0/16"]` |
|
||||
| `destination.port` | 服务器 IP 地址的接收端口,必须在[0, 65535]范围内 | YES | `destination.port` | `["80", "443"]` |
|
||||
| `destination.labels` | 附属于服务器实例的键值对映射 | YES | `destination.labels[version]` | `["v1", "v2"]` |
|
||||
| `destination.namespace` | 目标工作负载实例的命名空间 | YES | `destination.namespace` | `["default"]` |
|
||||
| `destination.user` | 目标工作负载的身份 | YES | `destination.user` | `["bookinfo-productpage"]` |
|
||||
| `experimental.envoy.filters.*` | 用于过滤器的实验性元数据匹配,包含在`[]`中的值作为列表被匹配 | YES | `experimental.envoy.filters.network.mysql_proxy[db.table]` | `["[update]"]` |
|
||||
| `request.headers` | HTTP 请求头,实际的头名称包含在括号中 | NO | `request.headers[X-Custom-Token]` | `["abc123"]` |
|
||||
|
||||
{{< warning >}}
|
||||
请注意,对于 `experimental.*` 的键不能保证向后兼容。它们可能随时被移除,建议用户自行承担使用它们的风险。
|
||||
{{< /warning >}}
|
||||
|
||||
## 支持的属性{#supported-properties}
|
||||
|
||||
下表列出了 `properties` 字段当前所支持的键:
|
||||
|
||||
| 名称 | 描述 | 是否支持 TCP 服务 | 键示例 | 值示例 |
|
||||
|------|-------------|----------------------------|-------------|---------------|
|
||||
| `source.ip` | 源工作负载实例 IP 地址,支持单个 IP 或 CIDR | YES | `source.ip` | `"10.1.2.3"` |
|
||||
| `source.namespace` | 源工作负载实例的命名空间 | YES | `source.namespace` | `"default"` |
|
||||
| `source.principal` | 源工作负载的身份 | YES | `source.principal` | `"cluster.local/ns/default/sa/productpage"` |
|
||||
| `request.headers` | HTTP 请求头,实际的头名称包含在括号中 | NO | `request.headers[User-Agent]` | `"Mozilla/*"` |
|
||||
| `request.auth.principal` | 请求的认证主体 | NO | `request.auth.principal` | `"accounts.my-svc.com/104958560606"` |
|
||||
| `request.auth.audiences` | 此认证信息的目标受众 | NO | `request.auth.audiences` | `"my-svc.com"` |
|
||||
| `request.auth.presenter` | 证书的合法授权人 | NO | `request.auth.presenter` | `"123456789012.my-svc.com"` |
|
||||
| `request.auth.claims` | 原始 JWT 断言。实际的断言名称包含在括号中 | NO | `request.auth.claims[iss]` | `"*@foo.com"` |
|
||||
|
|
@@ -1,496 +0,0 @@
|
|||
---
|
||||
WARNING: THIS IS AN AUTO-GENERATED FILE, DO NOT EDIT. PLEASE MODIFY THE ORIGINAL SOURCE IN THE 'https://github.com/istio/api' REPO
|
||||
source_repo: https://github.com/istio/api
|
||||
title: RBAC
|
||||
description: Configuration for Role Based Access Control.
|
||||
location: https://istio.io/docs/reference/config/authorization/istio.rbac.v1alpha1.html
|
||||
layout: protoc-gen-docs
|
||||
generator: protoc-gen-docs
|
||||
number_of_entries: 9
|
||||
---
|
||||
<p>Istio RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding
|
||||
objects.</p>
|
||||
|
||||
<p>A ServiceRole specification includes a list of rules (permissions). Each rule has
|
||||
the following standard fields:</p>
|
||||
|
||||
<ul>
|
||||
<li>services: a list of services.</li>
|
||||
<li>methods: A list of HTTP methods. You can set the value to <code>\*</code> to include all HTTP methods.
|
||||
This field should not be set for TCP services. The policy will be ignored.
|
||||
For gRPC services, only <code>POST</code> is allowed; requests using any other method will be denied.</li>
|
||||
<li>paths: HTTP paths or gRPC methods. Note that gRPC methods should be
|
||||
presented in the form of “/packageName.serviceName/methodName” and are case sensitive.</li>
|
||||
</ul>
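Combining the fields above, a ServiceRole granting access to a single gRPC method could be sketched as follows (this is illustrative; the role, service, and method names are hypothetical):

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: ratings-writer            # hypothetical role name
  namespace: default
spec:
  rules:
  - services: ["ratings.default.svc.cluster.local"]   # hypothetical gRPC service
    methods: ["POST"]                                 # only POST is valid for gRPC
    paths: ["/ratings.RatingsService/UpdateRating"]   # fully-qualified, case-sensitive
```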
|
||||
|
||||
<p>In addition to the standard fields, operators can also use custom keys in the <code>constraints</code> field;
the supported keys are listed in the “constraints and properties” page.</p>
|
||||
|
||||
<p>Below is an example of ServiceRole object “product-viewer”, which has “read” (“GET” and “HEAD”)
|
||||
access to “products.svc.cluster.local” service at versions “v1” and “v2”. “path” is not specified,
|
||||
so it applies to any path in the service.</p>
|
||||
|
||||
<pre><code class="language-yaml">apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRole
|
||||
metadata:
|
||||
name: products-viewer
|
||||
namespace: default
|
||||
spec:
|
||||
rules:
|
||||
- services: ["products.svc.cluster.local"]
|
||||
methods: ["GET", "HEAD"]
|
||||
constraints:
|
||||
- key: "destination.labels[version]"
|
||||
values: ["v1", "v2"]
|
||||
</code></pre>
|
||||
|
||||
<p>A ServiceRoleBinding specification includes two parts:</p>
|
||||
|
||||
<ul>
|
||||
<li>The <code>roleRef</code> field that refers to a ServiceRole object in the same namespace.</li>
|
||||
<li>A list of <code>subjects</code> that are assigned the roles.</li>
|
||||
</ul>
|
||||
|
||||
<p>In addition to a simple <code>user</code> field, operators can also use custom keys in the <code>properties</code> field;
the supported keys are listed in the “constraints and properties” page.</p>
|
||||
|
||||
<p>Below is an example of ServiceRoleBinding object “test-binding-products”, which binds two subjects
|
||||
to ServiceRole “product-viewer”:</p>
|
||||
|
||||
<ul>
|
||||
<li>User “alice@yahoo.com”</li>
|
||||
<li>Services in “abc” namespace.</li>
|
||||
</ul>
|
||||
|
||||
<pre><code class="language-yaml">apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ServiceRoleBinding
|
||||
metadata:
|
||||
name: test-binding-products
|
||||
namespace: default
|
||||
spec:
|
||||
subjects:
|
||||
- user: alice@yahoo.com
|
||||
- properties:
|
||||
source.namespace: "abc"
|
||||
roleRef:
|
||||
kind: ServiceRole
|
||||
name: "products-viewer"
|
||||
</code></pre>
|
||||
|
||||
<h2 id="AccessRule">AccessRule</h2>
|
||||
<section>
|
||||
<p>AccessRule defines a permission to access a list of services.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="AccessRule-services">
|
||||
<td><code>services</code></td>
|
||||
<td><code>string[]</code></td>
|
||||
<td>
|
||||
<p>A list of service names.
|
||||
Exact match, prefix match, and suffix match are supported for service names.
|
||||
For example, the service name “bookstore.mtv.cluster.local” matches
|
||||
“bookstore.mtv.cluster.local” (exact match), or “bookstore*” (prefix match),
|
||||
or “*.mtv.cluster.local” (suffix match).
|
||||
If set to [“*”], it refers to all services in the namespace.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="AccessRule-paths">
|
||||
<td><code>paths</code></td>
|
||||
<td><code>string[]</code></td>
|
||||
<td>
|
||||
<p>Optional. A list of HTTP paths or gRPC methods.
|
||||
gRPC methods must be presented as fully-qualified name in the form of
|
||||
“/packageName.serviceName/methodName” and are case sensitive.
|
||||
Exact match, prefix match, and suffix match are supported. For example,
|
||||
the path “/books/review” matches “/books/review” (exact match),
|
||||
or “/books/*” (prefix match), or “*/review” (suffix match).
|
||||
If not specified, it matches to any path.
|
||||
This field should not be set for TCP services. The policy will be ignored.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="AccessRule-methods">
|
||||
<td><code>methods</code></td>
|
||||
<td><code>string[]</code></td>
|
||||
<td>
|
||||
<p>Optional. A list of HTTP methods (e.g., “GET”, “POST”).
|
||||
If not specified or specified as “*”, it matches to any methods.
|
||||
This field should not be set for TCP services. The policy will be ignored.
|
||||
For gRPC services, only <code>POST</code> is allowed; other methods will result in denying services.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="AccessRule-constraints">
|
||||
<td><code>constraints</code></td>
|
||||
<td><code><a href="#AccessRule-Constraint">Constraint[]</a></code></td>
|
||||
<td>
|
||||
<p>Optional. Extra constraints in the ServiceRole specification.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="AccessRule-Constraint">AccessRule.Constraint</h2>
|
||||
<section>
|
||||
<p>Definition of a custom constraint. The supported keys are listed in the “constraints and properties” page.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="AccessRule-Constraint-key">
|
||||
<td><code>key</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>Key of the constraint.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="AccessRule-Constraint-values">
|
||||
<td><code>values</code></td>
|
||||
<td><code>string[]</code></td>
|
||||
<td>
|
||||
<p>List of valid values for the constraint.
|
||||
Exact match, prefix match, and suffix match are supported.
|
||||
For example, the value “v1alpha2” matches “v1alpha2” (exact match),
|
||||
or “v1*” (prefix match), or “*alpha2” (suffix match).</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="RbacConfig">RbacConfig</h2>
|
||||
<section>
|
||||
<p>RbacConfig implements the ClusterRbacConfig Custom Resource Definition for controlling Istio RBAC behavior.
|
||||
The ClusterRbacConfig Custom Resource is a singleton where only one ClusterRbacConfig should be created
|
||||
globally in the mesh, and its namespace should be the same as that of the other Istio components, which is usually <code>istio-system</code>.</p>
|
||||
|
||||
<p>Below is an example of a <code>ClusterRbacConfig</code> resource named <code>default</code>, which enables Istio RBAC for all
|
||||
services in the default namespace.</p>
|
||||
|
||||
<pre><code class="language-yaml">apiVersion: "rbac.istio.io/v1alpha1"
|
||||
kind: ClusterRbacConfig
|
||||
metadata:
|
||||
name: default
|
||||
namespace: istio-system
|
||||
spec:
|
||||
mode: ON_WITH_INCLUSION
|
||||
inclusion:
|
||||
namespaces: [ "default" ]
|
||||
</code></pre>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="RbacConfig-mode">
|
||||
<td><code>mode</code></td>
|
||||
<td><code><a href="#RbacConfig-Mode">Mode</a></code></td>
|
||||
<td>
|
||||
<p>Istio RBAC mode.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RbacConfig-inclusion">
|
||||
<td><code>inclusion</code></td>
|
||||
<td><code><a href="#RbacConfig-Target">Target</a></code></td>
|
||||
<td>
|
||||
<p>A list of services or namespaces that should be enforced by Istio RBAC policies. Note: This field has
effect only when the mode is <code>ON_WITH_INCLUSION</code> and is ignored for any other mode.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RbacConfig-exclusion">
|
||||
<td><code>exclusion</code></td>
|
||||
<td><code><a href="#RbacConfig-Target">Target</a></code></td>
|
||||
<td>
|
||||
<p>A list of services or namespaces that should not be enforced by Istio RBAC policies. Note: This field has
effect only when the mode is <code>ON_WITH_EXCLUSION</code> and is ignored for any other mode.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="RbacConfig-Mode">RbacConfig.Mode</h2>
|
||||
<section>
|
||||
<table class="enum-values">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Name</th>
|
||||
<th>Description</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="RbacConfig-Mode-OFF">
|
||||
<td><code>OFF</code></td>
|
||||
<td>
|
||||
<p>Disable Istio RBAC completely, Istio RBAC policies will not be enforced.</p>
|
||||
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RbacConfig-Mode-ON">
|
||||
<td><code>ON</code></td>
|
||||
<td>
|
||||
<p>Enable Istio RBAC for all services and namespaces. Note that Istio RBAC is deny-by-default,
which means a request will be denied unless it is explicitly allowed by an RBAC rule.</p>
|
||||
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RbacConfig-Mode-ON_WITH_INCLUSION">
|
||||
<td><code>ON_WITH_INCLUSION</code></td>
|
||||
<td>
|
||||
<p>Enable Istio RBAC only for services and namespaces specified in the inclusion field. Any other
|
||||
services and namespaces not in the inclusion field will not be enforced by Istio RBAC policies.</p>
|
||||
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RbacConfig-Mode-ON_WITH_EXCLUSION">
|
||||
<td><code>ON_WITH_EXCLUSION</code></td>
|
||||
<td>
|
||||
<p>Enable Istio RBAC for all services and namespaces except those specified in the exclusion field. Any other
|
||||
services and namespaces not in the exclusion field will be enforced by Istio RBAC policies.</p>
|
||||
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
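For symmetry with the inclusion example earlier, a <code>ClusterRbacConfig</code> using <code>ON_WITH_EXCLUSION</code> could be sketched as follows; the excluded namespace name here is an assumption for illustration:

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default
  namespace: istio-system
spec:
  mode: ON_WITH_EXCLUSION
  exclusion:
    namespaces: [ "legacy" ]   # hypothetical namespace exempt from RBAC
```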
|
||||
<h2 id="RbacConfig-Target">RbacConfig.Target</h2>
|
||||
<section>
|
||||
<p>Target defines a list of services or namespaces.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="RbacConfig-Target-services">
|
||||
<td><code>services</code></td>
|
||||
<td><code>string[]</code></td>
|
||||
<td>
|
||||
<p>A list of services.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RbacConfig-Target-namespaces">
|
||||
<td><code>namespaces</code></td>
|
||||
<td><code>string[]</code></td>
|
||||
<td>
|
||||
<p>A list of namespaces.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="RoleRef">RoleRef</h2>
|
||||
<section>
|
||||
<p>RoleRef refers to a role object.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="RoleRef-kind">
|
||||
<td><code>kind</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>The type of the role being referenced.
|
||||
Currently, “ServiceRole” is the only supported value for “kind”.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="RoleRef-name">
|
||||
<td><code>name</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>The name of the ServiceRole object being referenced.
|
||||
The ServiceRole object must be in the same namespace as the ServiceRoleBinding object.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="ServiceRole">ServiceRole</h2>
|
||||
<section>
|
||||
<p>ServiceRole specification contains a list of access rules (permissions).</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="ServiceRole-rules">
|
||||
<td><code>rules</code></td>
|
||||
<td><code><a href="#AccessRule">AccessRule[]</a></code></td>
|
||||
<td>
|
||||
<p>The set of access rules (permissions) that the role has.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="ServiceRoleBinding">ServiceRoleBinding</h2>
|
||||
<section>
|
||||
<p>ServiceRoleBinding assigns a ServiceRole to a list of subjects.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="ServiceRoleBinding-subjects">
|
||||
<td><code>subjects</code></td>
|
||||
<td><code><a href="#Subject">Subject[]</a></code></td>
|
||||
<td>
|
||||
<p>List of subjects that are assigned the ServiceRole object.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="ServiceRoleBinding-roleRef">
|
||||
<td><code>roleRef</code></td>
|
||||
<td><code><a href="#RoleRef">RoleRef</a></code></td>
|
||||
<td>
|
||||
<p>Reference to the ServiceRole object.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="Subject">Subject</h2>
|
||||
<section>
|
||||
<p>Subject defines an identity. The identity is either a user or identified by a set of <code>properties</code>.
|
||||
The supported keys in <code>properties</code> are listed in the “constraints and properties” page.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="Subject-user">
|
||||
<td><code>user</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>Optional. The user name/ID that the subject represents.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="Subject-properties">
|
||||
<td><code>properties</code></td>
|
||||
<td><code>map<string, string></code></td>
|
||||
<td>
|
||||
<p>Optional. The set of properties that identify the subject.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
|
|
@ -1,17 +1,33 @@
|
|||
---
|
||||
title: Installation Options
|
||||
description: Describes the options available when installing Istio using the included Helm chart.
|
||||
weight: 30
|
||||
title: Installation Options (Helm)
|
||||
description: Describes the options available when installing Istio using Helm charts.
|
||||
weight: 15
|
||||
keywords: [kubernetes,helm]
|
||||
force_inline_toc: true
|
||||
---
|
||||
|
||||
{{< tip >}}
|
||||
Refer to [Installation Options Changes](/news/releases/1.3.x/announcing-1.3/helm-changes/)
|
||||
for a detailed summary of the option changes between release 1.2 and release 1.3.
|
||||
{{< /tip >}}
|
||||
{{< warning >}}
|
||||
Installing Istio with Helm is in the process of being deprecated. However, you can use these Helm
|
||||
configuration options when [installing Istio with {{< istioctl >}}](/zh/docs/setup/install/istioctl/)
|
||||
by prepending the string "`values.`" to the option name. For example, instead of this `helm` command:
|
||||
|
||||
To customize Istio install using Helm, use the `--set <key>=<value>` option in Helm command to override one or more values. The set of supported keys is shown in the table below.
|
||||
{{< text bash >}}
|
||||
$ helm template ... --set global.mtls.enabled=true
|
||||
{{< /text >}}
|
||||
|
||||
You can use this `istioctl` command:
|
||||
|
||||
{{< text bash >}}
|
||||
$ istioctl manifest generate ... --set values.global.mtls.enabled=true
|
||||
{{< /text >}}
|
||||
|
||||
Refer to [customizing the configuration](/zh/docs/setup/install/istioctl/#customizing-the-configuration) for details.
|
||||
{{< /warning >}}
|
||||
|
||||
{{< warning >}}
|
||||
This document is unfortunately out of date with the latest changes in the set of supported options.
|
||||
To get the exact set of supported options, please see the [Helm charts]({{< github_tree >}}/install/kubernetes/helm/istio).
|
||||
{{< /warning >}}
|
||||
|
||||
<!-- Run python scripts/tablegen.py to generate this table -->
|
||||
|
||||
|
|
|
|||
|
|
@ -136,7 +136,7 @@ No
|
|||
</tr>
|
||||
<tr id="ConfigSource-tls_settings">
|
||||
<td><code>tlsSettings</code></td>
|
||||
<td><code><a href="/docs/reference/config/networking/destination-rule.html#TLSSettings">TLSSettings</a></code></td>
|
||||
<td><code><a href="/zh/docs/reference/config/networking/destination-rule.html#TLSSettings">TLSSettings</a></code></td>
|
||||
<td>
|
||||
<p>Use the tls_settings to specify the TLS mode to use. If the MCP server
|
||||
uses Istio mutual TLS and shares the root CA with Pilot, specify the TLS
|
||||
|
|
@ -488,7 +488,7 @@ No
|
|||
</tr>
|
||||
<tr id="MeshConfig-tcp_keepalive">
|
||||
<td><code>tcpKeepalive</code></td>
|
||||
<td><code><a href="/docs/reference/config/networking/destination-rule.html#ConnectionPoolSettings-TCPSettings-TcpKeepalive">TcpKeepalive</a></code></td>
|
||||
<td><code><a href="/zh/docs/reference/config/networking/destination-rule.html#ConnectionPoolSettings-TCPSettings-TcpKeepalive">TcpKeepalive</a></code></td>
|
||||
<td>
|
||||
<p>If set then set SO_KEEPALIVE on the socket to enable TCP Keepalives.</p>
|
||||
|
||||
|
|
@ -1633,7 +1633,7 @@ No
|
|||
</tr>
|
||||
<tr id="RemoteService-tls_settings">
|
||||
<td><code>tlsSettings</code></td>
|
||||
<td><code><a href="/docs/reference/config/networking/destination-rule.html#TLSSettings">TLSSettings</a></code></td>
|
||||
<td><code><a href="/zh/docs/reference/config/networking/destination-rule.html#TLSSettings">TLSSettings</a></code></td>
|
||||
<td>
|
||||
<p>Use the tls_settings to specify the tls mode to use. If the remote service
|
||||
uses Istio mutual TLS and shares the root CA with Pilot, specify the TLS
|
||||
|
|
@ -1646,7 +1646,7 @@ No
|
|||
</tr>
|
||||
<tr id="RemoteService-tcp_keepalive">
|
||||
<td><code>tcpKeepalive</code></td>
|
||||
<td><code><a href="/docs/reference/config/networking/destination-rule.html#ConnectionPoolSettings-TCPSettings-TcpKeepalive">TcpKeepalive</a></code></td>
|
||||
<td><code><a href="/zh/docs/reference/config/networking/destination-rule.html#ConnectionPoolSettings-TCPSettings-TcpKeepalive">TcpKeepalive</a></code></td>
|
||||
<td>
|
||||
<p>If set then set SO_KEEPALIVE on the socket to enable TCP Keepalives.</p>
|
||||
|
||||
|
|
|
|||
|
|
@ -6,8 +6,9 @@ description: Configuration affecting load balancing, outlier detection, etc.
|
|||
location: https://istio.io/docs/reference/config/networking/destination-rule.html
|
||||
layout: protoc-gen-docs
|
||||
generator: protoc-gen-docs
|
||||
aliases: [/docs/reference/config/networking/v1alpha3/destination-rule.html]
|
||||
number_of_entries: 16
|
||||
schema: istio.networking.v1alpha3.DestinationRule
|
||||
aliases: [/zh/docs/reference/config/networking/v1alpha3/destination-rule.html]
|
||||
number_of_entries: 19
|
||||
---
|
||||
<p><code>DestinationRule</code> defines policies that apply to traffic intended for a
|
||||
service after routing has occurred. These rules specify configuration
|
||||
|
|
@ -157,7 +158,7 @@ No
|
|||
<td><code>http1MaxPendingRequests</code></td>
|
||||
<td><code>int32</code></td>
|
||||
<td>
|
||||
<p>Maximum number of pending HTTP requests to a destination. Default 1024.</p>
|
||||
<p>Maximum number of pending HTTP requests to a destination. Default 2^32-1.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
|
|
@ -168,7 +169,7 @@ No
|
|||
<td><code>http2MaxRequests</code></td>
|
||||
<td><code>int32</code></td>
|
||||
<td>
|
||||
<p>Maximum number of requests to a backend. Default 1024.</p>
|
||||
<p>Maximum number of requests to a backend. Default 2^32-1.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
|
|
@ -193,7 +194,7 @@ No
|
|||
<td><code>int32</code></td>
|
||||
<td>
|
||||
<p>Maximum number of retries that can be outstanding to all hosts in a
|
||||
cluster at a given time. Defaults to 1024.</p>
|
||||
cluster at a given time. Defaults to 2^32-1.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
|
|
@ -205,7 +206,7 @@ No
|
|||
<td><code><a href="https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#duration">Duration</a></code></td>
|
||||
<td>
|
||||
<p>The idle timeout for upstream connection pool connections. The idle timeout is defined as the period in which there are no active requests.
|
||||
If not set, there is no idle timeout. When the idle timeout is reached the connection will be closed.
|
||||
If not set, the default is 1 hour. When the idle timeout is reached the connection will be closed.
|
||||
Note that request based timeouts mean that HTTP/2 PINGs will not keep the connection alive. Applies to both HTTP1.1 and HTTP2 connections.</p>
|
||||
|
||||
</td>
|
||||
|
|
@ -283,7 +284,7 @@ This opt-in option overrides the default.</p>
|
|||
<td><code>maxConnections</code></td>
|
||||
<td><code>int32</code></td>
|
||||
<td>
|
||||
<p>Maximum number of HTTP1 /TCP connections to a destination host. Default 1024.</p>
|
||||
<p>Maximum number of HTTP1 /TCP connections to a destination host. Default 2^32-1.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
|
|
@ -393,7 +394,7 @@ after routing has occurred.</p>
|
|||
<p>The name of a service from the service registry. Service
|
||||
names are looked up from the platform’s service registry (e.g.,
|
||||
Kubernetes services, Consul services, etc.) and from the hosts
|
||||
declared by <a href="/docs/reference/config/networking/service-entry/#ServiceEntry">ServiceEntries</a>. Rules defined for
|
||||
declared by <a href="/zh/docs/reference/config/networking/service-entry/#ServiceEntry">ServiceEntries</a>. Rules defined for
|
||||
services that do not exist in the service registry will be ignored.</p>
|
||||
|
||||
<p><em>Note for Kubernetes users</em>: When short names are used (e.g. “reviews”
|
||||
|
|
@ -533,6 +534,18 @@ Yes
|
|||
Yes
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="LoadBalancerSettings-locality_lb_setting">
|
||||
<td><code>localityLbSetting</code></td>
|
||||
<td><code><a href="#LocalityLoadBalancerSetting">LocalityLoadBalancerSetting</a></code></td>
|
||||
<td>
|
||||
<p>Locality load balancer settings, this will override mesh wide settings in entirety, meaning no merging would be performed
|
||||
between this object and the object one in MeshConfig</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
|
|
@ -704,6 +717,189 @@ balancing. This option must be used with care. It is meant for
|
|||
advanced use cases. Refer to Original Destination load balancer in
|
||||
Envoy for further details.</p>
|
||||
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="LocalityLoadBalancerSetting">LocalityLoadBalancerSetting</h2>
|
||||
<section>
|
||||
<p>Locality-weighted load balancing allows administrators to control the
|
||||
distribution of traffic to endpoints based on the localities of where the
|
||||
traffic originates and where it will terminate. These localities are
|
||||
specified using arbitrary labels that designate a hierarchy of localities in
|
||||
{region}/{zone}/{sub-zone} form. For additional detail refer to
|
||||
<a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/locality_weight">Locality Weight</a>
|
||||
The following example shows how to setup locality weights mesh-wide.</p>
|
||||
|
||||
<p>Given a mesh with workloads and their service deployed to “us-west/zone1/*”
and “us-west/zone2/*”. This example specifies that when traffic accessing a
service originates from workloads in “us-west/zone1/*”, 80% of the traffic
will be sent to endpoints in “us-west/zone1/*”, i.e. the same zone, and the
remaining 20% will go to endpoints in “us-west/zone2/*”. This setup is
intended to favor routing traffic to endpoints in the same locality.
A similar setting is specified for traffic originating in “us-west/zone2/*”.</p>
|
||||
|
||||
<pre><code class="language-yaml"> distribute:
|
||||
- from: us-west/zone1/*
|
||||
to:
|
||||
"us-west/zone1/*": 80
|
||||
"us-west/zone2/*": 20
|
||||
- from: us-west/zone2/*
|
||||
to:
|
||||
"us-west/zone1/*": 20
|
||||
"us-west/zone2/*": 80
|
||||
</code></pre>
|
||||
|
||||
<p>If the goal of the operator is not to distribute load across zones and
|
||||
regions but rather to restrict the regionality of failover to meet other
|
||||
operational requirements an operator can set a ‘failover’ policy instead of
|
||||
a ‘distribute’ policy.</p>
|
||||
|
||||
<p>The following example sets up a locality failover policy for regions.
|
||||
Assume a service resides in zones within us-east, us-west &amp; eu-west.
This example specifies that when endpoints within us-east become unhealthy,
traffic should fail over to endpoints in any zone or sub-zone within eu-west;
similarly, us-west should fail over to us-east.</p>
|
||||
|
||||
<pre><code class="language-yaml"> failover:
|
||||
- from: us-east
|
||||
to: eu-west
|
||||
- from: us-west
|
||||
to: us-east
|
||||
</code></pre>
|
||||
|
||||
<p>Locality load balancing settings.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="LocalityLoadBalancerSetting-distribute">
|
||||
<td><code>distribute</code></td>
|
||||
<td><code><a href="#LocalityLoadBalancerSetting-Distribute">Distribute[]</a></code></td>
|
||||
<td>
|
||||
<p>Optional: only one of distribute or failover can be set.
|
||||
Explicitly specify loadbalancing weight across different zones and geographical locations.
|
||||
Refer to <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/locality_weight">Locality weighted load balancing</a>
|
||||
If empty, the locality weight is set according to the number of endpoints within it.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="LocalityLoadBalancerSetting-failover">
|
||||
<td><code>failover</code></td>
|
||||
<td><code><a href="#LocalityLoadBalancerSetting-Failover">Failover[]</a></code></td>
|
||||
<td>
|
||||
<p>Optional: only failover or distribute can be set.
|
||||
Explicitly specify the region that traffic will land on when endpoints in the local region become unhealthy.
|
||||
Should be used together with OutlierDetection to detect unhealthy endpoints.
|
||||
Note: if no OutlierDetection specified, this will not take effect.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="LocalityLoadBalancerSetting-Distribute">LocalityLoadBalancerSetting.Distribute</h2>
|
||||
<section>
|
||||
<p>Describes how traffic originating in the ‘from’ zone or sub-zone is
|
||||
distributed over a set of ‘to’ zones. Syntax for specifying a zone is
|
||||
{region}/{zone}/{sub-zone} and terminal wildcards are allowed on any
|
||||
segment of the specification. Examples:
|
||||
* - matches all localities
|
||||
us-west/* - all zones and sub-zones within the us-west region
|
||||
us-west/zone-1/* - all sub-zones within us-west/zone-1</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="LocalityLoadBalancerSetting-Distribute-from">
|
||||
<td><code>from</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>Originating locality, ‘/’ separated, e.g. ‘region/zone/sub_zone’.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="LocalityLoadBalancerSetting-Distribute-to">
|
||||
<td><code>to</code></td>
|
||||
<td><code>map<string, uint32></code></td>
|
||||
<td>
|
||||
<p>Map of upstream localities to traffic distribution weights. The sum of
|
||||
all weights should be == 100. Any locality not assigned a weight will
|
||||
receive no traffic.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</section>
|
||||
<h2 id="LocalityLoadBalancerSetting-Failover">LocalityLoadBalancerSetting.Failover</h2>
|
||||
<section>
|
||||
<p>Specify the traffic failover policy across regions. Since zone and sub-zone
|
||||
failover is supported by default this only needs to be specified for
|
||||
regions when the operator needs to constrain traffic failover so that
|
||||
the default behavior of failing over to any endpoint globally does not
|
||||
apply. This is useful when failing over traffic across regions would not
|
||||
improve service health or may need to be restricted for other reasons
|
||||
like regulatory controls.</p>
|
||||
|
||||
<table class="message-fields">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Field</th>
|
||||
<th>Type</th>
|
||||
<th>Description</th>
|
||||
<th>Required</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr id="LocalityLoadBalancerSetting-Failover-from">
|
||||
<td><code>from</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>Originating region.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
<tr id="LocalityLoadBalancerSetting-Failover-to">
|
||||
<td><code>to</code></td>
|
||||
<td><code>string</code></td>
|
||||
<td>
|
||||
<p>Destination region the traffic will fail over to when endpoints in
|
||||
the ‘from’ region become unhealthy.</p>
|
||||
|
||||
</td>
|
||||
<td>
|
||||
No
|
||||
</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
|
|
@ -834,7 +1030,7 @@ No
|
|||
<section>
|
||||
<p>A subset of endpoints of a service. Subsets can be used for scenarios
|
||||
like A/B testing, or routing to a specific version of a service. Refer
|
||||
to <a href="/docs/reference/config/networking/virtual-service/#VirtualService">VirtualService</a> documentation for examples of using
|
||||
to <a href="/zh/docs/reference/config/networking/virtual-service/#VirtualService">VirtualService</a> documentation for examples of using
|
||||
subsets in these scenarios. In addition, traffic policies defined at the
|
||||
service-level can be overridden at a subset-level. The following rule
|
||||
uses a round robin load balancing policy for all traffic going to a
|
||||
|
|
@ -1202,7 +1398,7 @@ No
|
|||
<tbody>
|
||||
<tr id="TrafficPolicy-PortTrafficPolicy-port">
|
||||
<td><code>port</code></td>
|
||||
<td><code><a href="/docs/reference/config/networking/virtual-service.html#PortSelector">PortSelector</a></code></td>
|
||||
<td><code><a href="/zh/docs/reference/config/networking/virtual-service.html#PortSelector">PortSelector</a></code></td>
|
||||
<td>
|
||||
<p>Specifies the number of a port on the destination service
|
||||
on which this policy is being applied.</p>
|
||||
|
|
|
|||
|
|
@@ -18,7 +18,7 @@ destabilize the entire mesh. Unlike other Istio networking objects,
 EnvoyFilters are additively applied. Any number of EnvoyFilters can
 exist for a given workload in a specific namespace. The order of
 application of these EnvoyFilters is as follows: all EnvoyFilters
-in the config <a href="/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">root
+in the config <a href="/zh/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">root
 namespace</a>,
 followed by all matching EnvoyFilters in the workload’s namespace.</p>

@@ -39,7 +39,7 @@ if multiple EnvoyFilter configurations conflict with each other.</p>

 <p><strong>NOTE 4</strong>: *_To apply an EnvoyFilter resource to all workloads
 (sidecars and gateways) in the system, define the resource in the
-config <a href="/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">root
+config <a href="/zh/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">root
 namespace</a>,
 without a workloadSelector.</p>

@@ -199,7 +199,7 @@ generated by Istio Pilot.</p>
 <tbody>
 <tr id="EnvoyFilter-workload_selector">
 <td><code>workloadSelector</code></td>
-<td><code><a href="/docs/reference/config/networking/sidecar.html#WorkloadSelector">WorkloadSelector</a></code></td>
+<td><code><a href="/zh/docs/reference/config/networking/sidecar.html#WorkloadSelector">WorkloadSelector</a></code></td>
 <td>
 <p>Criteria used to select the specific set of pods/VMs on which
 this patch configuration should be applied. If omitted, the set

@@ -6,7 +6,7 @@ description: Configuration affecting service registry.
 location: https://istio.io/docs/reference/config/networking/service-entry.html
 layout: protoc-gen-docs
 generator: protoc-gen-docs
-aliases: [/docs/reference/config/networking/v1alpha3/service-entry.html]
+aliases: [/zh/docs/reference/config/networking/v1alpha3/service-entry.html]
 number_of_entries: 4
 ---
 <p><code>ServiceEntry</code> enables adding additional entries into Istio’s internal
@@ -373,7 +373,7 @@ No
 </tr>
 <tr id="ServiceEntry-ports">
 <td><code>ports</code></td>
-<td><code><a href="/docs/reference/config/networking/gateway.html#Port">Port[]</a></code></td>
+<td><code><a href="/zh/docs/reference/config/networking/gateway.html#Port">Port[]</a></code></td>
 <td>
 <p>The ports associated with the external service. If the
 Endpoints are Unix domain socket addresses, there must be exactly one
@@ -456,7 +456,7 @@ No
 <td>
 <p>The list of subject alternate names allowed for workload instances that
 implement this service. This information is used to enforce
-<a href="/docs/concepts/security/#secure-naming">secure-naming</a>.
+<a href="/zh/docs/concepts/security/#secure-naming">secure-naming</a>.
 If specified, the proxy will verify that the server
 certificate’s subject alternate name matches one of the specified values.</p>

@@ -6,7 +6,7 @@ description: Configuration affecting network reachability of a sidecar.
 location: https://istio.io/docs/reference/config/networking/sidecar.html
 layout: protoc-gen-docs
 generator: protoc-gen-docs
-aliases: [/docs/reference/config/networking/v1alpha3/sidecar.html]
+aliases: [/zh/docs/reference/config/networking/v1alpha3/sidecar.html]
 number_of_entries: 7
 ---
 <p><code>Sidecar</code> describes the configuration of the sidecar proxy that mediates
@@ -37,7 +37,7 @@ behavior of the system is undefined if two or more <code>Sidecar</code> configurations
 with a <code>workloadSelector</code> select the same workload instance.</p>

 <p>NOTE 2: <em><em>A <code>Sidecar</code> configuration in the <code>MeshConfig</code>
-<a href="/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">root namespace</a>
+<a href="/zh/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig">root namespace</a>
 will be applied by default to all namespaces without a <code>Sidecar</code>
 configuration</em></em>. This global default <code>Sidecar</code> configuration should not have
 any <code>workloadSelector</code>.</p>
@@ -263,7 +263,7 @@ listener on the sidecar proxy attached to a workload instance.</p>
 <tbody>
 <tr id="IstioEgressListener-port">
 <td><code>port</code></td>
-<td><code><a href="/docs/reference/config/networking/gateway.html#Port">Port</a></code></td>
+<td><code><a href="/zh/docs/reference/config/networking/gateway.html#Port">Port</a></code></td>
 <td>
 <p>The port associated with the listener. If using Unix domain socket,
 use 0 as the port number, with a valid protocol. The port if
@@ -376,7 +376,7 @@ traffic listener on the sidecar proxy attached to a workload instance.</p>
 <tbody>
 <tr id="IstioIngressListener-port">
 <td><code>port</code></td>
-<td><code><a href="/docs/reference/config/networking/gateway.html#Port">Port</a></code></td>
+<td><code><a href="/zh/docs/reference/config/networking/gateway.html#Port">Port</a></code></td>
 <td>
 <p>The port associated with the listener.</p>

@@ -6,7 +6,7 @@ description: Configuration affecting label/content routing, sni routing, etc.
 location: https://istio.io/docs/reference/config/networking/virtual-service.html
 layout: protoc-gen-docs
 generator: protoc-gen-docs
-aliases: [/docs/reference/config/networking/v1alpha3/virtual-service.html]
+aliases: [/zh/docs/reference/config/networking/v1alpha3/virtual-service.html]
 number_of_entries: 23
 ---
 <p>Configuration affecting traffic routing. Here are a few terms useful to define
@@ -226,7 +226,7 @@ destination.host should unambiguously refer to a service in the service
 registry. Istio’s service registry is composed of all the services found
 in the platform’s service registry (e.g., Kubernetes services, Consul
 services), as well as services declared through the
-<a href="/docs/reference/config/networking/service-entry/#ServiceEntry">ServiceEntry</a> resource.</p>
+<a href="/zh/docs/reference/config/networking/service-entry/#ServiceEntry">ServiceEntry</a> resource.</p>

 <p><em>Note for Kubernetes users</em>: When short names are used (e.g. “reviews”
 instead of “reviews.default.svc.cluster.local”), Istio will interpret
@@ -361,7 +361,7 @@ spec:
 <p>The name of a service from the service registry. Service
 names are looked up from the platform’s service registry (e.g.,
 Kubernetes services, Consul services, etc.) and from the hosts
-declared by <a href="/docs/reference/config/networking/service-entry/#ServiceEntry">ServiceEntry</a>. Traffic forwarded to
+declared by <a href="/zh/docs/reference/config/networking/service-entry/#ServiceEntry">ServiceEntry</a>. Traffic forwarded to
 destinations that are not found in either of the two, will be dropped.</p>

 <p><em>Note for Kubernetes users</em>: When short names are used (e.g. “reviews”
@@ -1981,7 +1981,7 @@ properties of the corresponding hosts, including those for multiple
 HTTP and TCP ports. Alternatively, the traffic properties of a host
 can be defined using more than one VirtualService, with certain
 caveats. Refer to the
-<a href="/docs/ops/traffic-management/deploy-guidelines/#multiple-virtual-services-and-destination-rules-for-the-same-host">Operations Guide</a>
+<a href="/zh/docs/ops/traffic-management/deploy-guidelines/#multiple-virtual-services-and-destination-rules-for-the-same-host">Operations Guide</a>
 for details.</p>

 <p><em>Note for Kubernetes users</em>: When short names are used (e.g. “reviews”
@@ -13,6 +13,6 @@ aliases:

 ## 模板

-下表显示了由每个支持的适配器实现的[模板](/docs/reference/config/policy-and-telemetry/templates)。
+下表显示了由每个支持的适配器实现的[模板](/zh/docs/reference/config/policy-and-telemetry/templates)。

 {{< adapter_table >}}
@@ -32,10 +32,10 @@ See the License for the specific language governing permissions and
 limitations under the License. -->

 <p>The SkyWalking adapter uses the <code>Istio bypass</code> adapter to collect metrics and make them available to
 <a href="https://skywalking.apache.org/">Apache SkyWalking</a>. SkyWalking provides a topology map and metrics graph
 to visualize the whole mesh.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>

 <p>Follow the <a href="https://github.com/apache/skywalking/blob/master/docs/README.md">official Apache SkyWalking documentation</a>
 and <a href="https://github.com/apache/skywalking-kubernetes">SkyWalking k8s documentation</a> for details on configuring SkyWalking and the Istio bypass adapter.</p>
@@ -24,8 +24,8 @@ proper CRDs must be applied in order to use these features. Complete Apigee documentation
 of this adapter is available on the <a href="https://docs.apigee.com/api-platform/istio-adapter/concepts">Apigee Adapter for Istio</a>
 site. For more information and product support, please <a href="https://apigee.com/about/support/portal">contact Apigee support</a>.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/authorization/">authorization template</a>
-and Apigee’s <a href="/docs/reference/config/policy-and-telemetry/templates/analytics/">analytics template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/authorization/">authorization template</a>
+and Apigee’s <a href="/zh/docs/reference/config/policy-and-telemetry/templates/analytics/">analytics template</a>.</p>

 <p>Example config:</p>
@@ -14,7 +14,7 @@ number_of_entries: 3
 <p>The <code>circonus</code> adapter enables Istio to deliver metric data to the
 <a href="https://www.circonus.com">Circonus</a> monitoring backend.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>

 <h2 id="Params">Params</h2>
 <section>
@@ -19,7 +19,7 @@ number_of_entries: 2
 <p>The handler configuration must contain the same metrics as the instance configuration.
 The metrics specified in both instance and handler configurations will be sent to CloudMonitor.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>

 <h2 id="Params">Params</h2>
 <section>
@@ -23,7 +23,7 @@ number_of_entries: 4
 <p>The handler configuration must contain the same metrics as the instance configuration.
 The metrics specified in both instance and handler configurations will be sent to CloudWatch.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>

 <h2 id="Params">Params</h2>
 <section>
@@ -14,7 +14,7 @@ number_of_entries: 3
 <p>The <code>dogstatsd</code> adapter is designed to deliver Istio metric instances to a
 listening <a href="https://www.datadoghq.com/">DataDog</a> agent.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/metric/">metric template</a>.</p>

 <h2 id="Params">Params</h2>
 <section>
@@ -14,9 +14,9 @@ number_of_entries: 2
 <p>The <code>denier</code> adapter is designed to always return a denial to precondition
 checks. You can specify the exact error to return for these denials.</p>

-<p>This adapter supports the <a href="/docs/reference/config/policy-and-telemetry/templates/checknothing/">checknothing template</a>,
-the <a href="/docs/reference/config/policy-and-telemetry/templates/listentry/">listentry template</a>,
-and the <a href="/docs/reference/config/policy-and-telemetry/templates/quota/">quota template</a>.</p>
+<p>This adapter supports the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/checknothing/">checknothing template</a>,
+the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/listentry/">listentry template</a>,
+and the <a href="/zh/docs/reference/config/policy-and-telemetry/templates/quota/">quota template</a>.</p>

 <h2 id="Params">Params</h2>
 <section>