[zh] Resync concepts section (5)

This commit is contained in:
Qiming Teng 2021-04-27 22:18:18 +08:00
parent 2ce78eb94a
commit 37b120d0b9
2 changed files with 161 additions and 87 deletions

View File

@ -1,10 +1,5 @@
---
title: Topology-aware traffic routing with topology keys
content_type: concept
weight: 10
---
@ -12,19 +7,27 @@ weight: 10
reviewers:
- johnbelamaric
- imroc
title: Topology-aware traffic routing with topology keys
content_type: concept
weight: 10
-->
<!-- overview -->
{{< feature-state for_k8s_version="v1.17" state="alpha" >}}
{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
{{< note >}}
This feature, specifically the alpha `topologyKeys` API, is deprecated since
Kubernetes v1.21.
[Topology Aware Hints](/docs/concepts/services-networking/topology-aware-hints/),
introduced in Kubernetes v1.21, provide similar functionality.
{{</ note >}}
<!--
_Service Topology_ enables a service to route traffic based upon the Node
@ -38,45 +41,50 @@ in the same availability zone.
<!-- body -->
## Topology-aware traffic routing

By default, traffic sent to a `ClusterIP` or `NodePort` Service may be routed to
any backend address for the Service. Kubernetes 1.7 made it possible to
route "external" traffic to the Pods running on the same Node that received the
traffic. For `ClusterIP` Services, the equivalent same-node preference for
routing wasn't possible; nor could you configure your cluster to favor routing
to endpoints within the same zone.

By setting `topologyKeys` on a Service, you're able to define a policy for routing
traffic based upon the Node labels for the originating and destination Nodes.

The label matching between the source and destination lets you, as a cluster
operator, designate sets of Nodes that are "closer" and "farther" from one another.
You can define labels to represent whatever metric makes sense for your own
requirements.
In public clouds, for example, you might prefer to keep network traffic within the
same zone, because interzonal traffic has a cost associated with it (and intrazonal
traffic typically does not). Other common needs include being able to route traffic
to a local Pod managed by a DaemonSet, or directing traffic to Nodes connected to the
same top-of-rack switch for the lowest latency.
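As a rough illustration, the three topology labels that this feature understands
might look as follows on a cloud Node; the node name and label values in this
sketch are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-a                                # hypothetical node name
  labels:
    kubernetes.io/hostname: node-a            # set by the kubelet
    topology.kubernetes.io/zone: us-east-1a   # typically set by the cloud provider
    topology.kubernetes.io/region: us-east-1  # typically set by the cloud provider
```

These are the same three labels that are currently valid as `topologyKeys`
entries (see the constraints below).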
## Using Service Topology {#using-service-topology}

If your cluster has the `ServiceTopology`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
enabled, you can control Service traffic routing by specifying the `topologyKeys`
field on the Service spec. This field is a preference-order list of Node labels
which will be used to sort endpoints when accessing this Service. Traffic will be
directed to a Node whose value for the first label matches the originating Node's
value for that label. If there is no backend for the Service on a matching Node,
then the second label will be considered, and so forth, until no labels remain.

If no match is found, the traffic will be rejected, as if there were no
backends for the Service at all. That is, endpoints are chosen based on the first
topology key with available backends. If this field is specified and all entries
have no backends that match the topology of the client, the service has no
backends for that client and connections should fail. The special value `"*"` may
be used to mean "any topology". This catch-all value, if used, only makes sense
as the last value in the list.
@ -144,7 +153,7 @@ traffic as follows.
## Constraints {#constraints}

* Service topology is not compatible with `externalTrafficPolicy=Local`, and
  therefore a Service cannot use both of these features. It is possible to use
  both features in the same cluster on different Services, only not on the same
  Service.

* Valid topology keys are currently limited to `kubernetes.io/hostname`,
  `topology.kubernetes.io/zone`, and `topology.kubernetes.io/region`, but will
  be generalized to other Node labels in the future.

* Topology keys must be valid label keys and at most 16 keys may be specified.

* The catch-all value, `"*"`, must be the last value in the topology keys, if
  it is used.
@ -171,23 +180,21 @@ traffic as follows.
## Examples

The following are common examples of using the Service Topology feature.
### Only Node Local Endpoints

A Service that only routes to node local endpoints. If no endpoints exist on the node, traffic is dropped:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
```

@ -207,12 +214,11 @@ spec:
### Prefer Node Local Endpoints

A Service that prefers node local Endpoints but falls back to cluster wide endpoints if node local endpoints do not exist:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "*"
```

@ -234,13 +240,13 @@ spec:
### Only Zonal or Regional Endpoints

A Service that prefers zonal then regional endpoints. If no endpoints exist in either, traffic is dropped.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
```

@ -261,13 +267,13 @@ spec:
### Prefer Node Local, Zonal, then Regional Endpoints

A Service that prefers node local, zonal, then regional endpoints but falls back to cluster wide endpoints.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
    - "*"
```

@ -296,3 +302,4 @@ spec:
* Read about [enabling Service Topology](/zh/docs/tasks/administer-cluster/enabling-service-topology/)
* Read [Connecting Applications with Services](/zh/docs/concepts/services-networking/connect-applications-service/)

View File

@ -312,12 +312,26 @@ selectors and uses DNS names instead. For more information, see the
An ExternalName Service is a special case of Service that does not have
selectors and uses DNS names instead. For more information, see the
[ExternalName](#externalname) section later in this document.
### Over Capacity Endpoints {#over-capacity-endpoints}

If an Endpoints resource has more than 1000 endpoints then a Kubernetes v1.21 (or later)
cluster annotates that Endpoints with `endpoints.kubernetes.io/over-capacity: warning`.
This annotation indicates that the affected Endpoints object is over capacity.
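For illustration, an affected object would look roughly like the sketch below;
the name is a placeholder, and the annotation is added by the control plane
rather than by the cluster user:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service   # hypothetical Service with more than 1000 backing Pods
  annotations:
    # added automatically once the endpoint count exceeds 1000
    endpoints.kubernetes.io/over-capacity: warning
# subsets: the full endpoint list still follows; it is not truncated in v1.21
```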
### EndpointSlices

{{< feature-state for_k8s_version="v1.21" state="stable" >}}
Endpoint Slices are an API resource that can provide a more scalable alternative
to Endpoints.

@ -351,9 +365,10 @@ This field follows standard Kubernetes label syntax. Values should either be

### Application protocol {#application-protocol}

{{< feature-state for_k8s_version="v1.20" state="stable" >}}

The `appProtocol` field provides a way to specify an application protocol for
each Service port. The value of this field is mirrored by the corresponding
Endpoints and EndpointSlice objects.

This field follows standard Kubernetes label syntax. Values should either be
[IANA standard service names](https://www.iana.org/assignments/service-names) or
domain prefixed names such as `mycompany.com/my-custom-protocol`.
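A minimal sketch of a Service that sets an application protocol per port;
the Service name, selector, and port numbers are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service      # hypothetical name
spec:
  selector:
    app: my-app
  ports:
    - name: https
      protocol: TCP     # transport protocol
      port: 443
      targetPort: 8443
      appProtocol: https  # mirrored onto the matching Endpoints and EndpointSlice objects
```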
@ -1077,11 +1092,15 @@ The set of protocols that can be used for LoadBalancer type of Services is still
{{< note >}}
The set of protocols that can be used for LoadBalancer type of Services is still
defined by the cloud provider.
{{< /note >}}
#### Disabling load balancer NodePort allocation {#load-balancer-nodeport-allocation}

{{< feature-state for_k8s_version="v1.20" state="alpha" >}}

Starting in v1.20, you can optionally disable node port allocation for a Service
Type=LoadBalancer by setting the field `spec.allocateLoadBalancerNodePorts` to
`false`. This should only be used for load balancer implementations that route
traffic directly to pods as opposed to using node ports. By default,
`spec.allocateLoadBalancerNodePorts` is `true` and type LoadBalancer Services will
continue to allocate node ports. If `spec.allocateLoadBalancerNodePorts` is set
to `false` on an existing Service with allocated node ports, those node ports will
NOT be de-allocated automatically.
You must explicitly remove the `nodePorts` entry in every Service port to de-allocate
those node ports.
You must enable the `ServiceLBNodePortControl` feature gate to use this field.
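A minimal sketch of such a Service, assuming the `ServiceLBNodePortControl`
feature gate is enabled; the name, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service  # hypothetical name
spec:
  type: LoadBalancer
  # Only sensible when the load balancer implementation routes traffic
  # directly to Pods instead of relying on node ports.
  allocateLoadBalancerNodePorts: false
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```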
#### Specifying class of load balancer implementation {#load-balancer-class}
{{< feature-state for_k8s_version="v1.21" state="alpha" >}}
Starting in v1.21, you can optionally specify the class of a load balancer implementation for
`LoadBalancer` type of Service by setting the field `spec.loadBalancerClass`.
By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses
the cloud provider's default load balancer implementation.
If `spec.loadBalancerClass` is specified, it is assumed that a load balancer
implementation that matches the specified class is watching for Services.
Any default load balancer implementation (for example, the one provided by
the cloud provider) will ignore Services that have this field set.
`spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only.
Once set, it cannot be changed.
The value of `spec.loadBalancerClass` must be a label-style identifier,
with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`".
Unprefixed names are reserved for end-users.
You must enable the `ServiceLoadBalancerClass` feature gate to use this field.
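For example, the sketch below opts a Service out of the cloud provider's default
implementation; it assumes some controller in the cluster is watching for the
hypothetical class `example.com/internal-vip`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service  # hypothetical name
spec:
  type: LoadBalancer
  # The default (cloud provider) implementation ignores this Service;
  # a controller matching this class is expected to provision the balancer.
  loadBalancerClass: example.com/internal-vip
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```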
<!--
#### Internal load balancer
@ -1435,43 +1491,54 @@ There are other annotations to manage Classic Elastic Load Balancers that are de
metadata:
  name: my-service
  annotations:
    # The time, in seconds, that the connection is allowed to be idle
    # (no data has been sent over the connection) before it is closed
    # by the load balancer
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"

    # Specifies whether cross-zone load balancing is enabled for the load balancer
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

    # A comma-separated list of key-value pairs which will be recorded as
    # additional tags in the ELB
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,owner=devops"

    # The number of successive successful health checks required for a backend
    # to be considered healthy for traffic. Defaults to 2, must be between 2 and 10
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: ""

    # The number of unsuccessful health checks required for a backend to be
    # considered unhealthy for traffic. Defaults to 6, must be between 2 and 10
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"

    # The approximate interval, in seconds, between health checks of an
    # individual instance. Defaults to 10, must be between 5 and 300
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "20"

    # The amount of time, in seconds, during which no response means a failed
    # health check. This value must be less than the
    # service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval value.
    # Defaults to 5, must be between 2 and 60
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"

    # A list of existing security groups to be configured on the ELB created.
    # Unlike the annotation service.beta.kubernetes.io/aws-load-balancer-extra-security-groups,
    # this replaces all other security groups previously assigned to the ELB and also
    # overrides the creation of a uniquely generated security group for this ELB.
    # The first security group ID on this list is used as a source to permit incoming
    # traffic to target worker nodes (service traffic and health checks).
    # If multiple ELBs are configured with the same security group ID, only a single
    # permit line will be added to the worker node security groups; that means if you
    # delete any of those ELBs, it will remove the single permit line and block access
    # for all ELBs that shared the same security group ID.
    # This can cause a cross-service outage if not used properly.
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f"

    # A list of additional security groups to be added to the created ELB.
    # This leaves the uniquely generated security group in place, which ensures
    # that every ELB has a unique security group ID and a matching permit line
    # to allow traffic to the target worker nodes (service traffic and health checks).
    # Security groups defined here can be shared between services.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e"

    # A comma separated list of key-value pairs which are used
    # to select the target nodes for the load balancer
    service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api"
```
<!--