Merge pull request #22079 from ydcool/zh-trans-service

update zh trans for concepts/services-networking/service.md
Kubernetes Prow Robot 2020-06-26 20:44:16 -07:00 committed by GitHub
commit 5a97e2b6a6
1 changed file with 80 additions and 10 deletions


@ -5,7 +5,7 @@ title: Services
feature:
  title: Service discovery and load balancing
  description: >
    No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name, and can load-balance across them.
content_type: concept
weight: 10
@ -328,6 +328,27 @@ Endpoint Slices are an API resource that can provide Endpoints with a more scalable
Endpoint Slices provide additional attributes and functionality, which are described in detail in [Endpoint Slices](/docs/concepts/services-networking/endpoint-slices/).
<!--
### Application protocol
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
The AppProtocol field provides a way to specify an application protocol to be
used for each Service port.
As an alpha feature, this field is not enabled by default. To use this field,
enable the `ServiceAppProtocol` [feature
gate](/docs/reference/command-line-tools-reference/feature-gates/).
-->
### Application protocol
{{< feature-state for_k8s_version="v1.18" state="alpha" >}}
The AppProtocol field provides a way to specify an application protocol to be used for each Service port.
As an alpha feature, this field is not enabled by default. To use this field, enable the `ServiceAppProtocol` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
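For example, a minimal sketch of a Service that sets an application protocol on one of its ports (illustrative only; it assumes the `ServiceAppProtocol` feature gate is enabled, and in the manifest the field is spelled `appProtocol` on each port entry):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    # appProtocol is a hint about the application protocol served on this port
    - port: 443
      targetPort: 8443
      appProtocol: https
```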
<!--
## Virtual IPs and service proxies
@ -868,6 +889,25 @@ to just expose one or more nodes' IPs directly.
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort`
and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in kube-proxy is set, <NodeIP> would be filtered NodeIP(s).)
For example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
selector:
app: MyApp
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
targetPort: 80
# Optional field
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: 30007
```
-->
### Type NodePort
@ -891,6 +931,25 @@ and `.spec.clusterIP:spec.ports[*].port`. (If the `--nodeport-addresses` flag in
Note that this Service is visible as `<NodeIP>:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`.
For example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
selector:
app: MyApp
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 80
targetPort: 80
# Optional field
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
nodePort: 30007
```
<!--
### Type LoadBalancer {#loadbalancer}
@ -1032,6 +1091,16 @@ metadata:
[...]
```
{{% /tab %}}
{{% tab name="IBM Cloud" %}}
```yaml
[...]
metadata:
name: my-service
annotations:
service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private"
[...]
```
{{% /tab %}}
{{% tab name="OpenStack" %}}
```yaml
[...]
@ -1561,7 +1630,7 @@ worth understanding.
One of the primary philosophies of Kubernetes is that you should not be
exposed to situations that could cause your actions to fail through no fault
of your own. For the design of the Service resource, this means not making
you choose your own port number if that choice might collide with
someone else's choice. That is an isolation failure.
In order to allow you to choose a port number for your Services, we must
@ -1576,7 +1645,7 @@ fail with a message indicating an IP address could not be allocated.
In the control plane, a background controller is responsible for creating that
map (needed to support migrating from older versions of Kubernetes that used
in-memory locking). Kubernetes also uses controllers to check for invalid
assignments (eg due to administrator intervention) and for cleaning up allocated
IP addresses that are no longer used by any Services.
@ -1585,15 +1654,16 @@ IP addresses that are no longer used by any Services.
### Avoiding collisions
One of the primary philosophies of Kubernetes is that you should not be exposed to situations that could cause your actions to fail through no fault of your own.
For the design of the Service resource, this means not making you choose your own port number if that choice might collide with someone else's choice.
That is an isolation failure.
In order to allow you to choose a port number for your Services, we must ensure that no two Services can collide.
Kubernetes does that by allocating each Service its own IP address.
To ensure each Service receives a unique IP, an internal allocator atomically updates a global allocation map in {{< glossary_tooltip term_id="etcd" >}} prior to creating each Service.
The map object must exist in the registry for Services to get IP address assignments, otherwise creations will fail with a message indicating an IP address could not be allocated.
In the control plane, a background controller is responsible for creating that map (needed to support migrating from older versions of Kubernetes that used in-memory locking). Kubernetes also uses controllers to check for invalid assignments (e.g. due to administrator intervention) and for cleaning up allocated IP addresses that are no longer used by any Services.
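As an illustrative sketch of that isolation (the Service names below are hypothetical): two Services can both expose port 80 without colliding, because each one is reachable on its own allocated cluster IP.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend    # hypothetical example Service
spec:
  selector:
    app: frontend
  ports:
    - port: 80        # same port number as the Service below
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend     # hypothetical example Service
spec:
  selector:
    app: backend
  ports:
    - port: 80        # no collision: each Service gets its own cluster IP
      targetPort: 8080
```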
<!--
### Service IP addresses {#ips-and-vips}