[zh] Sync changes from English site (3)

This commit is contained in:
Qiming Teng 2020-11-12 21:03:41 +08:00
parent 7a44e1a820
commit 95ab5ac197
8 changed files with 104 additions and 57 deletions

View File

@ -38,10 +38,10 @@ apiserver 被配置为在一个安全的 HTTPS 端口443上监听远程连
或[服务账号令牌](/docs/reference/access-authn-authz/authentication/#service-account-tokens)的时候。
<!--
Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates.
-->
应该使用集群的公共根证书开通节点,这样它们就能够基于有效的客户端凭据安全地连接 apiserver。
一种好的方法是以客户端证书的形式将客户端凭据提供给 kubelet。
请查看 [kubelet TLS 启动引导](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
以了解如何自动提供 kubelet 客户端证书。
@ -103,16 +103,16 @@ To verify this connection, use the `--kubelet-certificate-authority` flag to pro
If that is not possible, use [SSH tunneling](/docs/concepts/architecture/master-node-communication/#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an
untrusted or public network.
Finally, [Kubelet authentication and/or authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) should be enabled to secure the kubelet API.
-->
为了验证这个连接,使用 `--kubelet-certificate-authority` 标志给 apiserver
提供一个根证书包,用于验证 kubelet 的服务证书。
如果无法实现这点,又要求避免在非受信网络或公共网络上进行连接,可在 apiserver 和
kubelet 之间使用 [SSH 隧道](#ssh-tunnels)。
最后,应该启用
[kubelet 用户认证和/或鉴权](/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
来保护 kubelet API。
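例如,下面是一个启用 kubelet 认证与鉴权的 `KubeletConfiguration` 示意片段
(仅为草图,CA 证书路径为假设值):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # 禁止匿名访问 kubelet API
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # 用于校验客户端证书的 CA(路径为假设值)
  webhook:
    enabled: true       # 通过 TokenReview API 认证持有者令牌
authorization:
  mode: Webhook         # 通过 SubjectAccessReview API 对请求执行鉴权
```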
<!--

View File

@ -155,6 +155,27 @@ nodes in your cluster. See
(实际上有一个控制器可以水平地扩展集群中的节点。请参阅
[集群自动扩缩容](/zh/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling))。
<!--
The important point here is that the controller makes some change to bring about
your desired state, and then reports current state back to your cluster's API server.
Other control loops can observe that reported data and take their own actions.
-->
这里,很重要的一点是,控制器做出了一些变更以使得事物更接近你的期望状态,
之后将当前状态报告给集群的 API 服务器。
其他控制回路可以观测到所汇报的数据的这种变化并采取其各自的行动。
<!--
In the thermostat example, if the room is very cold then a different controller
might also turn on a frost protection heater. With Kubernetes clusters, the control
plane indirectly works with IP address management tools, storage services,
cloud provider APIs, and other services by
[extending Kubernetes](/docs/concepts/extend-kubernetes/) to implement that.
-->
在恒温器的例子中,如果房间很冷,那么某个控制器可能还会启动一个防冻加热器。
就 Kubernetes 集群而言,控制面间接地与 IP 地址管理工具、存储服务、云驱动
API 以及其他服务协作,通过[扩展 Kubernetes](/zh/docs/concepts/extend-kubernetes/)
来实现这点。
<!--
## Desired versus current state {#desired-vs-current}

View File

@ -487,7 +487,7 @@ a Lease object.
<!--
#### Reliability
In most cases, the node controller limits the eviction rate to
`--node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
-->
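下面是一个假设的 kube-controller-manager 静态 Pod 清单片段,演示如何显式设置这一速率
(文件路径、镜像版本均为假设值):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml 片段(kubeadm 布局,仅为示意)
spec:
  containers:
  - name: kube-controller-manager
    image: k8s.gcr.io/kube-controller-manager:v1.19.0  # 版本为假设值
    command:
    - kube-controller-manager
    - --node-eviction-rate=0.1            # 每秒 0.1 个节点,即每 10 秒至多从 1 个节点驱逐 Pod
    - --secondary-node-eviction-rate=0.01 # 可用区中大量节点不健康时使用的较低速率
```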

View File

@ -48,11 +48,11 @@ There are two hooks that are exposed to Containers:
`PostStart`
<!--
This hook is executed immediately after a container is created.
However, there is no guarantee that the hook will execute before the container ENTRYPOINT.
No parameters are passed to the handler.
-->
这个回调在容器被创建之后立即执行。
但是不能保证回调会在容器入口点ENTRYPOINT之前执行。
没有参数传递给处理程序。
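例如,下面的清单片段为容器注册了一个 `postStart` 回调(Pod 名称、镜像与命令均为示意):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo   # 假设的 Pod 名称
spec:
  containers:
  - name: demo
    image: nginx
    lifecycle:
      postStart:
        exec:
          # 容器创建后立即执行;不保证先于 ENTRYPOINT 运行
          command: ["/bin/sh", "-c", "echo postStart > /usr/share/message"]
```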
@ -61,13 +61,13 @@ No parameters are passed to the handler.
<!--
This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state.
It is blocking, meaning it is synchronous,
so it must complete before the signal to stop the container can be sent.
No parameters are passed to the handler.
-->
在容器因 API 请求或者管理事件(诸如存活态探针失败、资源抢占、资源竞争等)而被终止之前,
此回调会被调用。
如果容器已经处于终止或者完成状态,则对 preStop 回调的调用将失败。
此调用是阻塞的,也是同步调用,因此必须在发出停止容器的信号之前完成。
没有参数传递给处理程序。
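下面是一个注册 `preStop` 回调的示意片段(命令为假设值);回调执行完毕之前,
停止容器的信号不会被发送:

```yaml
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          # 在停止信号发出之前同步执行,先让服务平滑退出
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```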
<!--
@ -102,11 +102,13 @@ Resources consumed by the command are counted against the Container.
### Hook handler execution
When a Container lifecycle management hook is called,
the Kubernetes management system executes the handler according to the hook action;
`exec` and `tcpSocket` are executed in the container, and `httpGet` is executed by the kubelet process.
-->
### 回调处理程序执行
当调用容器生命周期管理回调时Kubernetes 管理系统在注册了回调的容器中执行处理程序。
当调用容器生命周期管理回调时Kubernetes 管理系统根据回调动作执行其处理程序,
`exec``tcpSocket` 在容器内执行,而 `httpGet` 则由 kubelet 进程执行。
<!--
Hook handler calls are synchronous within the context of the Pod containing the Container.
@ -120,15 +122,35 @@ the Container cannot reach a `running` state.
但是,如果回调运行或挂起的时间太长,则容器无法达到 `running` 状态。
<!--
`PreStop` hooks are not executed asynchronously from the signal
to stop the Container; the hook must complete its execution before
the signal can be sent.
If a `PreStop` hook hangs during execution,
the Pod's phase will be `Terminating` and remain there until the Pod is
killed after its `terminationGracePeriodSeconds` expires.
This grace period applies to the total time it takes for both
the `PreStop` hook to execute and for the Container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, and the hook
takes 55 seconds to complete, and the Container takes 10 seconds to stop
normally after receiving the signal, then the Container will be killed
before it can stop normally, since `terminationGracePeriodSeconds` is
less than the total time (55+10) it takes for these two things to happen.
-->
`PreStop` 回调并不会与停止容器的信号处理程序异步执行;回调必须在
可以发送信号之前完成执行。
如果 `PreStop` 回调在执行期间停滞不前Pod 的阶段会变成 `Terminating`
并且一直处于该状态,直到其 `terminationGracePeriodSeconds` 耗尽为止,
这时 Pod 会被杀死。
这一宽限期是针对 `PreStop` 回调的执行时间及容器正常停止时间的总和而言的。
例如,如果 `terminationGracePeriodSeconds` 是 60回调函数花了 55 秒钟
完成执行,而容器在收到信号之后花了 10 秒钟来正常结束,那么容器会在其
能够正常结束之前即被杀死,因为 `terminationGracePeriodSeconds` 的值
小于后面两件事情所花费的总时间55 + 10
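下面这个假设的片段对应上述数字:回调需要约 55 秒,容器收到信号后还需约 10 秒
才能正常退出,因此会在 60 秒宽限期结束时被杀死:

```yaml
spec:
  terminationGracePeriodSeconds: 60  # 宽限期覆盖回调执行加容器正常停止的总时间
  containers:
  - name: slow-shutdown            # 假设的容器名称
    image: example/app:v1          # 假设的镜像
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 55"]  # 回调即占用 55 秒宽限期
```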
<!--
If either a `PostStart` or `PreStop` hook fails,
it kills the Container.
-->
如果 `PostStart``PreStop` 回调失败,它会杀死容器。
<!--
@ -147,10 +169,11 @@ which means that a hook may be called multiple times for any given event,
such as for `PostStart` or `PreStop`.
It is up to the hook implementation to handle this correctly.
-->
### 回调送保证
### 回调送保证
回调的递送应该是 *至少一次*,这意味着对于任何给定的事件,
例如 `PostStart``PreStop`,回调可以被调用多次。
如何正确处理被多次调用的情况,是回调实现所要考虑的问题。
<!--
Generally, only single deliveries are made.
@ -160,9 +183,9 @@ In some rare cases, however, double delivery may occur.
For instance, if a kubelet restarts in the middle of sending a hook,
the hook might be resent after the kubelet comes back up.
-->
通常情况下,只会进行单次送。
通常情况下,只会进行单次送。
例如,如果 HTTP 回调接收器宕机,无法接收流量,则不会尝试重新发送。
然而,偶尔也会发生重复送的可能。
然而,偶尔也会发生重复送的可能。
例如,如果 kubelet 在发送回调的过程中重新启动,回调可能会在 kubelet 恢复后重新发送。
<!--

View File

@ -87,7 +87,7 @@ Instead, specify a meaningful tag such as `v1.42.0`.
{{< /caution >}}
<!--
## Updating images
The default pull policy is `IfNotPresent` which causes the
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} to skip
@ -116,17 +116,18 @@ When `imagePullPolicy` is defined without a specific value, it is also set to `A
如果 `imagePullPolicy` 被定义但未设置特定的值,也会被设置为 `Always`。
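因此,如果不希望每次都拉取镜像,应使用具体的标签并显式设置拉取策略,
例如(镜像名为示意):

```yaml
spec:
  containers:
  - name: app
    image: example/mycontainer:v1.42.0  # 使用具体标签而非 latest
    imagePullPolicy: IfNotPresent       # 本地已有该镜像时跳过拉取
```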
<!--
## Multi-architecture images with image indexes
As well as providing binary images, a container registry can also serve a [container image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md). An image index can point to multiple [image manifests](https://github.com/opencontainers/image-spec/blob/master/manifest.md) for architecture-specific versions of a container. The idea is that you can have a name for an image (for example: `pause`, `example/mycontainer`, `kube-apiserver`) and allow different systems to fetch the right binary image for the machine architecture they are using.
Kubernetes itself typically names container images with a suffix `-$(ARCH)`. For backward compatibility, please generate the older images with suffixes. The idea is to generate say `pause` image which has the manifest for all the arch(es) and say `pause-amd64` which is backwards compatible for older configurations or YAML files which may have hard coded the images with suffixes.
-->
## 使用清单manifest构建多架构镜像
## 带镜像索引的多架构镜像 {#multi-architecture-images-with-image-indexes}
除了提供二进制的镜像之外,容器仓库也可以提供
[容器镜像索引](https://github.com/opencontainers/image-spec/blob/master/image-index.md)。
镜像索引可以针对容器的特定于体系结构的版本,指向多个
[镜像清单](https://github.com/opencontainers/image-spec/blob/master/manifest.md)。
这背后的理念是让你可以为镜像命名(例如:`pause`、`example/mycontainer`、`kube-apiserver`
的同时,允许不同的系统基于它们所使用的机器体系结构取回正确的二进制镜像。
@ -137,7 +138,7 @@ Kubernetes 自身通常在命名容器镜像时添加后缀 `-$(ARCH)`。
YAML 文件也能兼容。
<!--
## Using a private registry
Private registries may require keys to read images from them.
Credentials can be provided in several ways:
@ -179,7 +180,7 @@ These options are explained in more detail below.
下面将详细描述每一项。
<!--
### Configuring nodes to authenticate to a private registry
If you run Docker on your nodes, you can configure the Docker container
runtime to authenticate to a private container registry.
@ -333,7 +334,7 @@ registry keys are added to the `.docker/config.json`.
`.docker/config.json` 中配置了私有仓库密钥后,所有 Pod 都将能读取私有仓库中的镜像。
<!--
### Pre-pulled images
-->
### 提前拉取镜像 {#pre-pulled-images}
@ -371,7 +372,7 @@ All pods will have read access to any pre-pulled images.
所有的 Pod 都可以使用节点上提前拉取的镜像。
<!--
### Specifying imagePullSecrets on a Pod
-->
### 在 Pod 上指定 imagePullSecrets {#specifying-imagepullsecrets-on-a-pod}
@ -389,7 +390,7 @@ Kubernetes supports specifying container image registry keys on a Pod.
Kubernetes 支持在 Pod 中设置容器镜像仓库的密钥。
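例如(Pod 名称、镜像与 Secret 名称均为假设值):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1  # 假设的私有仓库镜像
  imagePullSecrets:
  - name: myregistrykey                 # 引用同一名字空间中的 Secret
```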
<!--
#### Creating a Secret with a Docker config
Run the following command, substituting the appropriate uppercase values:
-->
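该命令所生成的 Secret 大致形如下面的示意(名称与数据均为占位值):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey   # 假设的 Secret 名称
type: kubernetes.io/dockerconfigjson
data:
  # 取值应为 ~/.docker/config.json 内容的 base64 编码,此处仅为占位
  .dockerconfigjson: UGxhY2Vob2xkZXIgb25seQ==
```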
@ -491,12 +492,12 @@ will be merged.
来自不同来源的凭据会被合并。
<!--
## Use cases
There are a number of solutions for configuring private registries. Here are some
common use cases and suggested solutions.
-->
## 使用案例 {#use-cases}
配置私有仓库有多种方案,以下是一些常用场景和建议的解决方案。

View File

@ -313,14 +313,14 @@ Pod 开销通过 RuntimeClass 的 `overhead` 字段定义。
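例如,下面这个假设的 RuntimeClass 为每个使用它的 Pod 声明了固定开销
(名称、处理程序与数值均为示意):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata-containers   # 假设的名称
handler: kata             # 对应容器运行时配置中的处理程序
overhead:
  podFixed:
    memory: "120Mi"       # 每个 Pod 额外预留的内存
    cpu: "250m"           # 每个 Pod 额外预留的 CPU
```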
## {{% heading "whatsnext" %}}
<!--
- [RuntimeClass Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass Scheduling Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- Read about the [Pod Overhead](/docs/concepts/scheduling-eviction/pod-overhead/) concept
- [PodOverhead Feature Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
-->
- [RuntimeClass 设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md)
- [RuntimeClass 调度设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/585-runtime-class/README.md#runtimeclass-scheduling)
- 阅读关于 [Pod 开销](/zh/docs/concepts/scheduling-eviction/pod-overhead/) 的概念
- [PodOverhead 特性设计](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)

View File

@ -204,7 +204,7 @@ Disallow privileged users | When constructing containers, consult your documenta
容器安全性不在本指南的探讨范围内。下面是一些探索此主题的建议和链接:
容器关注领域 | 建议 |
------------------------------ | -------------- |
容器漏洞扫描和操作系统依赖安全性 | 作为镜像构建的一部分,您应该扫描您的容器里的已知漏洞。
镜像签名和执行 | 对容器镜像进行签名,以维护对容器内容的信任。
@ -257,8 +257,8 @@ Learn about related Kubernetes security topics:
* [Pod security standards](/docs/concepts/security/pod-security-standards/)
* [Network policies for Pods](/docs/concepts/services-networking/network-policies/)
* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
* [API access control](/docs/reference/access-authn-authz/controlling-access/)
* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
@ -267,8 +267,9 @@ Learn about related Kubernetes security topics:
* [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)
* [Pod 的网络策略](/zh/docs/concepts/services-networking/network-policies/)
* [保护您的集群](/zh/docs/tasks/administer-cluster/securing-a-cluster/)
* [API 访问控制](/zh/docs/reference/access-authn-authz/controlling-access/)
* 为控制面[加密通信中的数据](/zh/docs/tasks/tls/managing-tls-in-a-cluster/)
* [加密静止状态的数据](/zh/docs/tasks/administer-cluster/encrypt-data/)
* [Kubernetes 中的 Secret](/zh/docs/concepts/configuration/secret/)

View File

@ -278,7 +278,7 @@ Baseline/Default 策略的目标是便于常见的容器化应用采用,同时
net.ipv4.ip_local_port_range<br>
net.ipv4.tcp_syncookies<br>
net.ipv4.ping_group_range<br>
未定义/空值<br>
</td>
</tr>
</tbody>
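下面的示意片段展示 Pod 如何在上述允许范围内设置安全的 sysctl(取值为假设):

```yaml
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_local_port_range  # 表中允许的安全 sysctl 之一
      value: "32768 60999"                # 假设的端口范围
```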
@ -385,14 +385,15 @@ Restricted 策略旨在实施当前保护 Pod 的最佳实践,尽管这样作
<tr>
<td>Seccomp</td>
<td>
<!-- The RuntimeDefault seccomp profile must be required, or allow specific additional profiles. -->
必须要求使用 RuntimeDefault seccomp profile 或者允许使用特定的 profiles。<br>
<br><b>限制的字段:</b><br>
spec.securityContext.seccompProfile.type<br>
spec.containers[*].securityContext.seccompProfile<br>
spec.initContainers[*].securityContext.seccompProfile<br>
<br><b>允许的值:</b><br>
'runtime/default'<br>
未定义/nil<br>
</td>
</tr>
</tbody>
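满足此要求的一个最小示意片段如下(镜像为假设值):

```yaml
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # 使用容器运行时默认的 seccomp 配置
  containers:
  - name: app
    image: example/app:v1   # 假设的镜像
```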
@ -462,7 +463,7 @@ in the Pod manifest, and represent parameters to the container runtime.
<!--
Security policies are control plane mechanisms to enforce specific settings in the Security Context,
as well as other parameters outside the Security Context. As of February 2020, the current native
solution for enforcing these security policies is [Pod Security
Policy](/docs/concepts/policy/pod-security-policy/) - a mechanism for centrally enforcing security
policy on Pods across a cluster. Other alternatives for enforcing security policy are being
@ -503,7 +504,7 @@ restrict privileged permissions is lessened when the workload is isolated from t
kernel. This allows for workloads requiring heightened permissions to still be isolated.
Additionally, the protection of sandboxed workloads is highly dependent on the method of
sandboxing. As such, no single policy is recommended for all sandboxed workloads.
-->
### 沙箱Sandboxed Pod 怎么处理?
@ -515,5 +516,5 @@ sandboxing. As such, no single recommended policy is recommended for all s
限制特权化操作的许可就不那么重要。这使得那些需要更多许可权限的负载仍能被有效隔离。
此外,沙箱化负载的保护高度依赖于沙箱化的实现方法。
因此,现在还没有针对所有沙箱化负载的建议策略。