Merge pull request #40801 from my-git9/path-6384
[zh-cn] sync configure-liveness-readiness-startup-probes.md security-context.md and assign-memory-resource.md
This commit is contained in: commit d147d57aea

@@ -98,8 +98,8 @@ for the Pod:
-->

## 指定内存请求和限制 {#specify-a-memory-request-and-a-memory-limit}

-要为容器指定内存请求,请在容器资源清单中包含 `resources:requests` 字段。
-同理,要指定内存限制,请包含 `resources:limits`。
+要为容器指定内存请求,请在容器资源清单中包含 `resources: requests` 字段。
+同理,要指定内存限制,请包含 `resources: limits`。

在本练习中,你将创建一个拥有一个容器的 Pod。
容器将会请求 100 MiB 内存,并且内存会被限制在 200 MiB 以内。
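下面给出一个与这段说明对应的最小清单草案(仅作示意:Pod 名称、容器名和镜像均为假设值,并非本页练习实际引用的清单文件):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo            # 假设的示例名称
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr      # 假设的示例容器名
    image: nginx               # 任意镜像,仅作占位
    resources:
      requests:
        memory: "100Mi"        # 内存请求:100 MiB
      limits:
        memory: "200Mi"        # 内存限制:200 MiB
```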

@@ -544,6 +544,8 @@ kubectl delete namespace mem-example
* [Configure a Pod Quota for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)

* [Configure Quotas for API Objects](/docs/tasks/administer-cluster/quota-api-object/)

+* [Resize CPU and Memory Resources assigned to Containers](/docs/tasks/configure-pod-container/resize-container-resources/)
-->

### 集群管理员扩展阅读 {#for-cluster-administrators}

@@ -554,4 +556,5 @@ kubectl delete namespace mem-example
* [为命名空间配置内存和 CPU 配额](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)
* [配置命名空间下 Pod 总数](/zh-cn/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/)
* [配置 API 对象配额](/zh-cn/docs/tasks/administer-cluster/quota-api-object/)
+* [调整分配给容器的 CPU 和内存资源的大小](/docs/tasks/configure-pod-container/resize-container-resources/)

@@ -31,7 +31,6 @@ A common pattern for liveness probes is to use the same low-cost HTTP endpoint
as for readiness probes, but with a higher failureThreshold. This ensures that the pod
is observed as not-ready for some period of time before it is hard killed.
-->

存活探针的常见模式是为就绪探针使用相同的低成本 HTTP 端点,但具有更高的 failureThreshold。
这样可以确保在硬性终止 Pod 之前,将观察到 Pod 在一段时间内处于非就绪状态。
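下面是这种模式的一个示意片段(端口、路径与阈值均为假设值,并非本页示例清单的一部分):两个探针复用同一个低成本的 `/healthz` 端点,但存活探针的 `failureThreshold` 更高。

```yaml
# 示意:就绪探针与存活探针使用同一端点,存活探针的失败阈值更高
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 5
  failureThreshold: 3          # 较快地把 Pod 标记为未就绪
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 5
  failureThreshold: 10         # 在硬性终止之前留出更长的未就绪观察期
```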

@@ -75,8 +74,9 @@ scalable; and increased workload on remaining pods due to some failed pods.
Understand the difference between readiness and liveness probes and when to apply them for your app.
-->
错误的存活探针可能会导致级联故障。
-这会导致在高负载下容器重启;例如由于应用程序无法扩展,导致客户端请求失败;以及由于某些 Pod 失败而导致剩余 Pod 的工作负载增加。
-了解就绪探针和存活探针之间的区别,以及何时为应用程序配置使用它们非常重要。
+这会导致在高负载下容器重启;例如由于应用程序无法扩展,导致客户端请求失败;以及由于某些
+Pod 失败而导致剩余 Pod 的工作负载增加。了解就绪探针和存活探针之间的区别,
+以及何时为应用程序配置使用它们非常重要。
{{< /note >}}

## {{% heading "prerequisites" %}}

@@ -247,7 +247,7 @@ and restarts it.
`periodSeconds` 字段指定了 kubelet 每隔 3 秒执行一次存活探测。
`initialDelaySeconds` 字段告诉 kubelet 在执行第一次探测前应该等待 3 秒。
kubelet 会向容器内运行的服务(服务在监听 8080 端口)发送一个 HTTP GET 请求来执行探测。
如果服务器上 `/healthz` 路径下的处理程序返回成功代码,则 kubelet 认为容器是健康存活的。
如果处理程序返回失败代码,则 kubelet 会杀死这个容器并将其重启。
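这段描述对应的探针配置大致如下(仅为示意性草案,容器名与镜像为假设值;完整清单请以本页引用的示例文件为准):

```yaml
spec:
  containers:
  - name: liveness                                # 假设的容器名
    image: registry.example/liveness-demo:latest  # 假设的镜像
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz          # kubelet 对该路径执行 HTTP GET 探测
        port: 8080
      initialDelaySeconds: 3    # 第一次探测前等待 3 秒
      periodSeconds: 3          # 每 3 秒探测一次
```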

<!--

@@ -262,7 +262,7 @@ returns a status of 200. After that, the handler returns a status of 500.
-->
返回大于或等于 200 并且小于 400 的任何代码都标示成功,其它返回代码都标示失败。

-你可以访问 [server.go](https://github.com/kubernetes/kubernetes/blob/master/test/images/agnhost/liveness/server.go)。
+你可以访问 [server.go](https://github.com/kubernetes/kubernetes/blob/master/test/images/agnhost/liveness/server.go)
+阅读服务的源码。
容器存活期间的最开始 10 秒中,`/healthz` 处理程序返回 200 的状态码。
之后处理程序返回 500 的状态码。

@@ -380,11 +380,9 @@ kubectl describe pod goproxy
{{< feature-state for_k8s_version="v1.24" state="beta" >}}

<!--
-If your application implements [gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
-kubelet can be configured to use it for application liveness checks.
-You must enable the `GRPCContainerProbe`
-[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-in order to configure checks that rely on gRPC.
+If your application implements the [gRPC Health Checking Protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md),
+this example shows how to configure Kubernetes to use it for application liveness checks.
+Similarly you can configure readiness and startup probes.

Here is an example manifest:
-->

@@ -395,22 +393,40 @@ kubelet 可以配置为使用该协议来执行应用存活性检查。
-[特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
-才能配置依赖于 gRPC 的检查机制。

+这个例子展示了如何配置 Kubernetes 以将其用于应用程序的存活性检查。
+类似地,你可以配置就绪探针和启动探针。

下面是一个示例清单:

{{< codenew file="pods/probe/grpc-liveness.yaml" >}}

<!--
-To use a gRPC probe, `port` must be configured. If the health endpoint is configured
-on a non-default service, you must also specify the `service`.
+To use a gRPC probe, `port` must be configured. If you want to distinguish probes of different types
+and probes for different features you can use the `service` field.
+You can set `service` to the value `liveness` and make your gRPC Health Checking endpoint
+respond to this request differently than when you set `service` to `readiness`.
+This lets you use the same endpoint for different kinds of container health check
+(rather than needing to listen on two different ports).
+If you want to specify your own custom service name and also specify a probe type,
+the Kubernetes project recommends that you use a name that concatenates
+those. For example: `myservice-liveness` (using `-` as a separator).
-->
-要使用 gRPC 探针,必须配置 `port` 属性。如果健康状态端点配置在非默认服务之上,
-你还必须设置 `service` 属性。
+要使用 gRPC 探针,必须配置 `port` 属性。
+如果要区分不同类型的探针和不同功能的探针,可以使用 `service` 字段。
+你可以将 `service` 设置为 `liveness`,并使你的 gRPC
+健康检查端点对该请求的响应与将 `service` 设置为 `readiness` 时不同。
+这使你可以使用相同的端点进行不同类型的容器健康检查(而不需要在两个不同的端口上侦听)。
+如果你想指定自己的自定义服务名称并指定探测类型,Kubernetes
+项目建议你使用一个可以关联服务和探测类型的名称来命名。
+例如:`myservice-liveness`(使用 `-` 作为分隔符)。
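下面是一个示意性片段(并非本页引用的示例清单;端口号与服务名均为假设值),演示如何借助 `service` 字段用同一个端点区分存活与就绪探测:

```yaml
# 示意:同一 gRPC 健康检查端点按 service 取值区分探针类型
livenessProbe:
  grpc:
    port: 2379                     # 假设的 gRPC 端口
    service: myservice-liveness    # 按上文建议以 "-" 连接服务名与探针类型
  initialDelaySeconds: 10
readinessProbe:
  grpc:
    port: 2379
    service: myservice-readiness
  periodSeconds: 10
```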

{{< note >}}
<!--
-Unlike HTTP and TCP probes, named ports cannot be used and custom host cannot be configured.
+Unlike HTTP and TCP probes, you cannot specify the healthcheck port by name, and you
+cannot configure a custom hostname.
-->
-与 HTTP 和 TCP 探针不同,gRPC 探测不能使用命名端口或定制主机。
+与 HTTP 和 TCP 探针不同,gRPC 探测不能按名称指定健康检查端口,
+也不能自定义主机名。
{{< /note >}}

<!--

@@ -580,7 +596,7 @@ Readiness probes runs on the container during its whole lifecycle.
<!--
Liveness probes *do not* wait for readiness probes to succeed. If you want to wait before executing a liveness probe you should use initialDelaySeconds or a startupProbe.
-->
-存活探针 **不等待** 就绪性探针成功。
+存活探针**不等待**就绪性探针成功。
如果要在执行存活探针之前等待,应该使用 `initialDelaySeconds` 或 `startupProbe`。
{{< /caution >}}
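针对上面的提醒,这里给出一个示意片段(端口与阈值均为假设值):用 `startupProbe` 为启动缓慢的应用留出时间,在启动探针成功之前,存活探针不会执行。

```yaml
# 示意:启动探针最多允许 30 * 10 = 300 秒的启动时间;
# 在 startupProbe 成功之前,livenessProbe 不会生效
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
```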

@@ -751,8 +767,8 @@ in the range 1 to 65535.
[HTTP Probes](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#httpgetaction-v1-core)
允许针对 `httpGet` 配置额外的字段(参见列表后的示意片段):

-* `host`:连接使用的主机名,默认是 Pod 的 IP。也可以在 HTTP 头中设置 “Host” 来代替。
-* `scheme` :用于设置连接主机的方式(HTTP 还是 HTTPS)。默认是 "HTTP"。
+* `host`:连接使用的主机名,默认是 Pod 的 IP。也可以在 HTTP 头中设置 "Host" 来代替。
+* `scheme`:用于设置连接主机的方式(HTTP 还是 HTTPS)。默认是 "HTTP"。
* `path`:访问 HTTP 服务的路径。默认值为 "/"。
* `httpHeaders`:请求中自定义的 HTTP 头。HTTP 头字段允许重复。
* `port`:访问容器的端口号或者端口名。如果是数字,则必须在 1~65535 之间。
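下面的示意片段把这些字段放在一起(取值均为假设,仅用于说明各字段的位置):

```yaml
livenessProbe:
  httpGet:
    # host 不设置时默认使用 Pod 的 IP
    scheme: HTTPS               # HTTP 或 HTTPS,默认是 HTTP
    path: /healthz              # 默认值为 "/"
    port: 8443                  # 端口号(1~65535)或端口名
    httpHeaders:                # 自定义请求头,允许重复
    - name: Accept
      value: application/json
```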

@@ -840,7 +856,7 @@ to resolve it.
-->
### 探针层面的 `terminationGracePeriodSeconds`

-{{< feature-state for_k8s_version="v1.25" state="beta" >}}
+{{< feature-state for_k8s_version="v1.27" state="stable" >}}

<!--
Prior to release 1.21, the pod-level `terminationGracePeriodSeconds` was used

@@ -871,7 +887,6 @@ by default. For users choosing to disable this feature, please note the following:
* The `ProbeTerminationGracePeriod` feature gate is only available on the API Server.
  The kubelet always honors the probe-level `terminationGracePeriodSeconds` field if
  it is present on a Pod.
-->
{{< note >}}
从 Kubernetes 1.25 开始,默认启用 `ProbeTerminationGracePeriod` 特性。
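下面是一个示意片段(数值与名称均为假设),展示探针层面的 `terminationGracePeriodSeconds` 如何覆盖 Pod 层面的设置:

```yaml
spec:
  terminationGracePeriodSeconds: 3600    # Pod 层面:用于常规的终止流程
  containers:
  - name: test                           # 假设的容器名
    image: registry.example/app:latest   # 假设的镜像
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 1
      terminationGracePeriodSeconds: 60  # 探针层面:存活探测失败后最多等待 60 秒即强制终止
```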

@@ -710,7 +710,7 @@ To assign SELinux labels, the SELinux security module must be loaded on the host
-->

### 高效重打 SELinux 卷标签

-{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}

<!--
By default, the container runtime recursively assigns SELinux label to all

@@ -727,10 +727,11 @@ To benefit from this speedup, all these conditions must be met:
要使用这项加速功能,必须满足下列条件(参见条件列表之后的示意清单):

<!--
-* Alpha feature gates `ReadWriteOncePod` and `SELinuxMountReadWriteOncePod` must
-  be enabled.
+* The [feature gates](/docs/reference/command-line-tools-reference/feature-gates/) `ReadWriteOncePod`
+  and `SELinuxMountReadWriteOncePod` must be enabled.
-->
-* 必须启用 Alpha 特性门控 `ReadWriteOncePod` 和 `SELinuxMountReadWriteOncePod`。
+* 必须启用 `ReadWriteOncePod` 和 `SELinuxMountReadWriteOncePod`
+  [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。

<!--
* Pod must use PersistentVolumeClaim with `accessModes: ["ReadWriteOncePod"]`.

@@ -750,11 +751,18 @@ To benefit from this speedup, all these conditions must be met:
-* If you use a volume backed by a CSI driver, that CSI driver must announce that it
-  supports mounting with `-o context` by setting `spec.seLinuxMount: true` in
-  its CSIDriver instance.
+* The corresponding PersistentVolume must be either:
+  * A volume that uses the legacy in-tree `iscsi`, `rbd` or `fc` volume type.
+  * Or a volume that uses a {{< glossary_tooltip text="CSI" term_id="csi" >}} driver.
+    The CSI driver must announce that it supports mounting with `-o context` by setting
+    `spec.seLinuxMount: true` in its CSIDriver instance.
-->
-* 对应的 PersistentVolume 必须是使用 {{< glossary_tooltip text="CSI" term_id="csi" >}}
-  驱动程序的卷,或者是传统的 `iscsi` 卷类型的卷。
-* 如果使用基于 CSI 驱动程序的卷,CSI 驱动程序必须能够通过在 CSIDriver
-  实例中设置 `spec.seLinuxMount: true` 以支持 `-o context` 挂载。
+* 对应的 PersistentVolume 必须是:
+  * 使用传统树内(In-Tree)`iscsi`、`rbd` 或 `fc` 卷类型的卷。
+  * 或者是使用 {{< glossary_tooltip text="CSI" term_id="csi" >}} 驱动程序的卷。
+    CSI 驱动程序必须能够通过在 CSIDriver 实例中设置 `spec.seLinuxMount: true`
+    以支持 `-o context` 挂载。
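作为补充,下面给出一个满足上述条件的示意清单草案(CSI 驱动、StorageClass、PVC、Pod 名称与镜像均为假设值,仅用于说明各字段的位置):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.com              # 假设的 CSI 驱动
spec:
  seLinuxMount: true                 # 声明支持 -o context 挂载
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-pvc                     # 假设的 PVC 名称
spec:
  accessModes: ["ReadWriteOncePod"]
  storageClassName: fast-csi         # 假设的 StorageClass,由上述 CSI 驱动提供
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo                 # 假设的 Pod 名称
spec:
  securityContext:
    seLinuxOptions:                  # 为 Pod 指定 SELinux 标签
      level: "s0:c123,c456"
  containers:
  - name: app
    image: registry.example/app:latest   # 假设的镜像
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fast-pvc
```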

<!--
For any other volume types, SELinux relabelling happens another way: the container

@@ -767,19 +775,18 @@ The more files and directories in the volume, the longer that relabelling takes.
卷中的文件和目录越多,重打标签需要耗费的时间就越长。

{{< note >}}
+<!-- remove after Kubernetes v1.30 is released -->
<!--
-In Kubernetes 1.25, the kubelet loses track of volume labels after restart. In
-other words, then kubelet may refuse to start Pods with errors similar to "conflicting
-SELinux labels of volume", while there are no conflicting labels in Pods. Make sure
-nodes are
-[fully drained](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)
-before restarting kubelet.
+If you are running Kubernetes v1.25, refer to the v1.25 version of this task page:
+[Configure a Security Context for a Pod or Container](https://v1-25.docs.kubernetes.io/docs/tasks/configure-pod-container/security-context/) (v1.25).
+There is an important note in that documentation about a situation where the kubelet
+can lose track of volume labels after restart. This deficiency has been fixed
+in Kubernetes 1.26.
-->
-在 Kubernetes 1.25 中,kubelet 在重启后会丢失对卷标签的追踪记录。
-换言之,kubelet 可能会拒绝启动 Pod,原因类似于 “conflicting
-SELinux labels of volume”,
-但实际上 Pod 中并没有冲突的标签。在重启 kubelet
-之前确保节点已被[完全腾空](/zh-cn/docs/tasks/administer-cluster/safely-drain-node/)。
+如果你的 Kubernetes 版本是 v1.25,请参阅此任务页面的 v1.25 版本:
+[为 Pod 或 Container 配置安全上下文](https://v1-25.docs.kubernetes.io/docs/tasks/configure-pod-container/security-context/)(v1.25)。
+该文档中有一个重要的说明:kubelet 在重启后会丢失对卷标签的追踪记录。
+这个缺陷已经在 Kubernetes 1.26 中修复。
{{< /note >}}

<!--