diff --git a/content/zh/docs/concepts/architecture/cloud-controller.md b/content/zh/docs/concepts/architecture/cloud-controller.md index 33d660b243..c962b5f5f0 100644 --- a/content/zh/docs/concepts/architecture/cloud-controller.md +++ b/content/zh/docs/concepts/architecture/cloud-controller.md @@ -1,11 +1,11 @@ --- -title: 云控制器管理器的基础概念 +title: 云控制器管理器 content_type: concept weight: 40 --- diff --git a/content/zh/docs/concepts/architecture/control-plane-node-communication.md b/content/zh/docs/concepts/architecture/control-plane-node-communication.md index 0501507898..3e2beaafd9 100644 --- a/content/zh/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/zh/docs/concepts/architecture/control-plane-node-communication.md @@ -15,7 +15,7 @@ aliases: 本文列举控制面节点(确切说是 API 服务器)和 Kubernetes 集群之间的通信路径。 目的是为了让用户能够自定义他们的安装,以实现对网络配置的加固,使得集群能够在不可信的网络上 @@ -24,14 +24,15 @@ This document catalogs the communication paths between the control plane (really ## 节点到控制面 Kubernetes 采用的是中心辐射型(Hub-and-Spoke)API 模式。 -所有从集群(或所运行的 Pods)发出的 API 调用都终止于 apiserver(其它控制面组件都没有被设计为可暴露远程服务)。 -apiserver 被配置为在一个安全的 HTTPS 端口(443)上监听远程连接请求, +所有从集群(或所运行的 Pods)发出的 API 调用都终止于 apiserver。 +其它控制面组件都没有被设计为可暴露远程服务。 +apiserver 被配置为在一个安全的 HTTPS 端口(通常为 443)上监听远程连接请求, 并启用一种或多种形式的客户端[身份认证](/zh/docs/reference/access-authn-authz/authentication/)机制。 一种或多种客户端[鉴权机制](/zh/docs/reference/access-authn-authz/authorization/)应该被启用, 特别是在允许使用[匿名请求](/zh/docs/reference/access-authn-authz/authentication/#anonymous-requests) @@ -84,7 +85,7 @@ The connections from the apiserver to the kubelet are used for: * Attaching (through kubectl) to running pods. * Providing the kubelet's port-forwarding functionality. -These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks, and **unsafe** to run over untrusted and/or public networks. 
+These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle attacks and **unsafe** to run over untrusted and/or public networks. --> ### API 服务器到 kubelet @@ -121,7 +122,6 @@ kubelet 之间使用 [SSH 隧道](#ssh-tunnels)。 The connections from the apiserver to a node, pod, or service default to plain HTTP connections and are therefore neither authenticated nor encrypted. They can be run over a secure HTTPS connection by prefixing `https:` to the node, pod, or service name in the API URL, but they will not validate the certificate provided by the HTTPS endpoint nor provide client credentials so while the connection will be encrypted, it will not provide any guarantees of integrity. These connections **are not currently safe** to run over untrusted and/or public networks. --> - ### apiserver 到节点、Pod 和服务 从 apiserver 到节点、Pod 或服务的连接默认为纯 HTTP 方式,因此既没有认证,也没有加密。 @@ -153,7 +153,7 @@ Konnectivity 服务是对此通信通道的替代品。 {{< feature-state for_k8s_version="v1.18" state="beta" >}} -As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server and the Konnectivity agents, running in the control plane network and the nodes network respectively. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. +As a replacement to the SSH tunnels, the Konnectivity service provides TCP level proxy for the control plane to cluster communication. The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. 
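For reference, the Konnectivity server described in this hunk is wired to the apiserver through an egress selector configuration file. A minimal sketch follows; the file path and UDS socket name are assumptions for illustration, not values from this PR — see the linked Konnectivity service task page for the authoritative setup:

```yaml
# Hypothetical EgressSelectorConfiguration passed to kube-apiserver via
# --egress-selector-config-file; the socket path below is an assumed example.
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# "cluster" routes control-plane-to-node traffic through the Konnectivity server
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```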
After enabling the Konnectivity service, all control plane to nodes traffic goes through these connections. Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster. diff --git a/content/zh/docs/concepts/architecture/nodes.md b/content/zh/docs/concepts/architecture/nodes.md index 8536916268..280b705e84 100644 --- a/content/zh/docs/concepts/architecture/nodes.md +++ b/content/zh/docs/concepts/architecture/nodes.md @@ -17,9 +17,10 @@ weight: 10 Kubernetes 通过将容器放入在节点(Node)上运行的 Pod 中来执行你的工作负载。 节点可以是一个虚拟机或者物理机器,取决于所在的集群配置。 -每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务, -这些 Pods 由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。 +每个节点包含运行 {{< glossary_tooltip text="Pods" term_id="pod" >}} 所需的服务; +这些节点由 {{< glossary_tooltip text="控制面" term_id="control-plane" >}} 负责管理。 通常集群中会有若干个节点;而在一个学习用或者资源受限的环境中,你的集群中也可能 只有一个节点。 @@ -556,17 +557,6 @@ that the scheduler won't place Pods onto unhealthy nodes. 
{{< glossary_tooltip text="污点" term_id="taint" >}}。 这意味着调度器不会将 Pod 调度到不健康的节点上。 - -{{< caution>}} -`kubectl cordon` 会将节点标记为“不可调度(Unschedulable)”。 -此操作的副作用是,服务控制器会将该节点从负载均衡器中之前的目标节点列表中移除, -从而使得来自负载均衡器的网络请求不会到达被保护起来的节点。 -{{< /caution>}} - ## 节点体面关闭 {#graceful-node-shutdown} -{{< feature-state state="alpha" for_k8s_version="v1.20" >}} +{{< feature-state state="beta" for_k8s_version="v1.21" >}} -如果你启用了 `GracefulNodeShutdown` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/), -那么 kubelet 尝试检测节点的系统关闭事件并终止在节点上运行的 Pod。 +kubelet 会尝试检测节点系统关闭事件并终止在节点上运行的 Pods。 + 在节点终止期间,kubelet 保证 Pod 遵从常规的 [Pod 终止流程](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination)。 -当启用了 `GracefulNodeShutdown` 特性门控时, -kubelet 使用 [systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) -在给定的期限内延迟节点关闭。在关闭过程中,kubelet 分两个阶段终止 Pod: +体面节点关闭特性依赖于 systemd,因为它要利用 +[systemd 抑制器锁](https://www.freedesktop.org/wiki/Software/systemd/inhibit/) +在给定的期限内延迟节点关闭。 + + +体面节点关闭特性受 `GracefulNodeShutdown` +[特性门控](/docs/reference/command-line-tools-reference/feature-gates/) +控制,在 1.21 版本中是默认启用的。 + + +注意,默认情况下,下面描述的两个配置选项,`ShutdownGracePeriod` 和 +`ShutdownGracePeriodCriticalPods` 都是被设置为 0 的,因此不会激活 +体面节点关闭功能。 +要激活此功能特性,这两个 kubelet 配置选项要适当配置,并设置为非零值。 +在体面关闭节点过程中,kubelet 分两个阶段来终止 Pods: + 1. 终止在节点上运行的常规 Pod。 2. 终止在节点上运行的[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。 @@ -658,9 +675,11 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/ * `ShutdownGracePeriod`: * Specifies the total duration that the node should delay the shutdown by. This is the total grace period for pod termination for both regular and [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical). 
* `ShutdownGracePeriodCriticalPods`: - * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This should be less than `ShutdownGracePeriod`. + * Specifies the duration used to terminate [critical pods](/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical) during a node shutdown. This value should be less than `ShutdownGracePeriod`. --> -节点体面关闭的特性对应两个 [`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项: +节点体面关闭的特性对应两个 +[`KubeletConfiguration`](/zh/docs/tasks/administer-cluster/kubelet-config-file/) 选项: + * `ShutdownGracePeriod`: * 指定节点应延迟关闭的总持续时间。此时间是 Pod 体面终止的时间总和,不区分常规 Pod 还是 [关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。 @@ -670,10 +689,16 @@ Graceful Node Shutdown feature is configured with two [`KubeletConfiguration`](/ 的持续时间。该值应小于 `ShutdownGracePeriod`。 -例如,如果设置了 `ShutdownGracePeriod=30s` 和 `ShutdownGracePeriodCriticalPods=10s`,则 kubelet 将延迟 30 秒关闭节点。 -在关闭期间,将保留前 20(30 - 10)秒用于体面终止常规 Pod,而保留最后 10 秒用于终止 +例如,如果设置了 `ShutdownGracePeriod=30s` 和 `ShutdownGracePeriodCriticalPods=10s`, +则 kubelet 将延迟 30 秒关闭节点。 +在关闭期间,将保留前 20(30 - 10)秒用于体面终止常规 Pod, +而保留最后 10 秒用于终止 [关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。 ## {{% heading "whatsnext" %}} @@ -685,8 +710,10 @@ For example, if `ShutdownGracePeriod=30s`, and `ShutdownGracePeriodCriticalPods= section of the architecture design document. * Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). 
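The `ShutdownGracePeriod=30s` / `ShutdownGracePeriodCriticalPods=10s` example in this hunk corresponds to a kubelet configuration file along these lines (a sketch; field casing assumed to follow the `KubeletConfiguration` v1beta1 schema):

```yaml
# Sketch of a kubelet configuration enabling graceful node shutdown:
# 30s total shutdown delay, of which the final 10s is reserved for
# terminating critical pods (20s remain for regular pods).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```

Both values default to 0, which leaves the feature inactive even when the `GracefulNodeShutdown` feature gate is enabled.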
--> -* 了解有关节点[组件](/zh/docs/concepts/overview/components/#node-components) -* 阅读[节点的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core) -* 阅读架构设计文档中有关[节点](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node)的章节 -* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/) +* 了解有关节点[组件](/zh/docs/concepts/overview/components/#node-components)。 +* 阅读 [Node 的 API 定义](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core)。 +* 阅读架构设计文档中有关 + [节点](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) + 的章节。 +* 了解[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 diff --git a/content/zh/docs/concepts/overview/components.md b/content/zh/docs/concepts/overview/components.md index 6d3c0d7281..6f1051d142 100644 --- a/content/zh/docs/concepts/overview/components.md +++ b/content/zh/docs/concepts/overview/components.md @@ -52,18 +52,21 @@ The control plane's components make global decisions about the cluster (for exam --> ## 控制平面组件(Control Plane Components) {#control-plane-components} -控制平面的组件对集群做出全局决策(比如调度),以及检测和响应集群事件(例如,当不满足部署的 `replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。 +控制平面的组件对集群做出全局决策(比如调度),以及检测和响应集群事件(例如,当不满足部署的 +`replicas` 字段时,启动新的 {{< glossary_tooltip text="pod" term_id="pod">}})。 控制平面组件可以在集群中的任何节点上运行。 -然而,为了简单起见,设置脚本通常会在同一个计算机上启动所有控制平面组件,并且不会在此计算机上运行用户容器。 -请参阅[构建高可用性集群](/zh/docs/setup/production-environment/tools/kubeadm/high-availability/) -中对于多主机 VM 的设置示例。 +然而,为了简单起见,设置脚本通常会在同一个计算机上启动所有控制平面组件, +并且不会在此计算机上运行用户容器。 +请参阅[使用 kubeadm 构建高可用性集群](/zh/docs/setup/production-environment/tools/kubeadm/high-availability/) +中关于多 VM 控制平面设置的示例。 ### kube-apiserver @@ -203,7 +206,8 @@ Kubernetes 启动的容器自动将此 DNS 服务器包含在其 DNS 搜索列 --> ### Web 界面(仪表盘) -[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/) 是Kubernetes 集群的通用的、基于 Web 的用户界面。 
+[Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard/) +是 Kubernetes 集群通用的、基于 Web 的用户界面。 它使用户可以管理集群中运行的应用程序以及集群本身并进行故障排除。