Merge pull request #36316 from tengqm/resync-impl-details

[zh] Resync kubeadm implementation details
This commit is contained in:
Kubernetes Prow Robot 2022-08-28 20:46:35 -07:00 committed by GitHub
commit 663fd971a5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
1 changed file with 1340 additions and 1286 deletions


@ -11,6 +11,7 @@ title: Implementation details
content_type: concept content_type: concept
weight: 100 weight: 100
--> -->
<!-- overview --> <!-- overview -->
{{< feature-state for_k8s_version="v1.10" state="stable" >}} {{< feature-state for_k8s_version="v1.10" state="stable" >}}
@ -20,7 +21,8 @@ weight: 100
best-practice but bare Kubernetes cluster from scratch. best-practice but bare Kubernetes cluster from scratch.
However, it might not be obvious _how_ kubeadm does that. However, it might not be obvious _how_ kubeadm does that.
--> -->
`kubeadm init``kubeadm join` 结合在一起提供了良好的用户体验,因为从头开始创建实践最佳而配置最基本的 Kubernetes 集群。 `kubeadm init``kubeadm join` 结合在一起提供了良好的用户体验,
因为它能够从头创建一个符合最佳实践而又最精简的 Kubernetes 集群。
但是kubeadm **如何** 做到这一点可能并不明显。 但是kubeadm **如何** 做到这一点可能并不明显。
<!-- <!--
@ -30,6 +32,7 @@ knowledge on Kubernetes cluster best practices.
本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。 本文档提供了更多幕后的详细信息,旨在分享有关 Kubernetes 集群最佳实践的知识。
<!-- body --> <!-- body -->
<!-- <!--
## Core design principles ## Core design principles
--> -->
@ -47,14 +50,6 @@ knowledge on Kubernetes cluster best practices.
- lock-down the kubelet API - lock-down the kubelet API
- locking down access to the API for system components like the kube-proxy and CoreDNS - locking down access to the API for system components like the kube-proxy and CoreDNS
- locking down what a Bootstrap Token can access - locking down what a Bootstrap Token can access
- **User-friendly**: The user should not have to run anything more than a couple of commands:
- `kubeadm init`
- `export KUBECONFIG=/etc/kubernetes/admin.conf`
- `kubectl apply -f <network-of-choice.yaml>`
- `kubeadm join --token <token> <endpoint>:<port>`
- **Extendable**:
- It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
- It should provide the possibility to use a config file for customizing various parameters
--> -->
- **安全的**:它应采用最新的最佳实践,例如: - **安全的**:它应采用最新的最佳实践,例如:
- 实施 RBAC 访问控制 - 实施 RBAC 访问控制
@ -64,6 +59,16 @@ knowledge on Kubernetes cluster best practices.
- 锁定 kubelet API - 锁定 kubelet API
- 锁定对系统组件(例如 kube-proxy 和 CoreDNS的 API 的访问 - 锁定对系统组件(例如 kube-proxy 和 CoreDNS的 API 的访问
- 锁定启动引导令牌Bootstrap Token可以访问的内容 - 锁定启动引导令牌Bootstrap Token可以访问的内容
<!--
- **User-friendly**: The user should not have to run anything more than a couple of commands:
- `kubeadm init`
- `export KUBECONFIG=/etc/kubernetes/admin.conf`
- `kubectl apply -f <network-of-choice.yaml>`
- `kubeadm join --token <token> <endpoint>:<port>`
- **Extendable**:
- It should _not_ favor any particular network provider. Configuring the cluster network is out-of-scope
- It should provide the possibility to use a config file for customizing various parameters
-->
- **用户友好**:用户只需要运行几个命令即可: - **用户友好**:用户只需要运行几个命令即可:
- `kubeadm init` - `kubeadm init`
- `export KUBECONFIG=/etc/kubernetes/admin.conf` - `export KUBECONFIG=/etc/kubernetes/admin.conf`
@ -95,12 +100,15 @@ Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,
<!-- <!--
- `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests. - `/etc/kubernetes/manifests` as the path where kubelet should look for static Pod manifests.
Names of static Pod manifests are: Names of static Pod manifests are:
-->
- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
- `etcd.yaml` - `etcd.yaml`
- `kube-apiserver.yaml` - `kube-apiserver.yaml`
- `kube-controller-manager.yaml` - `kube-controller-manager.yaml`
- `kube-scheduler.yaml` - `kube-scheduler.yaml`
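作为示意(假设使用 kubeadm 的默认路径),可以在控制平面节点上这样查看 kubelet 监视的静态 Pod 清单:

```bash
# 列出 kubelet 监视目录中的静态 Pod 清单(示意)
ls /etc/kubernetes/manifests
# 预期输出类似于:
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```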
<!--
- `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane - `/etc/kubernetes/` as the path where kubeconfig files with identities for control plane
components are stored. Names of kubeconfig files are: components are stored. Names of kubeconfig files are:
@ -108,7 +116,14 @@ Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,
- `controller-manager.conf` - `controller-manager.conf`
- `scheduler.conf` - `scheduler.conf`
- `admin.conf` for the cluster admin and kubeadm itself - `admin.conf` for the cluster admin and kubeadm itself
-->
- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
- `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf`)
- `controller-manager.conf`
- `scheduler.conf`
- `admin.conf` 用于集群管理员和 kubeadm 本身
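例如(仅作示意),集群管理员可以按照上文的路径约定使用 `admin.conf` 访问集群:

```bash
# 使用 kubeadm 生成的管理员 kubeconfig 访问集群(示意)
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
# 查看 kubeconfig 中内嵌的身份信息(证书内容会被隐去)
kubectl config view
```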
<!--
- Names of certificates and key files : - Names of certificates and key files :
- `ca.crt`, `ca.key` for the Kubernetes certificate authority - `ca.crt`, `ca.key` for the Kubernetes certificate authority
@ -119,18 +134,6 @@ Kubernetes 目录 `/etc/kubernetes` 在应用程序中是一个常量,
- `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority - `front-proxy-ca.crt`, `front-proxy-ca.key` for the front proxy certificate authority
- `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client - `front-proxy-client.crt`, `front-proxy-client.key` for the front proxy client
--> -->
- `/etc/kubernetes/manifests` 作为 kubelet 查找静态 Pod 清单的路径。静态 Pod 清单的名称为:
- `etcd.yaml`
- `kube-apiserver.yaml`
- `kube-controller-manager.yaml`
- `kube-scheduler.yaml`
- `/etc/kubernetes/` 作为带有控制平面组件身份标识的 kubeconfig 文件的路径。kubeconfig 文件的名称为:
- `kubelet.conf` (在 TLS 引导时名称为 `bootstrap-kubelet.conf`)
- `controller-manager.conf`
- `scheduler.conf`
- `admin.conf` 用于集群管理员和 kubeadm 本身
- 证书和密钥文件的名称: - 证书和密钥文件的名称:
- `ca.crt`、`ca.key` 用于 Kubernetes 证书颁发机构 - `ca.crt`、`ca.key` 用于 Kubernetes 证书颁发机构
- `apiserver.crt`、`apiserver.key` 用于 API 服务器证书 - `apiserver.crt`、`apiserver.key` 用于 API 服务器证书
@ -200,17 +203,6 @@ Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并
- [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew) - [error] if kubelet version is at least one minor higher than the required controlplane version (unsupported version skew)
- [warning] if kubelet service does not exist or if it is disabled - [warning] if kubelet service does not exist or if it is disabled
- [warning] if firewalld is active - [warning] if firewalld is active
- [error] if API server bindPort or ports 10250/10251/10252 are used
- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
- [Error] if swap is on
- [Error] if `conntrack`, `ip`, `iptables`, `mount`, `nsenter` commands are not present in the command path
- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
- [warning] if extra arg flags for API server, controller manager, scheduler contains some invalid options
- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
- [warning] if connection to services subnet goes through proxy (only first address checked)
- [warning] if connection to Pods subnet goes through proxy (only first address checked)
--> -->
- [错误] 如果用户不是 root 用户 - [错误] 如果用户不是 root 用户
- [错误] 如果机器主机名不是有效的 DNS 子域 - [错误] 如果机器主机名不是有效的 DNS 子域
@ -219,12 +211,27 @@ Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并
- [错误] 如果 kubelet 版本比所需的控制平面版本至少高一个小版本(不支持的版本偏差) - [错误] 如果 kubelet 版本比所需的控制平面版本至少高一个小版本(不支持的版本偏差)
- [警告] 如果 kubelet 服务不存在或已被禁用 - [警告] 如果 kubelet 服务不存在或已被禁用
- [警告] 如果 firewalld 处于活动状态 - [警告] 如果 firewalld 处于活动状态
<!--
- [error] if API server bindPort or ports 10250/10251/10252 are used
- [Error] if `/etc/kubernetes/manifest` folder already exists and it is not empty
- [Error] if `/proc/sys/net/bridge/bridge-nf-call-iptables` file does not exist/does not contain 1
- [Error] if advertise address is ipv6 and `/proc/sys/net/bridge/bridge-nf-call-ip6tables` does not exist/does not contain 1.
- [Error] if swap is on
- [Error] if `conntrack`, `ip`, `iptables`, `mount`, `nsenter` commands are not present in the command path
-->
- [错误] 如果 API 服务器绑定的端口或 10250/10251/10252 端口已被占用 - [错误] 如果 API 服务器绑定的端口或 10250/10251/10252 端口已被占用
- [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空 - [错误] 如果 `/etc/kubernetes/manifest` 文件夹已经存在并且不为空
- [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1 - [错误] 如果 `/proc/sys/net/bridge/bridge-nf-call-iptables` 文件不存在或不包含 1
- [错误] 如果公布地址advertise address是 IPv6并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1 - [错误] 如果公布地址advertise address是 IPv6并且 `/proc/sys/net/bridge/bridge-nf-call-ip6tables` 不存在或不包含 1
- [错误] 如果启用了交换分区 - [错误] 如果启用了交换分区
- [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令 - [错误] 如果命令路径中没有 `conntrack`、`ip`、`iptables`、`mount`、`nsenter` 命令
<!--
- [warning] if `ebtables`, `ethtool`, `socat`, `tc`, `touch`, `crictl` commands are not present in the command path
- [warning] if extra arg flags for API server, controller manager, scheduler contains some invalid options
- [warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through proxy
- [warning] if connection to services subnet goes through proxy (only first address checked)
- [warning] if connection to Pods subnet goes through proxy (only first address checked)
-->
- [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令 - [警告] 如果命令路径中没有 `ebtables`、`ethtool`、`socat`、`tc`、`touch`、`crictl` 命令
- [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项 - [警告] 如果 API 服务器、控制器管理器、调度程序的其他参数标志包含一些无效选项
- [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理 - [警告] 如果与 https://API.AdvertiseAddress:API.BindPort 的连接通过代理
@ -253,7 +260,9 @@ Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并
- 如果授权方式为 Webhook - 如果授权方式为 Webhook
- [错误] 如果 webhook_authz.conf 不存在 - [错误] 如果 webhook_authz.conf 不存在
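作为示意,在真正执行初始化之前,可以单独运行预检阶段来验证上述先决条件(检查项名称以 kubeadm 的实际输出为准):

```bash
# 仅执行 kubeadm init 的预检阶段(示意)
kubeadm init phase preflight
# 如有需要,可以忽略个别检查项,例如:
kubeadm init phase preflight --ignore-preflight-errors=NumCPU
```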
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -269,13 +278,17 @@ Kubeadm 在启动 init 之前执行一组预检,目的是验证先决条件并
--> -->
### 生成必要的证书 {#generate-the-necessary-certificate} ### 生成必要的证书 {#generate-the-necessary-certificate}
<!-- Kubeadm generates certificate and private key pairs for different purposes: -->
Kubeadm 生成用于不同目的的证书和私钥对:
<!-- <!--
Kubeadm generates certificate and private key pairs for different purposes:
- A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and - A self signed certificate authority for the Kubernetes cluster saved into `ca.crt` file and
`ca.key` private key file `ca.key` private key file
-->
Kubeadm 生成用于不同目的的证书和私钥对:
- Kubernetes 集群的自签名证书颁发机构会保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
<!--
- A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into - A serving certificate for the API server, generated using `ca.crt` as the CA, and saved into
`apiserver.crt` file with its private key `apiserver.key`. This certificate should contain `apiserver.crt` file with its private key `apiserver.key`. This certificate should contain
following alternative names: following alternative names:
@ -288,24 +301,9 @@ Kubeadm 生成用于不同目的的证书和私钥对:
- The node-name - The node-name
- The `--apiserver-advertise-address` - The `--apiserver-advertise-address`
- Additional alternative names specified by the user - Additional alternative names specified by the user
- A client certificate for the API server to connect to the kubelets securely, generated using
`ca.crt` as the CA and saved into `apiserver-kubelet-client.crt` file with its private key
`apiserver-kubelet-client.key`.
This certificate should be in the `system:masters` organization
- A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
- A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key
`front-proxy-ca.key`
- A client cert for the front proxy client, generate using `front-proxy-ca.crt` as the CA and
saved into `front-proxy-client.crt` file with its private key`front-proxy-client.key`
--> -->
- Kubernetes 集群的自签名证书颁发机构会保存到 `ca.crt` 文件和 `ca.key` 私钥文件中
- 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt` - 用于 API 服务器的服务证书,使用 `ca.crt` 作为 CA 生成,并将证书保存到 `apiserver.crt`
文件中,私钥保存到 `apiserver.key` 文件中。 文件中,私钥保存到 `apiserver.key` 文件中。该证书应包含以下备用名称:
该证书应包含以下备用名称:
- Kubernetes 服务的内部 clusterIP服务 CIDR 的第一个地址。 - Kubernetes 服务的内部 clusterIP服务 CIDR 的第一个地址。
例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1` 例如:如果服务的子网是 `10.96.0.0/12`,则为 `10.96.0.1`
@ -316,10 +314,26 @@ Kubeadm 生成用于不同目的的证书和私钥对:
- `--apiserver-advertise-address` - `--apiserver-advertise-address`
- 用户指定的其他备用名称 - 用户指定的其他备用名称
<!--
- A client certificate for the API server to connect to the kubelets securely, generated using
`ca.crt` as the CA and saved into `apiserver-kubelet-client.crt` file with its private key
`apiserver-kubelet-client.key`.
This certificate should be in the `system:masters` organization
- A private key for signing ServiceAccount Tokens saved into `sa.key` file along with its public key `sa.pub`
-->
- 用于 API 服务器安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成, - 用于 API 服务器安全连接到 kubelet 的客户端证书,使用 `ca.crt` 作为 CA 生成,
并保存到 `apiserver-kubelet-client.crt`,私钥保存到 `apiserver-kubelet-client.key` 并保存到 `apiserver-kubelet-client.crt`,私钥保存到 `apiserver-kubelet-client.key`
文件中。该证书应该在 `system:masters` 组织中。 文件中。该证书应该在 `system:masters` 组织中。
- 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中 - 用于签名 ServiceAccount 令牌的私钥保存到 `sa.key` 文件中,公钥保存到 `sa.pub` 文件中。
<!--
- A certificate authority for the front proxy saved into `front-proxy-ca.crt` file with its key
`front-proxy-ca.key`
- A client cert for the front proxy client, generate using `front-proxy-ca.crt` as the CA and
saved into `front-proxy-client.crt` file with its private key`front-proxy-client.key`
-->
- 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到 - 用于前端代理的证书颁发机构保存到 `front-proxy-ca.crt` 文件中,私钥保存到
`front-proxy-ca.key` 文件中 `front-proxy-ca.key` 文件中
- 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到 - 前端代理客户端的客户端证书,使用 `front-proxy-ca.crt` 作为 CA 生成,并保存到
@ -344,11 +358,6 @@ Please note that:
1. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files 1. Only for the CA, it is possible to provide the `ca.crt` file but not the `ca.key` file, if all other certificates and kubeconfig files
already are in place kubeadm recognizes this condition and activates the ExternalCA, which also implies the `csrsigner` controller in already are in place kubeadm recognizes this condition and activates the ExternalCA, which also implies the `csrsigner` controller in
controller-manager won't be started controller-manager won't be started
1. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
all the certificates must be provided by the user, because kubeadm cannot generate them by itself
1. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
1. Certificate generation can be invoked individually with the
[`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
--> -->
1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件, 1. 如果证书和私钥对都存在,并且其内容经过评估符合上述规范,将使用现有文件,
并且跳过给定证书的生成阶段。 并且跳过给定证书的生成阶段。
@ -359,8 +368,15 @@ Please note that:
而不提供 `ca.key` 文件。 而不提供 `ca.key` 文件。
kubeadm 能够识别出这种情况并启用 ExternalCA这也意味着控制器管理器中的 kubeadm 能够识别出这种情况并启用 ExternalCA这也意味着控制器管理器中的
`csrsigner` 控制器将不会启动。 `csrsigner` 控制器将不会启动。
3. 如果 kubeadm 在 -->
[外部 CA 模式](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode) <!--
1. If kubeadm is running in [external CA mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode);
all the certificates must be provided by the user, because kubeadm cannot generate them by itself
1. In case of kubeadm is executed in the `--dry-run` mode, certificates files are written in a temporary folder
1. Certificate generation can be invoked individually with the
[`kubeadm init phase certs all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) command
-->
3. 如果 kubeadm 在[外部 CA 模式](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#external-ca-mode)
下运行,所有证书必须由用户提供,因为 kubeadm 无法自行生成证书。 下运行,所有证书必须由用户提供,因为 kubeadm 无法自行生成证书。
4. 如果在 `--dry-run` 模式下执行 kubeadm证书文件将写入一个临时文件夹中。 4. 如果在 `--dry-run` 模式下执行 kubeadm证书文件将写入一个临时文件夹中。
5. 可以使用 [`kubeadm init phase certs all`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs) 5. 可以使用 [`kubeadm init phase certs all`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-certs)
@ -386,18 +402,8 @@ Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
- Be in the `system:nodes` organization, as required by the - Be in the `system:nodes` organization, as required by the
[Node Authorization](/docs/reference/access-authn-authz/node/) module [Node Authorization](/docs/reference/access-authn-authz/node/) module
- Have the Common Name (CN) `system:node:<hostname-lowercased>` - Have the Common Name (CN) `system:node:<hostname-lowercased>`
- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this
file is embedded a client certificate with controller-manager identity. This client cert should
have the CN `system:kube-controller-manager`, as defined by default
[RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded
a client certificate with scheduler identity.
This client cert should have the CN `system:kube-scheduler`, as defined by default
[RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
--> -->
- 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件 —— `/etc/kubernetes/bootstrap-kubelet.conf` - 供 kubelet 在 TLS 引导期间使用的 kubeconfig 文件——`/etc/kubernetes/bootstrap-kubelet.conf`。
在此文件中,有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。 在此文件中,有一个引导令牌或内嵌的客户端证书,向集群表明此节点身份。
此客户端证书应: 此客户端证书应:
@ -405,12 +411,24 @@ Kubeadm 生成具有用于控制平面组件身份标识的 kubeconfig 文件:
- 根据[节点鉴权](/zh-cn/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织 - 根据[节点鉴权](/zh-cn/docs/reference/access-authn-authz/node/)模块的要求,属于 `system:nodes` 组织
- 具有通用名称CN`system:node:<小写主机名>` - 具有通用名称CN`system:node:<小写主机名>`
<!--
- A kubeconfig file for controller-manager, `/etc/kubernetes/controller-manager.conf`; inside this
file is embedded a client certificate with controller-manager identity. This client cert should
have the CN `system:kube-controller-manager`, as defined by default
[RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
-->
- 控制器管理器的 kubeconfig 文件 —— `/etc/kubernetes/controller-manager.conf` - 控制器管理器的 kubeconfig 文件 —— `/etc/kubernetes/controller-manager.conf`
在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。 在此文件中嵌入了一个具有控制器管理器身份标识的客户端证书。
此客户端证书应具有 CN`system:kube-controller-manager` 此客户端证书应具有 CN`system:kube-controller-manager`
该 CN 由 [RBAC 核心组件角色](/zh-cn/docs/reference/access-authn-authz/rbac/#core-component-roles) 该 CN 由 [RBAC 核心组件角色](/zh-cn/docs/reference/access-authn-authz/rbac/#core-component-roles)
默认定义的。 默认定义的。
<!--
- A kubeconfig file for scheduler, `/etc/kubernetes/scheduler.conf`; inside this file is embedded
a client certificate with scheduler identity.
This client cert should have the CN `system:kube-scheduler`, as defined by default
[RBAC core components roles](/docs/reference/access-authn-authz/rbac/#core-component-roles)
-->
- 调度器的 kubeconfig 文件 —— `/etc/kubernetes/scheduler.conf` - 调度器的 kubeconfig 文件 —— `/etc/kubernetes/scheduler.conf`
此文件中嵌入了具有调度器身份标识的客户端证书。此客户端证书应具有 CN`system:kube-scheduler` 此文件中嵌入了具有调度器身份标识的客户端证书。此客户端证书应具有 CN`system:kube-scheduler`
该 CN 由 [RBAC 核心组件角色](/zh-cn/docs/reference/access-authn-authz/rbac/#core-component-roles) 该 CN 由 [RBAC 核心组件角色](/zh-cn/docs/reference/access-authn-authz/rbac/#core-component-roles)
@ -479,12 +497,6 @@ Kubelet 启动后会监视这个目录以便创建 Pod。
* The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1` * The `address` that the controller-manager and the scheduler use to refer the API server is `127.0.0.1`
* If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379` * If using a local etcd server, `etcd-servers` address will be set to `127.0.0.1:2379`
- Leader election is enabled for both the controller-manager and the scheduler
- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
- All static Pods get any extra flags specified by the user as described in
[passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
- All static Pods get any extra Volumes specified by the user (Host path)
--> -->
- 所有静态 Pod 都部署在 `kube-system` 名字空间 - 所有静态 Pod 都部署在 `kube-system` 名字空间
- 所有静态 Pod 都打上 `tier:control-plane``component:{组件名称}` 标签 - 所有静态 Pod 都打上 `tier:control-plane``component:{组件名称}` 标签
@ -494,13 +506,22 @@ Kubelet 启动后会监视这个目录以便创建 Pod。
* 控制器管理器和调度器用来调用 API 服务器的地址为 `127.0.0.1` * 控制器管理器和调度器用来调用 API 服务器的地址为 `127.0.0.1`
* 如果使用本地 etcd 服务器,则 `etcd-servers` 地址将设置为 `127.0.0.1:2379` * 如果使用本地 etcd 服务器,则 `etcd-servers` 地址将设置为 `127.0.0.1:2379`
<!--
- Leader election is enabled for both the controller-manager and the scheduler
- Controller-manager and the scheduler will reference kubeconfig files with their respective, unique identities
- All static Pods get any extra flags specified by the user as described in
[passing custom arguments to control plane components](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
- All static Pods get any extra Volumes specified by the user (Host path)
-->
- 同时为控制器管理器和调度器启用了领导者选举 - 同时为控制器管理器和调度器启用了领导者选举
- 控制器管理器和调度器将引用 kubeconfig 文件及其各自的唯一标识 - 控制器管理器和调度器将引用 kubeconfig 文件及其各自的唯一标识
- 如[将自定义参数传递给控制平面组件](/zh-cn/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) - 如[将自定义参数传递给控制平面组件](/zh-cn/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)
中所述,所有静态 Pod 都会获得用户指定的额外标志 中所述,所有静态 Pod 都会获得用户指定的额外标志
- 所有静态 Pod 都会获得用户指定的额外卷(主机路径) - 所有静态 Pod 都会获得用户指定的额外卷(主机路径)
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -512,9 +533,8 @@ Kubelet 启动后会监视这个目录以便创建 Pod。
1. Static Pod manifest generation for control plane components can be invoked individually with 1. Static Pod manifest generation for control plane components can be invoked individually with
the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command the [`kubeadm init phase control-plane all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) command
--> -->
1. 所有镜像默认从 k8s.gcr.io 拉取。 1. 所有镜像默认从 k8s.gcr.io 拉取。关于自定义镜像仓库,
关于自定义镜像仓库,请参阅 请参阅[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)。
[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)。
2. 如果在 `--dry-run` 模式下执行 kubeadm则静态 Pod 文件写入一个临时文件夹中。 2. 如果在 `--dry-run` 模式下执行 kubeadm则静态 Pod 文件写入一个临时文件夹中。
3. 可以使用 [`kubeadm init phase control-plane all`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane) 3. 可以使用 [`kubeadm init phase control-plane all`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-control-plane)
命令分别生成主控组件的静态 Pod 清单。 命令分别生成主控组件的静态 Pod 清单。
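例如(仅作示意,假设使用默认清单路径),可以这样单独生成控制平面组件的静态 Pod 清单并查看结果:

```bash
# 单独生成控制平面组件的静态 Pod 清单(示意)
kubeadm init phase control-plane all
# 查看生成的清单文件
ls /etc/kubernetes/manifests
```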
@ -532,6 +552,11 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
- The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those - The `apiserver-advertise-address` and `apiserver-bind-port` to bind to; if not provided, those
value defaults to the IP address of the default network interface on the machine and port 6443 value defaults to the IP address of the default network interface on the machine and port 6443
- The `service-cluster-ip-range` to use for services - The `service-cluster-ip-range` to use for services
-->
- 要绑定的 `apiserver-advertise-address``apiserver-bind-port`
如果未提供,则这些值默认为机器上默认网络接口的 IP 地址和 6443 端口。
- 供 Service 使用的 `service-cluster-ip-range`
<!--
- If an external etcd server is specified, the `etcd-servers` address and related TLS settings - If an external etcd server is specified, the `etcd-servers` address and related TLS settings
(`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`); (`etcd-cafile`, `etcd-certfile`, `etcd-keyfile`);
if an external etcd server is not be provided, a local etcd will be used (via host network) if an external etcd server is not be provided, a local etcd will be used (via host network)
@ -539,16 +564,15 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
with the `--cloud-config` path if such file exists (this is experimental, alpha and will be with the `--cloud-config` path if such file exists (this is experimental, alpha and will be
removed in a future version) removed in a future version)
--> -->
- 要绑定的 `apiserver-advertise-address``apiserver-bind-port`
如果未提供,则这些值默认为机器上默认网络接口的 IP 地址和 6443 端口。
- `service-cluster-ip-range` 给 service 使用
- 如果指定了外部 etcd 服务器,则应指定 `etcd-servers` 地址和相关的 TLS 设置 - 如果指定了外部 etcd 服务器,则应指定 `etcd-servers` 地址和相关的 TLS 设置
`etcd-cafile`、`etcd-certfile`、`etcd-keyfile` `etcd-cafile`、`etcd-certfile`、`etcd-keyfile`
如果未提供外部 etcd 服务器,则将使用本地 etcd通过主机网络 如果未提供外部 etcd 服务器,则将使用本地 etcd通过主机网络
- 如果指定了云提供商,则配置相应的 `--cloud-provider`,如果该路径存在,则配置 `--cloud-config` - 如果指定了云提供商,则配置相应的 `--cloud-provider`,如果该路径存在,则配置 `--cloud-config`
(这是实验性的,是 Alpha 版本,将在以后的版本中删除) (这是实验性的,是 Alpha 版本,将在以后的版本中删除)
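下面给出一个示意性的调用(其中的 IP 地址与 CIDR 均为假设值),这些参数会反映到 kube-apiserver 的静态 Pod 清单中:

```bash
# 这些参数会体现在 kube-apiserver 静态 Pod 清单里(地址与 CIDR 仅为示例)
kubeadm init \
  --apiserver-advertise-address=192.168.0.10 \
  --apiserver-bind-port=6443 \
  --service-cidr=10.96.0.0/12
# 初始化完成后可以检查生成的清单
grep -E 'advertise-address|service-cluster-ip-range' /etc/kubernetes/manifests/kube-apiserver.yaml
```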
<!-- Other API server flags that are set unconditionally are: --> <!--
Other API server flags that are set unconditionally are:
-->
无条件设置的其他 API 服务器标志有: 无条件设置的其他 API 服务器标志有:
<!-- <!--
@ -574,16 +598,6 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
to enforce limits on namespaces to enforce limits on namespaces
- [`ServiceAccount`](/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) - [`ServiceAccount`](/docs/reference/access-authn-authz/admission-controllers/#serviceaccount)
to enforce service account automation to enforce service account automation
- [`PersistentVolumeLabel`](/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel)
attaches region or zone labels to PersistentVolumes as defined by the cloud provider (This
admission controller is deprecated and will be removed in a future version.
It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into
using `gce` or `aws` as cloud providers)
- [`DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
to enforce default storage class on `PersistentVolumeClaim` objects
- [`DefaultTolerationSeconds`](/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
- [`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
to limit what a kubelet can modify (e.g. only pods on this node)
--> -->
- `--enable-admission-plugins` 设为: - `--enable-admission-plugins` 设为:
- [`NamespaceLifecycle`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle) - [`NamespaceLifecycle`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#namespacelifecycle)
@ -593,20 +607,42 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
对名字空间实施限制 对名字空间实施限制
- [`ServiceAccount`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) - [`ServiceAccount`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#serviceaccount)
实施服务账户自动化 实施服务账户自动化
<!--
- [`PersistentVolumeLabel`](/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel)
attaches region or zone labels to PersistentVolumes as defined by the cloud provider (This
admission controller is deprecated and will be removed in a future version.
It is not deployed by kubeadm by default with v1.9 onwards when not explicitly opting into
using `gce` or `aws` as cloud providers)
-->
- [`PersistentVolumeLabel`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel) - [`PersistentVolumeLabel`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#persistentvolumelabel)
将区域Region或区Zone标签附加到由云提供商定义的 PersistentVolumes 将区域Region或区Zone标签附加到由云提供商定义的 PersistentVolumes
(此准入控制器已被弃用并将在以后的版本中删除)。 (此准入控制器已被弃用并将在以后的版本中删除)。
如果未明确选择使用 `gce``aws` 作为云提供商则默认情况下v1.9 以后的版本 kubeadm 都不会部署。 如果未明确选择使用 `gce``aws` 作为云提供商则默认情况下v1.9 以后的版本 kubeadm 都不会部署。
<!--
- [`DefaultStorageClass`](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
to enforce default storage class on `PersistentVolumeClaim` objects
- [`DefaultTolerationSeconds`](/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
- [`NodeRestriction`](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
to limit what a kubelet can modify (e.g. only pods on this node)
-->
- [`DefaultStorageClass`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass) - [`DefaultStorageClass`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
`PersistentVolumeClaim` 对象上强制使用默认存储类型 `PersistentVolumeClaim` 对象上强制使用默认存储类型
- [`DefaultTolerationSeconds`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds) - [`DefaultTolerationSeconds`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds)
- [`NodeRestriction`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction) - [`NodeRestriction`](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
限制 kubelet 可以修改的内容(例如,仅此节点上的 pod 限制 kubelet 可以修改的内容(例如,仅此节点上的 Pod
<!-- <!--
- `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname;` this makes `kubectl - `--kubelet-preferred-address-types` to `InternalIP,ExternalIP,Hostname;` this makes `kubectl
logs` and other API server-kubelet communication work in environments where the hostnames of the logs` and other API server-kubelet communication work in environments where the hostnames of the
nodes aren't resolvable nodes aren't resolvable
-->
- `--kubelet-preferred-address-types` 设为 `InternalIP,ExternalIP,Hostname;`
这使得在节点的主机名无法解析的环境中,`kubectl logs` 和 API 服务器与 kubelet
的其他通信可以工作
<!--
- Flags for using certificates generated in previous steps: - Flags for using certificates generated in previous steps:
- `--client-ca-file` to `ca.crt` - `--client-ca-file` to `ca.crt`
@ -618,19 +654,7 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
- `--requestheader-client-ca-file` to`front-proxy-ca.crt` - `--requestheader-client-ca-file` to`front-proxy-ca.crt`
- `--proxy-client-cert-file` to `front-proxy-client.crt` - `--proxy-client-cert-file` to `front-proxy-client.crt`
- `--proxy-client-key-file` to `front-proxy-client.key` - `--proxy-client-key-file` to `front-proxy-client.key`
- Other flags for securing the front proxy
([API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/))
communications:
- `--requestheader-username-headers=X-Remote-User`
- `--requestheader-group-headers=X-Remote-Group`
- `--requestheader-extra-headers-prefix=X-Remote-Extra-`
- `--requestheader-allowed-names=front-proxy-client`
--> -->
- `--kubelet-preferred-address-types` 设为 `InternalIP,ExternalIP,Hostname;`
这使得在节点的主机名无法解析的环境中,`kubectl log` 和 API 服务器与 kubelet
的其他通信可以工作
- 使用在前面步骤中生成的证书的标志: - 使用在前面步骤中生成的证书的标志:
- `--client-ca-file` 设为 `ca.crt` - `--client-ca-file` 设为 `ca.crt`
@ -643,6 +667,16 @@ API 服务器的静态 Pod 清单会受到用户提供的以下参数的影响:
- `--proxy-client-cert-file` 设为 `front-proxy-client.crt` - `--proxy-client-cert-file` 设为 `front-proxy-client.crt`
- `--proxy-client-key-file` 设为 `front-proxy-client.key` - `--proxy-client-key-file` 设为 `front-proxy-client.key`
<!--
- Other flags for securing the front proxy
([API Aggregation](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/))
communications:
- `--requestheader-username-headers=X-Remote-User`
- `--requestheader-group-headers=X-Remote-Group`
- `--requestheader-extra-headers-prefix=X-Remote-Extra-`
- `--requestheader-allowed-names=front-proxy-client`
-->
- 其他用于保护前端代理( - 其他用于保护前端代理(
[API 聚合层](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) [API 聚合层](/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
通信的标志: 通信的标志:
@ -669,10 +703,6 @@ the users:
- `--allocate-node-cidrs=true` - `--allocate-node-cidrs=true`
- `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR - `--cluster-cidr` and `--node-cidr-mask-size` flags according to the given CIDR
- If a cloud provider is specified, the corresponding `--cloud-provider` is specified, together
with the `--cloud-config` path if such configuration file exists (this is experimental, alpha
and will be removed in a future version)
--> -->
- 如果调用 kubeadm 时指定了 `--pod-network-cidr` 参数, - 如果调用 kubeadm 时指定了 `--pod-network-cidr` 参数,
则可以通过以下方式启用某些 CNI 网络插件所需的子网管理器功能: 则可以通过以下方式启用某些 CNI 网络插件所需的子网管理器功能:
@ -680,10 +710,17 @@ the users:
- 设置 `--allocate-node-cidrs=true` - 设置 `--allocate-node-cidrs=true`
- 根据给定 CIDR 设置 `--cluster-cidr``--node-cidr-mask-size` 标志 - 根据给定 CIDR 设置 `--cluster-cidr``--node-cidr-mask-size` 标志
<!--
- If a cloud provider is specified, the corresponding `--cloud-provider` is specified, together
with the `--cloud-config` path if such configuration file exists (this is experimental, alpha
and will be removed in a future version)
-->
- 如果指定了云提供商,则指定相应的 `--cloud-provider`,如果存在这样的配置文件, - 如果指定了云提供商,则指定相应的 `--cloud-provider`,如果存在这样的配置文件,
则指定 `--cloud-config` 路径(此为试验性功能,是 Alpha 版本,将在以后的版本中删除)。 则指定 `--cloud-config` 路径(此为试验性功能,是 Alpha 版本,将在以后的版本中删除)。
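作为示意(其中的 CIDR 为假设值),如果在初始化时指定了 `--pod-network-cidr`,可以这样确认控制器管理器获得了相应的子网管理器标志:

```bash
# 指定 Pod 网络 CIDRCIDR 值仅为示例)
kubeadm init --pod-network-cidr=10.244.0.0/16
# 初始化后确认 kube-controller-manager 清单中的相关标志
grep -E 'allocate-node-cidrs|cluster-cidr' /etc/kubernetes/manifests/kube-controller-manager.yaml
```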
<!-- Other flags that are set unconditionally are: --> <!--
Other flags that are set unconditionally are:
-->
其他无条件设置的标志包括: 其他无条件设置的标志包括:
<!-- <!--
@ -692,13 +729,6 @@ the users:
for more details for more details
- `--use-service-account-credentials` to `true` - `--use-service-account-credentials` to `true`
- Flags for using certificates generated in previous steps:
- `--root-ca-file` to `ca.crt`
- `--cluster-signing-cert-file` to `ca.crt`, if External CA mode is disabled, otherwise to `""`
- `--cluster-signing-key-file` to `ca.key`, if External CA mode is disabled, otherwise to `""`
- `--service-account-private-key-file` to `sa.key`
--> -->
- `--controllers` 为 TLS 引导程序启用所有默认控制器以及 `BootstrapSigner` - `--controllers` 为 TLS 引导程序启用所有默认控制器以及 `BootstrapSigner`
`TokenCleaner` 控制器。详细信息请参阅 `TokenCleaner` 控制器。详细信息请参阅
@ -706,9 +736,18 @@ the users:
- `--use-service-account-credentials` 设为 `true` - `--use-service-account-credentials` 设为 `true`
<!--
- Flags for using certificates generated in previous steps:
- `--root-ca-file` to `ca.crt`
- `--cluster-signing-cert-file` to `ca.crt`, if External CA mode is disabled, otherwise to `""`
- `--cluster-signing-key-file` to `ca.key`, if External CA mode is disabled, otherwise to `""`
- `--service-account-private-key-file` to `sa.key`
-->
- 使用先前步骤中生成的证书的标志: - 使用先前步骤中生成的证书的标志:
-`--root-ca-file` 设为 `ca.crt` - `--root-ca-file` 设为 `ca.crt`
- 如果禁用了 External CA 模式,则 `--cluster-signing-cert-file` 设为 `ca.crt`,否则设为 `""` - 如果禁用了 External CA 模式,则 `--cluster-signing-cert-file` 设为 `ca.crt`,否则设为 `""`
- 如果禁用了 External CA 模式,则 `--cluster-signing-key-file` 设为 `ca.key`,否则设为 `""` - 如果禁用了 External CA 模式,则 `--cluster-signing-key-file` 设为 `ca.key`,否则设为 `""`
- `--service-account-private-key-file` 设为 `sa.key` - `--service-account-private-key-file` 设为 `sa.key`
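下面的命令(仅作示意,假设使用默认清单路径)可用来确认上述证书相关标志确实出现在生成的 kube-controller-manager 清单中:

```bash
# 检查 kube-controller-manager 静态 Pod 清单中与证书相关的标志(示意)
grep -E 'root-ca-file|cluster-signing|service-account-private-key-file' \
  /etc/kubernetes/manifests/kube-controller-manager.yaml
```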
@ -728,11 +767,11 @@ The static Pod manifest for the scheduler is not affected by parameters provided
### 为本地 etcd 生成静态 Pod 清单 {#generate-static-pod-manifest-for-local-etcd} ### 为本地 etcd 生成静态 Pod 清单 {#generate-static-pod-manifest-for-local-etcd}
<!-- <!--
If the user specified an external etcd this step will be skipped, otherwise kubeadm generates a If you specified an external etcd this step will be skipped, otherwise kubeadm generates a
static Pod manifest file for creating a local etcd instance running in a Pod with following attributes: static Pod manifest file for creating a local etcd instance running in a Pod with following attributes:
--> -->
如果用户指定了外部 etcd则将跳过此步骤,否则 kubeadm 会生成静态 Pod 清单文件, 如果你指定的是外部 etcd则应跳过此步骤,否则 kubeadm 会生成静态 Pod 清单文件,
以创建在 Pod 中运行的具有以下属性的本地 etcd 实例: 以创建在 Pod 中运行的具有以下属性的本地 etcd 实例:
<!-- <!--
- listen on `localhost:2379` and use `HostNetwork=true` - listen on `localhost:2379` and use `HostNetwork=true`
@ -743,25 +782,26 @@ static Pod manifest file for creating a local etcd instance running in a Pod wit
- 将 `hostPath``dataDir` 挂载到主机的文件系统 - 将 `hostPath``dataDir` 挂载到主机的文件系统
- 用户指定的任何其他标志 - 用户指定的任何其他标志
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
1. The etcd image will be pulled from `k8s.gcr.io` by default. See 1. The etcd container image will be pulled from `registry.k8s.io` by default. See
[using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images) [using custom images](/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)
for customizing the image repository for customizing the image repository
2. In case of kubeadm is executed in the `--dry-run` mode, the etcd static Pod manifest is written 2. If you run kubeadm in `--dry-run` mode, the etcd static Pod manifest is written
in a temporary folder. into a temporary folder.
3. Static Pod manifest generation for local etcd can be invoked individually with the 3. You can directly invoke static Pod manifest generation for local etcd, using the
[`kubeadm init phase etcd local`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) [`kubeadm init phase etcd local`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd)
command. command.
--> -->
1. etcd 镜像默认从 `k8s.gcr.io` 拉取。有关自定义镜像仓库,请参阅 1. etcd 容器镜像默认从 `registry.k8s.io` 拉取。有关自定义镜像仓库,
[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)。 请参阅[使用自定义镜像](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/#custom-images)。
2. 如果 kubeadm 以 `--dry-run` 模式执行etcd 静态 Pod 清单将写入一个临时文件夹。 2. 如果你以 `--dry-run` 模式执行 kubeadm 命令etcd 的静态 Pod 清单将被写入一个临时文件夹。
3. 可以使用 3. 你可以使用 [`kubeadm init phase etcd local`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd)
[`kubeadm init phase etcd local`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-etcd) 命令为本地 etcd 直接调用静态 Pod 清单生成逻辑。
命令单独为本地 etcd 生成静态 Pod 清单。
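例如(仅作示意,假设使用默认清单路径),可以这样单独生成并检查本地 etcd 的静态 Pod 清单:

```bash
# 单独为本地 etcd 生成静态 Pod 清单(示意)
kubeadm init phase etcd local
# 查看生成的清单
cat /etc/kubernetes/manifests/etcd.yaml
```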
<!-- <!--
### Wait for the control plane to come up ### Wait for the control plane to come up
@ -805,7 +845,9 @@ determine the actual/current cluster state and make new decisions based on that
这将确保将来执行的 kubeadm 操作(例如 `kubeadm upgrade`)将能够确定实际/当前集群状态, 这将确保将来执行的 kubeadm 操作(例如 `kubeadm upgrade`)将能够确定实际/当前集群状态,
并根据该数据做出新的决策。 并根据该数据做出新的决策。
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -814,8 +856,7 @@ determine the actual/current cluster state and make new decisions based on that
[`kubeadm init phase upload-config`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config). [`kubeadm init phase upload-config`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config).
--> -->
1. 在保存 ClusterConfiguration 之前,从配置中删除令牌等敏感信息。 1. 在保存 ClusterConfiguration 之前,从配置中删除令牌等敏感信息。
2. 可以使用 2. 可以使用 [`kubeadm init phase upload-config`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config)
[`kubeadm init phase upload-config`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-upload-config)
命令单独上传主控节点配置。 命令单独上传主控节点配置。
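作为示意,初始化完成后可以这样查看保存在 ConfigMap 中的 ClusterConfiguration或单独重新触发配置上传阶段

```bash
# 查看保存在 kube-system 名字空间中的 kubeadm 集群配置(示意)
kubectl -n kube-system get configmap kubeadm-config -o yaml
# 单独触发配置上传阶段
kubeadm init phase upload-config all
```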
<!-- <!--
@ -830,13 +871,22 @@ As soon as the control plane is available, kubeadm executes following actions:
<!-- <!--
- Labels the node as control-plane with `node-role.kubernetes.io/control-plane=""` - Labels the node as control-plane with `node-role.kubernetes.io/control-plane=""`
- Taints the node with `node-role.kubernetes.io/master:NoSchedule` and - Taints the node with `node-role.kubernetes.io/control-plane:NoSchedule`
`node-role.kubernetes.io/control-plane:NoSchedule`
Please note that the phase to mark the control-plane phase can be invoked
individually with the
[`kubeadm init phase mark-control-plane`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane) command.
--> -->
- 给节点打上 `node-role.kubernetes.io/control-plane=""` 标签,标记其为控制平面 - 给节点打上 `node-role.kubernetes.io/control-plane=""` 标签,标记其为控制平面
- 给节点打上 `node-role.kubernetes.io/master:NoSchedule``node-role.kubernetes.io/control-plane:NoSchedule` 污点 - 给节点打上 `node-role.kubernetes.io/control-plane:NoSchedule` 污点
<!-- Please note that: --> 请注意,标记控制面的这个阶段可以单独通过
[`kubeadm init phase mark-control-plane`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane)
命令来实现。
<!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -846,7 +896,7 @@ As soon as the control plane is available, kubeadm executes following actions:
--> -->
1. `node-role.kubernetes.io/master` 污点已被废弃,将会在 kubeadm 1.25 版本中移除 1. `node-role.kubernetes.io/master` 污点已被废弃,将会在 kubeadm 1.25 版本中移除
2. 可以使用 [`kubeadm init phase mark-control-plane`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane) 2. 可以使用 [`kubeadm init phase mark-control-plane`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-mark-control-plane)
命令单独触发控制平面标记 命令单独触发控制平面标记
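下面的命令(仅作示意,其中 `<node-name>` 为占位符)可用来查看 kubeadm 为控制平面节点添加的标签和污点:

```bash
# 查看控制平面节点的标签与污点(<node-name> 为占位符)
kubectl get node <node-name> --show-labels
kubectl describe node <node-name> | grep -A2 Taints
# 也可以单独执行标记控制平面的阶段
kubeadm init phase mark-control-plane
```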
<!-- <!--
### Configure TLS-Bootstrapping for node joining ### Configure TLS-Bootstrapping for node joining
@ -858,20 +908,21 @@ Kubeadm uses [Authenticating with Bootstrap Tokens](/docs/reference/access-authn
for joining new nodes to an existing cluster; for more details see also for joining new nodes to an existing cluster; for more details see also
[design proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md). [design proposal](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md).
--> -->
Kubeadm 使用[引导令牌认证](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/) Kubeadm 使用[引导令牌认证](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)
将新节点连接到现有集群; 将新节点连接到现有集群;更多的详细信息,
更多的详细信息,请参见 请参见[设计提案](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md)。
[设计提案](https://git.k8s.io/design-proposals-archive/cluster-lifecycle/bootstrap-discovery.md)。
<!-- <!--
`kubeadm init` ensures that everything is properly configured for this process, and this includes `kubeadm init` ensures that everything is properly configured for this process, and this includes
following steps as well as setting API server and controller flags as already described in following steps as well as setting API server and controller flags as already described in
previous paragraphs. previous paragraphs.
--> -->
`kubeadm init` 确保为该过程正确配置了所有内容,这包括以下步骤以及设置 API 服务器和控制器标志,如前几段所述。 `kubeadm init` 确保为该过程正确配置了所有内容,这包括以下步骤以及设置 API
服务器和控制器标志,如前几段所述。
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -880,8 +931,7 @@ previous paragraphs.
executing all the configuration steps described in following paragraphs; executing all the configuration steps described in following paragraphs;
alternatively, each step can be invoked individually alternatively, each step can be invoked individually
--> -->
1. 可以使用 1. 可以使用 [`kubeadm init phase bootstrap-token`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token)
[`kubeadm init phase bootstrap-token`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-bootstrap-token)
命令配置节点的 TLS 引导,执行以下段落中描述的所有配置步骤; 命令配置节点的 TLS 引导,执行以下段落中描述的所有配置步骤;
或者每个步骤都单独触发。 或者每个步骤都单独触发。
@ -894,11 +944,11 @@ previous paragraphs.
`kubeadm init` creates a first bootstrap token, either generated automatically or provided by the `kubeadm init` creates a first bootstrap token, either generated automatically or provided by the
user with the `--token` flag; as documented in bootstrap token specification, token should be user with the `--token` flag; as documented in bootstrap token specification, token should be
saved as secrets with name `bootstrap-token-<token-id>` under `kube-system` namespace. saved as secrets with name `bootstrap-token-<token-id>` under `kube-system` namespace.
Please note that:
--> -->
`kubeadm init` 创建第一个引导令牌,该令牌是自动生成的或由用户提供的 `--token` `kubeadm init` 创建第一个引导令牌,该令牌是自动生成的或由用户提供的 `--token`
标志的值;如引导令牌规范中记录的那样, 标志的值;如引导令牌规范文档中所述,令牌应保存在 `kube-system` 名字空间下名为
令牌应保存在 `kube-system` 名字空间下名为 `bootstrap-token-<令牌-id>` `bootstrap-token-<令牌 ID>` 的 Secret 中。
的 Secret 中。
<!-- <!--
Please note that: Please note that:
@ -1039,8 +1089,7 @@ Please note that:
1. This phase can be invoked individually with the command 1. This phase can be invoked individually with the command
[`kubeadm init phase addon all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon). [`kubeadm init phase addon all`](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon).
--> -->
1. 此步骤可以调用 1. 此步骤可以调用 [`kubeadm init phase addon all`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon)
[`kubeadm init phase addon all`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon)
命令单独执行。 命令单独执行。
<!-- <!--
@ -1061,8 +1110,7 @@ deployed as a DaemonSet:
--> -->
- 主控节点凭据(`ca.crt` 和 `token`)来自 ServiceAccount - 主控节点凭据(`ca.crt` 和 `token`)来自 ServiceAccount
- API 服务器节点的位置URL来自 ConfigMap - API 服务器节点的位置URL来自 ConfigMap
- `kube-proxy` 的 ServiceAccount 绑定了 `system:node-proxier` ClusterRole - `kube-proxy` 的 ServiceAccount 绑定了 `system:node-proxier` ClusterRole 中的特权
中的特权
#### DNS #### DNS
@ -1074,9 +1122,6 @@ deployed as a DaemonSet:
- A ServiceAccount for CoreDNS is created in the `kube-system` namespace. - A ServiceAccount for CoreDNS is created in the `kube-system` namespace.
- The `coredns` ServiceAccount is bound to the privileges in the `system:coredns` ClusterRole - The `coredns` ServiceAccount is bound to the privileges in the `system:coredns` ClusterRole
In Kubernetes version 1.21, support for using `kube-dns` with kubeadm was removed.
You can use CoreDNS with kubeadm even when the related Service is named `kube-dns`.
--> -->
- CoreDNS 服务的名称为 `kube-dns`。这样做是为了防止当用户将集群 DNS 从 kube-dns - CoreDNS 服务的名称为 `kube-dns`。这样做是为了防止当用户将集群 DNS 从 kube-dns
切换到 CoreDNS 时出现服务中断。`--config` 方法在 切换到 CoreDNS 时出现服务中断。`--config` 方法在
@ -1130,7 +1175,9 @@ preconditions and avoid common cluster startup problems.
--> -->
`kubeadm` 在开始执行之前执行一组预检,目的是验证先决条件,避免常见的集群启动问题。 `kubeadm` 在开始执行之前执行一组预检,目的是验证先决条件,避免常见的集群启动问题。
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -1175,7 +1222,9 @@ node basically retrieves the cluster CA certificates from the `cluster-info` Co
在这种情况下,节点基本上从 `kube-public` 名字空间中的 `cluster-info` ConfigMap 在这种情况下,节点基本上从 `kube-public` 名字空间中的 `cluster-info` ConfigMap
中检索集群 CA 证书。 中检索集群 CA 证书。
<!-- In order to prevent "man in the middle" attacks, several steps are taken: --> <!--
In order to prevent "man in the middle" attacks, several steps are taken:
-->
为了防止“中间人”攻击,采取了以下步骤: 为了防止“中间人”攻击,采取了以下步骤:
<!-- <!--
@ -1183,7 +1232,13 @@ node basically retrieves the cluster CA certificates from the `cluster-info` Co
`kubeadm init` granted access to `cluster-info` users for `system:unauthenticated` ) `kubeadm init` granted access to `cluster-info` users for `system:unauthenticated` )
- Then the CA certificate goes through the following validation steps: - Then the CA certificate goes through the following validation steps:
-->
- 首先,通过不安全连接检索 CA 证书(这是可能的,因为 `kubeadm init` 授予
`system:unauthenticated` 的用户对 `cluster-info` 访问权限)。
- 然后 CA 证书通过以下验证步骤:
<!--
- Basic validation: using the token ID against a JWT signature - Basic validation: using the token ID against a JWT signature
- Pub key validation: using provided `--discovery-token-ca-cert-hash`. This value is available - Pub key validation: using provided `--discovery-token-ca-cert-hash`. This value is available
in the output of `kubeadm init` or can be calculated using standard tools (the hash is in the output of `kubeadm init` or can be calculated using standard tools (the hash is
@ -1191,19 +1246,16 @@ node basically retrieves the cluster CA certificates from the `cluster-info` Co
`--discovery-token-ca-cert-hash flag` may be repeated multiple times to allow more than one public key. `--discovery-token-ca-cert-hash flag` may be repeated multiple times to allow more than one public key.
- As a additional validation, the CA certificate is retrieved via secure connection and then - As a additional validation, the CA certificate is retrieved via secure connection and then
compared with the CA retrieved initially compared with the CA retrieved initially
--> -->
- 首先,通过不安全连接检索 CA 证书(这是可能的,因为 `kubeadm init` 授予
`system:unauthenticated` 的用户对 `cluster-info` 访问权限)。
- 然后 CA 证书通过以下验证步骤:
- 基本验证:使用令牌 ID 对 JWT 签名进行校验 - 基本验证:使用令牌 ID 对 JWT 签名进行校验
- 公钥验证:使用提供的 `--discovery-token-ca-cert-hash`。这个值来自 `kubeadm init` 的输出, - 公钥验证:使用提供的 `--discovery-token-ca-cert-hash`。这个值来自 `kubeadm init` 的输出,
或者可以使用标准工具计算(哈希值是按 RFC7469 中主体公钥信息SPKI对象的字节计算的 或者可以使用标准工具计算(哈希值是按 RFC7469 中主体公钥信息SPKI对象的字节计算的
`--discovery-token-ca-cert-hash` 标志可以重复多次,以允许多个公钥。 `--discovery-token-ca-cert-hash` 标志可以重复多次,以允许多个公钥。
- 作为附加验证,通过安全连接检索 CA 证书,然后与初始检索的 CA 进行比较。 - 作为附加验证,通过安全连接检索 CA 证书,然后与初始检索的 CA 进行比较。
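上文提到的 `--discovery-token-ca-cert-hash` 值可以在控制平面节点上用标准工具计算,例如(仅作示意,路径为 kubeadm 的默认证书路径):

```bash
# 基于 CA 证书的主体公钥信息SPKI计算 sha256 哈希(示意)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```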
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--
@ -1266,7 +1318,9 @@ is deleted.
该请求会被自动批准,并且该操作保存 `ca.crt` 文件和 `kubelet.conf` 文件,用于 该请求会被自动批准,并且该操作保存 `ca.crt` 文件和 `kubelet.conf` 文件,用于
kubelet 加入集群,同时删除 `bootstrap-kubelet.conf` kubelet 加入集群,同时删除 `bootstrap-kubelet.conf`
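作为示意,可以在控制平面节点上这样查看 TLS 引导期间自动批准的证书签名请求,以及新节点上由此生成的 kubelet kubeconfig

```bash
# 查看 TLS 引导产生的证书签名请求CSR及其批准状态示意
kubectl get csr
# 新节点上由此生成的 kubelet kubeconfig
ls -l /etc/kubernetes/kubelet.conf
```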
<!-- Please note that: --> <!--
Please note that:
-->
请注意: 请注意:
<!-- <!--