Merge pull request #33670 from zaunist/setup-2

[zh]: Resync content/zh/docs/setup/production-environment/tools/kubeadm setup-2

commit 603efae65e
@@ -26,8 +26,6 @@ Kubernetes requires PKI certificates for TLS-based authentication. If you
then the certificates that your cluster requires are automatically generated. You can also generate your own certificates.
For example, you can keep private keys more secure by not storing them on the API server. This page explains the certificates that your cluster requires.

<!-- body -->

<!--
@@ -41,6 +39,8 @@ Kubernetes requires PKI for the following operations:

<!--
* Client certificates for the kubelet to authenticate to the API server
* Kubelet [server certificates](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates)
  for the API server to talk to the kubelets
* Server certificate for the API server endpoint
* Client certificates for administrators of the cluster to authenticate to the API server
* Client certificates for the API server to talk to the kubelets
@@ -50,6 +50,8 @@ Kubernetes requires PKI for the following operations:
* Client and server certificates for the [front-proxy](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
-->
* Client certificates for the kubelet to authenticate to the API server
* Kubelet [server certificates](/zh/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates)
  for the API server to talk to the kubelets
* Server certificate for the API server endpoint
* Client certificates for cluster administrators to authenticate to the API server
* Client certificates for the API server to talk to the kubelets
@@ -75,20 +77,25 @@ etcd also implements mutual TLS to authenticate clients and other peer nodes.
<!--
## Where certificates are stored

If you install Kubernetes with kubeadm, certificates are stored in `/etc/kubernetes/pki`. All paths in this documentation are relative to that directory.
If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`. All paths in this documentation are relative to that directory, with the exception of user account certificates which kubeadm places in `/etc/kubernetes`.
-->
## Where certificates are stored

If you installed Kubernetes with kubeadm, all certificates are stored under the `/etc/kubernetes/pki` directory. All related paths in this document are relative to that directory.
If you install Kubernetes with kubeadm, most certificates are stored in `/etc/kubernetes/pki`.
All paths in this document are relative to that directory, with the exception of user account certificates, which kubeadm places in `/etc/kubernetes`.

<!--
## Configure certificates manually

If you don't want kubeadm to generate the required certificates, you can create them in either of the following ways.
If you don't want kubeadm to generate the required certificates, you can create them using a single root CA or by providing all certificates. See [Certificates](/docs/tasks/administer-cluster/certificates/) for details on creating your own certificate authority.
See [Certificate Management with kubeadm](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) for more on managing certificates.
-->
## Configure certificates manually

If you don't want kubeadm to generate the required certificates, you can create them manually in either of the following two ways.
If you don't want kubeadm to generate the required certificates, you can create them using a single root CA
or by providing all certificates yourself.
See [Certificates](/zh/docs/tasks/administer-cluster/certificates/) to learn more about creating your own certificate authority.
See [Certificate Management with kubeadm](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/) for more on managing certificates.

<!--
### Single root CA
@@ -120,6 +127,20 @@ On top of the above CAs, it is also necessary to get a public/private key pair f

On top of the above CAs, you also need a public/private key pair for service account management, namely `sa.key` and `sa.pub`.

<!--
The following example illustrates the CA key and certificate files shown in the previous table:
-->
The following example illustrates the CA key and certificate files shown in the previous table.

```console
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
```
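
For illustration only, here is a minimal sketch of how such a CA key/certificate pair could be generated with plain `openssl`; the subject name and validity period are assumptions, not values mandated by Kubernetes:

```bash
# Generate a CA private key and a self-signed CA certificate (sketch only).
openssl genrsa -out /etc/kubernetes/pki/ca.key 2048
openssl req -x509 -new -nodes -key /etc/kubernetes/pki/ca.key \
  -subj "/CN=kubernetes-ca" -days 3650 \
  -out /etc/kubernetes/pki/ca.crt

# Inspect the resulting certificate.
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -dates
```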

<!--
### All certificates
@@ -135,7 +156,7 @@ Required certificates:

| Default CN                   | Parent CA | O (in Subject) | kind           | hosts (SAN)                                         |
|------------------------------|-----------|----------------|----------------|-----------------------------------------------------|
| kube-etcd                    | etcd-ca   |                | server, client | `localhost`, `127.0.0.1`                            |
| kube-etcd                    | etcd-ca   |                | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-peer               | etcd-ca   |                | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca   |                | client         |                                                     |
| kube-apiserver-etcd-client   | etcd-ca   | system:masters | client         |                                                     |
@@ -147,14 +168,14 @@ Required certificates:
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/docs/reference/setup-tools/kubeadm/) the load balancer stable IP and/or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`)

where `kind` maps to one or more of the [x509 key usage](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage) types:
where `kind` maps to one or more of the [x509 key usage](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage) types:
-->
[1]: any other IP or DNS name you contact your cluster on (as used by [kubeadm](/zh/docs/reference/setup-tools/kubeadm/),
the stable IP and/or DNS name of the load balancer, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`,
`kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`).

where `kind` maps to one or more of the [x509 key usage](https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage) types:
where `kind` maps to one or more of the [x509 key usage](https://pkg.go.dev/k8s.io/api/certificates/v1beta1#KeyUsage) types:

<!--
| kind | Key usage |
@@ -167,7 +188,6 @@ IP or DNS name, `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`
| server | digital signature, key encipherment, server auth |
| client | digital signature, key encipherment, client auth |

{{< note >}}
<!--
Hosts/SAN listed above are the recommended ones for getting a working cluster; if required by a specific setup, it is possible to add additional SANs on all the server certificates.
@@ -226,6 +246,37 @@ Same considerations apply for the service account key pair:
| sa.key |        | kube-controller-manager | --service-account-private-key-file |
|        | sa.pub | kube-apiserver          | --service-account-key-file         |

<!--
The following example illustrates the file paths [from the previous tables](/docs/setup/best-practices/certificates/#certificate-paths) you need to provide if you are generating all of your own keys and certificates:
-->
The following example shows the file paths you need to provide if you are generating all of your own keys and certificates.
These paths are based on [the previous tables](/zh/docs/setup/best-practices/certificates/#certificate-paths).

```console
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/apiserver.key
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver-kubelet-client.key
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-client.key
/etc/kubernetes/pki/front-proxy-client.crt
/etc/kubernetes/pki/etcd/server.key
/etc/kubernetes/pki/etcd/server.crt
/etc/kubernetes/pki/etcd/peer.key
/etc/kubernetes/pki/etcd/peer.crt
/etc/kubernetes/pki/etcd/healthcheck-client.key
/etc/kubernetes/pki/etcd/healthcheck-client.crt
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
```
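
Once the files are in place, a quick way to sanity-check them is to ask kubeadm to report certificate expiration; this is a sketch assuming kubeadm v1.20 or newer, where the `kubeadm certs` command group is available:

```bash
# Prints expiry and CA information for the certificates under /etc/kubernetes/pki.
kubeadm certs check-expiration
```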

<!--
## Configure certificates for user accounts
@@ -285,3 +336,14 @@ These files are used as follows:
| controller-manager.conf | kube-controller-manager | Must be added to the `manifests/kube-controller-manager.yaml` manifest |
| scheduler.conf          | kube-scheduler          | Must be added to the `manifests/kube-scheduler.yaml` manifest          |

<!--
The following files illustrate full paths to the files listed in the previous table:
-->
The following are the full paths to the files listed in the previous table.

```console
/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
```

@@ -49,11 +49,8 @@ as Ansible or Terraform.
a set of cloud servers, Raspberry Pis, and so on. Whether deploying to the cloud or on-premises,
you can integrate `kubeadm` into provisioning systems such as Ansible or Terraform.

## {{% heading "prerequisites" %}}

<!--
To follow this guide, you need:
@@ -77,13 +74,12 @@ of Kubernetes that you want to use in your new cluster.
-->
You also need to use a version of `kubeadm` that can deploy the version
of Kubernetes that you want to use in your new cluster.

<!--
[Kubernetes' version and version skew support policy](/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
Check that policy to learn about what versions of Kubernetes and `kubeadm`
are supported. This page is written for Kubernetes {{< param "version" >}}.
-->
[Kubernetes' version and version skew support policy](/zh/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
Check that policy to learn about which versions of Kubernetes and `kubeadm` are supported.
This page is written for Kubernetes {{< param "version" >}}.
@@ -102,8 +98,6 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
By definition, any commands under `kubeadm alpha` are supported on an alpha level.
{{< /note >}}

<!-- steps -->

<!--
@@ -125,14 +119,16 @@ Any commands under `kubeadm alpha` are, by definition, supported on an alpha lev
## Instructions

<!--
### Installing kubeadm on your hosts
### Preparing the hosts
-->
### Installing kubeadm on your hosts
### Preparing the hosts

<!--
See ["Installing kubeadm"](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
For detailed instructions and other prerequisites, see [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
-->
See ["Installing kubeadm"](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
Install a {{< glossary_tooltip term_id="container-runtime" text="container runtime" >}} and kubeadm on all the hosts.
For detailed instructions and other prerequisites, see [Installing kubeadm](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).

<!--
If you have already installed kubeadm, run `apt-get update &&
@@ -203,30 +199,31 @@ for all control-plane nodes. Such an endpoint can be either a DNS name or an IP
   be passed to `kubeadm init`. Depending on which
   third-party provider you choose, you might need to set the `--pod-network-cidr` to
   a provider-specific value. See [Installing a Pod network add-on](#pod-network).
1. (Optional) Since version 1.14, `kubeadm` tries to detect the container runtime on Linux
   by using a list of well known domain socket paths. To use different container runtime or
   if there are more than one installed on the provisioned node, specify the `--cri-socket`
   argument to `kubeadm init`. See [Installing runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
-->
1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster to high availability,
   you should specify `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
   Such an endpoint can be either a DNS name or an IP address of a load balancer.
1. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to `kubeadm init`.
   Depending on which third-party network add-on you choose, you might need to set `--pod-network-cidr`
   to a provider-specific value. See [Installing a Pod network add-on](#pod-network).

<!--
1. (Optional) `kubeadm` tries to detect the container runtime by using a list of well
   known endpoints. To use different container runtime or if there are more than one installed
   on the provisioned node, specify the `--cri-socket` argument to `kubeadm`. See [Installing runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
   with the default gateway to set the advertise address for this particular control-plane node's API server.
   To use a different network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
   to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
   must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`
-->
1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster to high availability,
   you should specify `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes.
   Such an endpoint can be either a DNS name or an IP address of a load balancer.
1. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to `kubeadm init`.
   Depending on which third-party network add-on you choose, you might need to set `--pod-network-cidr`
   to a provider-specific value. See [Installing a Pod network add-on](#pod-network).
1. (Optional) Since version 1.14, `kubeadm` tries to detect the container runtime on Linux
   by using a list of well-known domain socket paths. To use a different container runtime,
   or if there is more than one installed on the provisioned node, specify the `--cri-socket`
   argument to `kubeadm init`. See [Installing runtime](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) `kubeadm` tries to detect the container runtime by using a list of well-known endpoints.
   To use a different container runtime, or if there is more than one installed on the provisioned node,
   specify the `--cri-socket` argument to `kubeadm init`.
   See [Installing runtime](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
   with the default gateway to set the advertise address for this particular control-plane node's API server.
   To use a different network interface, specify the `--apiserver-advertise-address=<ip-address>` argument
   to `kubeadm init`. To deploy a Kubernetes cluster using IPv6 addressing, you
   must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`.
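
For illustration only, an invocation that combines the flags discussed above might look like the following sketch; the endpoint name, CIDR, socket path, and address are assumed example values, not requirements:

```bash
sudo kubeadm init \
  --control-plane-endpoint "cluster-endpoint:6443" \
  --pod-network-cidr "10.244.0.0/16" \
  --cri-socket "unix:///var/run/containerd/containerd.sock" \
  --apiserver-advertise-address "192.168.0.102"
```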

<!--
To initialize the control-plane node run:
@@ -262,7 +259,7 @@ Here is an example mapping:
-->
Here is an example mapping:

```console
192.168.0.102 cluster-endpoint
```
@@ -302,6 +299,13 @@ To customize control plane components, including optional IPv6 assignment to liv
-->
To customize control-plane components, including optional IPv6 assignment to the liveness probes
of the control-plane components and the etcd server, see [custom arguments](/zh/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).

<!--
To reconfigure a cluster that has already been created see
[Reconfiguring a kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure).
-->
To reconfigure a cluster that has already been created, see
[Reconfiguring a kubeadm cluster](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure).

<!--
To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).
-->
@@ -314,7 +318,6 @@ have container image support for this architecture.
If you join a node with a different architecture to your cluster,
make sure that your deployed DaemonSets have container image support for this architecture.

<!--
`kubeadm init` first runs a series of prechecks to ensure that the machine
is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
@@ -352,7 +355,6 @@ also part of the `kubeadm init` output:
To make kubectl work for your non-root user, run these commands,
which are also part of the `kubeadm init` output:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
@@ -373,13 +375,15 @@ export KUBECONFIG=/etc/kubernetes/admin.conf
Kubeadm signs the certificate in the `admin.conf` to have `Subject: O = system:masters, CN = kubernetes-admin`.
`system:masters` is a break-glass, super user group that bypasses the authorization layer (e.g. RBAC).
Do not share the `admin.conf` file with anyone and instead grant users custom permissions by generating
them a kubeconfig file using the `kubeadm kubeconfig user` command.
them a kubeconfig file using the `kubeadm kubeconfig user` command. For more details see
[Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users).
-->
Kubeadm signs the certificate in `admin.conf` to have
`Subject: O = system:masters, CN = kubernetes-admin`.
`system:masters` is a break-glass, super-user group that bypasses the authorization layer (such as RBAC).
Do not share the `admin.conf` file with anyone; instead, grant users custom permissions by generating
them a kubeconfig file using the `kubeadm kubeconfig user` command.
For more details see [Generating kubeconfig files for additional users](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs#kubeconfig-additional-users).
{{< /warning >}}
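
As a sketch of that workflow (the user name is a made-up example, and depending on your kubeadm version you may also need to pass a `--config` file):

```bash
# Generate a kubeconfig for a hypothetical user "johndoe" on a control-plane node.
kubeadm kubeconfig user --client-name johndoe > johndoe.conf
```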

<!--
@@ -402,7 +406,6 @@ created, and deleted with the `kubeadm token` command. See the
Tokens can be listed, created, and deleted with the `kubeadm token` command.
See the [kubeadm reference guide](/zh/docs/reference/setup-tools/kubeadm/kubeadm-token/).

<!--
### Installing a Pod network add-on {#pod-network}
-->
@@ -517,6 +520,26 @@ for `kubeadm`.
If your network is not working or CoreDNS is not in the `Running` state, check out the
[troubleshooting guide](/zh/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/) for `kubeadm`.

<!--
### Managed node labels
-->

### Managed node labels {#managed-node-labels}

<!--
By default, kubeadm enables the [NodeRestriction](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
admission controller that restricts what labels can be self-applied by kubelets on node registration.
The admission controller documentation covers what labels are permitted to be used with the kubelet `--node-labels` option.
The `node-role.kubernetes.io/control-plane` label is such a restricted label and kubeadm manually applies it using
a privileged client after a node has been created. To do that manually you can do the same by using `kubectl label`
and ensure it is using a privileged kubeconfig such as the kubeadm managed `/etc/kubernetes/admin.conf`.
-->
By default, kubeadm enables the [NodeRestriction](/zh/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
admission controller, which restricts what labels kubelets can self-apply on node registration.
The admission controller documentation covers what labels are permitted to be used with the kubelet `--node-labels` option.
The `node-role.kubernetes.io/control-plane` label is such a restricted label,
and kubeadm applies it using a privileged client after a node has been created.
You can do the same manually by using `kubectl label` with a privileged kubeconfig,
such as the kubeadm-managed `/etc/kubernetes/admin.conf`.
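
For illustration, applying the label by hand could look like this minimal sketch; the node name `test-01` is an assumed example:

```bash
# Apply the restricted label with the privileged kubeadm-managed kubeconfig.
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  label node test-01 node-role.kubernetes.io/control-plane=
```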

<!--
### Control plane node isolation
@@ -524,36 +547,43 @@ for `kubeadm`.
### Control plane node isolation

<!--
By default, your cluster will not schedule Pods on the control-plane node for security
reasons. If you want to be able to schedule Pods on the control-plane node, for example for a
single-machine Kubernetes cluster for development, run:
By default, your cluster will not schedule Pods on the control plane nodes for security
reasons. If you want to be able to schedule Pods on the control plane nodes,
for example for a single machine Kubernetes cluster, run:
-->
By default, for security reasons, your cluster will not schedule Pods on the control-plane nodes.
If you want to be able to schedule Pods on the control-plane nodes,
for example for a single-machine Kubernetes cluster for development, run:
If you want to be able to schedule Pods on the control-plane nodes, for example for a single-machine Kubernetes cluster, run:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
```
<!--
With output looking something like:
The output will look something like:
-->
The output will look something like:

```console
node "test-01" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
```

<!--
This will remove the `node-role.kubernetes.io/master` taint from any nodes that
have it, including the control-plane node, meaning that the scheduler will then be able
This will remove the `node-role.kubernetes.io/control-plane` and
`node-role.kubernetes.io/master` taints from any nodes that have them,
including the control plane nodes, meaning that the scheduler will then be able
to schedule Pods everywhere.
-->
This will remove the `node-role.kubernetes.io/master` taint from any nodes that have it,
including the control-plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.

<!--
{{< note >}}
The `node-role.kubernetes.io/master` taint is deprecated and kubeadm will stop using it in version 1.25.
{{< /note >}}
-->

{{< note >}}
The `node-role.kubernetes.io/master` taint is deprecated and kubeadm will stop using it in version 1.25.
{{< /note >}}

<!--
### Joining your nodes {#join-nodes}
@@ -583,7 +613,6 @@ If you do not have the token, you can get it by running the following command on
-->
If you do not have the token, you can get it by running the following command on the control-plane node:

```bash
kubeadm token list
```
@@ -608,7 +637,6 @@ you can create a new token by running the following command on the control-plane
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the
current token has expired, you can create a new token by running the following command on the control-plane node:

```bash
kubeadm token create
```
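
A convenient variant, sketched here for illustration, prints the full `kubeadm join` command (including the token and the CA cert hash) in one step:

```bash
# Create a fresh token and print the matching join command.
kubeadm token create --print-join-command
```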
@@ -652,7 +680,7 @@ The output should look something like:
-->
The output should look something like:

```console
[preflight] Running pre-flight checks

... (log output of join workflow) ...
@@ -672,11 +700,10 @@ nodes` when run on the control-plane node.
After a few seconds, you should notice this node in the output from `kubectl get nodes`
when run on the control-plane node.

<!--
As the cluster nodes are usually initialized sequentially, the CoreDNS Pods are likely to all run
on the first control-plane node. To provide higher availability, please rebalance the CoreDNS Pods
with `kubectl -n kube-system rollout restart deployment coredns` after at least one new node is joined.
-->

{{< note >}}
Because cluster nodes are usually initialized sequentially, the CoreDNS Pods are likely to all run
on the first control-plane node. To provide higher availability, after at least one new node has joined,
@@ -766,7 +793,6 @@ and make sure that the node is empty, then deconfigure the node.
you should first [drain the node](/docs/reference/generated/kubectl/kubectl-commands#drain)
and make sure that the node is empty, then deconfigure the node.

<!--
### Remove the node
-->
@@ -840,7 +866,6 @@ options.
-->
For more information about this subcommand and its options, see the
[`kubeadm reset`](/zh/docs/reference/setup-tools/kubeadm/kubeadm-reset/) reference documentation.

<!-- discussion -->

<!--
@@ -853,7 +878,7 @@ options.
* <a id="lifecycle" />See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
  for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm)
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
  of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
@@ -867,7 +892,7 @@ options.
* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy).
* <a id="lifecycle"/>See [Upgrading kubeadm clusters](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) for details about upgrading your cluster using `kubeadm`.
* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/zh/docs/reference/setup-tools/kubeadm).
* Learn more about Kubernetes [concepts](/zh/docs/concepts/) and [`kubectl`](/zh/docs/reference/kubectl/overview/).
* Learn more about Kubernetes [concepts](/zh/docs/concepts/) and [`kubectl`](/zh/docs/reference/kubectl/).
* See the [Cluster Networking](/zh/docs/concepts/cluster-administration/networking/) page for a bigger list of Pod network add-ons.
* <a id="other-addons" />See the [list of add-ons](/zh/docs/concepts/cluster-administration/addons/) to explore other add-ons,
  including tools for logging, monitoring, network policy, visualization and control of Kubernetes clusters.
@@ -899,35 +924,122 @@ options.
* SIG Cluster Lifecycle mailing list:
  [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)

<!--
## Version skew policy {#version-skew-policy}
-->
## Version skew policy {#version-skew-policy}

<!--
The `kubeadm` tool of version v{{< skew latestVersion >}} may deploy clusters with a control plane of version v{{< skew latestVersion >}} or v{{< skew prevMinorVersion >}}.
`kubeadm` v{{< skew latestVersion >}} can also upgrade an existing kubeadm-created cluster of version v{{< skew prevMinorVersion >}}.
While kubeadm allows version skew against some components that it manages, it is recommended that you
match the kubeadm version with the versions of the control plane components, kube-proxy and kubelet.
-->
The `kubeadm` tool of version v{{< skew latestVersion >}} may deploy clusters with a control plane of version v{{< skew latestVersion >}} or v{{< skew prevMinorVersion >}}. `kubeadm` v{{< skew latestVersion >}} can also upgrade an existing kubeadm-created cluster of version v{{< skew prevMinorVersion >}}.
While kubeadm allows version skew against some components that it manages,
it is recommended that you match the kubeadm version with the versions of the control-plane components, kube-proxy and kubelet.

<!--
Due to that we can't see into the future, kubeadm CLI v{{< skew latestVersion >}} may or may not be able to deploy v{{< skew nextMinorVersion >}} clusters.
### kubeadm's skew against the Kubernetes version
-->
Because we cannot see into the future, kubeadm CLI v{{< skew latestVersion >}} may or may not be able to deploy v{{< skew nextMinorVersion >}} clusters.

### kubeadm's skew against the Kubernetes version

<!--
These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:
kubeadm can be used with Kubernetes components that are the same version as kubeadm
or one version older. The Kubernetes version can be specified to kubeadm by using the
`--kubernetes-version` flag of `kubeadm init` or the
[`ClusterConfiguration.kubernetesVersion`](/docs/reference/config-api/kubeadm-config.v1beta3/)
field when using `--config`. This option will control the versions
of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.
-->
These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:
kubeadm can be used with Kubernetes components that are the same version as kubeadm or one version older.
The Kubernetes version can be specified to kubeadm by using the `--kubernetes-version` flag of `kubeadm init` or the
[`ClusterConfiguration.kubernetesVersion`](/zh/docs/reference/config-api/kubeadm-config.v1beta3/)
field when using `--config`.
This option will control the versions of kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy.
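
For illustration, pinning the version on the command line might look like the following sketch; the version value is an assumed example:

```bash
# Pin the control-plane component versions at init time (example value).
sudo kubeadm init --kubernetes-version v1.24.0
```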
<!--
* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
* Kubeadm-specific [installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
Example:
* kubeadm is at {{< skew latestVersion >}}
* `kubernetesVersion` must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}}
-->
* Kubernetes [version and version-skew policy](/zh/docs/setup/release/version-skew-policy/)
* kubeadm-specific [installation guide](/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)

Example:
* kubeadm is at {{< skew latestVersion >}}.
* `kubernetesVersion` must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}}.

<!--
### kubeadm's skew against the kubelet
-->
### kubeadm's skew against the kubelet

<!--
Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is the same
version as kubeadm or one version older.
-->
Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is the same
version as kubeadm or one version older.

<!--
Example:
* kubeadm is at {{< skew latestVersion >}}
* kubelet on the host must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}}
-->
Example:
* kubeadm is at {{< skew latestVersion >}}
* kubelet on the host must be at {{< skew latestVersion >}} or {{< skew prevMinorVersion >}}

<!--
### kubeadm's skew against kubeadm
-->
### kubeadm's skew against kubeadm

<!--
There are certain limitations on how kubeadm commands can operate on existing nodes or whole clusters
managed by kubeadm.
-->
There are certain limitations on how kubeadm commands can operate on existing nodes or whole clusters managed by kubeadm.

<!--
If new nodes are joined to the cluster, the kubeadm binary used for `kubeadm join` must match
the last version of kubeadm used to either create the cluster with `kubeadm init` or to upgrade
the same node with `kubeadm upgrade`. Similar rules apply to the rest of the kubeadm commands
with the exception of `kubeadm upgrade`.
-->
If new nodes are joined to the cluster, the kubeadm binary used for `kubeadm join` must match
the last version of kubeadm used either to create the cluster with `kubeadm init` or to upgrade
the same node with `kubeadm upgrade`.
Similar rules apply to the rest of the kubeadm commands, with the exception of `kubeadm upgrade`.

<!--
Example for `kubeadm join`:
* kubeadm version {{< skew latestVersion >}} was used to create a cluster with `kubeadm init`
* Joining nodes must use a kubeadm binary that is at version {{< skew latestVersion >}}
-->
Example for `kubeadm join`:
* kubeadm version {{< skew latestVersion >}} was used to create a cluster with `kubeadm init`.
* Joining nodes must use a kubeadm binary that is at version {{< skew latestVersion >}}.

<!--
Nodes that are being upgraded must use a version of kubeadm that is the same MINOR
version or one MINOR version newer than the version of kubeadm used for managing the
node.
-->
Nodes that are being upgraded must use a version of kubeadm that is the same MINOR version
or one MINOR version newer than the version of kubeadm used for managing the node.

<!--
Example for `kubeadm upgrade`:
* kubeadm version {{< skew prevMinorVersion >}} was used to create or upgrade the node
* The version of kubeadm used for upgrading the node must be at {{< skew prevMinorVersion >}}
  or {{< skew latestVersion >}}
-->
Example for `kubeadm upgrade`:
* kubeadm version {{< skew prevMinorVersion >}} was used to create or upgrade the node.
* The version of kubeadm used for upgrading the node must be at {{< skew prevMinorVersion >}} or {{< skew latestVersion >}}.

<!--
To learn more about the version skew between the different Kubernetes component see
the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).
-->
To learn more about the version skew between the different Kubernetes components, see
the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).

<!--
## Limitations {#limitations}
@@ -944,7 +1056,6 @@ The cluster created here has a single control-plane node, with a single etcd dat
running on it. This means that if the control-plane node fails, your cluster may lose
data and may need to be recreated from scratch.
-->
The cluster created here has a single control-plane node, with a single etcd database running on it.
This means that if the control-plane node fails, your cluster may lose data and may need to be recreated from scratch.
@@ -1001,5 +1112,4 @@ supports your chosen platform.
<!--
If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
-->
If you are running into difficulties with kubeadm, please consult our
[troubleshooting docs](/zh/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).

@@ -15,6 +15,8 @@ weight: 80

<!-- overview -->

{{% dockershim-removal %}}

{{< feature-state for_k8s_version="1.11" state="stable" >}}

<!--
@@ -63,7 +65,7 @@ using kubeadm, rather than managing the kubelet configuration for each Node manu
### Propagating cluster-level configuration to each kubelet

You can provide the kubelet with default values to be used by `kubeadm init` and `kubeadm join`
commands. Interesting examples include using a different CRI runtime or setting the default subnet
commands. Interesting examples include using a different container runtime or setting the default subnet
used by services.

If you want your services to use the subnet `10.96.0.0/12` as the default for services, you can pass
@@ -94,7 +96,7 @@ For more details on the `KubeletConfiguration` have a look at [this section](#co
### Propagating cluster-level configuration to each kubelet

You can provide the kubelet with default values to be used by the `kubeadm init` and `kubeadm join` commands.
Interesting examples include using a different CRI runtime or setting the default subnet used by services.
Interesting examples include using a different container runtime or setting the default subnet used by services.

If you want your services to use the subnet `10.96.0.0/12` as the default, you can pass the `--service-cidr` argument to kubeadm:
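
A sketch of that invocation (the CIDR below is the example value from the text):

```bash
sudo kubeadm init --service-cidr 10.96.0.0/12
```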
@@ -133,14 +135,12 @@ networking, or other host-specific parameters. The following list provides a few
  unless you are using a cloud provider. You can use the `--hostname-override` flag to override the
  default behavior if you need to specify a Node name different from the machine's hostname.

- Currently, the kubelet cannot automatically detect the cgroup driver used by the CRI runtime,
  but the value of `--cgroup-driver` must match the cgroup driver used by the CRI runtime to ensure
- Currently, the kubelet cannot automatically detect the cgroup driver used by the container runtime,
  but the value of `--cgroup-driver` must match the cgroup driver used by the container runtime to ensure
  the health of the kubelet.

- Depending on the CRI runtime your cluster uses, you may need to specify different flags to the kubelet.
  For instance, when using Docker, you need to specify flags such as `--network-plugin=cni`, but if you
  are using an external runtime, you need to specify `--container-runtime=remote` and specify the CRI
  endpoint using the `--container-runtime-endpoint=<path>`.
- To specify the container runtime you must set its endpoint with the
  `--container-runtime-endpoint=<path>` flag.

You can specify these flags by configuring an individual kubelet's configuration in your service manager,
such as systemd.
@@ -157,12 +157,10 @@ such as systemd.
- Unless you are using a cloud provider, by default the `.metadata.name` of the Node API object is set to the machine's hostname.
  You can use the `--hostname-override` flag to override the default behavior if you need to specify a node name different from the machine's hostname.

- Currently, the kubelet cannot automatically detect the cgroup driver used by the CRI runtime,
  but the value of `--cgroup-driver` must match the cgroup driver used by the CRI runtime to ensure the health of the kubelet.
- Currently, the kubelet cannot automatically detect the cgroup driver used by the container runtime,
  but the value of `--cgroup-driver` must match the cgroup driver used by the container runtime to ensure the health of the kubelet.

- Depending on the CRI runtime your cluster uses, you may need to specify different flags to the kubelet.
  For instance, when using Docker, you need to specify flags such as `--network-plugin=cni`; but if you are using an external runtime,
  you need to specify `--container-runtime=remote` and use `--container-runtime-endpoint=<path>` to specify the CRI endpoint.
- To specify the container runtime, you must set its endpoint with the `--container-runtime-endpoint=<path>` flag.

You can specify these flags by configuring an individual kubelet's configuration in your service manager, such as systemd.
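
For illustration, a per-node override of this kind could be supplied through a systemd drop-in; the file name and socket path below are assumptions, and `KUBELET_EXTRA_ARGS` is the variable the kubeadm packages reserve for user-supplied flags:

```
# /etc/systemd/system/kubelet.service.d/20-local-overrides.conf (hypothetical path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock"
```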
@@ -193,10 +191,9 @@ for more information on the individual fields.
### Workflow when using `kubeadm init`

When you call `kubeadm init`, the kubelet configuration is marshalled to disk
at `/var/lib/kubelet/config.yaml`, and also uploaded to a ConfigMap in the cluster. The ConfigMap
is named `kubelet-config-1.X`, where `X` is the minor version of the Kubernetes version you are
initializing. A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf` with the
baseline cluster-wide configuration for all kubelets in the cluster. This configuration file
at `/var/lib/kubelet/config.yaml`, and also uploaded to a `kubelet-config` ConfigMap in the `kube-system`
namespace of the cluster. A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf`
with the baseline cluster-wide configuration for all kubelets in the cluster. This configuration file
points to the client certificates that allow the kubelet to communicate with the API server. This
addresses the need to
[propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).
@@ -211,7 +208,7 @@ KUBELET_KUBEADM_ARGS="--flag1=value1 --flag2=value2 ..."
```

In addition to the flags used when starting the kubelet, the file also contains dynamic
parameters such as the cgroup driver and whether to use a different CRI runtime socket
parameters such as the cgroup driver and whether to use a different container runtime socket
(`--cri-socket`).

After marshalling these two files to disk, kubeadm attempts to run the following two
@@ -225,10 +222,9 @@ If the reload and restart are successful, the normal `kubeadm init` workflow con
### Workflow when using `kubeadm init`

When you call `kubeadm init`, the kubelet configuration is marshalled to disk at `/var/lib/kubelet/config.yaml`,
and also uploaded to a ConfigMap in the cluster.
The ConfigMap is named `kubelet-config-1.X`, where `X` is the minor version of the Kubernetes version you are initializing.
A kubelet configuration file with the baseline cluster-wide configuration for all kubelets in the cluster is written to `/etc/kubernetes/kubelet.conf`.
When you call `kubeadm init`, the kubelet configuration is written to disk at `/var/lib/kubelet/config.yaml`,
and also uploaded to a `kubelet-config` ConfigMap in the `kube-system` namespace of the cluster.
A kubelet configuration file is also written to `/etc/kubernetes/kubelet.conf`, containing the baseline cluster-wide configuration for all kubelets in the cluster.
This configuration file points to the client certificates that allow the kubelet to communicate with the API server.
This addresses the need to [propagate cluster-level configuration to each kubelet](#propagating-cluster-level-configuration-to-each-kubelet).
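
As a quick check (assuming a recent cluster where the ConfigMap is named `kubelet-config`), the uploaded configuration can be inspected with:

```bash
kubectl -n kube-system get configmap kubelet-config -o yaml
```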
@@ -240,7 +236,7 @@ kubeadm writes an environment file to `/var/lib/kubelet/kubeadm-flags.env`, which contains
KUBELET_KUBEADM_ARGS="--flag1=value1 --flag2=value2 ..."
```

In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver and whether to use a different CRI runtime socket (`--cri-socket`).
In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver and whether to use a different container runtime socket (`--cri-socket`).

After marshalling these two files to disk, kubeadm attempts to run the following two commands, if you are using systemd:
@@ -255,14 +251,13 @@ systemctl daemon-reload && systemctl restart kubelet

When you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform
a TLS bootstrap, which fetches the credential needed to download the
`kubelet-config-1.X` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`. The dynamic
`kubelet-config` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`. The dynamic
environment file is generated in exactly the same way as `kubeadm init`.
-->

### Workflow when using `kubeadm join`

When you run `kubeadm join`, kubeadm uses the Bootstrap Token credential to perform a TLS bootstrap,
which fetches the credential needed to download the `kubelet-config-1.X` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`.
which fetches the credential needed to download the `kubelet-config` ConfigMap and writes it to `/var/lib/kubelet/config.yaml`.
The dynamic environment file is generated in exactly the same way as `kubeadm init`.

<!--
@@ -280,7 +275,6 @@ After the kubelet loads the new configuration, kubeadm writes the
Token. These are used by the kubelet to perform the TLS Bootstrap and obtain a unique
credential, which is stored in `/etc/kubernetes/kubelet.conf`.
-->
After the kubelet loads the new configuration, kubeadm writes the `/etc/kubernetes/bootstrap-kubelet.conf` KubeConfig file,
which contains a CA certificate and a Bootstrap Token.
These are used by the kubelet to perform the TLS bootstrap and obtain a unique credential, which is stored in `/etc/kubernetes/kubelet.conf`.
@@ -289,11 +283,9 @@ the kubelet uses these credentials to perform the TLS bootstrap and obtain a unique credential, which
When the `/etc/kubernetes/kubelet.conf` file is written, the kubelet has finished performing the TLS Bootstrap.
Kubeadm deletes the `/etc/kubernetes/bootstrap-kubelet.conf` file after completing the TLS Bootstrap.
-->
When the `/etc/kubernetes/kubelet.conf` file is written, the kubelet has finished performing the TLS bootstrap.
Kubeadm deletes the `/etc/kubernetes/bootstrap-kubelet.conf` file after completing the TLS bootstrap.

<!--
## The kubelet drop-in file for systemd
@@ -348,7 +340,7 @@ This file specifies the default locations for all of the files managed by kubead
The configuration file installed by the `kubeadm`
[DEB](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf)
or [RPM package](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubeadm/10-kubeadm.conf)
is written to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd.
It augments the basic
[`kubelet.service` for RPM](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service)
or [`kubelet.service` for DEB](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service):
@@ -407,4 +399,3 @@ The DEB and RPM packages shipped with the Kubernetes releases are:
| `kubectl`        | Installs the `/usr/bin/kubectl` binary.                                                                                  |
| `cri-tools`      | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). |
| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins).  |