From 6c447c9282fe753662d61f3ac9c0dda7f9c4e3a5 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Mon, 23 Nov 2020 16:41:45 +0800
Subject: [PATCH] [zh] Resync kubeadm-upgrade

---
 .../kubeadm/kubeadm-upgrade.md | 233 ++++++++++--------
 1 file changed, 132 insertions(+), 101 deletions(-)

diff --git a/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index e01404b95d..bfe410fe65 100644
--- a/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -2,7 +2,7 @@
 title: 升级 kubeadm 集群
 content_type: task
 weight: 20
-min-kubernetes-server-version: 1.18
+min-kubernetes-server-version: 1.19
 ---

-本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.17.x 版本升级到 1.18.x 版本,
-或者从版本 1.18.x 升级到 1.18.y ,其中 `y > x`。
+本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.18.x 版本升级到 1.19.x 版本,
+或者从版本 1.19.x 升级到 1.19.y ,其中 `y > x`。

-- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
-- [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
-- [将 kubeadm 集群从 1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
-- [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
+- [将 kubeadm 集群从 1.17 升级到 1.18](https://v1-18.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+- [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+- [将 kubeadm 集群从 1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
+- [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)

-- 你需要有一个由 `kubeadm` 创建并运行着 1.17.0 或更高版本的 Kubernetes 集群。
+- 你需要有一个由 `kubeadm` 创建并运行着 1.18.0 或更高版本的 Kubernetes 集群。
 - [禁用交换分区](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux)。
 - 集群应使用静态的控制平面和 etcd Pod 或者 外部 etcd。
 - 务必仔细认真阅读[发行说明]({{< latest-release-notes >}})。

@@ -89,26 +91,26 @@ The upgrade workflow at high level is the following:
 ## 确定要升级到哪个版本

-找到最新的稳定版 1.18:
+找到最新的稳定版 1.19:

 {{< tabs name="k8s_install_versions" >}}
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 ```
 apt update
 apt-cache policy kubeadm
-# 在列表中查找最新的 1.18 版本
-# 它看起来应该是 1.18.x-00 ,其中 x 是最新的补丁
+# 在列表中查找最新的 1.19 版本
+# 它看起来应该是 1.19.x-00 ,其中 x 是最新的补丁
 ```
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
 ```
 yum list --showduplicates kubeadm --disableexcludes=kubernetes
-# 在列表中查找最新的 1.18 版本
-# 它看起来应该是 1.18.x-0 ,其中 x 是最新的补丁版本
+# 在列表中查找最新的 1.19 版本
+# 它看起来应该是 1.19.x-0 ,其中 x 是最新的补丁版本
 ```
 {{% /tab %}}
 {{< /tabs >}}

@@ -130,16 +132,20 @@ yum list --showduplicates kubeadm --disableexcludes=kubernetes
 {{< tabs name="k8s_install_kubeadm_first_cp" >}}
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 ```shell
-# 用最新的修补程序版本替换 1.18.x-00 中的 x
+# 用最新的修补程序版本替换 1.19.x-00 中的 x
 apt-mark unhold kubeadm && \
-apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
+apt-get update && apt-get install -y kubeadm=1.19.x-00 && \
 apt-mark hold kubeadm
+-
+# 从 apt-get 1.1 版本起,你也可以使用下面的方法
+apt-get update && \
+apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00
 ```
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
 ```shell
-# 用最新的修补程序版本替换 1.18.x-0 中的 x
-yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
+# 用最新的修补程序版本替换 1.19.x-0 中的 x
+yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes
 ```
 {{% /tab %}}
 {{< /tabs >}}

@@ -192,36 +198,48 @@ yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
 [preflight] Running pre-flight checks.
 [upgrade] Running cluster health checks
 [upgrade] Fetching available versions to upgrade to
-[upgrade/versions] Cluster version: v1.17.3
-[upgrade/versions] kubeadm version: v1.18.0
-[upgrade/versions] Latest stable version: v1.18.0
-[upgrade/versions] Latest version in the v1.17 series: v1.18.0
+[upgrade/versions] Cluster version: v1.18.4
+[upgrade/versions] kubeadm version: v1.19.0
+[upgrade/versions] Latest stable version: v1.19.0
+[upgrade/versions] Latest version in the v1.18 series: v1.18.4

 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
 COMPONENT   CURRENT       AVAILABLE
-Kubelet     1 x v1.17.3   v1.18.0
+Kubelet     1 x v1.18.4   v1.19.0

-Upgrade to the latest version in the v1.17 series:
+Upgrade to the latest version in the v1.18 series:

 COMPONENT            CURRENT   AVAILABLE
-API Server           v1.17.3   v1.18.0
-Controller Manager   v1.17.3   v1.18.0
-Scheduler            v1.17.3   v1.18.0
-Kube Proxy           v1.17.3   v1.18.0
-CoreDNS              1.6.5     1.6.7
-Etcd                 3.4.3     3.4.3-0
+API Server           v1.18.4   v1.19.0
+Controller Manager   v1.18.4   v1.19.0
+Scheduler            v1.18.4   v1.19.0
+Kube Proxy           v1.18.4   v1.19.0
+CoreDNS              1.6.7     1.7.0
+Etcd                 3.4.3-0   3.4.7-0

 You can now apply the upgrade by executing the following command:

-kubeadm upgrade apply v1.18.0
+kubeadm upgrade apply v1.19.0

 _____________________________________________________________________
+
+The table below shows the current state of component configs as understood by this version of kubeadm.
+Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
+resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
+upgrade to is denoted in the "PREFERRED VERSION" column.
+
+API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
+kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
+kubelet.config.k8s.io     v1beta1           v1beta1             no
+_____________________________________________________________________
 ```

 此命令检查你的集群是否可以升级,并可以获取到升级的版本。
+其中也显示了组件配置版本状态的表格。

+{{< note >}}
+如果 `kubeadm upgrade plan` 显示有任何组件配置需要手动升级,则用户必须
+通过命令行参数 `--config` 给 `kubeadm upgrade apply` 操作
+提供带有替换配置的配置文件。
+{{< /note >}}
+
 - 选择要升级到的版本,然后运行相应的命令。例如:

   ```shell
   # 将 x 替换为你为此次升级所选的补丁版本号
-  sudo kubeadm upgrade apply v1.18.x
+  sudo kubeadm upgrade apply v1.19.x
   ```

   你应该可以看见与下面类似的输出:

-  ```none
+  ```
   [upgrade/config] Making sure the configuration is correct:
   [upgrade/config] Reading configuration from the cluster...
   [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
   [preflight] Running pre-flight checks.
   [upgrade] Running cluster health checks
-  [upgrade/version] You have chosen to change the cluster version to "v1.18.0"
-  [upgrade/versions] Cluster version: v1.17.3
-  [upgrade/versions] kubeadm version: v1.18.0
+  [upgrade/version] You have chosen to change the cluster version to "v1.19.0"
+  [upgrade/versions] Cluster version: v1.18.4
+  [upgrade/versions] kubeadm version: v1.19.0
   [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
-  [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
-  [upgrade/prepull] Prepulling image for component etcd.
-  [upgrade/prepull] Prepulling image for component kube-apiserver.
-  [upgrade/prepull] Prepulling image for component kube-controller-manager.
-  [upgrade/prepull] Prepulling image for component kube-scheduler.
-  [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
-  [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
-  [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
-  [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
-  [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
-  [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
-  [upgrade/prepull] Prepulled image for component etcd.
-  [upgrade/prepull] Prepulled image for component kube-apiserver.
-  [upgrade/prepull] Prepulled image for component kube-controller-manager.
-  [upgrade/prepull] Prepulled image for component kube-scheduler.
-  [upgrade/prepull] Successfully prepulled the images for all the control plane components
-  [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"...
-  Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
-  Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
-  Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
+  [upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
+  [upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
+  [upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
+  [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
+  Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
+  Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
+  Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
   [upgrade/etcd] Upgrading to TLS for etcd
-  [upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
-  [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012"
-  W0308 18:48:14.535122    3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
+  Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
+  [upgrade/staticpods] Preparing for "etcd" upgrade
+  [upgrade/staticpods] Renewing etcd-server certificate
+  [upgrade/staticpods] Renewing etcd-peer certificate
+  [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
+  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/etcd.yaml"
+  [upgrade/staticpods] Waiting for the kubelet to restart the component
+  [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
+  Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
+  Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
+  Static pod: etcd-kind-control-plane hash: 59e40b2aab1cd7055e64450b5ee438f0
+  [apiclient] Found 1 Pods for label selector component=etcd
+  [upgrade/staticpods] Component "etcd" upgraded successfully!
+  [upgrade/etcd] Waiting for etcd to become available
+  [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests999800980"
   [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
   [upgrade/staticpods] Renewing apiserver certificate
   [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
   [upgrade/staticpods] Renewing front-proxy-client certificate
   [upgrade/staticpods] Renewing apiserver-etcd-client certificate
-  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml"
+  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-apiserver.yaml"
   [upgrade/staticpods] Waiting for the kubelet to restart the component
   [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
-  Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
-  Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4
+  Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
+  Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
+  Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
+  Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
+  Static pod: kube-apiserver-kind-control-plane hash: f717874150ba572f020dcd89db8480fc
   [apiclient] Found 1 Pods for label selector component=kube-apiserver
   [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
   [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
   [upgrade/staticpods] Renewing controller-manager.conf certificate
-  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml"
+  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-controller-manager.yaml"
   [upgrade/staticpods] Waiting for the kubelet to restart the component
   [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
-  Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
-  Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156
+  Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
+  Static pod: kube-controller-manager-kind-control-plane hash: b155b63c70e798b806e64a866e297dd0
   [apiclient] Found 1 Pods for label selector component=kube-controller-manager
   [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
   [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
   [upgrade/staticpods] Renewing scheduler.conf certificate
-  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml"
+  [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-scheduler.yaml"
   [upgrade/staticpods] Waiting for the kubelet to restart the component
   [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
-  Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
-  Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d
+  Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
+  Static pod: kube-scheduler-kind-control-plane hash: 260018ac854dbf1c9fe82493e88aec31
   [apiclient] Found 1 Pods for label selector component=kube-scheduler
   [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
   [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
-  [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
-  [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
+  [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
   [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
+  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
   [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
   [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
   [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
+  W0713 16:26:14.074656    2986 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained.
   [addons] Applied essential addon: CoreDNS
   [addons] Applied essential addon: kube-proxy
-  [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy!
+  [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!
   [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
   ```

@@ -396,25 +428,23 @@ sudo kubeadm upgrade apply
 {{< tabs name="k8s_install_kubelet" >}}
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 ```shell
-# 用最新的补丁版本替换 1.18.x-00 中的 x
+# 用最新的补丁版本替换 1.19.x-00 中的 x
 apt-mark unhold kubelet kubectl && \
-apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
+apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
 apt-mark hold kubelet kubectl
-```
-从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
+# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
-```shell
 apt-get update && \
-apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
+apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
 ```
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
-用最新的补丁版本替换 1.18.x-00 中的 x
+用最新的补丁版本替换 1.19.x-00 中的 x
 ```shell
-yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
+yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
 ```
 {{% /tab %}}
 {{< /tabs >}}

@@ -437,7 +467,8 @@ without compromising the minimum required capacity for running your workloads.
 -->
 ## 升级工作节点

-工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,
-以不影响运行工作负载所需的最小容量。
+工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,
+以不影响运行工作负载所需的最小容量。

-### 保护节点
+### 腾空节点

 `kubeadm upgrade node` 在工作节点上完成以下工作:
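
For quick reference, the sketch below strings together the control-plane upgrade flow that the hunks above document. It is not part of the patch itself: it assumes a Debian/Ubuntu host, `1.19.3` is only a stand-in for whichever 1.19 patch release you actually pick, and the final kubelet restart is the usual follow-up step that falls outside the hunks shown here.

```shell
# Minimal sketch of the first control-plane node upgrade described in this patch.
# Assumes Debian/Ubuntu packaging; KUBE_VERSION is a placeholder you must adjust.
KUBE_VERSION=1.19.3

# Upgrade the kubeadm tool first.
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm="${KUBE_VERSION}-00"
sudo apt-mark hold kubeadm

# Check what the upgrade would do, then apply it.
kubeadm version
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply "v${KUBE_VERSION}"

# Upgrade kubelet and kubectl, then restart the kubelet.
sudo apt-mark unhold kubelet kubectl
sudo apt-get update && sudo apt-get install -y kubelet="${KUBE_VERSION}-00" kubectl="${KUBE_VERSION}-00"
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

Worker nodes follow the same package steps but run `kubeadm upgrade node` instead of `upgrade apply`, with each node drained beforehand and uncordoned afterwards, as the 「腾空节点」 section renamed in the last hunk goes on to describe.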