[zh] Resync kubeadm-upgrade

parent fd65678baa
commit 6c447c9282
@@ -2,7 +2,7 @@
 title: 升级 kubeadm 集群
 content_type: task
 weight: 20
-min-kubernetes-server-version: 1.18
+min-kubernetes-server-version: 1.19
 ---
 <!--
 reviewers:
@@ -17,10 +17,10 @@ min-kubernetes-server-version: 1.18
 
 <!--
 This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
-1.17.x to version 1.18.x, and from version 1.18.x to 1.18.y (where `y > x`).
+1.18.x to version 1.19.x, and from version 1.19.x to 1.19.y (where `y > x`).
 -->
-本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.17.x 版本升级到 1.18.x 版本,
-或者从版本 1.18.x 升级到 1.18.y,其中 `y > x`。
+本页介绍如何将 `kubeadm` 创建的 Kubernetes 集群从 1.18.x 版本升级到 1.19.x 版本,
+或者从版本 1.19.x 升级到 1.19.y,其中 `y > x`。
 
 <!--
 To see information about upgrading clusters created using older versions of kubeadm,
@@ -29,15 +29,17 @@ please refer to following pages instead:
 要查看 kubeadm 创建的有关旧版本集群升级的信息,请参考以下页面:
 
 <!--
+- [Upgrading kubeadm cluster from 1.17 to 1.18](https://v1-18.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
 - [Upgrading kubeadm cluster from 1.16 to 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
 - [Upgrading kubeadm cluster from 1.15 to 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
 - [Upgrading kubeadm cluster from 1.14 to 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
 - [Upgrading kubeadm cluster from 1.13 to 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
 -->
-- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
-- [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
-- [将 kubeadm 集群从 1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
-- [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
+- [将 kubeadm 集群从 1.17 升级到 1.18](https://v1-18.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+- [将 kubeadm 集群从 1.16 升级到 1.17](https://v1-17.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+- [将 kubeadm 集群从 1.15 升级到 1.16](https://v1-16.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+- [将 kubeadm 集群从 1.14 升级到 1.15](https://v1-15.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/)
+- [将 kubeadm 集群从 1.13 升级到 1.14](https://v1-15.docs.kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/)
 
 <!--
 The upgrade workflow at high level is the following:
@@ -55,14 +57,14 @@ The upgrade workflow at high level is the following:
 ## {{% heading "prerequisites" %}}
 
 <!--
-- You need to have a kubeadm Kubernetes cluster running version 1.17.0 or later.
+- You need to have a kubeadm Kubernetes cluster running version 1.18.0 or later.
 - [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
 - The cluster should use a static control plane and etcd pods or external etcd.
 - Make sure you read the [release notes]({{< latest-release-notes >}}) carefully.
 - Make sure to back up any important components, such as app-level state stored in a database.
   `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
 -->
-- 你需要有一个由 `kubeadm` 创建并运行着 1.17.0 或更高版本的 Kubernetes 集群。
+- 你需要有一个由 `kubeadm` 创建并运行着 1.18.0 或更高版本的 Kubernetes 集群。
 - [禁用交换分区](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux)。
 - 集群应使用静态的控制平面和 etcd Pod 或者外部 etcd。
 - 务必仔细认真阅读[发行说明]({{< latest-release-notes >}})。
@@ -89,26 +91,26 @@
 <!--
 ## Determine which version to upgrade to
 
-Find the latest stable 1.18 version:
+Find the latest stable 1.19 version:
 -->
 ## 确定要升级到哪个版本
 
-找到最新的稳定版 1.18:
+找到最新的稳定版 1.19:
 
 {{< tabs name="k8s_install_versions" >}}
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 ```
 apt update
 apt-cache policy kubeadm
-# 在列表中查找最新的 1.18 版本
-# 它看起来应该是 1.18.x-00,其中 x 是最新的补丁
+# 在列表中查找最新的 1.19 版本
+# 它看起来应该是 1.19.x-00,其中 x 是最新的补丁
 ```
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
 ```
 yum list --showduplicates kubeadm --disableexcludes=kubernetes
-# 在列表中查找最新的 1.18 版本
-# 它看起来应该是 1.18.x-0,其中 x 是最新的补丁版本
+# 在列表中查找最新的 1.19 版本
+# 它看起来应该是 1.19.x-0,其中 x 是最新的补丁版本
 ```
 {{% /tab %}}
 {{< /tabs >}}
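Not part of the commit itself, but as a sketch of the step the hunk above documents: once the package list is printed, the newest 1.19 patch release can be picked mechanically with GNU `sort -V` (version sort). The version strings below are made-up sample data standing in for real `apt-cache policy kubeadm` output.

```shell
# Hypothetical example (sample data, not real apt-cache output):
# filter the 1.19 series and version-sort to find the newest patch.
latest=$(printf '1.19.0-00\n1.18.6-00\n1.19.2-00\n1.19.1-00\n' \
  | grep '^1\.19\.' \
  | sort -V \
  | tail -n1)
echo "$latest"   # 1.19.2-00
```

`sort -V` orders `1.19.10` after `1.19.9`, which plain lexical sort would get wrong.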
@@ -130,16 +132,20 @@ yum list --showduplicates kubeadm --disableexcludes=kubernetes
 {{< tabs name="k8s_install_kubeadm_first_cp" >}}
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 ```shell
-# 用最新的修补程序版本替换 1.18.x-00 中的 x
+# 用最新的修补程序版本替换 1.19.x-00 中的 x
 apt-mark unhold kubeadm && \
-apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
+apt-get update && apt-get install -y kubeadm=1.19.x-00 && \
 apt-mark hold kubeadm
+-
+# 从 apt-get 1.1 版本起,你也可以使用下面的方法
+apt-get update && \
+apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00
 ```
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
 ```shell
-# 用最新的修补程序版本替换 1.18.x-0 中的 x
-yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
+# 用最新的修补程序版本替换 1.19.x-0 中的 x
+yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes
 ```
 {{% /tab %}}
 {{< /tabs >}}
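As a side note on the pinned versions in the hunk above: the docs write the package version as a `1.19.x-00` template that the reader fills in. A hypothetical helper (assumption for illustration, not from the page) can do that substitution with `sed`:

```shell
# Hypothetical helper: substitute the chosen patch number for the "x"
# placeholder in the template the docs use.
patch=2
pin=$(printf 'kubeadm=1.19.x-00' | sed "s/x/${patch}/")
echo "$pin"   # kubeadm=1.19.2-00
```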
@@ -192,36 +198,48 @@
 [preflight] Running pre-flight checks.
 [upgrade] Running cluster health checks
 [upgrade] Fetching available versions to upgrade to
-[upgrade/versions] Cluster version: v1.17.3
-[upgrade/versions] kubeadm version: v1.18.0
-[upgrade/versions] Latest stable version: v1.18.0
-[upgrade/versions] Latest version in the v1.17 series: v1.18.0
+[upgrade/versions] Cluster version: v1.18.4
+[upgrade/versions] kubeadm version: v1.19.0
+[upgrade/versions] Latest stable version: v1.19.0
+[upgrade/versions] Latest version in the v1.18 series: v1.18.4
 
 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
 COMPONENT   CURRENT       AVAILABLE
-Kubelet     1 x v1.17.3   v1.18.0
+Kubelet     1 x v1.18.4   v1.19.0
 
-Upgrade to the latest version in the v1.17 series:
+Upgrade to the latest version in the v1.18 series:
 
 COMPONENT            CURRENT   AVAILABLE
-API Server           v1.17.3   v1.18.0
-Controller Manager   v1.17.3   v1.18.0
-Scheduler            v1.17.3   v1.18.0
-Kube Proxy           v1.17.3   v1.18.0
-CoreDNS              1.6.5     1.6.7
-Etcd                 3.4.3     3.4.3-0
+API Server           v1.18.4   v1.19.0
+Controller Manager   v1.18.4   v1.19.0
+Scheduler            v1.18.4   v1.19.0
+Kube Proxy           v1.18.4   v1.19.0
+CoreDNS              1.6.7     1.7.0
+Etcd                 3.4.3-0   3.4.7-0
 
 You can now apply the upgrade by executing the following command:
 
-    kubeadm upgrade apply v1.18.0
+    kubeadm upgrade apply v1.19.0
 
 _____________________________________________________________________
 
+The table below shows the current state of component configs as understood by this version of kubeadm.
+Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
+resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
+upgrade to is denoted in the "PREFERRED VERSION" column.
+
+API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
+kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
+kubelet.config.k8s.io     v1beta1           v1beta1             no
+_____________________________________________________________________
 ```
 
 <!--
 This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
+It also shows a table with the component config version states.
 -->
 此命令检查你的集群是否可以升级,并可以获取到升级的版本。
+其中也显示了组件配置版本状态的表格。
 
 <!--
 `kubeadm upgrade` also automatically renews the certificates that it manages on this node.
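Not part of the commit, but a small sketch of working with the `kubeadm upgrade plan` output shown above: the current cluster version can be scraped out of the `[upgrade/versions]` lines with `sed`. The two-line string below is a cut-down copy of the sample output, used here as fixed test data.

```shell
# Sketch: extract the "Cluster version" field from sample plan output.
plan_output='[upgrade/versions] Cluster version: v1.18.4
[upgrade/versions] kubeadm version: v1.19.0'
cluster=$(printf '%s\n' "$plan_output" | sed -n 's/.*Cluster version: //p')
echo "$cluster"   # v1.18.4
```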
@@ -234,19 +252,30 @@ For more information see the [certificate management guide](/docs/tasks/administ
 关于更多细节信息,可参见[证书管理指南](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-certs)。
 {{</ note >}}
 
+{{< note >}}
+<!--
+If `kubeadm upgrade plan` shows any component configs that require manual upgrade, users must provide
+a config file with replacement configs to `kubeadm upgrade apply` via the `--config` command line flag.
+Failing to do so will cause `kubeadm upgrade apply` to exit with an error and not perform an upgrade.
+-->
+如果 `kubeadm upgrade plan` 显示有任何组件配置需要手动升级,则用户必须
+通过命令行参数 `--config` 给 `kubeadm upgrade apply` 操作
+提供带有替换配置的配置文件。
+{{</ note >}}
+
 <!--
 - Choose a version to upgrade to, and run the appropriate command. For example:
 
   ```shell
   # replace x with the patch version you picked for this upgrade
-  sudo kubeadm upgrade apply v1.18.x
+  sudo kubeadm upgrade apply v1.19.x
   ```
 -->
 - 选择要升级到的版本,然后运行相应的命令。例如:
 
   ```shell
   # 将 x 替换为你为此次升级所选的补丁版本号
-  sudo kubeadm upgrade apply v1.18.x
+  sudo kubeadm upgrade apply v1.19.x
   ```
 
 <!--
@ -254,81 +283,84 @@ For more information see the [certificate management guide](/docs/tasks/administ
|
|||
-->
|
||||
你应该可以看见与下面类似的输出:
|
||||
|
||||
```none
|
||||
```
|
||||
[upgrade/config] Making sure the configuration is correct:
|
||||
[upgrade/config] Reading configuration from the cluster...
|
||||
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
|
||||
[preflight] Running pre-flight checks.
|
||||
[upgrade] Running cluster health checks
|
||||
[upgrade/version] You have chosen to change the cluster version to "v1.18.0"
|
||||
[upgrade/versions] Cluster version: v1.17.3
|
||||
[upgrade/versions] kubeadm version: v1.18.0
|
||||
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
|
||||
[upgrade/versions] Cluster version: v1.18.4
|
||||
[upgrade/versions] kubeadm version: v1.19.0
|
||||
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
|
||||
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
|
||||
[upgrade/prepull] Prepulling image for component etcd.
|
||||
[upgrade/prepull] Prepulling image for component kube-apiserver.
|
||||
[upgrade/prepull] Prepulling image for component kube-controller-manager.
|
||||
[upgrade/prepull] Prepulling image for component kube-scheduler.
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
|
||||
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
|
||||
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
|
||||
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
|
||||
[upgrade/prepull] Prepulled image for component etcd.
|
||||
[upgrade/prepull] Prepulled image for component kube-apiserver.
|
||||
[upgrade/prepull] Prepulled image for component kube-controller-manager.
|
||||
[upgrade/prepull] Prepulled image for component kube-scheduler.
|
||||
[upgrade/prepull] Successfully prepulled the images for all the control plane components
|
||||
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.0"...
|
||||
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
|
||||
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
|
||||
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
|
||||
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
|
||||
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
|
||||
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
|
||||
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
|
||||
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
|
||||
Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
|
||||
Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
|
||||
[upgrade/etcd] Upgrading to TLS for etcd
|
||||
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.0" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
|
||||
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012"
|
||||
W0308 18:48:14.535122 3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
|
||||
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
|
||||
[upgrade/staticpods] Preparing for "etcd" upgrade
|
||||
[upgrade/staticpods] Renewing etcd-server certificate
|
||||
[upgrade/staticpods] Renewing etcd-peer certificate
|
||||
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/etcd.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
|
||||
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
|
||||
Static pod: etcd-kind-control-plane hash: 59e40b2aab1cd7055e64450b5ee438f0
|
||||
[apiclient] Found 1 Pods for label selector component=etcd
|
||||
[upgrade/staticpods] Component "etcd" upgraded successfully!
|
||||
[upgrade/etcd] Waiting for etcd to become available
|
||||
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests999800980"
|
||||
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
|
||||
[upgrade/staticpods] Renewing apiserver certificate
|
||||
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
|
||||
[upgrade/staticpods] Renewing front-proxy-client certificate
|
||||
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml"
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-apiserver.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
|
||||
Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4
|
||||
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
|
||||
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
|
||||
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
|
||||
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
|
||||
Static pod: kube-apiserver-kind-control-plane hash: f717874150ba572f020dcd89db8480fc
|
||||
[apiclient] Found 1 Pods for label selector component=kube-apiserver
|
||||
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
|
||||
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
|
||||
[upgrade/staticpods] Renewing controller-manager.conf certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml"
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-controller-manager.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
|
||||
Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156
|
||||
Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
|
||||
Static pod: kube-controller-manager-kind-control-plane hash: b155b63c70e798b806e64a866e297dd0
|
||||
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
|
||||
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
|
||||
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
|
||||
[upgrade/staticpods] Renewing scheduler.conf certificate
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml"
|
||||
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-scheduler.yaml"
|
||||
[upgrade/staticpods] Waiting for the kubelet to restart the component
|
||||
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
|
||||
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
|
||||
Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d
|
||||
Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
|
||||
Static pod: kube-scheduler-kind-control-plane hash: 260018ac854dbf1c9fe82493e88aec31
|
||||
[apiclient] Found 1 Pods for label selector component=kube-scheduler
|
||||
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
|
||||
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
|
||||
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
|
||||
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
|
||||
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
|
||||
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
|
||||
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
|
||||
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
|
||||
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
|
||||
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
|
||||
W0713 16:26:14.074656 2986 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained.
|
||||
[addons] Applied essential addon: CoreDNS
|
||||
[addons] Applied essential addon: kube-proxy
|
||||
|
||||
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.0". Enjoy!
|
||||
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!
|
||||
|
||||
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
|
||||
```
|
||||
|
|
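Not from the commit, but a small sketch related to the sample output above: a script that drives the upgrade could confirm the final version by scraping the `[upgrade/successful]` line. The one-line string below is copied from the sample output and used as fixed test data.

```shell
# Sketch: pull the target version out of the SUCCESS line.
line='[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!'
ver=$(printf '%s\n' "$line" | sed -n 's/.*upgraded to "\(v[0-9.]*\)".*/\1/p')
echo "$ver"   # v1.19.0
```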
@@ -396,25 +428,23 @@ sudo kubeadm upgrade apply
 {{< tabs name="k8s_install_kubelet" >}}
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 ```shell
-# 用最新的补丁版本替换 1.18.x-00 中的 x
+# 用最新的补丁版本替换 1.19.x-00 中的 x
 apt-mark unhold kubelet kubectl && \
-apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
+apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
 apt-mark hold kubelet kubectl
 ```
 
-从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
+# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
 
 ```shell
 apt-get update && \
-apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
+apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
 ```
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
 
-用最新的补丁版本替换 1.18.x-00 中的 x
+用最新的补丁版本替换 1.19.x-00 中的 x
 
 ```shell
-yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
+yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
 ```
 {{% /tab %}}
 {{< /tabs >}}
@@ -437,7 +467,8 @@ without compromising the minimum required capacity for running your workloads.
 -->
 ## 升级工作节点
 
-工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,以不影响运行工作负载所需的最小容量。
+工作节点上的升级过程应该一次执行一个节点,或者一次执行几个节点,
+以不影响运行工作负载所需的最小容量。
 
 <!--
 ### Upgrade kubeadm
@@ -453,33 +484,31 @@
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 
 ```shell
-# 将 1.18.x-00 中的 x 替换为最新的补丁版本
+# 将 1.19.x-00 中的 x 替换为最新的补丁版本
 apt-mark unhold kubeadm && \
-apt-get update && apt-get install -y kubeadm=1.18.x-00 && \
+apt-get update && apt-get install -y kubeadm=1.19.x-00 && \
 apt-mark hold kubeadm
 ```
 
-从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
+# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
 
 ```shell
 apt-get update && \
-apt-get install -y --allow-change-held-packages kubeadm=1.18.x-00
+apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00
 ```
 
 {{% /tab %}}
 {{% tab name="CentOS、RHEL 或 Fedora" %}}
 
 ```shell
-# 用最新的补丁版本替换 1.18.x-00 中的 x
-yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
+# 用最新的补丁版本替换 1.19.x-00 中的 x
+yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes
 ```
 {{% /tab %}}
 {{< /tabs >}}
 
 <!--
-### Cordon the node
+### Drain the node
 -->
-### 保护节点
+### 腾空节点
 
 <!--
 1. Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:
@@ -546,17 +575,15 @@ yum install -y kubeadm-1.18.x-0 --disableexcludes=kubernetes
 {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
 
 ```shell
-# 将 1.18.x-00 中的 x 替换为最新的补丁版本
+# 将 1.19.x-00 中的 x 替换为最新的补丁版本
 apt-mark unhold kubelet kubectl && \
-apt-get update && apt-get install -y kubelet=1.18.x-00 kubectl=1.18.x-00 && \
+apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
 apt-mark hold kubelet kubectl
 ```
 
-从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
+# 从 apt-get 的 1.1 版本开始,你也可以使用下面的方法:
 
 ```
 apt-get update && \
-apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x-00
+apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
 ```
 
 {{% /tab %}}
@@ -564,7 +591,7 @@ apt-get install -y --allow-change-held-packages kubelet=1.18.x-00 kubectl=1.18.x
 
 ```shell
 # 将 1.18.x-00 中的 x 替换为最新的补丁版本
-yum install -y kubelet-1.18.x-0 kubectl-1.18.x-0 --disableexcludes=kubernetes
+yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
 ```
 
 {{% /tab %}}
@@ -680,6 +707,7 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
 - The control plane is healthy
 - Enforces the version skew policies.
 - Makes sure the control plane images are available or available to pull to the machine.
+- Generates replacements and/or uses user supplied overwrites if component configs require version upgrades.
 - Upgrades the control plane components or rollbacks if any of them fails to come up.
 - Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
 - Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
@@ -692,8 +720,9 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
 - API 服务器是可访问的
 - 所有节点处于 `Ready` 状态
 - 控制面是健康的
-- 强制执行版本 skew 策略。
+- 强制执行版本偏差策略。
 - 确保控制面的镜像是可用的或可拉取到服务器上。
+- 如果组件配置要求版本升级,则生成替代配置与/或使用用户提供的覆盖版本配置。
 - 升级控制面组件或回滚(如果其中任何一个组件无法启动)。
 - 应用新的 `kube-dns` 和 `kube-proxy` 清单,并强制创建所有必需的 RBAC 规则。
 - 如果旧文件在 180 天后过期,将创建 API 服务器的新证书和密钥文件并备份旧文件。
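Not part of the commit, but a toy illustration of the version skew policy the checklist above enforces. The exact rule here is an assumption for illustration (kubelet no newer than the API server, and at most one minor version older); the real policy is defined by the Kubernetes version skew documentation.

```shell
# Toy skew check (assumed rule, for illustration only):
# kubelet minor must be <= apiserver minor, and at most 1 behind.
apiserver_minor=19
kubelet_minor=18
if [ "$kubelet_minor" -le "$apiserver_minor" ] \
   && [ $((apiserver_minor - kubelet_minor)) -le 1 ]; then
  echo "skew OK"
else
  echo "skew violation"
fi
```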
@@ -717,6 +746,8 @@ and post-upgrade manifest file for a certain component, a backup file for it wil
 
 - Fetches the kubeadm `ClusterConfiguration` from the cluster.
-- Upgrades the kubelet configuration for this node.
+- Upgrades the static Pod manifests for the control plane components.
+- Upgrades the kubelet configuration for this node.
 -->
 `kubeadm upgrade node` 在工作节点上完成以下工作:
 