Merge pull request #6507 from scjane/patch-18

Update configure-upgrade-etcd.md
Qiming 2017-11-30 15:59:56 +08:00 committed by GitHub
commit a49d85186a
1 changed file with 9 additions and 9 deletions


@@ -184,7 +184,7 @@ Before starting the restore operation, a snapshot file must be present. It can e
If the access URLs of the restored cluster are changed from the previous cluster, the Kubernetes API server must be reconfigured accordingly. In this case, restart the Kubernetes API server with the flag `--etcd-servers=$NEW_ETCD_CLUSTER` instead of the flag `--etcd-servers=$OLD_ETCD_CLUSTER`. Replace `$NEW_ETCD_CLUSTER` and `$OLD_ETCD_CLUSTER` with the respective IP addresses. If a load balancer is used in front of an etcd cluster, you might need to update the load balancer instead.
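For illustration only, a minimal sketch of that flag change; the endpoint addresses are placeholders, and how the API server is restarted depends on how it is deployed (for example, a static pod manifest or a systemd unit):

```shell
# Placeholder addresses; all other kube-apiserver flags stay unchanged.
# Before the restore the API server pointed at the old cluster:
#   --etcd-servers=https://10.0.0.10:2379
# After the restore it must point at the restored cluster:
kube-apiserver --etcd-servers=https://10.0.0.20:2379
```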
If the majority of etcd members have permanently failed, the etcd cluster is considered failed. In this scenario, Kubernetes cannot make any changes to its current state. Although the scheduled pods might continue to run, no new pods can be scheduled. In such cases, recover the etcd cluster and potentially reconfigure Kubernetes API server to fix the issue.
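As one hedged way to gauge whether a majority of members is still alive (the endpoint addresses below are placeholders):

```shell
# Ask every member for its health; a failed cluster has fewer than a quorum of
# healthy answers (placeholder addresses).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
  endpoint health
```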
## Upgrading and rolling back etcd clusters
@@ -212,7 +212,7 @@ Note that we need to migrate both the etcd versions that we are using (from 2.2.1
to at least 3.0.x) as well as the version of the etcd API that Kubernetes talks to. The etcd 3.0.x
binaries support both the v2 and v3 API.
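For example (a rough illustration assuming a local etcd 3.0.x member listening on the default client port), `etcdctl` selects which API to speak through an environment variable:

```shell
# Talk to the same etcd 3.0.x server through both APIs. Keys written through one
# API are not visible through the other, which is why the data must be migrated.
ETCDCTL_API=2 etcdctl --endpoints=http://127.0.0.1:2379 ls /
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 get "" --prefix --keys-only
```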
This document describes how to do this migration. If you want to skip the
background and cut right to the procedure, see [Upgrade
Procedure](#upgrade-procedure).
@@ -227,7 +227,7 @@ There are requirements on how an etcd cluster upgrade can be performed. The prim
Upgrade only one minor release at a time. For example, we cannot upgrade directly from 2.1.x to 2.3.x.
Within patch releases it is possible to upgrade and downgrade between arbitrary versions. Starting a cluster for
any intermediate minor release, waiting until the cluster is healthy, and then
shutting down the cluster will perform the migration. For example, to upgrade from version 2.1.x to 2.3.y,
it is enough to start etcd at version 2.2.z, wait until it is healthy, stop it, and then start the
2.3.y version.
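A rough single-member sketch of that sequence, with placeholder binary paths, patch versions, and data directory (a real multi-member cluster needs this done for each member):

```shell
# Pass through the intermediate 2.2.z release on the way from 2.1.x to 2.3.y.
# Paths, versions, and the data directory are placeholders.
/opt/etcd-v2.2.1/etcd --data-dir=/var/etcd/data &
ETCD_PID=$!
etcdctl --endpoints=http://127.0.0.1:2379 cluster-health   # repeat until healthy
kill "$ETCD_PID" && wait "$ETCD_PID"                       # stop the intermediate version
/opt/etcd-v2.3.7/etcd --data-dir=/var/etcd/data
```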
@@ -239,7 +239,7 @@ The etcd team has provided a [custom rollback tool](https://git.k8s.io/kubernete
but the rollback tool has these limitations:
* This custom rollback tool is not part of the etcd repo and does not receive the same
testing as the rest of etcd. We are testing it in a couple of end-to-end tests.
There is only community support here.
* The rollback can be done only from the 3.0.x version (that is, using the v3 API) to the
@@ -263,13 +263,13 @@ rollback might require restarting all Kubernetes components on all nodes.
**Note**: At the time of writing, both Kubelet and KubeProxy use “resource
version” only for watching (i.e. they do not use resource versions for anything
else), and both use the reflector and/or informer frameworks for watching
(i.e. they don't send watch requests themselves). If either framework
can't renew a watch, it starts from the “current version” by doing a “list + watch
from the resource version returned by the list”. That means that if the apiserver
is down for the period of the rollback, all node components will simply
restart their watches and start from “now” once the apiserver is back, and it
will be back with a new resource version. So restarting the node
components is not needed. But the assumptions here may not hold forever.
{: .note}
### Design
@@ -284,7 +284,7 @@ focus on them at all. We focus only on the upgrade/rollback here.
### New etcd Docker image
We decided to completely change the content of the etcd image and the way it works.
So far, the Docker image for etcd in version X has contained only the etcd and
etcdctl binaries.
Going forward, the Docker image for etcd in version X will contain multiple
@@ -337,7 +337,7 @@ script works as follows:
1. Verify that the detected version is 3.0.x with the v3 API, and the
desired version is 2.2.1 with the v2 API. We don't support any other rollback.
1. If so, we run the custom tool provided by the etcd team to do the offline
rollback (see the sketch after this list). This tool reads the v3 formatted data and writes it back to disk
in v2 format.
1. Finally, update the contents of the version file.
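A loose sketch of that logic in shell; the version-file location, its `<etcd version>/<API version>` format, and the `rollback` tool path are assumptions for illustration, not the actual migration script:

```shell
#!/bin/sh
# Hypothetical layout: the data directory holds a version file such as "3.0.17/etcd3".
DATA_DIR="${DATA_DIR:-/var/etcd/data}"
VERSION_FILE="${DATA_DIR}/version.txt"
CURRENT="$(cat "${VERSION_FILE}")"
TARGET="2.2.1/etcd2"

case "${CURRENT}" in
  3.0.*/etcd3)
    # Only 3.0.x with the v3 API can be rolled back, and only to 2.2.1 with the v2 API.
    rollback --data-dir "${DATA_DIR}"        # offline tool from the etcd team (path assumed)
    echo "${TARGET}" > "${VERSION_FILE}"     # record the new version
    ;;
  *)
    echo "unsupported rollback from ${CURRENT} to ${TARGET}" >&2
    exit 1
    ;;
esac
```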
@@ -350,7 +350,7 @@ Simply modify the command line in the etcd manifest to:
Starting in Kubernetes version 1.6, this has been done in the manifests for new
Google Compute Engine clusters. You should also specify these environment
variables. In particular, you must keep `STORAGE_MEDIA_TYPE` set to
`application/json` if you wish to preserve the option to roll back.
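For illustration, the relevant part of such a manifest command line might look like the following; only `STORAGE_MEDIA_TYPE` is named in the text above, so the other variable names, the version value, and the wrapper-script path are assumptions rather than the exact GCE manifest:

```shell
# Assumed variable names (except STORAGE_MEDIA_TYPE), placeholder version, and an
# assumed wrapper-script path, shown only to illustrate the shape of the command.
env TARGET_VERSION=3.0.17 \
    TARGET_STORAGE=etcd3 \
    STORAGE_MEDIA_TYPE=application/json \
    /usr/local/bin/migrate-if-needed.sh
```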