Merge pull request #9406 from millerdz/patch-11

Remove web interface upgrade section
Traci Morrison 2019-09-24 11:43:02 -04:00 committed by GitHub
commit b0ad928444
3 changed files with 66 additions and 88 deletions


@@ -26,28 +26,28 @@ This can lead to misconfigurations that are difficult to troubleshoot.
Complete the following checks:
- Systems:
- Confirm time sync across all nodes (and check the time daemon logs for any large time drift)
- Check system requirements (`PROD=4vCPU/16GB`) for UCP managers and DTR replicas
- Review the full UCP/DTR/Engine port requirements
- Ensure that your cluster nodes meet the minimum requirements
- Before performing any upgrade, ensure that you meet all minimum requirements listed
in [UCP System requirements](/ee/ucp/admin/install/system-requirements/), including port openings (UCP 3.x added more
required ports for Kubernetes), memory, and disk space. For example, manager nodes must have at least 8GB of memory.
> **Note**: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
> Azure, ensure that all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites)
> are met.
- Storage:
- Check `/var/` storage allocation and increase it if usage is over 70%.
- In addition, check all nodes' local filesystems for any disk storage issues (as well as the DTR backend storage, for example, NFS).
- If you are not using the Overlay2 storage driver, take this opportunity to switch to it; you will find it more stable. (Note:
The transition from Device mapper to Overlay2 is a destructive rebuild.)
- Operating system:
- If the cluster nodes' OS branch is older (Ubuntu 14.x, RHEL 7.3, etc.), consider patching all relevant packages to the
most recent versions (including the kernel).
- Perform a rolling restart of each node before the upgrade (to confirm that in-memory settings match the startup scripts).
- Run `check-config.sh` on each cluster node (after the rolling restart) to check for any kernel compatibility issues, as shown in the sketch below.
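The following is a minimal sketch of that check, assuming the node has internet access and that you fetch `check-config.sh` from the Moby repository (the download URL and the `/tmp` destination are assumptions; use whichever copy of the script your environment provides):

```
# Fetch the kernel configuration checker (download source is an assumption; adjust as needed).
curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o /tmp/check-config.sh
chmod +x /tmp/check-config.sh

# Run the check against the running kernel's configuration and review any "missing" items.
/tmp/check-config.sh
```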
@@ -55,7 +55,7 @@ Complete the following checks:
- Procedural:
- Perform Swarm, UCP, and DTR backups pre-upgrade
- Gather compose file/service/stack files
- Generate a UCP support dump pre-upgrade (as a point-in-time reference)
- Preload Engine/UCP/DTR images. If your cluster is offline (with no
connection to the internet) then Docker provides tarballs containing all
@@ -69,26 +69,26 @@ Complete the following checks:
```
- Load troubleshooting packages (netshoot, etc)
- Best order for upgrades: Engine, UCP, and then DTR. Note: The scope of this topic is limited to upgrade instructions for UCP.
- Upgrade strategy:
For each worker node that requires an upgrade, you can upgrade that node in place or you can replace the node
with a new worker node. The type of upgrade you perform depends on what is needed for each node:
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
manager node. Automatically upgrades the entire cluster.
- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
advanced than the automated, in-place cluster upgrade.
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
Automatically upgrades manager nodes and allows you to control the order of worker node upgrades.
- [Replace all worker nodes using blue-green deployment](#replace-existing-worker-nodes-using-blue-green-deployment):
Performed using the CLI. This type of upgrade allows you to
stand up a new cluster in parallel to the current one
and cut over when complete. You can join new worker nodes,
schedule workloads to run on the new nodes, pause, drain, and remove old worker nodes
in batches of multiple nodes rather than one at a time, and shut down servers to
remove worker nodes. This type of upgrade is the most advanced.
## Back up your cluster
@@ -118,48 +118,26 @@ Starting with the manager nodes, and then worker nodes:
and check that the node is healthy and is part of the cluster.
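As a quick sanity check after each node comes back up, the commands below (run from a manager node) show the node's state and availability; this is a minimal sketch, and `<node name or id>` is a placeholder:

```
# List all cluster nodes with their status and availability.
docker node ls

# Inspect one node's state (expect "ready") and availability (expect "active").
docker node inspect <node name or id> \
  --format 'State: {{ .Status.State }}  Availability: {{ .Spec.Availability }}'
```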
## Upgrade UCP
When upgrading Docker Universal Control Plane (UCP) to version {{ page.ucp_version }}, you can choose from
different upgrade workflows:
> **Important**: In all upgrade workflows, manager nodes are automatically upgraded in place. You cannot control the order
of manager node upgrades.
- [Automated, in-place cluster upgrade](#automated-in-place-cluster-upgrade): Performed on any
manager node. Automatically upgrades the entire cluster.
- Manual cluster upgrade: Performed using the CLI or the UCP UI. Automatically upgrades manager
nodes and allows you to control the upgrade order of worker nodes. This type of upgrade is more
advanced than the automated, in-place cluster upgrade.
- [Upgrade existing nodes in place](#phased-in-place-cluster-upgrade): Performed using the CLI.
Automatically upgrades manager nodes and allows you to control the order of worker node upgrades.
- [Replace all worker nodes using blue-green deployment](#replace-existing-worker-nodes-using-blue-green-deployment):
Performed using the CLI. This type of upgrade allows you to
stand up a new cluster in parallel to the current one
and cut over when complete. You can join new worker nodes,
schedule workloads to run on the new nodes, pause, drain, and remove old worker nodes
in batches of multiple nodes rather than one at a time, and shut down servers to
remove worker nodes. This type of upgrade is the most advanced.
### Use the web interface to perform an upgrade
> **Note**: If you plan to add nodes to the UCP cluster, use the [CLI](#use-the-cli-to-perform-an-upgrade) for the upgrade.
When an upgrade is available for a UCP installation, a banner appears.
![](../../images/upgrade-ucp-1.png){: .with-border}
Clicking this message takes an admin user directly to the upgrade process.
It can be found under the **Upgrade** tab of the **Admin Settings** section.
![](../../images/upgrade-ucp-2.png){: .with-border}
In the **Available Versions** drop-down, select the version you want to upgrade to.
Copy and paste the CLI command provided into a terminal on a manager node to
perform the upgrade.
During the upgrade, the web interface will be unavailable, and you should wait
until completion before continuing to interact with it. When the upgrade
completes, you'll see a notification that a newer version of the web interface
is available and a browser refresh is required to see it.
### Use the CLI to perform an upgrade
There are two different ways to upgrade a UCP cluster via the CLI. The first is
@@ -193,7 +171,7 @@ $ docker container run --rm -it \
The upgrade command will print messages regarding the progress of the upgrade as
it automatically upgrades UCP on all nodes in the cluster.
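For reference, an automated in-place upgrade is typically started with a command along the following lines, run on any manager node; this is a sketch, and the `docker/ucp` image tag shown is a placeholder for the target UCP version:

```
# Start the interactive cluster-wide upgrade from a manager node.
# Replace the image tag with the UCP version you are upgrading to.
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.0 \
  upgrade --interactive
```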
### Phased in-place cluster upgrade
The phased approach of upgrading UCP, introduced in UCP 3.2, allows granular
@@ -228,7 +206,7 @@ $ docker container run --rm -it \
The `--manual-worker-upgrade` flag adds an upgrade-hold label to all worker
nodes. UCP constantly monitors this label, and if the label is removed,
UCP upgrades the node.

To trigger the upgrade on a worker node, remove the label; a sketch for
verifying the label follows the removal command below.
@@ -240,7 +218,7 @@ $ docker node update --label-rm com.docker.ucp.upgrade-hold <node name or id>
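A minimal sketch for confirming whether a worker node still carries the hold label, run from a manager node (`<node name or id>` is a placeholder):

```
# Print the value of the upgrade-hold label; empty output means the label has been
# removed and UCP will proceed to upgrade this worker node.
docker node inspect <node name or id> \
  --format '{{ index .Spec.Labels "com.docker.ucp.upgrade-hold" }}'
```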
(Optional) Join new worker nodes to the cluster. Once the manager nodes have
been upgraded to a new UCP version, new worker nodes can be added to the
cluster, provided that they are running the corresponding new Docker Engine
version.
The swarm join token can be found in the UCP UI, or from an SSH session on a UCP
manager node. More information on finding the swarm join token can be found
@@ -252,16 +230,16 @@ $ docker swarm join --token SWMTKN-<YOUR TOKEN> <manager ip>:2377
### Replace existing worker nodes using blue-green deployment
This workflow creates a parallel environment for the new deployment: it upgrades
worker node engines without disrupting workloads and allows traffic to be migrated to the new environment with
worker node rollback capability, greatly reducing both downtime and workload disruption.
> **Note**: Steps 2 through 6 can be repeated for groups of nodes - you do not have to replace all worker
nodes in the cluster at one time.
1. Upgrade manager nodes
- The `--manual-worker-upgrade` flag automatically upgrades manager nodes first, and then allows you to control
the upgrade of the UCP components on the worker nodes using node labels.
```
@@ -275,8 +253,8 @@ nodes in the cluster at one time.
```
2. Join new worker nodes
- New worker nodes have newer engines already installed and have the new UCP version running when they join the cluster.
On the manager node, run commands similar to the following examples to get the Swarm Join token and add new worker nodes:
```
docker swarm join-token worker
@@ -290,21 +268,21 @@ nodes in the cluster at one time.
docker swarm join --token SWMTKN-<YOUR TOKEN> <manager ip>:2377
```
4. Pause all existing worker nodes
- This ensures that new workloads are not deployed on existing nodes.
```
docker node update --availability pause <node name>
```
5. Drain paused nodes for workload migration
- Drain the existing nodes to move their workloads to the new nodes. Because all existing nodes are “paused”, workloads are
automatically rescheduled onto the new nodes.
```
docker node update --availability drain <node name>
```
6. Remove drained nodes
- After each node is fully drained, it can be shut down and removed from the cluster. On each worker node that is
being removed from the cluster, run a command similar to the following example:
```
docker swarm leave <node name>
@@ -314,9 +292,9 @@ nodes in the cluster at one time.
docker node rm <node name>
```
7. Remove old UCP agents
- After upgrade completion, remove the old UCP agents (including the s390x and Windows agents) that were carried over
from the previous install by running the following commands on the manager node:
```
docker service rm ucp-agent
docker service rm ucp-agent-win
@@ -326,44 +304,44 @@ nodes in the cluster at one time.
### Troubleshooting
- Upgrade compatibility
- The upgrade command automatically checks for multiple `ucp-worker-agents` before
proceeding with the upgrade. The existence of multiple `ucp-worker-agents` might indicate
that the cluster is still in the middle of a prior manual upgrade, and you must resolve the
conflicting node label issues before proceeding with the upgrade.
- Upgrade failures
- For worker nodes, an upgrade failure can be rolled back by changing the node label back
to the previous target version. Rollback of manager nodes is not supported.
- Kubernetes errors in node state messages after upgrading UCP
(from https://github.com/docker/kbase/how-to-resolve-kubernetes-errors-after-upgrading-ucp/readme.md)
- The following information applies if you have upgraded to UCP 3.0.0 or newer:
- After performing a UCP upgrade from 2.2.x to 3.x.x, you might see unhealthy nodes in your UCP
dashboard with any of the following errors listed:
```
Awaiting healthy status in Kubernetes node inventory
Kubelet is unhealthy: Kubelet stopped posting node status
```
- Alternatively, you may see other port errors such as the one below in the ucp-controller
container logs:
```
http: proxy error: dial tcp 10.14.101.141:12388: connect: no route to host
```
- UCP 3.x.x requires additional open ports for Kubernetes use. For the ports that are used by the
latest UCP versions and the scope of port use, refer to
[this page](https://docs.docker.com/ee/ucp/admin/install/system-requirements/#ports-used).
- If you have upgraded from UCP 2.2.x to 3.0.x, verify that the ports 179, 6443, 6444 and 10250 are
open for Kubernetes traffic.
- If you have upgraded to UCP 3.1.x, in addition to the ports listed above, also open
ports 9099 and 12388. A quick way to spot-check these ports is sketched below.
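The following is a minimal sketch for spot-checking TCP reachability of these ports between nodes, assuming `nc` (netcat) is available and using `<target node ip>` as a placeholder for another cluster member; adjust the port list to your UCP version:

```
# Probe the Kubernetes-related UCP ports on a peer node; a refused or timed-out
# connection indicates a firewall or routing problem to investigate.
for port in 179 6443 6444 10250 9099 12388; do
  nc -zv -w 3 <target node ip> "$port"
done
```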
### Recommended upgrade paths
From UCP 3.0: UCP 3.0 -> UCP 3.1 -> UCP 3.2
