mirror of https://github.com/docker/docs.git
Review ucp 2.1 docs
This commit is contained in:
parent
2d9630a865
commit
e0e27a4ee1
_config.yml (12 changes)
@@ -93,6 +93,12 @@ defaults:
       path: "toolbox"
     values:
       assignee: "londoncalling"
+  -
+    scope:
+      path: "datacenter/dtr/2.2"
+    values:
+      ucp_version: "2.1"
+      dtr_version: "2.2"
   -
     scope:
       path: "datacenter/dtr/2.1"
@@ -106,6 +112,12 @@ defaults:
       hide_from_sitemap: true
       ucp_version: "1.1"
      dtr_version: "2.0"
+  -
+    scope:
+      path: "datacenter/ucp/2.1"
+    values:
+      ucp_version: "2.1"
+      dtr_version: "2.2"
   -
     scope:
       path: "datacenter/ucp/2.0"
@@ -4,6 +4,11 @@
 
 # Used by _includes/components/ddc_url_list.html
 
+- ucp-version: "2.1"
+  tar-files:
+  - ucp-version: "2.1.0-beta1"
+    dtr-version: "2.2.0-beta1"
+    url: https://packages.docker.com/caas/ucp-2.1.0-beta1_dtr-2.2.0-beta1.tar.gz
 - ucp-version: "2.0"
   tar-files:
   - description: "UCP 2.0.3"
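For hosts that can't reach Docker Hub, the tar bundle added in this entry can be pre-loaded into each node's Engine before installing. A minimal sketch, assuming the beta URL above and that the archive has been copied to the node:

```bash
# Download the UCP 2.1.0-beta1 / DTR 2.2.0-beta1 offline bundle
curl -fSL -o ucp-2.1.0-beta1_dtr-2.2.0-beta1.tar.gz \
  https://packages.docker.com/caas/ucp-2.1.0-beta1_dtr-2.2.0-beta1.tar.gz

# Copy the file to each node, then load the images into the local Docker Engine
docker load -i ucp-2.1.0-beta1_dtr-2.2.0-beta1.tar.gz
```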
@@ -1106,7 +1106,7 @@ toc:
     - path: /datacenter/ucp/2.1/guides/install/scale-your-cluster/
       title: Scale your cluster
     - path: /datacenter/ucp/2.1/guides/install/upgrade/
-      title: Upgrade UCP
+      title: Upgrade to UCP 2.1
     - path: /datacenter/ucp/2.1/guides/install/uninstall/
       title: Uninstall UCP
     - sectiontitle: Access UCP
@@ -2,7 +2,7 @@
 {% for data in site.data.ddc_offline_files %}
 {% if data.ucp-version == page.ucp_version %}
 <div class="row">
-  <div class="col-md-4 col-md-offset-4">
+  <div class="col-md-6 col-md-offset-3">
     <div class="list-group center-block">
       {% for tar-file in data.tar-files %}
       <a href="{{ tar-file.url }}" target="_blank" class="list-group-item">
@@ -69,9 +69,13 @@ persist the state of UCP. These are the UCP services running on manager nodes:
 | ucp-cluster-root-ca | A certificate authority used for TLS communication between UCP components |
 | ucp-controller | The UCP web server |
 | ucp-kv | Used to store the UCP configurations. Don't use it in your applications, since it's for internal use only |
 | ucp-metrics | Used to collect and process metrics for a node, like the available disk space |
 | ucp-proxy | A TLS proxy that allows UCP components secure access to the local Docker Engine |
 | ucp-swarm-manager | Used to provide backwards compatibility with Docker Swarm |
 
 ### UCP components in worker nodes
 
 Worker nodes are the ones where you run your applications. These are the UCP
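The components in the table above run as containers on every manager node, so a quick way to see them on a host is to filter `docker ps` by the `ucp-` name prefix; a small sketch:

```bash
# List the UCP containers running on this manager node
docker ps --filter "name=ucp-" --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"
```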
@@ -100,8 +104,11 @@ Docker UCP uses these named volumes to persist data in all nodes where it runs:
 | ucp-controller-server-certs | Certificate and keys for the UCP web server running in the node |
 | ucp-kv | UCP configuration data |
 | ucp-kv-certs | Certificates and keys for the key-value store |
 | ucp-metrics-data | Monitoring data gathered by UCP |
 | ucp-metrics-inventory | Configuration file used by the ucp-metrics service |
 | ucp-node-certs | Certificate and keys for node communication |
 
 You can customize the volume driver used for these volumes by creating
 the volumes before installing UCP. During the installation, UCP checks which
 volumes don't exist in the node, and creates them using the default volume
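Because the installer only creates the volumes that don't already exist, pre-creating a volume is how you pick a non-default driver. A minimal sketch, where `custom-driver` is a placeholder for whatever volume plugin you actually use:

```bash
# Create one of the UCP volumes ahead of time with a specific driver;
# the installer reuses it instead of creating it with the default driver
docker volume create --driver custom-driver ucp-controller-server-certs
```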
@@ -13,15 +13,14 @@ The next step is creating a backup policy and disaster recovery plan.
 
 ## Backup policy
 
 Docker UCP nodes persist data using [named volumes](../architecture.md).
 As part of your backup policy you should regularly create backups of UCP.
 To create a backup of UCP, use the `docker/ucp backup` command. This creates
 a tar archive with the contents of the [volumes used by UCP](../architecture.md)
 to persist data, and streams it to stdout.
 
 As part of your backup policy you should regularly create backups of the
 controller nodes. Since the nodes used for running user containers don't
 persist data, you can decide not to create any backups for them.
 
 To perform a backup of a UCP controller node, use the `docker/ucp backup`
 command. This creates a tar archive with the contents of the volumes used by
 UCP on that node, and streams it to stdout.
 You need to run the backup command on a UCP manager node. Since UCP stores
 the same data on all manager nodes, you only need to create a backup of a
 single node.
 
 To create a consistent backup, the backup command temporarily stops the UCP
 containers running on the node where the backup is being performed. User
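A sketch of the backup invocation described above, assuming the `2.1.0-beta1` tag used elsewhere in this commit and flags in the same style as the install command:

```bash
# Run on a single UCP manager node; the tar archive is streamed to stdout
docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:2.1.0-beta1 backup --interactive > /tmp/ucp-backup.tar
```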
@@ -21,16 +21,16 @@ Docker Engine to run.
 For each host that you plan to manage with UCP:
 
 1. Log in to that host using ssh.
-2. Install CS Docker Engine:
+2. Install Docker Engine 1.13:
 
    ```bash
-   curl -SLf https://packages.docker.com/1.12/install.sh | sh
+   curl -fsSL https://test.docker.com/ | sh
    ```
 
-   [You can also install CS Docker Engine using a package manager](/cs-engine/install.md)
+   [You can also install Docker Engine using a package manager](/engine/installation.md)
 
-Make sure you install the same CS Docker Engine version on all the nodes. Also,
-if you're creating virtual machine templates with CS Docker Engine already
+Make sure you install the same Docker Engine version on all the nodes. Also,
+if you're creating virtual machine templates with Docker Engine already
 installed, make sure the `/etc/docker/key.json` file is not included in the
 virtual machine image. When provisioning the virtual machine, restart the Docker
 daemon to generate a new `/etc/docker/key.json` file.
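The `/etc/docker/key.json` advice above is easy to fold into the template build; a minimal sketch, assuming a systemd-based distribution:

```bash
# Inside the VM template, before capturing the image
sudo rm /etc/docker/key.json

# On each VM provisioned from the template, restart Docker
# so that a new /etc/docker/key.json is generated
sudo systemctl restart docker
```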
@@ -61,12 +61,12 @@ To install UCP:
 
    ```none
    # Pull the latest version of UCP
-   $ docker pull docker/ucp:latest
+   $ docker pull docker/ucp:2.1.0-beta1
 
    # Install UCP
    $ docker run --rm -it --name ucp \
      -v /var/run/docker.sock:/var/run/docker.sock \
-     docker/ucp install \
+     docker/ucp:2.1.0-beta1 install \
      --host-address <node-ip-address> \
      --interactive
    ```
@@ -84,7 +84,10 @@ license.
 
 {: .with-border}
 
-If you don't have a license yet, [learn how to get a free trial license](license.md).
+If you're registered in the beta program and don't have a license yet, you
+can get it from your [Docker Store subscriptions](https://store.docker.com/?overlay=subscriptions).
+
+<!-- If you don't have a license yet, [learn how to get a free trial license](license.md). -->
 
 ## Step 6: Join manager nodes
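Joining the additional manager nodes from Step 6 uses the swarm join command that UCP shows in its web UI. A minimal sketch of the underlying swarm-mode commands, with placeholder values:

```bash
# On the node that is already running UCP, print the join command for managers
docker swarm join-token manager

# On each node you want to add as a manager, run the printed command, e.g.:
docker swarm join --token <manager-join-token> <existing-manager-ip>:2377
```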
@@ -15,11 +15,11 @@ Before installing UCP you should make sure that all nodes (physical or virtual
 machines) that you'll manage with UCP:
 
 * [Comply with the system requirements](system-requirements.md)
-* Are running the same version of CS Docker Engine
+* Are running the same version of Docker Engine
 
 ## Hostname strategy
 
-Docker UCP requires CS Docker Engine to run. Before installing the commercially
+Docker UCP requires Docker Engine to run. Before installing the commercially
 supported Docker Engine on your cluster nodes, you should plan for a common
 hostname strategy.
@@ -28,13 +28,12 @@ Domain Names (FQDN) like `engine01.docker.vm`. Independently of your choice,
 ensure your naming strategy is consistent across the cluster, since Docker
 Engine and UCP use hostnames.
 
-As an example, if your cluster has 4 hosts you can name them:
+As an example, if your cluster has 3 hosts you can name them:
 
-```bash
-engine01.docker.vm
-engine02.docker.vm
-engine03.docker.vm
-engine04.docker.vm
+```none
+node1.company.example.org
+node2.company.example.org
+node3.company.example.org
 ```
 
 ## Static IP addresses
@@ -90,10 +89,9 @@ reach the UCP controller,
 You can have a certificate for each controller, with a common SAN. As an
 example, on a three-node cluster you can have:
 
-* engine01.docker.vm with SAN ducp.docker.vm
-* engine02.docker.vm with SAN ducp.docker.vm
-* engine03.docker.vm with SAN ducp.docker.vm
-
+* node1.company.example.org with SAN ucp.company.org
+* node2.company.example.org with SAN ucp.company.org
+* node3.company.example.org with SAN ucp.company.org
 
 Alternatively, you can also install UCP with a single externally-signed
 certificate for all controllers rather than one for each controller node.
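One way to produce per-node certificates that share a SAN, as described above, is an OpenSSL CSR with a subjectAltName extension. A minimal sketch, assuming OpenSSL 1.1.1 or newer (for `-addext`) and the example hostnames:

```bash
# Generate a key and a CSR for node1 with the shared SAN ucp.company.org
openssl req -new -newkey rsa:4096 -nodes \
  -keyout node1.key -out node1.csr \
  -subj "/CN=node1.company.example.org" \
  -addext "subjectAltName=DNS:node1.company.example.org,DNS:ucp.company.org"
```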
@@ -14,7 +14,7 @@ You can install UCP on-premises or on a cloud provider. To install UCP,
 all nodes must have:
 
 * Linux kernel version 3.10 or higher
-* CS Docker Engine version 1.13.0 or higher
+* Docker Engine version 1.13.0 or higher
 * 2.00 GB of RAM
 * 3.00 GB of available disk space
 * A static IP address
@@ -42,6 +42,7 @@ When installing UCP on a host, make sure the following ports are open:
 | managers | in | TCP 12384 | Port for the authentication storage backend for replication across managers |
 | managers | in | TCP 12385 | Port for the authentication service API |
 | managers | in | TCP 12386 | Port for the authentication worker |
+| managers | in | TCP 12387 | Port for the metrics service |
 
 ## Compatibility and maintenance lifecycle
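If the hosts run a firewall, the manager ports in the table need to be opened before installing. A minimal sketch using firewalld, which is an assumption about the distribution (the table above shows only part of the full port list):

```bash
# Open the authentication and metrics ports shown above on a manager node
sudo firewall-cmd --permanent --add-port=12384-12387/tcp
sudo firewall-cmd --reload
```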
@@ -16,7 +16,7 @@ After you uninstall UCP from the cluster, you'll no longer be able to enforce
 role-based access control to the cluster, or have a centralized way to monitor
 and manage the cluster.
 
-WARNING: After uninstalling UCP from the cluster, you will no longer be able to
+After uninstalling UCP from the cluster, you will no longer be able to
 join new nodes using `docker swarm join` unless you reinstall UCP.
 
 To uninstall UCP, log in to a manager node using ssh, and run:
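The hunk ends just before the command itself. As a sketch only, the invocation follows the same `docker run` pattern as the install, with the `uninstall-ucp` subcommand and the beta tag assumed here:

```bash
# Assumed subcommand and tag; the exact command is outside this hunk
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:2.1.0-beta1 uninstall-ucp --interactive
```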
@@ -2,27 +2,21 @@
 description: Learn how to upgrade Docker Universal Control Plane with minimal impact
   to your users.
 keywords: Docker, UCP, upgrade, update
 redirect_from:
 - /ucp/upgrade-ucp/
 - /ucp/installation/upgrade/
-title: Upgrade to UCP 2.0
+title: Upgrade to UCP 2.1
 ---
 
 This page guides you in upgrading Docker Universal Control Plane (UCP) to
-version 2.0.
+version 2.1.
 
 Before upgrading to a new version of UCP, check the
-[release notes](../release-notes.md) for the version you are upgrading to.
+[release notes](../release-notes.md) for this version.
 There you'll find information about the new features, breaking changes, and
 other relevant information for upgrading to a particular version.
 
 ## Plan the upgrade
 
-As part of the upgrade process, you'll be upgrading the CS Docker Engine
-installed in each node of the cluster to version 1.12. If you're currently
-running CS Docker Engine 1.11.2-cs3, all containers will be stopped during the
-upgrade, causing some downtime to UCP and your applications.
-
+As part of the upgrade process, you'll be upgrading the Docker Engine
+installed in each node of the cluster to version 1.13.
 You should plan for the upgrade to take place outside business hours, to ensure
 there's minimal impact to your users.
@@ -38,69 +32,42 @@ Then, [create a backup](../high-availability/backups-and-disaster-recovery.md)
 of your cluster. This will allow you to recover from an existing backup if
 something goes wrong during the upgrade process.
 
-## Upgrade CS Docker Engine
+## Upgrade Docker Engine
 
-For each node that is part of your cluster, upgrade the CS Docker Engine
-installed on that node to CS Docker Engine version 1.12 or higher.
+For each node that is part of your cluster, upgrade the Docker Engine
+installed on that node to Docker Engine version 1.13 or higher.
 
-Starting with the controller nodes, and then worker nodes:
+Starting with the manager nodes, and then worker nodes:
 
 1. Log into the node using ssh.
-2. Upgrade the CS Docker Engine to version 1.12 or higher.
-
-   If you're upgrading from CS Docker Engine 1.11.3 or previous this will cause
-   some downtime on that node, since all containers will be stopped.
-
-   Containers that have a restart policy set to
-   'always', are automatically started after the upgrade. This is the case of
-   UCP and DTR components. All other containers need to be started manually.
-
+2. Upgrade the Docker Engine to version 1.13 or higher.
 3. Make sure the node is healthy.
 
    In your browser, navigate to the **UCP web UI**, and validate that the
   node is healthy and is part of the cluster.
 
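Upgrading the Engine itself in step 2 is distribution-specific. A minimal sketch for an apt-based node that already has the Docker package repository configured, assuming the 1.13-era package name `docker-engine`:

```bash
# Upgrade the Docker Engine package on this node
sudo apt-get update
sudo apt-get install -y docker-engine
```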
-## Upgrade the first controller node
+## Upgrade UCP
 
-Start by upgrading a controller node that has valid root CA material. This
-can be the first node where you installed UCP or any controller replica
-that you've installed using that node's root CA material.
+You can upgrade UCP from the web UI or the CLI.
 
-1. Log into the controller node using ssh.
-2. Pull the docker/ucp image for the version you want to upgrade to.
+To upgrade from the CLI, log into a UCP manager node using ssh, and run:
 
-   ```bash
-   # Check on Docker Hub which versions are available
-   $ docker pull docker/ucp:<version>
-   ```
+```
+# Check on Docker Hub which versions are available
+$ docker pull docker/ucp:<version>
 
-3. Upgrade UCP by running:
+$ docker run --rm -it \
+  --name ucp \
+  -v /var/run/docker.sock:/var/run/docker.sock \
+  docker/ucp:<version> \
+  upgrade --interactive
+```
 
-   ```bash
-   $ docker run --rm -it \
-     --name ucp \
-     -v /var/run/docker.sock:/var/run/docker.sock \
-     docker/ucp:<version> \
-     upgrade --interactive
-   ```
+This runs the upgrade command in interactive mode, so that you are prompted
+for any necessary configuration values.
 
-   This runs the upgrade command in interactive mode, so that you are prompted
-   for any necessary configuration values.
+The upgrade command will make configuration changes to Docker Engine.
+You'll be prompted to restart the Docker Engine, and run the upgrade
+command again, to continue the upgrade.
 
-4. Make sure the node is healthy.
-
-   In your browser, navigate to the **UCP web UI**, and validate that the
-   node is healthy and is part of the cluster.
-
-## Upgrade other nodes
-
-Follow the procedure described above to upgrade other nodes in the cluster.
-Start by upgrading the remaining controller nodes, and then upgrade any worker
-nodes.
+Once the upgrade finishes, navigate to the **UCP web UI** and make sure that
+all the nodes managed by UCP are healthy.
 
 ## Where to go next
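Besides the web UI check called out above, node health after the upgrade can also be confirmed from any manager node with the swarm CLI; a small sketch:

```bash
# Confirm the Engine version on an upgraded node
docker version --format '{{.Server.Version}}'

# From a manager node, verify that every node reports Ready / Active
docker node ls
```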