Refactor image names and versions (#159)

* Refactor org/repo:version for DTR

* Refactor org/repo:version for UCP
Joao Fernandes 2017-07-17 18:57:46 -07:00 committed by Jim Galasyn
parent 5641b2deb1
commit 3b66acf7d0
16 changed files with 64 additions and 63 deletions

View File

@@ -103,8 +103,9 @@ defaults:
   - scope:
       path: "datacenter/dtr/2.3"
     values:
-      dtr_version_minor: "2.3"
-      dtr_version_patch: "2.3.0"
+      dtr_org: "docker"
+      dtr_repo: "dtr"
+      dtr_version: "2.3.0"
   - scope:
       path: "datacenter/dtr/2.2"
     values:
@@ -127,9 +128,9 @@ defaults:
   - scope:
       path: "datacenter/ucp/2.2"
     values:
-      ucp_version: "2.2"
-      dtr_version: "2.3"
-      docker_image: "docker/ucp:2.2.0"
+      ucp_org: "docker"
+      ucp_repo: "ucp"
+      ucp_version: "2.2.0"
   - scope:
       path: "datacenter/ucp/2.1"
     values:
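The refactor above splits each hard-coded image reference into separate org, repo, and version variables that Liquid composes back into a full image name. A minimal shell sketch of that composition, using the DTR 2.3 defaults above (the `echo` merely stands in for Liquid rendering of `{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }}`):

```shell
#!/bin/sh
# Compose an image reference from org/repo/version parts, mirroring
# what the Liquid template does with the front-matter defaults above.
dtr_org="docker"
dtr_repo="dtr"
dtr_version="2.3.0"
image="${dtr_org}/${dtr_repo}:${dtr_version}"
echo "$image"
```

With the values above this prints `docker/dtr:2.3.0`, exactly what the old hard-coded strings spelled out by hand; bumping a single `dtr_version` default now updates every page in the scope.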

View File

@@ -83,7 +83,7 @@ command, replacing the placeholders for the real values:
 read -sp 'ucp password: ' UCP_PASSWORD; \
 docker run -i --rm \
 --env UCP_PASSWORD=$UCP_PASSWORD \
-docker/dtr:{{ page.dtr_version_patch }} backup \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
 --ucp-url <ucp-url> \
 --ucp-insecure-tls \
 --ucp-username <ucp-username> \
@@ -120,9 +120,9 @@ of the tar file created. The backup of the images should look like:

 ```none
 tar -tf {{ image_backup_file }}
-dtr-backup-v{{ page.dtr_version_patch }}/
-dtr-backup-v{{ page.dtr_version_patch }}/rethink/
-dtr-backup-v{{ page.dtr_version_patch }}/rethink/layers/
+dtr-backup-v{{ page.dtr_version }}/
+dtr-backup-v{{ page.dtr_version }}/rethink/
+dtr-backup-v{{ page.dtr_version }}/rethink/layers/
 ```

 And the backup of the DTR metadata should look like:
@@ -131,10 +131,10 @@ And the backup of the DTR metadata should look like:
 tar -tf {{ backup-metadata.tar }}
 # The archive should look like this
-dtr-backup-v{{ page.dtr_version_patch }}/
-dtr-backup-v{{ page.dtr_version_patch }}/rethink/
-dtr-backup-v{{ page.dtr_version_patch }}/rethink/properties/
-dtr-backup-v{{ page.dtr_version_patch }}/rethink/properties/0
+dtr-backup-v{{ page.dtr_version }}/
+dtr-backup-v{{ page.dtr_version }}/rethink/
+dtr-backup-v{{ page.dtr_version }}/rethink/properties/
+dtr-backup-v{{ page.dtr_version }}/rethink/properties/0
 ```

 If you've encrypted the metadata backup, you can use:
@@ -174,7 +174,7 @@ Start by removing any DTR container that is still running:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} destroy \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} destroy \
 --ucp-insecure-tls
 ```
@@ -205,7 +205,7 @@ placeholders for the real values:
 read -sp 'ucp password: ' UCP_PASSWORD; \
 docker run -i --rm \
 --env UCP_PASSWORD=$UCP_PASSWORD \
-docker/dtr:{{ page.dtr_version_patch }} restore \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \
 --ucp-url <ucp-url> \
 --ucp-insecure-tls \
 --ucp-username <ucp-username> \

View File

@@ -184,7 +184,7 @@ docker run --detach --restart always \
 --publish 5000:5000 \
 --volume $(pwd)/dtr-ca.pem:/certs/dtr-ca.pem \
 --volume $(pwd)/config.yml:/config.yml \
-docker/dtr-content-cache:{{ page.dtr_version_patch }} /config.yml
+{{ page.dtr_org }}/dtr-content-cache:{{ page.dtr_version }} /config.yml
 ```

 You can also run the command in interactive mode instead of detached by

View File

@@ -75,7 +75,7 @@ docker run --detach --restart always \
 --volume $(pwd)/dtr-cache-key.pem:/certs/dtr-cache-key.pem \
 --volume $(pwd)/dtr-ca.pem:/certs/dtr-ca.pem \
 --volume $(pwd)/config.yml:/config.yml \
-docker/dtr-content-cache:{{ page.dtr_version_patch }} /config.yml
+{{ page.dtr_org }}/dtr-content-cache:{{ page.dtr_version }} /config.yml
 ```

 ## Use Let's Encrypt

View File

@@ -31,7 +31,7 @@ mkdir /tmp/mydir && sudo mount -t nfs <nfs server>:<directory>
 One way to configure DTR to use an NFS directory is at install time:

 ```none
-docker run -it --rm docker/dtr:{{ dtr_version_patch }} install \
+docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} install \
 --nfs-storage-url <nfs-storage-url> \
 <other options>
 ```
@@ -50,7 +50,7 @@ If you want to start using the new DTR built-in support for NFS you can
 reconfigure DTR:

 ```none
-docker run -it --rm docker/dtr:{{ dtr_version_patch }} reconfigure \
+docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} reconfigure \
 --nfs-storage-url <nfs-storage-url>
 ```
@@ -58,7 +58,7 @@ If you want to reconfigure DTR to stop using NFS storage, leave the option
 blank:

 ```none
-docker run -it --rm docker/dtr:{{ dtr_version_patch }} reconfigure \
+docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} reconfigure \
 --nfs-storage-url ""
 ```

View File

@@ -50,7 +50,7 @@ To add replicas to an existing DTR deployment:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} join \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
 --ucp-node <ucp-node-name> \
 --ucp-insecure-tls
 ```
@@ -70,7 +70,7 @@ To remove a DTR replica from your deployment:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} remove \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \
 --ucp-insecure-tls
 ```

View File

@@ -29,18 +29,18 @@ You can't install DTR on a standalone Docker Engine.

 ## Step 3. Install DTR

-To install DTR you use the `docker/dtr` image. This image has commands to
+To install DTR you use the `{{ page.dtr_org }}/{{ page.dtr_repo }}` image. This image has commands to
 install, configure, and backup DTR.

 Run the following command to install DTR:

 ```none
 # Pull the latest version of DTR
-$ docker pull docker/dtr{{ page.dtr_version_patch }}
+$ docker pull {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }}

 # Install DTR
 $ docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} install \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} install \
 --ucp-node <ucp-node-name> \
 --ucp-insecure-tls
 ```
@@ -119,7 +119,7 @@ To add replicas to a DTR cluster, use the `docker/dtr join` command:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} join \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
 --ucp-node <ucp-node-name> \
 --ucp-insecure-tls
 ```
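Once Jekyll substitutes the defaults, the templated image references above become concrete commands. A sketch of the rendered pull/install pair, assuming `dtr_org=docker`, `dtr_repo=dtr`, `dtr_version=2.3.0` (the commands are only printed here, not executed, since installation needs a running UCP cluster):

```shell
#!/bin/sh
# Render the templated pull/install commands with the assumed defaults.
org="docker"; repo="dtr"; version="2.3.0"
rendered=$(cat <<EOF
docker pull ${org}/${repo}:${version}
docker run -it --rm ${org}/${repo}:${version} install --ucp-node <ucp-node-name> --ucp-insecure-tls
EOF
)
echo "$rendered"
```

The first rendered line is `docker pull docker/dtr:2.3.0`, which is why the missing `:` between repo and version in the original `docker pull` snippet mattered: without it the reference collapses into an invalid `docker/dtr2.3.0`.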

View File

@@ -9,7 +9,7 @@ replica. To do that, you just run the destroy command once per replica:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} destroy \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} destroy \
 --ucp-insecure-tls
 ```

View File

@@ -18,7 +18,7 @@ Use SSH to log into a UCP node, and run:

 ```none
 docker run -it --rm \
 --net dtr-ol --name overlay-test1 \
---entrypoint sh docker/dtr
+--entrypoint sh {{ page.dtr_org }}/{{ page.dtr_repo }}
 ```

 Then use SSH to log into another UCP node and run:
@@ -26,7 +26,7 @@ Then use SSH to log into another UCP node and run:

 ```none
 docker run -it --rm \
 --net dtr-ol --name overlay-test2 \
---entrypoint ping docker/dtr -c 3 overlay-test1
+--entrypoint ping {{ page.dtr_org }}/{{ page.dtr_repo }} -c 3 overlay-test1
 ```

 If the second command succeeds, it means that overlay networking is working
@@ -74,7 +74,7 @@ and join a new one. Start by running:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} remove \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \
 --ucp-insecure-tls
 ```
@@ -82,7 +82,7 @@ And then:

 ```none
 docker run -it --rm \
-docker/dtr:{{ page.dtr_version_patch }} join \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \
 --ucp-node <ucp-node-name> \
 --ucp-insecure-tls
 ```

View File

@@ -54,7 +54,7 @@ Make sure you're running DTR 2.2. If that's not the case, [upgrade your installa
 Then pull the latest version of DTR:

 ```none
-$ docker pull docker/dtr:{{ page.dtr_version_patch }}
+$ docker pull {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }}
 ```

 If the node you're upgrading doesn't have access to the internet, you can
@@ -66,7 +66,7 @@ nodes if upgrading offline), run the upgrade command:

 ```none
 $ docker run -it --rm \
-docker/dtr{{ page.dtr_version_patch }} upgrade \
+{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} upgrade \
 --ucp-insecure-tls
 ```

View File

@@ -61,12 +61,12 @@ To install UCP:

 ```none
 # Pull the latest version of UCP
-$ docker pull {{ page.ucp_latest_image }}
+$ docker pull {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }}

 # Install UCP
 $ docker run --rm -it --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.ucp_latest_image }} install \
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
 --host-address <node-ip-address> \
 --interactive
 ```

View File

@@ -15,7 +15,7 @@ The next step is creating a backup policy and disaster recovery plan.
 As part of your backup policy you should regularly create backups of UCP.

-To create a UCP backup, you can run the `{{ page.docker_image }} backup` command
+To create a UCP backup, you can run the `{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup` command
 on a single UCP manager. This command creates a tar archive with the
 contents of all the [volumes used by UCP](../architecture.md) to persist data
 and streams it to stdout.
@@ -52,7 +52,7 @@ verify its contents:
 # Create a backup, encrypt it, and store it on /tmp/backup.tar
 $ docker run --rm -i --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.docker_image }} backup --interactive > /tmp/backup.tar
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup --interactive > /tmp/backup.tar

 # Ensure the backup is a valid tar and list its contents
 # In a valid backup file, over 100 files should appear in the list
@@ -67,7 +67,7 @@ following example:
 # Create a backup, encrypt it, and store it on /tmp/backup.tar
 $ docker run --rm -i --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.docker_image }} backup --interactive \
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} backup --interactive \
 --passphrase "secret" > /tmp/backup.tar

 # Decrypt the backup and list its contents
@@ -102,7 +102,7 @@ file, presumed to be located at `/tmp/backup.tar`:

 ```none
 $ docker run --rm -i --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.docker_image }} restore < /tmp/backup.tar
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} restore < /tmp/backup.tar
 ```

 If the backup file is encrypted with a passphrase, you will need to provide the
@@ -111,7 +111,7 @@ passphrase to the restore operation:

 ```none
 $ docker run --rm -i --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.docker_image }} restore --passphrase "secret" < /tmp/backup.tar
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} restore --passphrase "secret" < /tmp/backup.tar
 ```

 The restore command may also be invoked in interactive mode, in which case the
@@ -122,7 +122,7 @@ stdin:
 $ docker run --rm -i --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -v /tmp/backup.tar:/config/backup.tar \
-{{ page.docker_image }} restore -i
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} restore -i
 ```

 ## Disaster recovery
@@ -148,7 +148,7 @@ manager failures, the system should be configured for [high availability](config
 `uninstall-ucp` command.
 4. Perform a restore operation on the recovered swarm manager node.
 5. Log in to UCP and browse to the nodes page, or use the CLI `docker node ls`
-command. 
+command.
 6. If any nodes are listed as `down`, you'll have to manually [remove these
 nodes](../configure/scale-your-cluster.md) from the cluster and then re-join
 them using a `docker swarm join` operation with the cluster's new join-token.
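The backup sanity checks suggested earlier in this file (the archive must be a listable tar, and a valid UCP backup lists over 100 entries) can be scripted. A minimal sketch assuming a POSIX shell with `tar` available; `check_backup` and its threshold parameter are hypothetical helpers for illustration, not part of the UCP tooling:

```shell
#!/bin/sh
# Sketch: a usable backup must be a listable tar archive with a reasonable
# number of entries (the docs expect over 100 files in a real UCP backup).
check_backup() {
  archive="$1"
  min_entries="$2"
  # An unreadable or corrupt archive lists zero (or too few) entries.
  entries=$(tar -tf "$archive" 2>/dev/null | wc -l)
  [ "$entries" -gt "$min_entries" ]
}

# Demo on a tiny throwaway archive, with the threshold lowered to match.
tmpdir=$(mktemp -d)
touch "$tmpdir/a" "$tmpdir/b" "$tmpdir/c"
tar -cf "$tmpdir/demo.tar" -C "$tmpdir" a b c
if check_backup "$tmpdir/demo.tar" 2; then
  echo "backup looks valid"
fi
rm -rf "$tmpdir"
```

For a real backup you would call something like `check_backup /tmp/backup.tar 100` before trusting the archive for a restore.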

View File

@@ -29,7 +29,7 @@ command:
 $ docker run --rm -it \
 -v /var/run/docker.sock:/var/run/docker.sock \
 --name ucp \
-{{ page.ucp_latest_image }} uninstall-ucp --interactive
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} uninstall-ucp --interactive
 ```

 This runs the uninstall command in interactive mode, so that you are prompted

View File

@@ -33,7 +33,7 @@ This allows you to recover if something goes wrong during the upgrade process.
 ## Upgrade Docker Engine

 For each node that is part of your cluster, upgrade the Docker Engine
-installed on that node to Docker Engine version 17.06 or higher. Be sure 
+installed on that node to Docker Engine version 17.06 or higher. Be sure
 to install the Docker Enterprise Edition.

 Starting with the manager nodes, and then worker nodes:
@@ -80,12 +80,12 @@ To upgrade from the CLI, log into a UCP manager node using ssh, and run:

 ```
 # Get the latest version of UCP
-$ docker pull {{ page.docker_image }}
+$ docker pull {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }}

 $ docker run --rm -it \
 --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.ucp_latest_image }} \
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} \
 upgrade --interactive
 ```

View File

@@ -34,7 +34,7 @@ and run:
 docker run --rm \
 --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.ucp_latest_image }} \
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} \
 support > docker-support.tgz
 ```

View File

@@ -23,24 +23,24 @@ Additional help is available for each command with the `--help` flag.
 docker run -it --rm \
 --name ucp \
 -v /var/run/docker.sock:/var/run/docker.sock \
-{{ page.ucp_latest_image }} \
+{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} \
 command [command arguments]
 ```

 ## Commands

-| Option | Description |
-|:----------------|:----------------------------------------------------------|
-| `install` | Install UCP on this node |
-| `restart` | Start or restart UCP components running on this node |
-| `stop` | Stop UCP components running on this node |
-| `upgrade` | Upgrade the UCP cluster |
-| `images` | Verify the UCP images on this node |
-| `uninstall-ucp` | Uninstall UCP from this swarm |
-| `dump-certs` | Print the public certificates used by this UCP web server |
-| `fingerprint` | Print the TLS fingerprint for this UCP web server |
-| `support` | Create a support dump for this UCP node |
-| `id` | Print the ID of UCP running on this node |
-| `backup` | Create a backup of a UCP manager node |
-| `restore` | Restore a UCP cluster from a backup |
-| `example-config`| Display an example configuration file for UCP |
+| Option | Description |
+|:-----------------|:----------------------------------------------------------|
+| `install` | Install UCP on this node |
+| `restart` | Start or restart UCP components running on this node |
+| `stop` | Stop UCP components running on this node |
+| `upgrade` | Upgrade the UCP cluster |
+| `images` | Verify the UCP images on this node |
+| `uninstall-ucp` | Uninstall UCP from this swarm |
+| `dump-certs` | Print the public certificates used by this UCP web server |
+| `fingerprint` | Print the TLS fingerprint for this UCP web server |
+| `support` | Create a support dump for this UCP node |
+| `id` | Print the ID of UCP running on this node |
+| `backup` | Create a backup of a UCP manager node |
+| `restore` | Restore a UCP cluster from a backup |
+| `example-config` | Display an example configuration file for UCP |