Storage backend data migration updates

Fix incorrect API command, add backup updates

Update incorrect commands
Maria Bermudez 2019-04-02 17:40:54 -07:00
parent d749399706
commit bf746b4529
9 changed files with 170 additions and 25 deletions

View File

@ -2170,6 +2170,8 @@ manuals:
section:
- path: /ee/dtr/admin/configure/external-storage/
title: Overview
- path: /ee/dtr/admin/configure/external-storage/storage-backend-migration/
title: Switch storage backends
- path: /ee/dtr/admin/configure/external-storage/s3/
title: S3
- path: /ee/dtr/admin/configure/external-storage/nfs/

View File

@ -82,16 +82,11 @@ all replicas can share the same storage backend.
DTR supports Amazon S3 or other storage systems that are S3-compatible like Minio.
[Learn how to configure DTR with Amazon S3](s3.md).
## Switching storage backends
Starting in DTR 2.6, switching storage backends initializes a new metadata store and erases your existing tags. This helps facilitate online garbage collection, which was introduced in DTR 2.5 as an experimental feature. If you are running DTR 2.5 (with online garbage collection) or 2.6.0-2.6.3, make sure to [perform a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-data) before you change your storage backend. If you encounter an issue with lost tags, refer to the following resources:
* For changes to reconfigure and restore options in DTR 2.6, see [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/) and [docker/dtr restore](/reference/dtr/2.6/cli/restore).
* For Docker's recommended recovery strategies, see [DTR 2.6 lost tags after reconfiguring storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage).
* For NFS-specific changes, see [Use NFS](nfs.md).
* For S3-specific changes, see [Learn how to configure DTR with Amazon S3](s3.md).
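As a quick reference, the following is a minimal sketch of a metadata backup run before switching backends; the `<ucp-url>`, `<ucp-username>`, and `<replica-id>` placeholders and the output file name are assumptions you replace with your own values, and the full set of options is documented in [docker/dtr backup](/reference/dtr/2.6/cli/backup/).
```bash
# Prompt for the UCP administrator password, then stream a DTR metadata
# backup to a local tar archive (placeholders are illustrative).
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
  --ucp-url <ucp-url> \
  --ucp-username <ucp-username> \
  --existing-replica-id <replica-id> \
  > dtr-metadata-backup.tar
```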
## Where to go next
- [Switch storage backends](storage-backend-migration.md)
- [Use NFS](nfs.md)
- [Use S3](s3.md)
- CLI reference pages

View File

@ -53,12 +53,18 @@ To support **NFS v4**, more NFS options have been added to the CLI. See [New Fea
> See [Reconfigure Using a Local NFS Volume]( https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) for Docker's recommended recovery strategy.
{: .warning}
#### DTR 2.6.4
In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/2.6/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends.
```bash
docker run --rm -it \
  docker/dtr:{{ page.dtr_version}} reconfigure \
  --ucp-url <ucp_url> \
  --ucp-username <ucp_username> \
  --dtr-storage-volume <dtr-registry-nf> \
  --nfs-storage-url <dtr-registry-nf> \
  --async-nfs \
  --storage-migrated
```
To reconfigure DTR to stop using NFS storage, leave the `--nfs-storage-url` option
@ -71,6 +77,7 @@ docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version}}
## Where to go next
- [Switch storage backends](storage-backend-migration.md)
- [Create a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/)
- [Restore from a backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/)
- [Configure where images are stored](index.md)

View File

@ -133,15 +133,26 @@ DTR supports the following S3 regions:
## Update your S3 settings on the web interface
When running 2.5.x (with experimental garbage collection) or 2.6.0-2.6.3, there is an issue with [changing your S3 settings on the web interface](/ee/dtr/release-notes#version-26) which leads to erased metadata. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed.
## Restore DTR with S3
To [restore DTR using your previously configured S3 settings](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretocloudstorage), use `docker/dtr restore` with `--dtr-use-default-storage` to keep your metadata.
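For reference, here is a minimal sketch of such a restore; the `<ucp-url>`, `<ucp-username>` placeholders and the backup file name are assumptions, and the supported flags are listed in [docker/dtr restore](/reference/dtr/2.6/cli/restore/).
```bash
# Prompt for the UCP administrator password, then restore DTR metadata from a
# previously created backup while keeping the S3 settings captured in it
# (placeholders and the backup file name are illustrative).
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \
  --ucp-url <ucp-url> \
  --ucp-username <ucp-username> \
  --dtr-use-default-storage \
  < dtr-metadata-backup.tar
```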
#### DTR 2.6.4
In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/2.6/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends.
## Where to go next
- [Switch storage backends](storage-backend-migration.md)
- [Create a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/)
- [Restore from a backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/)
- [Configure where images are stored](index.md)
- CLI reference pages
- [docker/dtr install](/reference/dtr/2.6/cli/install/)
- [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/)
- [docker/dtr restore](/reference/dtr/2.6/cli/restore/)

View File

@ -0,0 +1,78 @@
---
title: Switch storage backends
description: Storage backend migration for Docker Trusted Registry
keywords: dtr, storage drivers, local volume, NFS, Azure, S3,
---
Starting in DTR 2.6, switching storage backends initializes a new metadata store and erases your existing tags. This helps facilitate online garbage collection, which was introduced in DTR 2.5 as an experimental feature. In earlier versions, DTR would subsequently start a `tagmigration` job to rebuild tag metadata from the file layout in the image layer store. This job has been discontinued because your storage backend could get out of sync with your DTR metadata, such as your manifests and existing repositories. As a best practice, DTR storage backends and metadata should always be moved, backed up, and restored together.
## DTR 2.6.4 and above
In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/2.6/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure.
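As a minimal sketch, a reconfigure run that tells DTR the storage contents have already been moved might look like the following; the `<ucp-url>` and `<ucp-username>` placeholders are assumptions, and the full option list is in [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/).
```bash
# Reconfigure DTR and signal that the storage data has already been migrated,
# so existing tags and metadata are preserved (placeholders are illustrative).
docker run --rm -it \
  {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} reconfigure \
  --ucp-url <ucp-url> \
  --ucp-username <ucp-username> \
  --storage-migrated
```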
### Best practice for data migration
Docker recommends the following steps for your storage backend and metadata migration:
1. Disable garbage collection by selecting "Never" under **System > Garbage Collection**, so blobs referenced in the backup that you create continue to exist. See [Garbage collection](/ee/dtr/admin/configure/garbage-collection/) for more details.
![](/ee/dtr/images/garbage-collection-0.png){: .img-fluid .with-border}
2. [Back up your existing metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata). See [docker/dtr backup](/reference/dtr/2.6/cli/backup/) for CLI command description and options.
3. Migrate the contents of your current storage backend to the new one you are switching to. For example, upload your NFS contents to an S3 bucket if you're switching from NFS to S3 (see the sketch after this list).
4. [Restore DTR from your backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/) and specify your new storage backend. See [docker/dtr destroy](/reference/dtr/2.6/cli/destroy/) and [docker/dtr restore](/reference/dtr/2.6/cli/restore/) for CLI command descriptions and options.
5. Run garbage collection in "blob" mode. You can either SSH into a DTR node, or [use a UCP client bundle](/ee/ucp/user-access/cli/) to run the following command:
```bash
docker exec -it dtr-jobrunner-<replica_id> sh
```
> See [Find your replica ID](/ee/dtr/admin/disaster-recovery/create-a-backup/#find-your-replica-id) for tips on determining your replica ID.
Within the running container, type `/bin/job_executor` to start the `job_executor` binary.
```bash
/bin/job_executor
/ # onlinegc blobs
```
> Note that the first line results in a display of the binary name, its usage, and available commands, including `onlinegc`. To learn more about a command, run `<command_name> --help`.
Running garbage collection in blob mode destroys any new blobs which are not referenced in your previously created backup.
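For step 3 above, one way to copy an NFS export into an S3 bucket is with the AWS CLI; this is a minimal sketch, assuming the NFS export is mounted at `/mnt/dtr-nfs` and that `my-dtr-bucket` is a placeholder bucket name:
```bash
# Copy the mounted NFS contents into the target S3 bucket.
# Both the mount point and the bucket name below are illustrative.
aws s3 sync /mnt/dtr-nfs s3://my-dtr-bucket/
```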
### Alternative options for data migration
- If you have a long maintenance window, you can skip some steps from above and do the following:
1. Put DTR in "read-only" mode. To do so, send the following API request:
```bash
curl -u <username>:$TOKEN -X POST "https://<dtr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": true }"
```
On success, you should get a `202 Accepted` response.
2. Migrate the contents of your current storage backend to the new one you are switching to. For example, upload your NFS contents to an S3 bucket if you're switching from NFS to S3.
3. [Reconfigure DTR](/reference/dtr/2.6/cli/reconfigure) while specifying the `--storage-migrated` flag to preserve your existing tags.
- If you are not worried about inconsistencies in data, skip steps 1 and 2 and perform a reconfigure.
## DTR 2.6.0-2.6.3 and DTR 2.5 (with experimental garbage collection)
If you are running DTR 2.5 (with online garbage collection) or 2.6.0-2.6.3, make sure to [perform a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-data) before you change your storage backend. If you encounter an issue with lost tags, refer to the following resources:
* For changes to reconfigure and restore options in DTR 2.6, see [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/) and [docker/dtr restore](/reference/dtr/2.6/cli/restore).
* For Docker's recommended recovery strategies, see [DTR 2.6 lost tags after reconfiguring storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage).
* For NFS-specific changes, see [Use NFS](nfs.md).
* For S3-specific changes, see [Learn how to configure DTR with Amazon S3](s3.md).
Upgrade to [DTR 2.6.4](#dtr-264-and-above) and follow [best practice for data migration](#best-practice-for-data-migration) to avoid the wiped tags issue.
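The following is a minimal sketch of such an upgrade; the `<ucp-url>` and `<ucp-username>` placeholders are assumptions, and the full option list is in [docker/dtr upgrade](/reference/dtr/2.6/cli/upgrade/).
```bash
# Upgrade the existing DTR installation to 2.6.4 (placeholders are illustrative).
docker run -it --rm \
  {{ page.dtr_org }}/{{ page.dtr_repo }}:2.6.4 upgrade \
  --ucp-url <ucp-url> \
  --ucp-username <ucp-username>
```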
## Where to go next
- [Use NFS](nfs.md)
- [Use S3](s3.md)
- CLI reference pages
- [docker/dtr install](/reference/dtr/2.6/cli/install/)
- [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/)

View File

@ -2,6 +2,7 @@
title: Create a backup
description: Learn how to create a backup of Docker Trusted Registry, for disaster recovery.
keywords: dtr, disaster recovery
toc_max_header: 5
---
{% assign metadata_backup_file = "dtr-metadata-backup.tar" %}
@ -43,7 +44,7 @@ command backs up the following data:
## Back up DTR data
To create a backup of DTR, you need to:
1. Back up image content
2. Back up DTR metadata
@ -53,13 +54,46 @@ restore. If you have not previously performed a backup, the web interface displa
![](/ee/dtr/images/backup-warning.png)
#### Find your replica ID
Since you need your DTR replica ID during a backup, here are a few ways to determine your replica ID:
##### UCP web interface
You can find the list of replicas by navigating to **Shared Resources > Stacks** or **Swarm > Volumes** (when using [swarm mode](/engine/swarm/)) on the UCP web interface.
##### UCP client bundle
From a terminal [using a UCP client bundle](/ee/ucp/user-access/cli/), run:
{% raw %}
```bash
docker ps --format "{{.Names}}" | grep dtr
# The list of DTR containers with <node>/<component>-<replicaID>, e.g.
# node-1/dtr-api-a1640e1c15b6
```
{% endraw %}
##### SSH access
Another way to determine the replica ID is to SSH into a DTR node and run the following:
{% raw %}
```bash
REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') && echo $REPLICA_ID
```
{% endraw %}
### Back up image content
Since you can configure the storage backend that DTR uses to store images,
the way you back up images depends on the storage backend you're using.
If you've configured DTR to store images on the local file system or NFS mount,
you can back up the images by using SSH to log in to a DTR node,
and creating a tar archive of the [dtr-registry volume](../../architecture.md):
{% raw %}
@ -76,10 +110,16 @@ recommended for that system.
### Back up DTR metadata
To create a DTR backup, load your UCP client bundle, and run the following
command, replacing the placeholders with real values:
```bash
read -sp 'ucp password: ' UCP_PASSWORD;
```
This prompts you for the UCP password. Next, run the following to back up your DTR metadata and save the result into a tar archive. You can learn more about the supported flags in
the [reference documentation](/reference/dtr/2.6/cli/backup.md).
```bash
docker run --log-driver none -i --rm \
--env UCP_PASSWORD=$UCP_PASSWORD \
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
@ -95,14 +135,9 @@ Where:
* `<ucp-username>` is the username of a UCP administrator.
* `<replica-id>` is the ID of the DTR replica to back up.
By default the backup command doesn't stop the DTR replica being backed up.
This means you can take frequent backups without affecting your users.
You can use the `--offline-backup` option to stop the DTR replica while taking
the backup. If you do this, remove the replica from the load balancing pool.
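As a rough sketch, an offline backup adds that flag to the same command; the placeholders below carry the same assumptions as the backup command above:
```bash
# Stop the replica for the duration of the backup by adding --offline-backup.
# Remove the replica from the load balancing pool before running this.
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
  --ucp-url <ucp-url> \
  --ucp-username <ucp-username> \
  --existing-replica-id <replica-id> \
  --offline-backup \
  > {{ metadata_backup_file }}
```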
@ -117,6 +152,7 @@ gpg --symmetric {{ metadata_backup_file }}
This prompts you for a password to encrypt the backup, copies the backup file
and encrypts it.
### Test your backups
To validate that the backup was correctly performed, you can print the contents
@ -151,3 +187,13 @@ gpg -d {{ metadata_backup_file }} | tar -t
You can also create a backup of a UCP cluster and restore it into a new
cluster. Then restore DTR on that new cluster to confirm that everything is
working as expected.
## Where to go next
- [Configure your storage backend](/ee/dtr/admin/configure/external-storage/index.md)
- [Switch your storage backend](/ee/dtr/admin/configure/external-storage/storage-backend-migration.md)
- [Use NFS](/ee/dtr/admin/configure/external-storage/nfs.md)
- [Use S3](/ee/dtr/admin/configure/external-storage/s3.md)
- CLI reference pages
- [docker/dtr install](/reference/dtr/2.6/cli/install/)
- [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/)
- [docker/dtr restore](/reference/dtr/2.6/cli/restore/)

View File

@ -59,8 +59,13 @@ the configuration created during a backup.
Load your UCP client bundle, and run the following command, replacing the
placeholders with real values:
```bash
read -sp 'ucp password: ' UCP_PASSWORD;
```
This prompts you for the UCP password. Next, run the following to restore DTR from your backup. You can learn more about the supported flags in [docker/dtr restore](/reference/dtr/2.6/cli/restore).
```bash
docker run -i --rm \
--env UCP_PASSWORD=$UCP_PASSWORD \
{{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \

View File

@ -40,13 +40,14 @@ time, configure your DTR for high availability.
| `--log-host` | $LOG_HOST | The syslog system to send logs to. The endpoint to send logs to. Use this flag if you set `--log-protocol` to `tcp` or `udp`. |
| `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are `debug`, `info`, `warn`, `error`, or `fatal`. |
| `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal. By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are `tcp`, `udp`, and `internal`. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with `--log-host`. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. To work around the issue, manually create a storage volume on each DTR node and reconfigure DTR with `--dtr-storage-volume` and your newly-created volume instead. See [Reconfigure Using a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) for more details. To reconfigure DTR to stop using NFS, leave this option empty: `--nfs-storage-url ""`. See [USE NFS](/ee/dtr/admin/configure/external-storage/nfs/) for more details. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. |
| `--async-nfs` | $ASYNC_NFS | Use async NFS volume options on the replica specified in the `--existing-replica-id` option. The NFS configuration must be set with `--nfs-storage-url` explicitly to use this option. Using `--async-nfs` will bring down any containers on the replica that use the NFS volume, delete the NFS volume, bring it back up with the appropriate configuration, and restart any containers that were brought down. |
| `--nfs-options` | $NFS_OPTIONS | Pass in NFS volume options verbatim for the replica specified in the `--existing-replica-id` option. The NFS configuration must be set with `--nfs-storage-url` explicitly to use this option. Specifying `--nfs-options` will pass in character-for-character the options specified in the argument when creating or recreating the NFS volume. For instance, to use NFS v4 with async, pass in "rw,nfsvers=4,async" as the argument. |
| `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for. When using `--http-proxy` you can use this flag to specify a list of domains that you don't want to route through the proxy. Format `acme.com[, acme.org]`. |
| `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is `80`. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with --replica-https-port. This port can also be used for unencrypted health checks. |
| `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is `443`. This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
| `--replica-rethinkdb-cache-mb` | $RETHINKDB_CACHE_MB | The maximum amount of space in MB for RethinkDB in-memory cache used by the given replica. Default is auto. Auto is `(available_memory - 1024) / 2`. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one. |
| `--storage-migrated` | $STORAGE_MIGRATED | A flag added in 2.6.4 which lets you indicate the migration status of your storage data. Specify this flag if you are migrating to a new storage backend and have already moved all contents from your old backend to your new one. If not specified, DTR will assume the new backend is empty during a backend storage switch, and consequently destroy your existing tags and related image metadata. |
| `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from `https://<ucp-url>/ca`, and use `--ucp-ca "$(cat ca.pem)"`. |
| `--ucp-insecure-tls` | $UCP_INSECURE_TLS | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. |
| `--ucp-password` | $UCP_PASSWORD | The UCP administrator password. |

View File

@ -50,8 +50,8 @@ DTR replicas for high availability.
| `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your DTR system settings. Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. |
| `--dtr-key` | $DTR_KEY | Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with `--dtr-key "$(cat ca.pem)"`. |
| `--dtr-storage-volume` | $DTR_STORAGE_VOLUME | Mandatory flag to allow for DTR to fall back to your configured storage setting at the time of backup. If you have previously configured DTR to use a full path or volume name for storage, specify this flag to use the same setting on restore. See [docker/dtr install](install.md) and [docker/dtr reconfigure](reconfigure.md) for usage details. |
| `--dtr-use-default-storage` | $DTR_DEFAULT_STORAGE | Mandatory flag to allow for DTR to fall back to your configured storage backend at the time of backup. If cloud storage was configured, then the default storage on restore is cloud storage. Otherwise, local storage is used. With DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, this flag must be specified in order to keep your DTR metadata. If you encounter an issue with lost tags, see [Restore to Cloud Storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretocloudstorage) for Docker's recommended recovery strategy. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. |
| `--nfs-storage-url` | $NFS_STORAGE_URL | Mandatory flag to allow for DTR to fall back to your configured storage setting at the time of backup. When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. If NFS was previously configured, you have to manually create a storage volume on each DTR node and specify `--dtr-storage-volume` with the newly-created volume instead. See [Restore to a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretoalocalnfsvolume) for more details. For additional NFS configuration options to support **NFS v4**, see [docker/dtr install](install.md) and [docker/dtr reconfigure](reconfigure.md). [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. |
| `--enable-pprof` | $DTR_PPROF | Enables pprof profiling of the server. Use `--enable-pprof=false` to disable it. Once DTR is deployed with this flag, you can access the `pprof` endpoint for the api server at `/debug/pprof`, and the registry endpoint at `/registry_debug_pprof/debug/pprof`. |
| `--help-extended` | $DTR_EXTENDED_HELP | Display extended help text for a given command. |
| `--http-proxy` | $DTR_HTTP_PROXY | The HTTP proxy used for outgoing requests. |