mirror of https://github.com/docker/docs.git
commit 712a23e1b4
@@ -2170,6 +2170,8 @@ manuals:
       section:
       - path: /ee/dtr/admin/configure/external-storage/
         title: Overview
+      - path: /ee/dtr/admin/configure/external-storage/storage-backend-migration/
+        title: Switch storage backends
       - path: /ee/dtr/admin/configure/external-storage/s3/
         title: S3
       - path: /ee/dtr/admin/configure/external-storage/nfs/
@@ -46,7 +46,7 @@ $ docker run \

 ### Options

-The `json-file` logging driver supports the following logging options:
+The `local` logging driver supports the following logging options:

 | Option | Description | Example value |
 |:-------|:------------|:--------------|
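For illustration, a minimal sketch of passing logging options at `docker run` time. The option names used here (`max-size`, `max-file`) are the commonly documented rotation options for the `local` driver; the table rows that would list them fall outside this hunk, so treat them as assumptions:

```bash
# run a container with the local logging driver and capped, rotated logs
# (option names assumed; see the options table for the authoritative list)
docker run \
  --log-driver local \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  alpine echo hello
```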
@@ -202,7 +202,7 @@ cd example
 # create an example file
 touch somefile.txt

-# build and image using the current directory as context, and a Dockerfile passed through stdin
+# build an image using the current directory as context, and a Dockerfile passed through stdin
 docker build -t myimage:latest -f- . <<EOF
 FROM busybox
 COPY somefile.txt .
@@ -82,16 +82,11 @@ all replicas can share the same storage backend.
 DTR supports Amazon S3 or other storage systems that are S3-compatible like Minio.
 [Learn how to configure DTR with Amazon S3](s3.md).

-## Switching storage backends
-
-Starting in DTR 2.6, switching storage backends initializes a new metadata store and erases your existing tags. This helps facilitate online garbage collection, which has been introduced in 2.5 as an experimental feature. Make sure to [perform a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-data) before you change your storage backend when running DTR 2.5 (with online garbage collection) and 2.6.0-2.6.3. If you encounter an issue with lost tags, refer to the following resources:
-* For changes to reconfigure and restore options in DTR 2.6, see [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/) and [docker/dtr restore](/reference/dtr/2.6/cli/restore).
-* For Docker's recommended recovery strategies, see [DTR 2.6 lost tags after reconfiguring storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage).
-* For NFS-specific changes, see [Use NFS](nfs.md).
-* For S3-specific changes, see [Learn how to configure DTR with Amazon S3](s3.md).
-
 ## Where to go next

+- [Switch storage backends](storage-backend-migration.md)
 - [Use NFS](nfs.md)
 - [Use S3](s3.md)
+- CLI reference pages
@@ -53,12 +53,18 @@ To support **NFS v4**, more NFS options have been added to the CLI. See [New Fea
 > See [Reconfigure Using a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) for Docker's recommended recovery strategy.
 {: .warning}

+#### DTR 2.6.4
+
+In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/2.6/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. The following shows you how to reconfigure DTR using an NFSv4 volume as a storage backend:
+
 ```bash
 docker run --rm -it \
   docker/dtr:{{ page.dtr_version }} reconfigure \
   --ucp-url <ucp_url> \
   --ucp-username <ucp_username> \
   --dtr-storage-volume <dtr-registry-nf> \
-  --nfs-storage-url <dtr-registry-nf>
+  --nfs-storage-url <dtr-registry-nf> \
+  --async-nfs \
+  --storage-migrated
 ```

 To reconfigure DTR to stop using NFS storage, leave the `--nfs-storage-url` option
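The sentence above is truncated by the hunk boundary; the `--nfs-storage-url` reference row later in this diff spells out the empty-URL form. A minimal sketch, with placeholder UCP values:

```bash
# stop using NFS storage by passing an empty NFS URL (sketch; placeholders throughout)
docker run --rm -it \
  docker/dtr:{{ page.dtr_version }} reconfigure \
  --ucp-url <ucp_url> \
  --ucp-username <ucp_username> \
  --nfs-storage-url ""
```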
@@ -71,6 +77,7 @@ docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version}}

 ## Where to go next

+- [Switch storage backends](storage-backend-migration.md)
 - [Create a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/)
 - [Restore from a backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/)
 - [Configure where images are stored](index.md)
@@ -133,7 +133,7 @@ DTR supports the following S3 regions:

 ## Update your S3 settings on the web interface

-There is currently an issue with [changing your S3 settings on the web interface](/ee/dtr/release-notes#version-26) which leads to erased metadata. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed.
+When running 2.5.x (with experimental garbage collection) or 2.6.0-2.6.4, there is an issue with [changing your S3 settings on the web interface](/ee/dtr/release-notes#version-26) which leads to erased metadata. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed.

 ## Restore DTR with S3
@@ -141,7 +141,13 @@ To [restore DTR using your previously configured S3 settings](https://success.do

 ## Where to go next

 - [Create a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/)
 - [Restore from a backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/)
 - [Configure where images are stored](index.md)
+- CLI reference pages
+  - [docker/dtr install](/reference/dtr/2.6/cli/install/)
+  - [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/)
+  - [docker/dtr restore](/reference/dtr/2.6/cli/restore/)
@@ -0,0 +1,68 @@
+---
+title: Switch storage backends
+description: Storage backend migration for Docker Trusted Registry
+keywords: dtr, storage drivers, local volume, NFS, Azure, S3
+---
+
+Starting in DTR 2.6, switching storage backends initializes a new metadata store and erases your existing tags. This helps facilitate online garbage collection, which was introduced in 2.5 as an experimental feature. In earlier versions, DTR would subsequently start a `tagmigration` job to rebuild tag metadata from the file layout in the image layer store. This job has been discontinued for DTR 2.5.x (with garbage collection) and DTR 2.6, as your storage backend could get out of sync with your DTR metadata, such as your manifests and existing repositories. As best practice, DTR storage backends and metadata should always be moved, backed up, and restored together.
+
+## DTR 2.6.4 and above
+
+In DTR 2.6.4, a new flag, `--storage-migrated`, [has been added to `docker/dtr reconfigure`](/reference/dtr/2.6/cli/reconfigure/) which lets you indicate the migration status of your storage data during a reconfigure. If you are not worried about losing your existing tags, you can skip the recommended steps below and [perform a reconfigure](/reference/dtr/2.6/cli/reconfigure/).
+
+### Best practice for data migration
+
+Docker recommends the following steps for your storage backend and metadata migration:
+
+1. Disable garbage collection by selecting "Never" under **System > Garbage Collection**, so blobs referenced in the backup that you create continue to exist. See [Garbage collection](/ee/dtr/admin/configure/garbage-collection/) for more details. Make sure to keep it disabled while you're performing the metadata backup and migrating your storage data.
+
+   {: .img-fluid .with-border}
+
+2. [Back up your existing metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata). See [docker/dtr backup](/reference/dtr/2.6/cli/backup/) for the CLI command description and options.
+
+3. Migrate the contents of your current storage backend to the new one you are switching to. For example, upload your current storage data to your new NFS server.
+
+4. [Restore DTR from your backup](/ee/dtr/admin/disaster-recovery/restore-from-backup/) and specify your new storage backend. See [docker/dtr destroy](/reference/dtr/2.6/cli/destroy/) and [docker/dtr restore](/reference/dtr/2.6/cli/restore/) for CLI command descriptions and options.
+
+5. With DTR restored from your backup and your storage data migrated to your new backend, garbage collect any dangling blobs using the following API request:
+
+   ```bash
+   curl -u <username>:$TOKEN -X POST "https://<dtr-url>/api/v0/jobs" -H "accept: application/json" -H "content-type: application/json" -d "{ \"action\": \"onlinegc_blobs\" }"
+   ```
+
+   On success, you should get a `202 Accepted` response with a job `id` and other related details.
+
+   This ensures any blobs which are not referenced in your previously created backup get destroyed.
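As a follow-up, you can poll the job by the `id` returned above. A minimal sketch, assuming the jobs API also serves `GET /api/v0/jobs/<job-id>` (the `<job-id>` placeholder stands in for the `id` field of the `202 Accepted` response):

```bash
# check on the garbage collection job started above (endpoint assumed; <job-id> is hypothetical)
curl -u <username>:$TOKEN -X GET "https://<dtr-url>/api/v0/jobs/<job-id>" \
  -H "accept: application/json"
```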
+### Alternative option for data migration
+
+If you have a long maintenance window, you can skip some of the steps above and do the following:
+
+1. Put DTR in "read-only" mode using the following API request:
+
+   ```bash
+   curl -u <username>:$TOKEN -X POST "https://<dtr-url>/api/v0/meta/settings" -H "accept: application/json" -H "content-type: application/json" -d "{ \"readOnlyRegistry\": true }"
+   ```
+
+   On success, you should get a `202 Accepted` response.
+
+2. Migrate the contents of your current storage backend to the new one you are switching to. For example, upload your current storage data to your new NFS server.
+
+3. [Reconfigure DTR](/reference/dtr/2.6/cli/reconfigure) while specifying the `--storage-migrated` flag to preserve your existing tags. Once the reconfigure completes, take DTR back out of read-only mode; see the sketch after this list.
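A minimal sketch for re-enabling writes, derived from the API call in step 1 (only the boolean changes):

```bash
# take DTR back out of read-only mode after the migration (derived from step 1 above)
curl -u <username>:$TOKEN -X POST "https://<dtr-url>/api/v0/meta/settings" \
  -H "accept: application/json" -H "content-type: application/json" \
  -d "{ \"readOnlyRegistry\": false }"
```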
+
+## DTR 2.6.0-2.6.3 and DTR 2.5 (with experimental garbage collection)
+
+Make sure to [perform a backup](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-data) before you change your storage backend when running DTR 2.5 (with online garbage collection) and 2.6.0-2.6.3. If you encounter an issue with lost tags, refer to the following resources:
+* For changes to reconfigure and restore options in DTR 2.6, see [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/) and [docker/dtr restore](/reference/dtr/2.6/cli/restore/).
+* For Docker's recommended recovery strategies, see [DTR 2.6 lost tags after reconfiguring storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage).
+* For NFS-specific changes, see [Use NFS](nfs.md).
+* For S3-specific changes, see [Learn how to configure DTR with Amazon S3](s3.md).
+
+Upgrade to [DTR 2.6.4](#dtr-264-and-above) and follow [best practice for data migration](#best-practice-for-data-migration) to avoid the wiped tags issue when moving from one NFS server to another.
+
+## Where to go next
+
+- [Use NFS](nfs.md)
+- [Use S3](s3.md)
+- CLI reference pages
+  - [docker/dtr install](/reference/dtr/2.6/cli/install/)
+  - [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/)
@@ -2,6 +2,7 @@
 title: Create a backup
 description: Learn how to create a backup of Docker Trusted Registry, for disaster recovery.
 keywords: dtr, disaster recovery
+toc_max_header: 5
 ---

 {% assign metadata_backup_file = "dtr-metadata-backup.tar" %}
@@ -43,7 +44,7 @@ command backs up the following data:

 ## Back up DTR data

-To create a backup of DTR you need to:
+To create a backup of DTR, you need to:

 1. Back up image content
 2. Back up DTR metadata
@@ -53,13 +54,46 @@ restore. If you have not previously performed a backup, the web interface displa

 

+#### Find your replica ID
+
+Since you need your DTR replica ID during a backup, the following covers a few ways for you to determine your replica ID:
+
+##### UCP web interface
+
+You can find the list of replicas by navigating to **Shared Resources > Stacks** or **Swarm > Volumes** (when using [swarm mode](/engine/swarm/)) on the UCP web interface.
+
+##### UCP client bundle
+
+From a terminal [using a UCP client bundle](/ee/ucp/user-access/cli/), run:
+
+{% raw %}
+```bash
+docker ps --format "{{.Names}}" | grep dtr
+
+# The list of DTR containers with <node>/<component>-<replicaID>, e.g.
+# node-1/dtr-api-a1640e1c15b6
+```
+{% endraw %}
+
+##### SSH access
+
+Another way to determine the replica ID is to SSH into a DTR node and run the following:
+
+{% raw %}
+```bash
+REPLICA_ID=$(docker inspect -f '{{.Name}}' $(docker ps -q -f name=dtr-rethink) | cut -f 3 -d '-') \
+  && echo $REPLICA_ID
+```
+{% endraw %}
+
 ### Back up image content

 Since you can configure the storage backend that DTR uses to store images,
-the way you backup images depends on the storage backend you're using.
+the way you back up images depends on the storage backend you're using.

 If you've configured DTR to store images on the local file system or NFS mount,
-you can backup the images by using ssh to log into a node where DTR is running,
+you can back up the images by using SSH to log in to a DTR node,
 and creating a tar archive of the [dtr-registry volume](../../architecture.md):

 {% raw %}
@@ -76,10 +110,16 @@ recommended for that system.

 ### Back up DTR metadata

 To create a DTR backup, load your UCP client bundle, and run the following
-command, replacing the placeholders for the real values:
+command, replacing the placeholders with real values:

-```none
-read -sp 'ucp password: ' UCP_PASSWORD; \
+```bash
+read -sp 'ucp password: ' UCP_PASSWORD;
+```
+
+This prompts you for the UCP password. Next, run the following to back up your DTR metadata and save the result into a tar archive. You can learn more about the supported flags in
+the [reference documentation](/reference/dtr/2.6/cli/backup.md).
+
+```bash
 docker run --log-driver none -i --rm \
   --env UCP_PASSWORD=$UCP_PASSWORD \
   {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
@@ -95,14 +135,9 @@ Where:

 * `<ucp-username>` is the username of a UCP administrator.
 * `<replica-id>` is the id of the DTR replica to back up.

-This prompts you for the UCP password, backs up the DTR metadata and saves the
-result into a tar archive. You can learn more about the supported flags in
-the [reference documentation](/reference/dtr/2.5/cli/backup.md).
-
 By default the backup command doesn't stop the DTR replica being backed up.
-This allows performing backups without affecting your users. Since the replica
-is not stopped, it's possible that happen while the backup is taking place, won't
-be persisted.
+This means you can take frequent backups without affecting your users.

 You can use the `--offline-backup` option to stop the DTR replica while taking
 the backup. If you do this, remove the replica from the load balancing pool.
@@ -117,6 +152,7 @@ gpg --symmetric {{ metadata_backup_file }}

 This prompts you for a password to encrypt the backup, copies the backup file
 and encrypts it.


 ### Test your backups

 To validate that the backup was correctly performed, you can print the contents
@@ -151,3 +187,13 @@ gpg -d {{ metadata_backup_file }} | tar -t

 You can also create a backup of a UCP cluster and restore it into a new
 cluster. Then restore DTR on that new cluster to confirm that everything is
 working as expected.
+
+## Where to go next
+- [Configure your storage backend](/ee/dtr/admin/configure/external-storage/index.md)
+- [Switch your storage backend](/ee/dtr/admin/configure/external-storage/storage-backend-migration.md)
+- [Use NFS](/ee/dtr/admin/configure/external-storage/nfs.md)
+- [Use S3](/ee/dtr/admin/configure/external-storage/s3.md)
+- CLI reference pages
+  - [docker/dtr install](/reference/dtr/2.6/cli/install/)
+  - [docker/dtr reconfigure](/reference/dtr/2.6/cli/reconfigure/)
+  - [docker/dtr restore](/reference/dtr/2.6/cli/restore/)
@@ -59,8 +59,13 @@ the configuration created during a backup.

 Load your UCP client bundle, and run the following command, replacing the
 placeholders for the real values:

-```none
-read -sp 'ucp password: ' UCP_PASSWORD; \
+```bash
+read -sp 'ucp password: ' UCP_PASSWORD;
+```
+
+This prompts you for the UCP password. Next, run the following to restore DTR from your backup. You can learn more about the supported flags in [docker/dtr restore](/reference/dtr/2.6/cli/restore).
+
+```bash
 docker run -i --rm \
   --env UCP_PASSWORD=$UCP_PASSWORD \
   {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \
@@ -54,7 +54,10 @@ information that is necessary.

 By default DTR is deployed with self-signed certificates, so your UCP deployment
 might not be able to pull images from DTR.
 Use the `--dtr-external-url <dtr-domain>:<port>` optional flag while deploying
-DTR, so that UCP is automatically reconfigured to trust DTR.
+DTR, so that UCP is automatically reconfigured to trust DTR. Since the [HSTS (HTTP Strict-Transport-Security)
+header](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) is included in all API responses,
+make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR, or your browser may refuse
+to load the web interface.

 ## Step 4. Check that DTR is running
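For illustration, a sketch of an install that passes an FQDN through `--dtr-external-url`; `dtr.example.com` and the UCP values are placeholders, and only the flags relevant to this point are shown:

```bash
# deploy DTR with an FQDN so UCP trusts it and HSTS does not break the web interface
docker run -it --rm docker/dtr:{{ page.dtr_version }} install \
  --ucp-node <ucp-node-name> \
  --ucp-url https://<ucp-url> \
  --ucp-username admin \
  --dtr-external-url dtr.example.com:443
```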
@@ -3,6 +3,8 @@ title: Interlock architecture
 description: Learn more about the architecture of the layer 7 routing solution
   for Docker swarm services.
 keywords: routing, UCP, interlock, load balancing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/intro/architecture/
 ---

 This document covers the following considerations:
@@ -2,6 +2,8 @@
 title: Custom templates
 description: Learn how to use a custom extension template
 keywords: routing, proxy, interlock, load balancing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/ops/custom_template/
 ---

 Use a custom extension if a needed option is not available in the extension configuration.
@@ -2,6 +2,8 @@
 title: Configure HAProxy
 description: Learn how to configure an HAProxy extension
 keywords: routing, proxy, interlock, load balancing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/config/extensions/haproxy/
 ---

 The following HAProxy configuration options are available:
@@ -6,6 +6,7 @@ keywords: routing, proxy, interlock, load balancing
 redirect_from:
   - /ee/ucp/interlock/usage/host-mode-networking/
   - /ee/ucp/interlock/deploy/host-mode-networking/
+  - https://interlock-dev-docs.netlify.com/usage/host_mode/
 ---

 By default, layer 7 routing components communicate with one another using
@@ -5,6 +5,7 @@ keywords: routing, proxy, interlock, load balancing
 redirect_from:
   - /ee/ucp/interlock/deploy/configure/
   - /ee/ucp/interlock/usage/default-service/
+  - https://interlock-dev-docs.netlify.com/config/interlock/
 ---

 To further customize the layer 7 routing solution, you must update the
@@ -2,6 +2,8 @@
 title: Configure Nginx
 description: Learn how to configure an nginx extension
 keywords: routing, proxy, interlock, load balancing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/config/extensions/nginx/
 ---

 By default, nginx is used as a proxy, so the following configuration options are
@@ -2,6 +2,8 @@
 title: Use application service labels
 description: Learn how applications use service labels for publishing
 keywords: routing, proxy, interlock, load balancing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/config/service_labels/
 ---

 Service labels define hostnames that are routed to the
@@ -2,6 +2,8 @@
 title: Tune the proxy service
 description: Learn how to tune the proxy service for environment optimization
 keywords: routing, proxy, interlock
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/ops/tuning/
 ---

 ## Constrain the proxy service to multiple dedicated worker nodes
@@ -2,6 +2,8 @@
 title: Update Interlock services
 description: Learn how to update the UCP layer 7 routing solution services
 keywords: routing, proxy, interlock
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/ops/updates/
 ---

 There are two parts to the update process:
@@ -4,6 +4,7 @@ description: Learn the deployment steps for the UCP layer 7 routing solution
 keywords: routing, proxy, interlock
 redirect_from:
   - /ee/ucp/interlock/deploy/configuration-reference/
+  - https://interlock-dev-docs.netlify.com/install/
 ---

 This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
@@ -2,6 +2,8 @@
 title: Offline installation considerations
 description: Learn how to install Interlock on a Docker cluster without internet access.
 keywords: routing, proxy, interlock
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/install/offline/
 ---

 To install Interlock on a Docker cluster without internet access, the Docker images must be loaded. This topic describes how to export the images from a local Docker
@@ -3,6 +3,8 @@ title: Configure layer 7 routing for production
 description: Learn how to configure the layer 7 routing solution for a production
   environment.
 keywords: routing, proxy, interlock
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/install/production/
 ---

 This section includes documentation on configuring Interlock
@@ -2,6 +2,9 @@
 title: Layer 7 routing overview
 description: Learn how to route layer 7 traffic to your Swarm services
 keywords: routing, UCP, interlock, load balancing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/
+  - https://interlock-dev-docs.netlify.com/intro/about/
 ---

 Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
@@ -2,6 +2,8 @@
 title: Publish Canary application instances
 description: Learn how to do canary deployments for your Docker swarm services
 keywords: routing, proxy
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/canary/
 ---

 The following example publishes a service as a canary instance.
@@ -3,6 +3,8 @@ title: Use context and path-based routing
 description: Learn how to route traffic to your Docker swarm services based
   on a URL path.
 keywords: routing, proxy
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/context_root/
 ---

 The following example publishes a service using context- or path-based routing.
@@ -5,6 +5,7 @@ keywords: routing, proxy
 redirect_from:
   - /ee/ucp/interlock/deploy/configuration-reference/
   - /ee/ucp/interlock/deploy/configure/
+  - https://interlock-dev-docs.netlify.com/usage/hello/
 ---

 After Interlock is deployed, you can launch and publish services and applications.
@@ -2,6 +2,8 @@
 title: Specify a routing mode
 description: Learn about task and VIP backend routing modes for Layer 7 routing
 keywords: routing, proxy, interlock
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/default_backend/
 ---

 You can publish services using "vip" and "task" backend routing modes.
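A sketch of publishing a service in "vip" mode. The `com.docker.lb.hosts` and `com.docker.lb.port` labels appear elsewhere in Interlock usage docs; the `com.docker.lb.backend_mode` label and the `demo` network are assumptions here:

```bash
# publish a service with VIP backend routing (label names partly assumed)
docker service create \
  --name demo-vip \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.backend_mode=vip \
  nginx:alpine
```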
@@ -3,6 +3,8 @@ title: Implement application redirects
 description: Learn how to implement redirects using swarm services and the
   layer 7 routing solution for UCP.
 keywords: routing, proxy, redirects, interlock
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/redirects/
 ---

 The following example publishes a service and configures a redirect from `old.local` to `new.local`.
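A sketch of what such a redirect service might look like, assuming Interlock's `com.docker.lb.redirects` label and a pre-existing overlay network named `demo`:

```bash
# route both hostnames to the service, redirecting old.local to new.local
docker service create \
  --name demo-redirect \
  --network demo \
  --label com.docker.lb.hosts=old.local,new.local \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.redirects=http://old.local,http://new.local \
  nginx:alpine
```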
@@ -2,6 +2,8 @@
 title: Implement service clusters
 description: Learn how to route traffic to different proxies using a service cluster.
 keywords: ucp, interlock, load balancing, routing
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/service_clusters/
 ---

 ## Configure Proxy Services
@@ -3,6 +3,8 @@ title: Implement persistent (sticky) sessions
 description: Learn how to configure your swarm services with persistent sessions
   using UCP.
 keywords: routing, proxy, cookies, IP hash
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/sessions/
 ---

 You can publish a service and configure the proxy for persistent (sticky) sessions using:
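A sketch of the cookie-based variant, assuming a `com.docker.lb.sticky_session_cookie` label (the cookie name `session` and the `demo` network are placeholders):

```bash
# pin each client to one backend task via a session cookie (label name assumed)
docker service create \
  --name demo-sticky \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.port=80 \
  --label com.docker.lb.sticky_session_cookie=session \
  nginx:alpine
```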
@@ -4,6 +4,7 @@ description: Learn how to configure your swarm services with SSL.
 keywords: routing, proxy, tls, ssl
 redirect_from:
   - /ee/ucp/interlock/usage/ssl/
+  - https://interlock-dev-docs.netlify.com/usage/ssl/
 ---

 This topic covers Swarm services implementation with:
@@ -2,6 +2,8 @@
 title: Use websockets
 description: Learn how to use websockets in your swarm services.
 keywords: routing, proxy, websockets
+redirect_from:
+  - https://interlock-dev-docs.netlify.com/usage/websockets/
 ---

 First, create an overlay network to isolate and secure service traffic:
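A minimal sketch of that first step; the network name `demo` is a placeholder:

```bash
# create an overlay network to isolate service traffic
docker network create -d overlay demo
```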
@@ -42,6 +42,9 @@ upgrade your installation to the latest release.
 * Fixed an issue with continuous interlock reconciliation if `ucp-interlock` service image does not match expected version. (ENGORC-2081)

 ### Known Issues

+* Upgrading from UCP 3.1.4 to 3.1.5 causes a missing Swarm placement constraints banner for some Swarm services. (ENGORC-2191) This can cause Swarm services to run unexpectedly on Kubernetes nodes. See https://www.docker.com/ddc-41 for more information.
+  - Workaround: Delete any `ucp-*-s390x` Swarm services. For example, `ucp-auth-api-s390x`.
 * There are important changes to the upgrade process that, if not correctly followed, can impact the availability of applications running on the Swarm during upgrades. These constraints impact any upgrades coming from any Docker Engine version before 18.09 to version 18.09 or greater. For more information about upgrading Docker Enterprise to version 2.1, see [Upgrade Docker](../upgrade).
 * To deploy Pods with containers using Restricted Parameters, the user must be an admin and a service account must explicitly have a **ClusterRoleBinding** with `cluster-admin` as the **ClusterRole**. Restricted Parameters on Containers include:
   * Host Bind Mounts
@@ -122,8 +122,8 @@ Docker configs.

 ### Defining and using configs in compose files

-Both the `docker compose` and `docker stack` commands support defining configs
-in a compose file. See
+The `docker stack` command supports defining configs in a Compose file.
+However, the `configs` key is not supported for `docker compose`. See
 [the Compose file reference](/compose/compose-file/#configs) for details.

 ### Simple example: Get started with configs
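For illustration, a minimal sketch of the `docker stack` path: write a Compose file that declares a config, then deploy it. The file names, config name, and stack name are all placeholders:

```bash
# define a config in a Compose file (version 3.3+ supports the configs key)
echo "example payload" > my_config.txt

cat > docker-compose.yml <<'EOF'
version: "3.3"
services:
  web:
    image: nginx:alpine
    configs:
      - my_config
configs:
  my_config:
    file: ./my_config.txt
EOF

# deploy with docker stack (per the note above, `docker compose` does not honor the configs key)
docker stack deploy -c docker-compose.yml demo_stack
```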
@@ -418,7 +418,7 @@ ones if you'd like to explore a bit before moving on.

 ```shell
 docker build -t friendlyhello .  # Create image using this directory's Dockerfile
-docker run -p 4000:80 friendlyhello  # Run "friendlyname" mapping port 4000 to 80
+docker run -p 4000:80 friendlyhello  # Run "friendlyhello" mapping port 4000 to 80
 docker run -d -p 4000:80 friendlyhello  # Same thing, but in detached mode
 docker container ls  # List all running containers
 docker container ls -a  # List all containers, even those not running
@@ -38,7 +38,7 @@ $ docker run -it --rm docker/dtr:{{ site.dtr_version }}.0 install \
 | `--debug` | $DEBUG | Enable debug mode for additional logs. |
 | `--dtr-ca` | $DTR_CA | Use a PEM-encoded TLS CA certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with `--dtr-ca "$(cat ca.pem)"`. |
 | `--dtr-cert` | $DTR_CERT | Use a PEM-encoded TLS certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with `--dtr-cert "$(cat cert.pem)"`. If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust. |
-| `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your DTR system settings. Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. |
+| `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your DTR system settings. Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. Since the [HSTS (HTTP Strict-Transport-Security) header](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR, or your browser may refuse to load the web interface. |
 | `--dtr-key` | $DTR_KEY | Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with `--dtr-key "$(cat key.pem)"`. |
 | `--dtr-storage-volume` | $DTR_STORAGE_VOLUME | Customize the volume to store Docker images. By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify a full path or volume name for DTR to store images. For high-availability, make sure all DTR replicas can read and write data on this volume. If you're using NFS, use `--nfs-storage-url` instead. |
 | `--enable-pprof` | $DTR_PPROF | Enables pprof profiling of the server. Use `--enable-pprof=false` to disable it. Once DTR is deployed with this flag, you can access the `pprof` endpoint for the api server at `/debug/pprof`, and the registry endpoint at `/registry_debug_pprof/debug/pprof`. |
@@ -29,7 +29,7 @@ time, configure your DTR for high availability.
 | `--debug` | $DEBUG | Enable debug mode for additional logs of this bootstrap container (the log level of downstream DTR containers can be set with `--log-level`). |
 | `--dtr-ca` | $DTR_CA | Use a PEM-encoded TLS CA certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own root CA public certificate with `--dtr-ca "$(cat ca.pem)"`. |
 | `--dtr-cert` | $DTR_CERT | Use a PEM-encoded TLS certificate for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own public key certificate with `--dtr-cert "$(cat cert.pem)"`. If the certificate has been signed by an intermediate certificate authority, append its public key certificate at the end of the file to establish a chain of trust. |
-| `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the url you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users login separately into the two applications. You can enable and disable single sign-on in the DTR settings. Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. |
+| `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the url you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users login separately into the two applications. You can enable and disable single sign-on in the DTR settings. Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. Since the [HSTS (HTTP Strict-Transport-Security) header](https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security) is included in all API responses, make sure to specify the FQDN (Fully Qualified Domain Name) of your DTR, or your browser may refuse to load the web interface. |
 | `--dtr-key` | $DTR_KEY | Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with `--dtr-key "$(cat key.pem)"`. |
 | `--dtr-storage-volume` | $DTR_STORAGE_VOLUME | Customize the volume to store Docker images. By default DTR creates a volume to store the Docker images in the local filesystem of the node where DTR is running, without high-availability. Use this flag to specify a full path or volume name for DTR to store images. For high-availability, make sure all DTR replicas can read and write data on this volume. If you're using NFS, use `--nfs-storage-url` instead. |
 | `--enable-pprof` | $DTR_PPROF | Enables pprof profiling of the server. Use `--enable-pprof=false` to disable it. Once DTR is deployed with this flag, you can access the pprof endpoint for the api server at `/debug/pprof`, and the registry endpoint at `/registry_debug_pprof/debug/pprof`. |
@@ -40,13 +40,14 @@ time, configure your DTR for high availability.
 | `--log-host` | $LOG_HOST | The syslog system to send logs to. Use this flag if you set `--log-protocol` to `tcp` or `udp`. |
 | `--log-level` | $LOG_LEVEL | Log level for all container logs when logging to syslog. Default: INFO. The supported log levels are `debug`, `info`, `warn`, `error`, or `fatal`. |
 | `--log-protocol` | $LOG_PROTOCOL | The protocol for sending logs. Default is internal. By default, DTR internal components log information using the logger specified in the Docker daemon in the node where the DTR replica is deployed. Use this option to send DTR logs to an external syslog system. The supported values are `tcp`, `udp`, and `internal`. Internal is the default option, stopping DTR from sending logs to an external system. Use this flag with `--log-host`. |
-| `--nfs-storage-url` | $NFS_STORAGE_URL | When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. To work around the issue, manually create a storage volume on each DTR node and reconfigure DTR with `--dtr-storage-volume` and your newly-created volume instead. See [Reconfigure Using a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) for more details. To reconfigure DTR to stop using NFS, leave this option empty: `--nfs-storage-url ""`. See [USE NFS](/ee/dtr/admin/configure/external-storage/nfs/) for more details. |
+| `--nfs-storage-url` | $NFS_STORAGE_URL | When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. To work around the issue, manually create a storage volume on each DTR node and reconfigure DTR with `--dtr-storage-volume` and your newly-created volume instead. See [Reconfigure Using a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#reconfigureusingalocalnfsvolume) for more details. To reconfigure DTR to stop using NFS, leave this option empty: `--nfs-storage-url ""`. See [Use NFS](/ee/dtr/admin/configure/external-storage/nfs/) for more details. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. |
 | `--async-nfs` | $ASYNC_NFS | Use async NFS volume options on the replica specified in the `--existing-replica-id` option. The NFS configuration must be set with `--nfs-storage-url` explicitly to use this option. Using `--async-nfs` will bring down any containers on the replica that use the NFS volume, delete the NFS volume, bring it back up with the appropriate configuration, and restart any containers that were brought down. |
 | `--nfs-options` | $NFS_OPTIONS | Pass in NFS volume options verbatim for the replica specified in the `--existing-replica-id` option. The NFS configuration must be set with `--nfs-storage-url` explicitly to use this option. Specifying `--nfs-options` will pass in character-for-character the options specified in the argument when creating or recreating the NFS volume. For instance, to use NFS v4 with async, pass in "rw,nfsvers=4,async" as the argument. |
 | `--no-proxy` | $DTR_NO_PROXY | List of domains the proxy should not be used for. When using `--http-proxy` you can use this flag to specify a list of domains that you don't want to route through the proxy. Format `acme.com[, acme.org]`. |
 | `--replica-http-port` | $REPLICA_HTTP_PORT | The public HTTP port for the DTR replica. Default is `80`. This allows you to customize the HTTP port where users can reach DTR. Once users access the HTTP port, they are redirected to use an HTTPS connection, using the port specified with `--replica-https-port`. This port can also be used for unencrypted health checks. |
 | `--replica-https-port` | $REPLICA_HTTPS_PORT | The public HTTPS port for the DTR replica. Default is `443`. This allows you to customize the HTTPS port where users can reach DTR. Each replica can use a different port. |
 | `--replica-rethinkdb-cache-mb` | $RETHINKDB_CACHE_MB | The maximum amount of space in MB for RethinkDB in-memory cache used by the given replica. Default is auto. Auto is `(available_memory - 1024) / 2`. This config allows changing the RethinkDB cache usage per replica. You need to run it once per replica to change each one. |
+| `--storage-migrated` | $STORAGE_MIGRATED | A flag added in 2.6.4 which lets you indicate the migration status of your storage data. Specify this flag if you are migrating to a new storage backend and have already moved all contents from your old backend to your new one. If not specified, DTR will assume the new backend is empty during a backend storage switch, and consequently destroy your existing tags and related image metadata. |
 | `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from `https://<ucp-url>/ca`, and use `--ucp-ca "$(cat ca.pem)"`. |
 | `--ucp-insecure-tls` | $UCP_INSECURE_TLS | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. |
 | `--ucp-password` | $UCP_PASSWORD | The UCP administrator password. |
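For reference, a sketch combining the NFS rows above; the URL format and all values are placeholders, and the flag set is trimmed to the options discussed in this table:

```bash
# reconfigure one replica to use an NFS v4 backend with async mounts (values are placeholders)
docker run -it --rm docker/dtr:{{ page.dtr_version }} reconfigure \
  --ucp-url <ucp-url> \
  --ucp-username <ucp-username> \
  --existing-replica-id <replica-id> \
  --nfs-storage-url nfs://<nfs-server>/<directory> \
  --nfs-options "rw,nfsvers=4,async" \
  --storage-migrated
```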
@@ -50,8 +50,8 @@ DTR replicas for high availability.
 | `--dtr-external-url` | $DTR_EXTERNAL_URL | URL of the host or load balancer clients use to reach DTR. When you use this flag, users are redirected to UCP for logging in. Once authenticated they are redirected to the URL you specify in this flag. If you don't use this flag, DTR is deployed without single sign-on with UCP. Users and teams are shared but users log in separately into the two applications. You can enable and disable single sign-on within your DTR system settings. Format `https://host[:port]`, where port is the value you used with `--replica-https-port`. |
 | `--dtr-key` | $DTR_KEY | Use a PEM-encoded TLS private key for DTR. By default DTR generates a self-signed TLS certificate during deployment. You can use your own TLS private key with `--dtr-key "$(cat key.pem)"`. |
 | `--dtr-storage-volume` | $DTR_STORAGE_VOLUME | Mandatory flag to allow for DTR to fall back to your configured storage setting at the time of backup. If you have previously configured DTR to use a full path or volume name for storage, specify this flag to use the same setting on restore. See [docker/dtr install](install.md) and [docker/dtr reconfigure](reconfigure.md) for usage details. |
-| `--dtr-use-default-storage` | $DTR_DEFAULT_STORAGE | Mandatory flag to allow for DTR to fall back to your configured storage backend at the time of backup. If cloud storage was configured, then the default storage on restore is cloud storage. Otherwise, local storage is used. With DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, this flag must be specified in order to keep your DTR metadata. If you encounter an issue with lost tags, see [Restore to Cloud Storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretocloudstorage) for Docker's recommended recovery strategy. |
-| `--nfs-storage-url` | $NFS_STORAGE_URL | Mandatory flag to allow for DTR to fall back to your configured storage setting at the time of backup. When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. If NFS was previously configured, you have to manually create a storage volume on each DTR node and specify `--dtr-storage-volume` with the newly-created volume instead. See [Restore to a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretoalocalnfsvolume) for more details. For additional NFS configuration options to support **NFS v4**, see [docker/dtr install](install.md) and [docker/dtr reconfigure](reconfigure.md). |
+| `--dtr-use-default-storage` | $DTR_DEFAULT_STORAGE | Mandatory flag to allow for DTR to fall back to your configured storage backend at the time of backup. If cloud storage was configured, then the default storage on restore is cloud storage. Otherwise, local storage is used. With DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, this flag must be specified in order to keep your DTR metadata. If you encounter an issue with lost tags, see [Restore to Cloud Storage](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretocloudstorage) for Docker's recommended recovery strategy. [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. |
+| `--nfs-storage-url` | $NFS_STORAGE_URL | Mandatory flag to allow for DTR to fall back to your configured storage setting at the time of backup. When running DTR 2.5 (with experimental online garbage collection) and 2.6.0-2.6.3, there is an issue with [reconfiguring and restoring DTR with `--nfs-storage-url`](/ee/dtr/release-notes#version-26) which leads to erased tags. Make sure to [back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata) before you proceed. If NFS was previously configured, you have to manually create a storage volume on each DTR node and specify `--dtr-storage-volume` with the newly-created volume instead. See [Restore to a Local NFS Volume](https://success.docker.com/article/dtr-26-lost-tags-after-reconfiguring-storage#restoretoalocalnfsvolume) for more details. For additional NFS configuration options to support **NFS v4**, see [docker/dtr install](install.md) and [docker/dtr reconfigure](reconfigure.md). [Upgrade to 2.6.4](/reference/dtr/2.6/cli/upgrade/) and follow [Best practice for data migration in 2.6.4](/ee/dtr/admin/configure/external-storage/storage-backend-migration/#best-practice-for-data-migration) when switching storage backends. |
 | `--enable-pprof` | $DTR_PPROF | Enables pprof profiling of the server. Use `--enable-pprof=false` to disable it. Once DTR is deployed with this flag, you can access the `pprof` endpoint for the api server at `/debug/pprof`, and the registry endpoint at `/registry_debug_pprof/debug/pprof`. |
 | `--help-extended` | $DTR_EXTENDED_HELP | Display extended help text for a given command. |
 | `--http-proxy` | $DTR_HTTP_PROXY | The HTTP proxy used for outgoing requests. |
@@ -45,4 +45,3 @@ healthy and that all nodes have been upgraded successfully.
 | `--pod-cidr` | Kubernetes cluster IP pool from which pods are allocated IPs. The default IP pool is `192.168.0.0/16`. |
 | `--nodeport-range` | Allowed port range for Kubernetes services of type `NodePort`. The default port range is `32768-35535`. |
 | `--cloud-provider` | The cloud provider for the cluster. |
 | `--unmanaged-cni` | Flag to indicate if the CNI provider is Calico and managed by UCP. Calico is the default CNI provider. The default value is `true` when using the default Calico CNI. |