diff --git a/_data/toc.yaml b/_data/toc.yaml
index 4f79e11e36..67f7982d67 100644
--- a/_data/toc.yaml
+++ b/_data/toc.yaml
@@ -1345,8 +1345,6 @@ manuals:
         path: /ee/ucp/interlock/usage/canary/
       - title: Using context or path-based routing
         path: /ee/ucp/interlock/usage/context/
-      - title: Publishing a default host service
-        path: /ee/ucp/interlock/usage/default-backend/
       - title: Specifying a routing mode
         path: /ee/ucp/interlock/usage/interlock-vip-mode/
       - title: Using routing labels
diff --git a/ee/dtr/admin/disaster-recovery/create-a-backup.md b/ee/dtr/admin/disaster-recovery/create-a-backup.md
index f13fb2e336..7ea6845e56 100644
--- a/ee/dtr/admin/disaster-recovery/create-a-backup.md
+++ b/ee/dtr/admin/disaster-recovery/create-a-backup.md
@@ -2,7 +2,7 @@
 title: Create a backup
 description: Learn how to create a backup of Docker Trusted Registry, for disaster recovery.
 keywords: dtr, disaster recovery
-toc_max_header: 5
+toc_max_header: 3
 ---
 
 {% assign metadata_backup_file = "dtr-metadata-backup.tar" %}
@@ -93,16 +93,36 @@
 Since you can configure the storage backend that DTR uses to store
 images, the way you back up images depends on the storage backend you're using.
 
 If you've configured DTR to store images on the local file system or NFS mount,
-you can backup the images by using SSH to log in to a DTR node,
-and creating a tar archive of the [dtr-registry volume](../../architecture.md):
+you can back up the images by using SSH to log into a DTR node,
+and creating a `tar` archive of the [dtr-registry volume](../../architecture.md):
+
+#### Example backup commands
+
+##### Local images
 
 {% raw %}
 ```none
-sudo tar -cf {{ image_backup_file }} \
--C /var/lib/docker/volumes/ dtr-registry-
+sudo tar -cf dtr-image-backup-$(date +%Y%m%d-%H_%M_%S).tar \
+  /var/lib/docker/volumes/dtr-registry-$(docker ps --filter name=dtr-rethinkdb \
+  --format "{{ .Names }}" | sed 's/dtr-rethinkdb-//')
 ```
 {% endraw %}
+
+##### NFS-mounted images
+
+{% raw %}
+```none
+sudo tar -cf dtr-image-backup-$(date +%Y%m%d-%H_%M_%S).tar \
+  /var/lib/docker/volumes/dtr-registry-nfs-$(docker ps --filter name=dtr-rethinkdb \
+  --format "{{ .Names }}" | sed 's/dtr-rethinkdb-//')
+```
+{% endraw %}
+
+###### Expected output
+```bash
+tar: Removing leading `/' from member names
+```
+
 If you're using a different storage backend, follow the best practices
 recommended for that system.
@@ -110,37 +130,49 @@ recommended for that system.
 ### Back up DTR metadata
 
 To create a DTR backup, load your UCP client bundle, and run the following
-command, replacing the placeholders with real values:
+concatenated commands:
 
-```bash
-read -sp 'ucp password: ' UCP_PASSWORD;
-```
-
-This prompts you for the UCP password. Next, run the following to back up your
-DTR metadata and save the result into a tar archive. You can learn more about
-the supported flags in
-the [reference documentation](/reference/dtr/2.6/cli/backup.md).
-
-```bash
+```none
+DTR_VERSION=$(docker container inspect $(docker container ps -f name=dtr-registry -q) | \
+  grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
+REPLICA_ID=$(docker ps --filter name=dtr-rethinkdb --format "{{ .Names }}" | head -1 | \
+  sed 's|.*/||' | sed 's/dtr-rethinkdb-//'); \
+read -p 'ucp-url (The UCP URL including domain and port): ' UCP_URL; \
+read -p 'ucp-username (The UCP administrator username): ' UCP_ADMIN; \
+read -sp 'ucp password: ' UCP_PASSWORD; \
 docker run --log-driver none -i --rm \
   --env UCP_PASSWORD=$UCP_PASSWORD \
-  {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
-  --ucp-url <ucp-url> \
-  --ucp-insecure-tls \
-  --ucp-username <ucp-username> \
-  --existing-replica-id <replica-id> > {{ metadata_backup_file }}
+  docker/dtr:$DTR_VERSION backup \
+  --ucp-username $UCP_ADMIN \
+  --ucp-url $UCP_URL \
+  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
+  --existing-replica-id $REPLICA_ID > dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
 ```
 
-Where:
+#### UCP field prompts
 
-* `<ucp-url>` is the url you use to access UCP.
+* `<ucp-url>` is the URL you use to access UCP.
 * `<ucp-username>` is the username of a UCP administrator.
-* `<replica-id>` is the id of the DTR replica to backup.
+* `<replica-id>` is the DTR replica ID to back up.
+
+The concatenated commands above perform the following tasks:
+
+1. Sets your DTR version and replica ID. To back up
+a specific replica, set the replica ID manually by modifying the
+`--existing-replica-id` flag in the backup command.
+2. Prompts you for your UCP URL (domain and port) and administrator username.
+3. Prompts you for your UCP password without saving it to your disk or printing it on the terminal.
+4. Retrieves the CA certificate for your specified UCP URL. To skip TLS verification, replace the `--ucp-ca`
+flag with `--ucp-insecure-tls`. Docker does not recommend this flag for production environments.
+5. Includes the DTR version and a timestamp in the name of your `tar` backup file.
 
-By default the backup command doesn't stop the DTR replica being backed up.
-This means you can take frequent backups without affecting your users.
+You can learn more about the supported flags in
+the [reference documentation](/reference/dtr/2.6/cli/backup.md).
 
-You can use the `--offline-backup` option to stop the DTR replica while taking
-the backup. If you do this, remove the replica from the load balancing pool.
+By default, the backup command does not pause the DTR replica being backed up,
+so that user access to DTR is not interrupted. Because the replica
+is not stopped, changes that happen during the backup may not be saved.
+Use the `--offline-backup` flag to stop the DTR replica during the backup procedure. If you set this flag,
+remove the replica from the load balancing pool to avoid interrupting user access.
 
 Also, the backup contains sensitive information like private keys, so you
 can encrypt the backup by running:
diff --git a/ee/dtr/user/promotion-policies/push-mirror.md b/ee/dtr/user/promotion-policies/push-mirror.md
index 0465b0240f..b7c552a260 100644
--- a/ee/dtr/user/promotion-policies/push-mirror.md
+++ b/ee/dtr/user/promotion-policies/push-mirror.md
@@ -5,7 +5,7 @@ keywords: registry, promotion, mirror
 ---
 
 Docker Trusted Registry allows you to create mirroring policies for a repository.
-When an image gets pushed to a repository and meets a certain criteria,
+When an image gets pushed to a repository and meets the mirroring criteria,
 DTR automatically pushes it to a repository in a remote Docker Trusted or Hub
 registry.
 This not only allows you to mirror images but also allows you to create
diff --git a/ee/ucp/admin/install/install-on-azure.md b/ee/ucp/admin/install/install-on-azure.md
index badd4874d5..4c1d4f122c 100644
--- a/ee/ucp/admin/install/install-on-azure.md
+++ b/ee/ucp/admin/install/install-on-azure.md
@@ -4,63 +4,54 @@ description: Learn how to install Docker Universal Control Plane in a Microsoft
 keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
 ---
 
-Docker UCP closely integrates into Microsoft Azure for its Kubernetes Networking
-and Persistent Storage feature set. UCP deploys the Calico CNI provider. In Azure
+Docker Universal Control Plane (UCP) closely integrates with Microsoft Azure for its Kubernetes Networking
+and Persistent Storage feature set. UCP deploys the Calico CNI provider. In Azure,
 the Calico CNI leverages the Azure networking infrastructure for data path
 networking and the Azure IPAM for IP address management. There are
-infrastructure prerequisites that are required prior to UCP installation for the
+infrastructure prerequisites required prior to UCP installation for the
 Calico / Azure integration.
 
 ## Docker UCP Networking
 
-Docker UCP configures the Azure IPAM module for Kubernetes to allocate
-IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
-virtual machine that's part of the Kubernetes cluster to be configured with a pool of
-IP addresses.
+Docker UCP configures the Azure IPAM module for Kubernetes to allocate IP
+addresses for Kubernetes pods. The Azure IPAM module requires each Azure virtual
+machine that is part of the Kubernetes cluster to be configured with a pool of IP
+addresses.
 
-There are two options for provisoning IPs for the Kubernetes cluster on Azure
-- Docker UCP provides an automated mechanism to configure and maintain IP pools
-  for standalone Azure virtual machines. This service runs within the calico-node daemonset
-  and by default will provision 128 IP address for each node. This value can be
-  configured through the `azure_ip_count`in the UCP
-  [configuration file](../configure/ucp-configuration-file) before or after the
-  UCP installation. Note that if this value is reduced post-installation, existing
-  virtual machines will not be reconciled, and you will have to manually edit the IP count
-  in Azure.
-- Manually provision additional IP address for each Azure virtual machine. This could be done
-  as part of an Azure Virtual Machine Scale Set through an ARM template. You can find an example [here](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).
-  Note that the `azure_ip_count` value in the UCP
-  [configuration file](../configure/ucp-configuration-file) will need to be set
-  to 0, otherwise UCP's IP Allocator service will provision the IP Address on top of
-  those you have already provisioned.
+There are two options for provisioning IPs for the Kubernetes cluster on Azure:
+
+- _An automated mechanism provided by UCP which allows for IP pool configuration and maintenance
+  for standalone Azure virtual machines._ This service runs within the
+  `calico-node` daemonset and provisions 128 IP addresses for each
+  node by default. For information on customizing this value, see [Adjusting the IP count value](#adjusting-the-ip-count-value).
+- _Manual provisioning of additional IP addresses for each Azure virtual machine._ This
+  could be done through the Azure Portal, the Azure CLI `$ az network nic ip-config create`,
+  or an ARM template. You can find an example of an ARM template
+  [here](#manually-provision-ip-address-pools-as-part-of-an-azure-virtual-machine-scale-set).
 
 ## Azure Prerequisites
 
-You must meet these infrastructure prerequisites in order
-to successfully deploy Docker UCP on Azure
+You must meet the following infrastructure prerequisites in order
+to successfully deploy Docker UCP on Azure:
 
-- All UCP Nodes (Managers and Workers) need to be deployed into the same
-Azure Resource Group. The Azure Networking (Vnets, Subnets, Security Groups)
-components could be deployed in a second Azure Resource Group.
-- The Azure Vnet and Subnet must be appropriately sized for your
-  environment, and addresses from this pool are consumed by Kubernetes Pods. For more information, see
-  [Considerations for IPAM
-  Configuration](#considerations-for-ipam-configuration).
-- All UCP Nodes (Managers and Workers) need to be attached to the same
-Azure Subnet.
-- All UCP (Managers and Workers) need to be tagged in Azure with the
-`Orchestrator` tag. Note the value for this tag is the Kubernetes version number
-in the format `Orchestrator=Kubernetes:x.y.z`. This value may change in each
-UCP release. To find the relevant version please see the UCP
-[Release Notes](../../release-notes). For example for UCP 3.1.0 the tag
-would be `Orchestrator=Kubernetes:1.11.2`.
-- The Azure Virtual Machine Object Name needs to match the Azure Virtual Machine
-Computer Name and the Node Operating System's Hostname. Note this applies to the
-FQDN of the host including domain names.
-- An Azure Service Principal with `Contributor` access to the Azure Resource
-Group hosting the UCP Nodes. Note, if using a separate networking Resource
-Group the same Service Principal will need `Network Contributor` access to this
-Resource Group.
+- All UCP Nodes (Managers and Workers) need to be deployed into the same Azure
+  Resource Group. The Azure Networking components (Virtual Network, Subnets,
+  Security Groups) could be deployed in a second Azure Resource Group.
+- The Azure Virtual Network and Subnet must be appropriately sized for your
+  environment, as addresses from this pool will be consumed by Kubernetes Pods.
+  For more information, see [Considerations for IPAM
+  Configuration](#considerations-for-ipam-configuration).
+- All UCP worker and manager nodes need to be attached to the same Azure
+  Subnet.
+- The Azure Virtual Machine Object Name needs to match the Azure Virtual Machine
+  Computer Name and the Node Operating System's Hostname, which is the FQDN of
+  the host, including domain names. Note that this requires all characters to be in lowercase.
+- An Azure Service Principal with `Contributor` access to the Azure Resource
+  Group hosting the UCP Nodes. This Service Principal will be used by Kubernetes
+  to communicate with the Azure API. The Service Principal ID and Secret Key are
+  needed as part of the UCP prerequisites. If you are using a separate Resource
+  Group for the networking components, the same Service Principal will need
+  `Network Contributor` access to this Resource Group.
 
 UCP requires the following information for the installation:
 
@@ -68,17 +59,18 @@
 objects are being deployed.
 - `tenantId` - The Azure Active Directory Tenant ID in which the UCP
 objects are being deployed.
-- `aadClientId` - The Azure Service Principal ID
-- `aadClientSecret` - The Azure Service Principal Secret Key
+- `aadClientId` - The Azure Service Principal ID.
+- `aadClientSecret` - The Azure Service Principal Secret Key.
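If you still need to create the Service Principal, the Azure CLI can do so ahead of the UCP installation. A minimal sketch, assuming the Azure CLI is installed and authenticated; the principal name and scope below are illustrative placeholders:

```bash
# Create a Service Principal with Contributor access, scoped to the
# Resource Group that hosts the UCP nodes. In the output, appId maps to
# aadClientId and password maps to aadClientSecret.
az ad sp create-for-rbac \
  --name ucp-service-principal \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<ucp-resource-group>
```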
 ### Azure Configuration File
 
-For Docker UCP to integrate into Microsoft Azure, you need to place an Azure
-configuration file within each UCP node in your cluster, at
-`/etc/kubernetes/azure.json`. The `azure.json` file needs 0644 permissions.
+For Docker UCP to integrate with Microsoft Azure, each UCP node in your cluster
+needs an Azure configuration file, `azure.json`. Place the file within
+`/etc/kubernetes`. Since the config file is owned by `root`, set its permissions
+to `0644` to ensure the container user has read access.
 
-See the template below. Note entries that do not contain `****` should not be
-changed.
+The following is an example template for `azure.json`. Replace `****` with real values, and leave the other
+parameters as is.
 
 ```
 {
@@ -105,45 +97,44 @@
 }
 ```
 
-There are some optional values for Azure deployments:
+There are some optional parameters for Azure deployments:
 
-- `"primaryAvailabilitySetName": "****",` - The Worker Nodes availability set.
+- `primaryAvailabilitySetName` - The Worker Nodes availability set.
-- `"vnetResourceGroup": "****",` - If your Azure Network objects live in a
+- `vnetResourceGroup` - The Virtual Network Resource group, if your Azure Network objects live in a
 separate resource group.
-- `"routeTableName": "****",` - If you have defined multiple Route tables within
+- `routeTableName` - If you have defined multiple Route tables within
 an Azure subnet.
 
-More details on this configuration file can be found
-[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go).
+See [Kubernetes' azure.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go) for more details on this configuration file.
 
 ## Considerations for IPAM Configuration
 
-The subnet and the virtual network associated with the primary interface of
-the Azure virtual machines need to be configured with a large enough address prefix/range.
-The number of required IP addresses depends on the number of pods running
-on each node and the number of nodes in the cluster.
+The subnet and the virtual network associated with the primary interface of the
+Azure virtual machines need to be configured with a large enough address
+prefix/range. The number of required IP addresses depends on the workload and
+the number of nodes in the cluster.
 
-For example, in a cluster of 256 nodes, to run a maximum of 128 pods
-concurrently on a node, make sure that the address space of the subnet and the
-virtual network can allocate at least 128 * 256 IP addresses, _in addition to_
-initial IP allocations to virtual machine NICs during Azure resource creation.
+For example, in a cluster of 256 nodes, make sure that the address space of the subnet and the
+virtual network can allocate at least 128 * 256 IP addresses, in order to run a maximum of 128 pods
+concurrently on a node. This would be ***in addition to*** initial IP allocations to virtual machine
+NICs (network interfaces) during Azure resource creation.
 
 Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set
-the address space of the subnet and virtual network to 10.0.0.0/16. This
+the address space of the subnet and virtual network to `10.0.0.0/16`. This
 ensures that the network can dynamically allocate at least 32768 addresses,
 plus a buffer for initial allocations for primary IP addresses.
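As a quick sanity check of the sizing arithmetic above (plain shell, using the example figures from the text rather than UCP defaults):

```bash
# 256 nodes, each running up to 128 pods concurrently.
nodes=256; pods_per_node=128
echo $((nodes * pods_per_node))   # 32768 addresses needed for pods
echo $((2 ** (32 - 16)))          # 65536 addresses available in a /16
```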
 
 > Azure IPAM, UCP, and Kubernetes
 >
 > The Azure IPAM module queries an Azure virtual machine's metadata to obtain
-> a list of IP addresses that are assigned to the virtual machine's NICs. The
+> a list of IP addresses which are assigned to the virtual machine's NICs. The
 > IPAM module allocates these IP addresses to Kubernetes pods. You configure the
 > IP addresses as `ipConfigurations` in the NICs associated with a virtual machine
 > or scale set member, so that Azure IPAM can provide them to Kubernetes when
 > requested.
 {: .important}
 
-## Manually provision IP address as part of an Azure virtual machine scale set
+## Manually provision IP address pools as part of an Azure virtual machine scale set
 
 Configure IP Pools for each member of the virtual machine scale set during
 provisioning by associating multiple `ipConfigurations` with the scale set’s
@@ -204,20 +195,109 @@ for each virtual machine in the virtual machine scale set.
 }
 ```
 
-## Install UCP
+## UCP Installation
 
-Use the following command to install UCP on the manager node.
-The `--pod-cidr` option maps to the IP address range that you configured for
-the subnets in the previous sections, and the `--host-address` maps to the
-IP address of the master node.
+### Adjusting the IP Count Value
 
-> Note: The `pod-cidr` range must be within an Azure subnet attached to the
-> host.
+If you have manually attached additional IP addresses to the Virtual Machines
+(via an ARM Template, Azure CLI or Azure Portal) or you want to reduce the
+number of IP Addresses automatically provisioned by UCP from the default of 128
+addresses, you can alter the `azure_ip_count` variable in the UCP
+Configuration file before installation. If you are happy with 128 addresses per
+Virtual Machine, proceed to [installing UCP](#install-ucp).
+
+Once UCP has been installed, the UCP [configuration
+file](../configure/ucp-configuration-file/) is managed by UCP and populated with
+all of the cluster configuration data, such as AD/LDAP information or networking
+configuration. Before UCP is deployed, however, you can stage a
+[configuration file](../configure/ucp-configuration-file/) containing just
+the Azure IP Count value. UCP will populate the rest of the cluster
+variables during and after the installation.
+
+Below are some example configuration files with just the `azure_ip_count`
+variable defined. These 3-line files can be preloaded into a Docker Swarm prior
+to installing UCP in order to override the default `azure_ip_count` value of 128 IP
+addresses per node. See [UCP configuration file](../configure/ucp-configuration-file/)
+to learn more about the configuration file, and other variables that can be staged pre-install.
+
+> Note: Do not set the `azure_ip_count` to a value of less than 6 if you have not
+> manually provisioned additional IP addresses for each Virtual Machine. The UCP
+> installation will need at least 6 IP addresses to allocate to the core UCP components
+> that run as Kubernetes pods. That is in addition to the Virtual
+> Machine's private IP address.
+
+If you have manually provisioned additional IP addresses for each Virtual
+Machine, and want to disallow UCP from dynamically provisioning IP
+addresses for you, then your UCP configuration file would be:
+
+```
+$ vi example-config-1
+[cluster_config]
+  azure_ip_count = "0"
+```
+
+If you want to reduce the IP addresses dynamically allocated from 128 to a
+custom value, then your UCP configuration file would be:
+
+```
+$ vi example-config-2
+[cluster_config]
+  azure_ip_count = "20" # This value may be different for your environment
+```
+
+See [Considerations for IPAM
+Configuration](#considerations-for-ipam-configuration) to calculate an
+appropriate value.
+
+To preload this configuration file prior to installing UCP:
+
+1. Copy the configuration file to a Virtual Machine that you wish to become a UCP Manager Node.
+
+2. Initiate a Swarm on that Virtual Machine.
+
+   ```
+   $ docker swarm init
+   ```
+
+3. Upload the configuration file to the Swarm by using a [Docker Swarm Config](/engine/swarm/configs/).
+   This Swarm Config will need to be named `com.docker.ucp.config`.
+
+   ```
+   $ docker config create com.docker.ucp.config example-config-1
+   ```
+
+4. Check that the configuration has been loaded successfully.
+
+   ```
+   $ docker config list
+   ID                          NAME                    CREATED      UPDATED
+   igca3q30jz9u3e6ecq1ckyofz   com.docker.ucp.config   1 day ago    1 day ago
+   ```
+
+5. You are now ready to [install UCP](#install-ucp). As you have already staged
+   a UCP configuration file, you will need to add `--existing-config` to the
+   install command below.
+
+If you need to adjust this value post-installation, see [instructions](../configure/ucp-configuration-file/)
+on how to download the UCP configuration file, change the value, and update the configuration via the API.
+If you reduce the value post-installation, existing virtual machines will not be
+reconciled, and you will have to manually edit the IP count in Azure.
+
+### Install UCP
+
+Run the following command to install UCP on a manager node. The `--pod-cidr`
+option maps to the IP address range that you have configured for the Azure
+subnet, and the `--host-address` maps to the private IP address of the master
+node. Finally, if you have set the [IP Count
+Value](#adjusting-the-ip-count-value), you will need to add `--existing-config`
+to the install command below.
+
+> Note: The `pod-cidr` range must match the Azure Virtual Network's Subnet
+> attached to the hosts. For example, if the Azure Virtual Network had the range
+> `172.0.0.0/16` with Virtual Machines provisioned on an Azure Subnet of
+> `172.0.1.0/24`, then the Pod CIDR should also be `172.0.1.0/24`.
 
 ```bash
 docker container run --rm -it \
   --name ucp \
-  -v /var/run/docker.sock:/var/run/docker.sock \
+  --volume /var/run/docker.sock:/var/run/docker.sock \
   {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
   --host-address <ucp-ip> \
   --pod-cidr <ip-range> \
diff --git a/ee/ucp/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md b/ee/ucp/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
index fe1f7cf6c3..5dc9fe8cd5 100644
--- a/ee/ucp/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
+++ b/ee/ucp/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
@@ -20,6 +20,7 @@ containers to be listed as well.
 
 Click on a container to see more details, like its configurations and logs.
 
+![](../../images/troubleshoot-with-logs-2.png){: .with-border}
 
 ## Check the logs from the CLI
 
@@ -73,7 +74,7 @@ applications won't be affected by this.
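Checking the logs from the CLI, as the section touched above describes, usually amounts to finding the UCP system containers on a node and tailing one of them. A minimal sketch; `ucp-controller` is one of several UCP components, and exact names can vary by release:

```bash
# List the UCP system containers running on this node,
# then inspect the recent log output of the controller.
docker ps --filter name=ucp-
docker logs --tail 100 ucp-controller
```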
 To increase the UCP log level, navigate to the UCP web UI, go to the
 **Admin Settings** tab, and choose **Logs**.
 
-![](../../images/troubleshoot-with-logs-2.png){: .with-border}
+![](../../images/troubleshoot-with-logs-3.png){: .with-border}
 
 Once you change the log level to **Debug**, the UCP containers restart. Now
 that the UCP components are creating more descriptive logs, you can
diff --git a/ee/ucp/images/troubleshoot-with-logs-1.png b/ee/ucp/images/troubleshoot-with-logs-1.png
index 136e702b73..ba9b785a4f 100644
Binary files a/ee/ucp/images/troubleshoot-with-logs-1.png and b/ee/ucp/images/troubleshoot-with-logs-1.png differ
diff --git a/ee/ucp/images/troubleshoot-with-logs-2.png b/ee/ucp/images/troubleshoot-with-logs-2.png
index bb1af8fc70..f19020231a 100644
Binary files a/ee/ucp/images/troubleshoot-with-logs-2.png and b/ee/ucp/images/troubleshoot-with-logs-2.png differ
diff --git a/ee/ucp/images/troubleshoot-with-logs-3.png b/ee/ucp/images/troubleshoot-with-logs-3.png
new file mode 100644
index 0000000000..df2fed856c
Binary files /dev/null and b/ee/ucp/images/troubleshoot-with-logs-3.png differ
diff --git a/ee/ucp/interlock/architecture.md b/ee/ucp/interlock/architecture.md
index 78ce9352c0..3b455c4af3 100644
--- a/ee/ucp/interlock/architecture.md
+++ b/ee/ucp/interlock/architecture.md
@@ -3,8 +3,6 @@ title: Interlock architecture
 description: Learn more about the architecture of the layer 7 routing solution
   for Docker swarm services.
 keywords: routing, UCP, interlock, load balancing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/intro/architecture/
 ---
 
 This document covers the following considerations:
diff --git a/ee/ucp/interlock/config/custom-template.md b/ee/ucp/interlock/config/custom-template.md
index b488145442..cc8e63cd8a 100644
--- a/ee/ucp/interlock/config/custom-template.md
+++ b/ee/ucp/interlock/config/custom-template.md
@@ -2,8 +2,6 @@
 title: Custom templates
 description: Learn how to use a custom extension template
 keywords: routing, proxy, interlock, load balancing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/ops/custom_template/
 ---
 
 Use a custom extension if a needed option is not available in the extension configuration.
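In practice, a custom template is staged as a Docker Swarm config object so the extension service can consume it. A sketch under assumptions: the file name and config name below are illustrative, and the extension setting that points the proxy at the staged config is covered in the Interlock configuration reference:

```bash
# Store a modified proxy template as a Swarm config object,
# then reference it from the Interlock extension configuration.
docker config create custom-nginx-template nginx.conf.template
```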
diff --git a/ee/ucp/interlock/config/haproxy-config.md b/ee/ucp/interlock/config/haproxy-config.md
index 4a62dacbaa..6108e8ca75 100644
--- a/ee/ucp/interlock/config/haproxy-config.md
+++ b/ee/ucp/interlock/config/haproxy-config.md
@@ -2,8 +2,6 @@
 title: Configure HAProxy
 description: Learn how to configure an HAProxy extension
 keywords: routing, proxy, interlock, load balancing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/config/extensions/haproxy/
 ---
 
 The following HAProxy configuration options are available:
diff --git a/ee/ucp/interlock/config/host-mode-networking.md b/ee/ucp/interlock/config/host-mode-networking.md
index d8d286472d..152fb7b97a 100644
--- a/ee/ucp/interlock/config/host-mode-networking.md
+++ b/ee/ucp/interlock/config/host-mode-networking.md
@@ -6,7 +6,6 @@ keywords: routing, proxy, interlock, load balancing
 redirect_from:
   - /ee/ucp/interlock/usage/host-mode-networking/
   - /ee/ucp/interlock/deploy/host-mode-networking/
-  - https://interlock-dev-docs.netlify.com/usage/host_mode/
 ---
 
 By default, layer 7 routing components communicate with one another using
diff --git a/ee/ucp/interlock/config/index.md b/ee/ucp/interlock/config/index.md
index cfb7cb8804..e38ba7bc4b 100644
--- a/ee/ucp/interlock/config/index.md
+++ b/ee/ucp/interlock/config/index.md
@@ -5,7 +5,6 @@ keywords: routing, proxy, interlock, load balancing
 redirect_from:
   - /ee/ucp/interlock/deploy/configure/
   - /ee/ucp/interlock/usage/default-service/
-  - https://interlock-dev-docs.netlify.com/config/interlock/
 ---
 
 To further customize the layer 7 routing solution, you must update the
diff --git a/ee/ucp/interlock/config/nginx-config.md b/ee/ucp/interlock/config/nginx-config.md
index ecdccc9024..580fd470e4 100644
--- a/ee/ucp/interlock/config/nginx-config.md
+++ b/ee/ucp/interlock/config/nginx-config.md
@@ -2,8 +2,6 @@
 title: Configure Nginx
 description: Learn how to configure an nginx extension
 keywords: routing, proxy, interlock, load balancing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/config/extensions/nginx/
 ---
 
 By default, nginx is used as a proxy, so the following configuration options are
diff --git a/ee/ucp/interlock/config/service-labels.md b/ee/ucp/interlock/config/service-labels.md
index 1ec898358b..2ee2d3170b 100644
--- a/ee/ucp/interlock/config/service-labels.md
+++ b/ee/ucp/interlock/config/service-labels.md
@@ -2,8 +2,6 @@
 title: Use application service labels
 description: Learn how applications use service labels for publishing
 keywords: routing, proxy, interlock, load balancing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/config/service_labels/
 ---
 
 Service labels define hostnames that are routed to the
diff --git a/ee/ucp/interlock/config/tuning.md b/ee/ucp/interlock/config/tuning.md
index 8e0ea6648d..21c74ea66e 100644
--- a/ee/ucp/interlock/config/tuning.md
+++ b/ee/ucp/interlock/config/tuning.md
@@ -2,8 +2,6 @@
 title: Tune the proxy service
 description: Learn how to tune the proxy service for environment optimization
 keywords: routing, proxy, interlock
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/ops/tuning/
 ---
 
 ## Constrain the proxy service to multiple dedicated worker nodes
diff --git a/ee/ucp/interlock/config/updates.md b/ee/ucp/interlock/config/updates.md
index 8f03783549..e984e28fbe 100644
--- a/ee/ucp/interlock/config/updates.md
+++ b/ee/ucp/interlock/config/updates.md
@@ -2,8 +2,6 @@
 title: Update Interlock services
 description: Learn how to update the UCP layer 7 routing solution services
 keywords: routing, proxy
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/ops/updates/
 ---
 
 There are two parts to the update process:
diff --git a/ee/ucp/interlock/deploy/index.md b/ee/ucp/interlock/deploy/index.md
index 068ad42ec9..f282b28c64 100644
--- a/ee/ucp/interlock/deploy/index.md
+++ b/ee/ucp/interlock/deploy/index.md
@@ -4,7 +4,6 @@ description: Learn the deployment steps for the UCP layer 7 routing solution
 keywords: routing, proxy, interlock
 redirect_from:
   - /ee/ucp/interlock/deploy/configuration-reference/
-  - https://interlock-dev-docs.netlify.com/install/
 ---
 
 This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
diff --git a/ee/ucp/interlock/deploy/offline-install.md b/ee/ucp/interlock/deploy/offline-install.md
index 727b46049a..4b27f8c4c5 100644
--- a/ee/ucp/interlock/deploy/offline-install.md
+++ b/ee/ucp/interlock/deploy/offline-install.md
@@ -2,8 +2,6 @@
 title: Offline installation considerations
 description: Learn how to install Interlock on a Docker cluster without internet access.
 keywords: routing, proxy, interlock
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/install/offline/
 ---
 
 To install Interlock on a Docker cluster without internet access, the Docker
 images must be loaded. This topic describes how to export the images from a local Docker
diff --git a/ee/ucp/interlock/deploy/production.md b/ee/ucp/interlock/deploy/production.md
index ca98d9809b..0a353b4e8c 100644
--- a/ee/ucp/interlock/deploy/production.md
+++ b/ee/ucp/interlock/deploy/production.md
@@ -3,8 +3,6 @@ title: Configure layer 7 routing for production
 description: Learn how to configure the layer 7 routing solution for a production
   environment.
 keywords: routing, proxy, interlock
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/install/production/
 ---
 
 This section includes documentation on configuring Interlock
diff --git a/ee/ucp/interlock/index.md b/ee/ucp/interlock/index.md
index 7e91868220..6aed55d0a5 100644
--- a/ee/ucp/interlock/index.md
+++ b/ee/ucp/interlock/index.md
@@ -2,9 +2,6 @@
 title: Layer 7 routing overview
 description: Learn how to route layer 7 traffic to your Swarm services
 keywords: routing, UCP, interlock, load balancing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/
-  - https://interlock-dev-docs.netlify.com/intro/about/
 ---
 
 Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
diff --git a/ee/ucp/interlock/usage/canary.md b/ee/ucp/interlock/usage/canary.md
index 60f6bc4f0a..2041ec7f16 100644
--- a/ee/ucp/interlock/usage/canary.md
+++ b/ee/ucp/interlock/usage/canary.md
@@ -2,8 +2,6 @@
 title: Publish Canary application instances
 description: Learn how to do canary deployments for your Docker swarm services
 keywords: routing, proxy
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/canary/
 ---
 
 The following example publishes a service as a canary instance.
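A canary publication in Interlock rests on two services sharing the same routing hostname, so the proxy spreads requests across all of their tasks, roughly in proportion to replica counts. A minimal sketch; the image, network name, and 4:1 split are illustrative assumptions, not values the page mandates:

```bash
# Stable version: 4 of the 5 total tasks behind demo.local (~80% of traffic).
docker service create --name demo-v1 --replicas 4 \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.network=demo \
  --label com.docker.lb.port=8080 \
  ehazlett/docker-demo
# Canary version (normally a newer image tag): 1 of 5 tasks (~20%).
docker service create --name demo-v2 --replicas 1 \
  --network demo \
  --label com.docker.lb.hosts=demo.local \
  --label com.docker.lb.network=demo \
  --label com.docker.lb.port=8080 \
  ehazlett/docker-demo
```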
diff --git a/ee/ucp/interlock/usage/context.md b/ee/ucp/interlock/usage/context.md
index 2b63220795..6390931108 100644
--- a/ee/ucp/interlock/usage/context.md
+++ b/ee/ucp/interlock/usage/context.md
@@ -3,8 +3,6 @@ title: Use context and path-based routing
 description: Learn how to route traffic to your Docker swarm services based on
   a url path.
 keywords: routing, proxy
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/context_root/
 ---
 
 The following example publishes a service using context or path based routing.
diff --git a/ee/ucp/interlock/usage/index.md b/ee/ucp/interlock/usage/index.md
index 61eb2889b5..0a488ccf26 100644
--- a/ee/ucp/interlock/usage/index.md
+++ b/ee/ucp/interlock/usage/index.md
@@ -5,7 +5,6 @@ keywords: routing, proxy
 redirect_from:
   - /ee/ucp/interlock/deploy/configuration-reference/
   - /ee/ucp/interlock/deploy/configure/
-  - https://interlock-dev-docs.netlify.com/usage/hello/
 ---
 
 After Interlock is deployed, you can launch and publish services and applications.
diff --git a/ee/ucp/interlock/usage/interlock-vip-mode.md b/ee/ucp/interlock/usage/interlock-vip-mode.md
index b1c3be7edf..e528ff66e2 100644
--- a/ee/ucp/interlock/usage/interlock-vip-mode.md
+++ b/ee/ucp/interlock/usage/interlock-vip-mode.md
@@ -3,7 +3,7 @@ title: Specify a routing mode
 description: Learn about task and VIP backend routing modes for Layer 7 routing
 keywords: routing, proxy, interlock
 redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/default_backend/
+  - /ee/ucp/interlock/usage/default-backend/
 ---
 
 You can publish services using "vip" and "task" backend routing modes.
diff --git a/ee/ucp/interlock/usage/redirects.md b/ee/ucp/interlock/usage/redirects.md
index 555f61355b..82e3e72764 100644
--- a/ee/ucp/interlock/usage/redirects.md
+++ b/ee/ucp/interlock/usage/redirects.md
@@ -3,8 +3,6 @@ title: Implement application redirects
 description: Learn how to implement redirects using swarm services and the
   layer 7 routing solution for UCP.
 keywords: routing, proxy, redirects, interlock
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/redirects/
 ---
 
 The following example publishes a service and configures a redirect from `old.local` to `new.local`.
diff --git a/ee/ucp/interlock/usage/service-clusters.md b/ee/ucp/interlock/usage/service-clusters.md
index bb8f272a5b..181ad9bcfb 100644
--- a/ee/ucp/interlock/usage/service-clusters.md
+++ b/ee/ucp/interlock/usage/service-clusters.md
@@ -2,8 +2,6 @@
 title: Implement service clusters
 description: Learn how to route traffic to different proxies using a service
   cluster.
 keywords: ucp, interlock, load balancing, routing
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/service_clusters/
 ---
 
 ## Configure Proxy Services
diff --git a/ee/ucp/interlock/usage/sessions.md b/ee/ucp/interlock/usage/sessions.md
index f092c95b6b..e312826b54 100644
--- a/ee/ucp/interlock/usage/sessions.md
+++ b/ee/ucp/interlock/usage/sessions.md
@@ -3,8 +3,6 @@ title: Implement persistent (sticky) sessions
 description: Learn how to configure your swarm services with persistent
   sessions using UCP.
 keywords: routing, proxy, cookies, IP hash
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/sessions/
 ---
 
 You can publish a service and configure the proxy for persistent (sticky) sessions using:
diff --git a/ee/ucp/interlock/usage/ssl.md b/ee/ucp/interlock/usage/ssl.md
index d29d41cb12..154636f2fe 100644
--- a/ee/ucp/interlock/usage/ssl.md
+++ b/ee/ucp/interlock/usage/ssl.md
@@ -4,7 +4,6 @@ description: Learn how to configure your swarm services with SSL.
 keywords: routing, proxy, tls, ssl
 redirect_from:
   - /ee/ucp/interlock/usage/ssl/
-  - https://interlock-dev-docs.netlify.com/usage/ssl/
 ---
 
 This topic covers Swarm services implementation with:
diff --git a/ee/ucp/interlock/usage/websockets.md b/ee/ucp/interlock/usage/websockets.md
index 637cd979a5..69eb9febb7 100644
--- a/ee/ucp/interlock/usage/websockets.md
+++ b/ee/ucp/interlock/usage/websockets.md
@@ -2,8 +2,6 @@
 title: Use websockets
 description: Learn how to use websockets in your swarm services.
 keywords: routing, proxy, websockets
-redirect_from:
-  - https://interlock-dev-docs.netlify.com/usage/websockets/
 ---
 
 First, create an overlay network to isolate and secure service traffic:
diff --git a/reference/dtr/2.6/cli/backup.md b/reference/dtr/2.6/cli/backup.md
index 4be9f3fda9..277ed862c9 100644
--- a/reference/dtr/2.6/cli/backup.md
+++ b/reference/dtr/2.6/cli/backup.md
@@ -15,12 +15,37 @@ docker run -i --rm docker/dtr \
 backup [command options] > backup.tar
 ```
 
-### Example Usage
+### Example Commands
+
+#### Basic
 
 ```bash
-docker run -i --rm docker/dtr \
+docker run -i --rm --log-driver none docker/dtr:{{ page.dtr_version }} \
   backup --ucp-ca "$(cat ca.pem)" --existing-replica-id 5eb9459a7832 > backup.tar
 ```
 
+#### Advanced (with chained commands)
+
+```bash
+DTR_VERSION=$(docker container inspect $(docker container ps -f \
+  name=dtr-registry -q) | grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
+REPLICA_ID=$(docker ps --filter name=dtr-rethinkdb \
+  --format "{{ .Names }}" | head -1 | sed 's|.*/||' | sed 's/dtr-rethinkdb-//'); \
+read -p 'ucp-url (The UCP URL including domain and port): ' UCP_URL; \
+read -p 'ucp-username (The UCP administrator username): ' UCP_ADMIN; \
+read -sp 'ucp password: ' UCP_PASSWORD; \
+docker run --log-driver none -i --rm \
+  --env UCP_PASSWORD=$UCP_PASSWORD \
+  docker/dtr:$DTR_VERSION backup \
+  --ucp-username $UCP_ADMIN \
+  --ucp-url $UCP_URL \
+  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
+  --existing-replica-id $REPLICA_ID > \
+  dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
+```
+
+For a detailed explanation of the advanced example, see
+[Back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata).
+To learn more about the `--log-driver` option for `docker run`, see [docker run reference](/engine/reference/run/#logging-drivers---log-driver).
+
 ## Description
 
 This command creates a `tar` file with the contents of the volumes used by