mirror of https://github.com/docker/docs.git
Sync published with master (#8673)
* Revert "Netlify redirects interlock (#8595)"
This reverts commit a7793edc74.
* UCP Install on Azure Patch (#8522)
* Fixed grammar on the 2nd prerequisite and applied markdown formatting to the rest :)
* Correct Pod-CIDR Warning
* Content cleanup
Please check that I haven't changed the meaning of the updated prerequisites.
* Created a new section on configuring the IP Count value and responded to feedback from Follis, Steve R, and Xinfeng.
* Incorporated Steven F's feedback and Issue 8551
* Provide a warning when setting a small IP Count variable
* Final edits
* Update install-on-azure.md
* Following feedback, I have expanded on the 0644 azure.json file permissions and added the --existing-config flag to the UCP install command
* Removed Orchestrator Tag Pre Req from Azure Docs
* Clarifying need for 0644 permissions
* Improved backup commands (#8597)
* Improved backup commands
DTR image backup command improvements:
1. Local and NFS mount image backup commands were invalid (they incorrectly used the -C flag). Replaced them with commands that work.
2. The new commands automatically populate the correct replica ID and add a datestamp to the backup filename.
DTR Metadata backup command improvements:
DTR metadata backups are more difficult than they need to be and generate many support tickets. I updated the DTR command to avoid common user pitfalls:
1. The prior metadata backup command was subject to user error. Improved the command to automatically collect the DTR version and select a replica.
2. Improved the security of the command by automatically collecting the UCP CA certificate for verification rather than using the --ucp-insecure-tls flag.
3. Improved the backup filename by adding the backed-up version information and date of backup. Knowledge of the version information is required for restoring a backup.
4. Described these improvements for the user.
Image backup commands were tested with local and NFS image storage. The metadata backup command was tested by running it directly on a DTR node and through a UCP client bundle with multiple replicas.
* Technical and editorial review
* More edits
* line 8; remove unnecessary a (#8672)
* line 8; remove unnecessary a
* Minor edit
* Updated the UCP Logging page to include UCP 3.1 screenshots (#8646)
* Added examples (#8599)
* Added examples
Added examples with more detail and automation to help customers back up DTR without creating support tickets.
* Linked to explanation of example command
@omegamormegil I removed the example with prepopulated fields, as I think it doesn't add much, and will only add confusion. Users who need this much detail can run the basic command and follow the terminal prompts.
We can re-add in a follow-up PR, if you think that example is crucial to this page.
* Remove dead link in the Interlock ToC (#8668)
* Found a dead link in the Interlock ToC
* Added Redirect
This commit is contained in:
parent e6c584d3b1
commit e0dc289650
@@ -1345,8 +1345,6 @@ manuals:
  path: /ee/ucp/interlock/usage/canary/
- title: Using context or path-based routing
  path: /ee/ucp/interlock/usage/context/
- title: Publishing a default host service
  path: /ee/ucp/interlock/usage/default-backend/
- title: Specifying a routing mode
  path: /ee/ucp/interlock/usage/interlock-vip-mode/
- title: Using routing labels
@@ -2,7 +2,7 @@
title: Create a backup
description: Learn how to create a backup of Docker Trusted Registry, for disaster recovery.
keywords: dtr, disaster recovery
toc_max_header: 5
toc_max_header: 3
---

{% assign metadata_backup_file = "dtr-metadata-backup.tar" %}
@@ -93,16 +93,36 @@ Since you can configure the storage backend that DTR uses to store images,
the way you back up images depends on the storage backend you're using.

If you've configured DTR to store images on the local file system or NFS mount,
you can backup the images by using SSH to log in to a DTR node,
and creating a tar archive of the [dtr-registry volume](../../architecture.md):
you can back up the images by using SSH to log into a DTR node,
and creating a `tar` archive of the [dtr-registry volume](../../architecture.md):

#### Example backup commands

##### Local images

{% raw %}
```none
sudo tar -cf {{ image_backup_file }} \
  -C /var/lib/docker/volumes/ dtr-registry-<replica-id>
sudo tar -cf dtr-image-backup-$(date +%Y%m%d-%H_%M_%S).tar \
  /var/lib/docker/volumes/dtr-registry-$(docker ps --filter name=dtr-rethinkdb \
  --format "{{ .Names }}" | sed 's/dtr-rethinkdb-//')
```
{% endraw %}

##### NFS-mounted images

{% raw %}
```none
sudo tar -cf dtr-image-backup-$(date +%Y%m%d-%H_%M_%S).tar \
  /var/lib/docker/volumes/dtr-registry-nfs-$(docker ps --filter name=dtr-rethinkdb \
  --format "{{ .Names }}" | sed 's/dtr-rethinkdb-//')
```
{% endraw %}

###### Expected output

```bash
tar: Removing leading `/' from member names
```
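
As an optional sanity check (a sketch only; the filename below is illustrative, so substitute the file produced by the command above), you can list the archive contents without extracting it to confirm the volume was captured:

```bash
# List the first entries inside the image backup without extracting it.
tar -tf dtr-image-backup-20190101-12_00_00.tar | head
```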

If you're using a different storage backend, follow the best practices
recommended for that system.
@@ -110,37 +130,49 @@ recommended for that system.

### Back up DTR metadata

To create a DTR backup, load your UCP client bundle, and run the following
command, replacing the placeholders with real values:
concatenated commands:

```bash
read -sp 'ucp password: ' UCP_PASSWORD;
```

This prompts you for the UCP password. Next, run the following to back up your DTR metadata and save the result into a `tar` archive. You can learn more about the supported flags in
the [reference documentation](/reference/dtr/2.6/cli/backup.md).

```bash
```none
DTR_VERSION=$(docker container inspect $(docker container ps -f name=dtr-registry -q) | \
  grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
REPLICA_ID=$(docker ps --filter name=dtr-rethinkdb --format "{{ .Names }}" | head -1 | \
  sed 's|.*/||' | sed 's/dtr-rethinkdb-//'); \
read -p 'ucp-url (The UCP URL including domain and port): ' UCP_URL; \
read -p 'ucp-username (The UCP administrator username): ' UCP_ADMIN; \
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} backup \
  --ucp-url <ucp-url> \
  --ucp-insecure-tls \
  --ucp-username <ucp-username> \
  --existing-replica-id <replica-id> > {{ metadata_backup_file }}
  docker/dtr:$DTR_VERSION backup \
  --ucp-username $UCP_ADMIN \
  --ucp-url $UCP_URL \
  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
  --existing-replica-id $REPLICA_ID > dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
```

Where:
#### UCP field prompts

* `<ucp-url>` is the url you use to access UCP.
* `<ucp-url>` is the URL you use to access UCP.
* `<ucp-username>` is the username of a UCP administrator.
* `<replica-id>` is the id of the DTR replica to backup.
* `<replica-id>` is the DTR replica ID to back up.

The above concatenated commands run through the following tasks:

1. Sets your DTR version and replica ID. To back up
   a specific replica, set the replica ID manually by modifying the
   `--existing-replica-id` flag in the backup command.
2. Prompts you for your UCP URL (domain and port), username, and password.
3. Prompts you for your UCP password without saving it to your disk or printing it on the terminal.
4. Retrieves the CA certificate for your specified UCP URL. To skip TLS verification, replace the `--ucp-ca`
   flag with `--ucp-insecure-tls`. Docker does not recommend this flag for production environments.
5. Includes the DTR version and a timestamp in your `tar` backup filename.

By default the backup command doesn't stop the DTR replica being backed up.
This means you can take frequent backups without affecting your users.
You can learn more about the supported flags in
the [reference documentation](/reference/dtr/2.6/cli/backup.md).

You can use the `--offline-backup` option to stop the DTR replica while taking
the backup. If you do this, remove the replica from the load balancing pool.
By default, the backup command does not pause the DTR replica being backed up, to
prevent interruptions of user access to DTR. Since the replica
is not stopped, changes that happen during the backup may not be saved.
Use the `--offline-backup` flag to stop the DTR replica during the backup procedure. If you set this flag,
remove the replica from the load balancing pool to avoid user interruption.

Also, the backup contains sensitive information
like private keys, so you can encrypt the backup by running:
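
As a sketch only (assuming GnuPG is available; the filename is illustrative and should match the file produced by the backup command above), symmetric encryption with a passphrase looks like:

```bash
# Encrypt the metadata backup; gpg prompts for a passphrase interactively.
gpg --symmetric --output dtr-metadata-backup.tar.gpg dtr-metadata-backup.tar
```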
@@ -5,7 +5,7 @@ keywords: registry, promotion, mirror
---

Docker Trusted Registry allows you to create mirroring policies for a repository.
When an image gets pushed to a repository and meets a certain criteria,
When an image gets pushed to a repository and meets the mirroring criteria,
DTR automatically pushes it to a repository in a remote Docker Trusted or Hub registry.

This not only allows you to mirror images but also allows you to create
@@ -4,63 +4,54 @@ description: Learn how to install Docker Universal Control Plane in a Microsoft
keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
---

Docker UCP closely integrates into Microsoft Azure for its Kubernetes Networking
and Persistent Storage feature set. UCP deploys the Calico CNI provider. In Azure
Docker Universal Control Plane (UCP) closely integrates with Microsoft Azure for its Kubernetes Networking
and Persistent Storage feature set. UCP deploys the Calico CNI provider. In Azure,
the Calico CNI leverages the Azure networking infrastructure for data path
networking and the Azure IPAM for IP address management. There are
infrastructure prerequisites that are required prior to UCP installation for the
infrastructure prerequisites required prior to UCP installation for the
Calico / Azure integration.

## Docker UCP Networking

Docker UCP configures the Azure IPAM module for Kubernetes to allocate
IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
virtual machine that's part of the Kubernetes cluster to be configured with a pool of
IP addresses.
Docker UCP configures the Azure IPAM module for Kubernetes to allocate IP
addresses for Kubernetes pods. The Azure IPAM module requires each Azure virtual
machine which is part of the Kubernetes cluster to be configured with a pool of IP
addresses.

There are two options for provisioning IPs for the Kubernetes cluster on Azure
- Docker UCP provides an automated mechanism to configure and maintain IP pools
  for standalone Azure virtual machines. This service runs within the calico-node daemonset
  and by default will provision 128 IP addresses for each node. This value can be
  configured through the `azure_ip_count` in the UCP
  [configuration file](../configure/ucp-configuration-file) before or after the
  UCP installation. Note that if this value is reduced post-installation, existing
  virtual machines will not be reconciled, and you will have to manually edit the IP count
  in Azure.
- Manually provision additional IP addresses for each Azure virtual machine. This could be done
  as part of an Azure Virtual Machine Scale Set through an ARM template. You can find an example [here](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).
  Note that the `azure_ip_count` value in the UCP
  [configuration file](../configure/ucp-configuration-file) will need to be set
  to 0, otherwise UCP's IP Allocator service will provision the IP Address on top of
  those you have already provisioned.
There are two options for provisioning IPs for the Kubernetes cluster on Azure:

- _An automated mechanism provided by UCP which allows for IP pool configuration and maintenance
  for standalone Azure virtual machines._ This service runs within the
  `calico-node` daemonset and provisions 128 IP addresses for each
  node by default. For information on customizing this value, see [Adjusting the IP count value](#adjusting-the-ip-count-value).
- _Manual provisioning of additional IP addresses for each Azure virtual machine._ This
  could be done through the Azure Portal, the Azure CLI `$ az network nic ip-config create`
  (a sketch follows this list), or an ARM template. You can find an example of an ARM template
  [here](#manually-provision-ip-address-as-part-of-an-azure-virtual-machine-scale-set).
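
As a rough sketch of that CLI route (all names below are placeholders; run it once per extra address required on each virtual machine's NIC):

```bash
# Add one additional IP configuration to an existing NIC; Azure assigns a free
# private address from the NIC's subnet.
az network nic ip-config create \
  --resource-group my-ucp-rg \
  --nic-name ucp-worker-0-nic \
  --name pod-ipconfig-1
```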

## Azure Prerequisites

You must meet these infrastructure prerequisites in order
to successfully deploy Docker UCP on Azure
You must meet the following infrastructure prerequisites in order
to successfully deploy Docker UCP on Azure:

- All UCP Nodes (Managers and Workers) need to be deployed into the same
  Azure Resource Group. The Azure Networking (Vnets, Subnets, Security Groups)
  components could be deployed in a second Azure Resource Group.
- The Azure Vnet and Subnet must be appropriately sized for your
  environment, and addresses from this pool are consumed by Kubernetes Pods. For more information, see
  [Considerations for IPAM
- All UCP Nodes (Managers and Workers) need to be deployed into the same Azure
  Resource Group. The Azure Networking components (Virtual Network, Subnets,
  Security Groups) could be deployed in a second Azure Resource Group.
- The Azure Virtual Network and Subnet must be appropriately sized for your
  environment, as addresses from this pool will be consumed by Kubernetes Pods.
  For more information, see [Considerations for IPAM
  Configuration](#considerations-for-ipam-configuration).
- All UCP Nodes (Managers and Workers) need to be attached to the same
  Azure Subnet.
- All UCP Nodes (Managers and Workers) need to be tagged in Azure with the
  `Orchestrator` tag. Note the value for this tag is the Kubernetes version number
  in the format `Orchestrator=Kubernetes:x.y.z`. This value may change in each
  UCP release. To find the relevant version, please see the UCP
  [Release Notes](../../release-notes). For example, for UCP 3.1.0 the tag
  would be `Orchestrator=Kubernetes:1.11.2`.
- The Azure Virtual Machine Object Name needs to match the Azure Virtual Machine
  Computer Name and the Node Operating System's Hostname. Note this applies to the
  FQDN of the host including domain names.
- An Azure Service Principal with `Contributor` access to the Azure Resource
  Group hosting the UCP Nodes. Note, if using a separate networking Resource
  Group, the same Service Principal will need `Network Contributor` access to this
  Resource Group.
- All UCP worker and manager nodes need to be attached to the same Azure
  Subnet.
- The Azure Virtual Machine Object Name needs to match the Azure Virtual Machine
  Computer Name and the Node Operating System's Hostname, which is the FQDN of
  the host, including domain names. Note that this requires all characters to be in lowercase.
- An Azure Service Principal with `Contributor` access to the Azure Resource
  Group hosting the UCP Nodes. This Service Principal will be used by Kubernetes
  to communicate with the Azure API. The Service Principal ID and Secret Key are
  needed as part of the UCP prerequisites. If you are using a separate Resource
  Group for the networking components, the same Service Principal will need
  `Network Contributor` access to this Resource Group. A sketch of creating such
  a Service Principal with the Azure CLI follows this list.
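
A minimal sketch, assuming you use the Azure CLI (the Service Principal name, subscription ID, and Resource Group are placeholders; the `appId` and `password` in the output correspond to the Service Principal ID and Secret Key):

```bash
# Create a Service Principal with Contributor rights scoped to the Resource
# Group that hosts the UCP nodes.
az ad sp create-for-rbac \
  --name ucp-service-principal \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/my-ucp-rg
```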

UCP requires the following information for the installation:
@@ -68,17 +59,18 @@ UCP requires the following information for the installation:
  objects are being deployed.
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
  objects are being deployed.
- `aadClientId` - The Azure Service Principal ID
- `aadClientSecret` - The Azure Service Principal Secret Key
- `aadClientId` - The Azure Service Principal ID.
- `aadClientSecret` - The Azure Service Principal Secret Key.

### Azure Configuration File

For Docker UCP to integrate into Microsoft Azure, you need to place an Azure
configuration file within each UCP node in your cluster, at
`/etc/kubernetes/azure.json`. The `azure.json` file needs 0644 permissions.
For Docker UCP to integrate with Microsoft Azure, each UCP node in your cluster
needs an Azure configuration file, `azure.json`. Place the file within
`/etc/kubernetes`. Since the config file is owned by `root`, set its permissions
to `0644` to ensure the container user has read access.

See the template below. Note entries that do not contain `****` should not be
changed.
The following is an example template for `azure.json`. Replace `***` with real values, and leave the other
parameters as is.

```
{
@@ -105,45 +97,44 @@ changed.
}
```
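
Purely as an illustration of the shape of such a file (field names follow the Kubernetes Azure cloud provider configuration; this is not the full template from the page, and `***` marks values you must supply):

```json
{
  "cloud": "AzurePublicCloud",
  "tenantId": "***",
  "subscriptionId": "***",
  "aadClientId": "***",
  "aadClientSecret": "***",
  "resourceGroup": "***",
  "location": "***",
  "subnetName": "***",
  "securityGroupName": "***",
  "vnetName": "***"
}
```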

There are some optional values for Azure deployments:
There are some optional parameters for Azure deployments:

- `"primaryAvailabilitySetName": "****",` - The Worker Nodes availability set.
- `"vnetResourceGroup": "****",` - If your Azure Network objects live in a
- `primaryAvailabilitySetName` - The Worker Nodes availability set.
- `vnetResourceGroup` - The Virtual Network Resource Group, if your Azure Network objects live in a
  separate resource group.
- `"routeTableName": "****",` - If you have defined multiple Route tables within
- `routeTableName` - If you have defined multiple Route tables within
  an Azure subnet.

More details on this configuration file can be found
[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go).
See [Kubernetes' azure.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go) for more details on this configuration file.

## Considerations for IPAM Configuration

The subnet and the virtual network associated with the primary interface of
the Azure virtual machines need to be configured with a large enough address prefix/range.
The number of required IP addresses depends on the number of pods running
on each node and the number of nodes in the cluster.
The subnet and the virtual network associated with the primary interface of the
Azure virtual machines need to be configured with a large enough address
prefix/range. The number of required IP addresses depends on the workload and
the number of nodes in the cluster.

For example, in a cluster of 256 nodes, to run a maximum of 128 pods
concurrently on a node, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 IP addresses, _in addition to_
initial IP allocations to virtual machine NICs during Azure resource creation.
For example, in a cluster of 256 nodes, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 IP addresses, in order to run a maximum of 128 pods
concurrently on a node. This would be ***in addition to*** initial IP allocations to virtual machine
NICs (network interfaces) during Azure resource creation.

Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set
the address space of the subnet and virtual network to 10.0.0.0/16. This
the address space of the subnet and virtual network to `10.0.0.0/16`. This
ensures that the network can dynamically allocate at least 32768 addresses,
plus a buffer for initial allocations for primary IP addresses.
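
Worked through with the example figures above, the sizing arithmetic is:

```none
pod IPs:      128 pods/node x 256 nodes          = 32,768
primary IPs:  at least 1 per virtual machine NIC =    256+
total:        roughly 33,000+ addresses; a /16 provides 65,536
```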

> Azure IPAM, UCP, and Kubernetes
>
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
> a list of IP addresses that are assigned to the virtual machine's NICs. The
> a list of IP addresses which are assigned to the virtual machine's NICs. The
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
> IP addresses as `ipConfigurations` in the NICs associated with a virtual machine
> or scale set member, so that Azure IPAM can provide them to Kubernetes when
> requested.
{: .important}

## Manually provision IP address as part of an Azure virtual machine scale set
## Manually provision IP address pools as part of an Azure virtual machine scale set

Configure IP Pools for each member of the virtual machine scale set during provisioning by
associating multiple `ipConfigurations` with the scale set’s
@@ -204,20 +195,109 @@ for each virtual machine in the virtual machine scale set.
}
```

## Install UCP
## UCP Installation

Use the following command to install UCP on the manager node.
The `--pod-cidr` option maps to the IP address range that you configured for
the subnets in the previous sections, and the `--host-address` maps to the
IP address of the master node.
### Adjusting the IP Count Value

> Note: The `pod-cidr` range must be within an Azure subnet attached to the
> host.
If you have manually attached additional IP addresses to the Virtual Machines
(via an ARM Template, the Azure CLI, or the Azure Portal), or you want to reduce the
number of IP Addresses automatically provisioned by UCP from the default of 128
addresses, you can alter the `azure_ip_count` variable in the UCP
Configuration file before installation. If you are happy with 128 addresses per
Virtual Machine, proceed to [installing UCP](#install-ucp).

Once UCP has been installed, the UCP [configuration
file](../configure/ucp-configuration-file/) is managed by UCP and populated with
all of the cluster configuration data, such as AD/LDAP information or networking
configuration. As there is no Universal Control Plane deployed yet, we are able
to stage a [configuration file](../configure/ucp-configuration-file/) containing
just the Azure IP Count value. UCP will populate the rest of the cluster
variables during and after the installation.

Below are some example configuration files with just the `azure_ip_count`
variable defined. These 3-line files can be preloaded into a Docker Swarm prior
to installing UCP in order to override the default `azure_ip_count` value of 128 IP
addresses per node. See [UCP configuration file](../configure/ucp-configuration-file/)
to learn more about the configuration file, and other variables that can be staged pre-install.

> Note: Do not set the `azure_ip_count` to a value of less than 6 if you have not
> manually provisioned additional IP addresses for each Virtual Machine. The UCP
> installation will need at least 6 IP addresses to allocate to the core UCP components
> that run as Kubernetes pods. That is in addition to the Virtual
> Machine's private IP address.

If you have manually provisioned additional IP addresses for each Virtual
Machine, and want to disallow UCP from dynamically provisioning IP
addresses for you, then your UCP configuration file would be:

```
$ vi example-config-1
[cluster_config]
azure_ip_count = "0"
```

If you want to reduce the IP addresses dynamically allocated from 128 to a
custom value, then your UCP configuration file would be:

```
$ vi example-config-2
[cluster_config]
azure_ip_count = "20" # This value may be different for your environment
```

See [Considerations for IPAM
Configuration](#considerations-for-ipam-configuration) to calculate an
appropriate value.

To preload this configuration file prior to installing UCP:

1. Copy the configuration file to a Virtual Machine that you wish to become a UCP Manager Node.

2. Initiate a Swarm on that Virtual Machine.

   ```
   $ docker swarm init
   ```

3. Upload the configuration file to the Swarm by using a [Docker Swarm Config](/engine/swarm/configs/).
   This Swarm Config will need to be named `com.docker.ucp.config`.

   ```
   $ docker config create com.docker.ucp.config <local-configuration-file>
   ```

4. Check that the configuration has been loaded successfully.

   ```
   $ docker config list
   ID                          NAME                    CREATED      UPDATED
   igca3q30jz9u3e6ecq1ckyofz   com.docker.ucp.config   1 days ago   1 days ago
   ```

5. You are now ready to [install UCP](#install-ucp). As you have already staged
   a UCP configuration file, you will need to add `--existing-config` to the
   install command below.

If you need to adjust this value post-installation, see [instructions](../configure/ucp-configuration-file/)
on how to download the UCP configuration file, change the value, and update the configuration via the API.
If you reduce the value post-installation, existing virtual machines will not be
reconciled, and you will have to manually edit the IP count in Azure.

### Install UCP

Run the following command to install UCP on a manager node. The `--pod-cidr`
option maps to the IP address range that you have configured for the Azure
subnet, and the `--host-address` maps to the private IP address of the master
node. Finally, if you have set the [IP Count
Value](#adjusting-the-ip-count-value), you will need to add `--existing-config`
to the install command below.

> Note: The `pod-cidr` range must match the Azure Virtual Network's Subnet
> attached to the hosts. For example, if the Azure Virtual Network had the range
> `172.0.0.0/16` with Virtual Machines provisioned on an Azure Subnet of
> `172.0.1.0/24`, then the Pod CIDR should also be `172.0.1.0/24`.

```bash
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
  --host-address <ucp-ip> \
  --pod-cidr <ip-address-range> \
@@ -20,6 +20,7 @@ containers to be listed as well.

Click on a container to see more details, like its configurations and logs.

{: .with-border}

## Check the logs from the CLI

@@ -73,7 +74,7 @@ applications won't be affected by this.

To increase the UCP log level, navigate to the UCP web UI, go to the
**Admin Settings** tab, and choose **Logs**.

{: .with-border}
{: .with-border}

Once you change the log level to **Debug**, the UCP containers restart.
Now that the UCP components are creating more descriptive logs, you can
Binary file not shown. (image changed: 169 KiB before, 164 KiB after)
Binary file not shown. (image changed: 76 KiB before, 119 KiB after)
Binary file not shown. (image added: 99 KiB)

@@ -3,8 +3,6 @@ title: Interlock architecture
description: Learn more about the architecture of the layer 7 routing solution
for Docker swarm services.
keywords: routing, UCP, interlock, load balancing
redirect_from:
- https://interlock-dev-docs.netlify.com/intro/architecture/
---

This document covers the following considerations:

@@ -2,8 +2,6 @@
title: Custom templates
description: Learn how to use a custom extension template
keywords: routing, proxy, interlock, load balancing
redirect_from:
- https://interlock-dev-docs.netlify.com/ops/custom_template/
---

Use a custom extension if a needed option is not available in the extension configuration.

@@ -2,8 +2,6 @@
title: Configure HAProxy
description: Learn how to configure an HAProxy extension
keywords: routing, proxy, interlock, load balancing
redirect_from:
- https://interlock-dev-docs.netlify.com/config/extensions/haproxy/
---

The following HAProxy configuration options are available:

@@ -6,7 +6,6 @@ keywords: routing, proxy, interlock, load balancing
redirect_from:
- /ee/ucp/interlock/usage/host-mode-networking/
- /ee/ucp/interlock/deploy/host-mode-networking/
- https://interlock-dev-docs.netlify.com/usage/host_mode/
---

By default, layer 7 routing components communicate with one another using

@@ -5,7 +5,6 @@ keywords: routing, proxy, interlock, load balancing
redirect_from:
- /ee/ucp/interlock/deploy/configure/
- /ee/ucp/interlock/usage/default-service/
- https://interlock-dev-docs.netlify.com/config/interlock/
---

To further customize the layer 7 routing solution, you must update the

@@ -2,8 +2,6 @@
title: Configure Nginx
description: Learn how to configure an nginx extension
keywords: routing, proxy, interlock, load balancing
redirect_from:
- https://interlock-dev-docs.netlify.com/config/extensions/nginx/
---

By default, nginx is used as a proxy, so the following configuration options are

@@ -2,8 +2,6 @@
title: Use application service labels
description: Learn how applications use service labels for publishing
keywords: routing, proxy, interlock, load balancing
redirect_from:
- https://interlock-dev-docs.netlify.com/config/service_labels/
---

Service labels define hostnames that are routed to the

@@ -2,8 +2,6 @@
title: Tune the proxy service
description: Learn how to tune the proxy service for environment optimization
keywords: routing, proxy, interlock
redirect_from:
- https://interlock-dev-docs.netlify.com/ops/tuning/
---

## Constrain the proxy service to multiple dedicated worker nodes

@@ -2,8 +2,6 @@
title: Update Interlock services
description: Learn how to update the UCP layer 7 routing solution services
keywords: routing, proxy, interlock
redirect_from:
- https://interlock-dev-docs.netlify.com/ops/updates/
---

There are two parts to the update process:

@@ -4,7 +4,6 @@ description: Learn the deployment steps for the UCP layer 7 routing solution
keywords: routing, proxy, interlock
redirect_from:
- /ee/ucp/interlock/deploy/configuration-reference/
- https://interlock-dev-docs.netlify.com/install/
---

This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh.
@@ -2,8 +2,6 @@
title: Offline installation considerations
description: Learn how to install Interlock on a Docker cluster without internet access.
keywords: routing, proxy, interlock
redirect_from:
- https://interlock-dev-docs.netlify.com/install/offline/
---

To install Interlock on a Docker cluster without internet access, the Docker images must be loaded. This topic describes how to export the images from a local Docker
@@ -3,8 +3,6 @@ title: Configure layer 7 routing for production
description: Learn how to configure the layer 7 routing solution for a production
environment.
keywords: routing, proxy, interlock
redirect_from:
- https://interlock-dev-docs.netlify.com/install/production/
---

This section includes documentation on configuring Interlock

@@ -2,9 +2,6 @@
title: Layer 7 routing overview
description: Learn how to route layer 7 traffic to your Swarm services
keywords: routing, UCP, interlock, load balancing
redirect_from:
- https://interlock-dev-docs.netlify.com/
- https://interlock-dev-docs.netlify.com/intro/about/
---

Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.

@@ -2,8 +2,6 @@
title: Publish Canary application instances
description: Learn how to do canary deployments for your Docker swarm services
keywords: routing, proxy
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/canary/
---

The following example publishes a service as a canary instance.

@@ -3,8 +3,6 @@ title: Use context and path-based routing
description: Learn how to route traffic to your Docker swarm services based
on a url path.
keywords: routing, proxy
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/context_root/
---

The following example publishes a service using context or path based routing.

@@ -5,7 +5,6 @@ keywords: routing, proxy
redirect_from:
- /ee/ucp/interlock/deploy/configuration-reference/
- /ee/ucp/interlock/deploy/configure/
- https://interlock-dev-docs.netlify.com/usage/hello/
---

After Interlock is deployed, you can launch and publish services and applications.

@@ -3,7 +3,7 @@ title: Specify a routing mode
description: Learn about task and VIP backend routing modes for Layer 7 routing
keywords: routing, proxy, interlock
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/default_backend/
- /ee/ucp/interlock/usage/default-backend/
---

You can publish services using "vip" and "task" backend routing modes.

@@ -3,8 +3,6 @@ title: Implement application redirects
description: Learn how to implement redirects using swarm services and the
layer 7 routing solution for UCP.
keywords: routing, proxy, redirects, interlock
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/redirects/
---

The following example publishes a service and configures a redirect from `old.local` to `new.local`.

@@ -2,8 +2,6 @@
title: Implement service clusters
description: Learn how to route traffic to different proxies using a service cluster.
keywords: ucp, interlock, load balancing, routing
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/service_clusters/
---

## Configure Proxy Services

@@ -3,8 +3,6 @@ title: Implement persistent (sticky) sessions
description: Learn how to configure your swarm services with persistent sessions
using UCP.
keywords: routing, proxy, cookies, IP hash
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/sessions/
---

You can publish a service and configure the proxy for persistent (sticky) sessions using:

@@ -4,7 +4,6 @@ description: Learn how to configure your swarm services with SSL.
keywords: routing, proxy, tls, ssl
redirect_from:
- /ee/ucp/interlock/usage/ssl/
- https://interlock-dev-docs.netlify.com/usage/ssl/
---

This topic covers Swarm services implementation with:

@@ -2,8 +2,6 @@
title: Use websockets
description: Learn how to use websockets in your swarm services.
keywords: routing, proxy, websockets
redirect_from:
- https://interlock-dev-docs.netlify.com/usage/websockets/
---

First, create an overlay network to isolate and secure service traffic:
@@ -15,12 +15,37 @@ docker run -i --rm docker/dtr \
    backup [command options] > backup.tar
```

### Example Usage
### Example Commands

#### Basic

```bash
docker run -i --rm docker/dtr \
docker run -i --rm --log-driver none docker/dtr:{{ page.dtr_version }} \
  backup --ucp-ca "$(cat ca.pem)" --existing-replica-id 5eb9459a7832 > backup.tar
```

#### Advanced (with chained commands)

```bash
DTR_VERSION=$(docker container inspect $(docker container ps -f \
  name=dtr-registry -q) | grep -m1 -Po '(?<=DTR_VERSION=)\d.\d.\d'); \
REPLICA_ID=$(docker ps --filter name=dtr-rethinkdb \
  --format "{{ .Names }}" | head -1 | sed 's|.*/||' | sed 's/dtr-rethinkdb-//'); \
read -p 'ucp-url (The UCP URL including domain and port): ' UCP_URL; \
read -p 'ucp-username (The UCP administrator username): ' UCP_ADMIN; \
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  docker/dtr:$DTR_VERSION backup \
  --ucp-username $UCP_ADMIN \
  --ucp-url $UCP_URL \
  --ucp-ca "$(curl https://${UCP_URL}/ca)" \
  --existing-replica-id $REPLICA_ID > \
  dtr-metadata-${DTR_VERSION}-backup-$(date +%Y%m%d-%H_%M_%S).tar
```

For a detailed explanation of the advanced example, see
[Back up your DTR metadata](/ee/dtr/admin/disaster-recovery/create-a-backup/#back-up-dtr-metadata).
To learn more about the `--log-driver` option for `docker run`, see [docker run reference](/engine/reference/run/#logging-drivers---log-driver).

## Description

This command creates a `tar` file with the contents of the volumes used by