Merge branch 'master' of github.com:docker/docker.github.io

This commit is contained in:
Maria Bermudez 2018-12-10 12:41:24 -08:00
commit 4bdcbf8ae9
57 changed files with 692 additions and 217 deletions

View File

@ -155,7 +155,7 @@ defaults:
ucp_org: "docker"
ucp_repo: "ucp"
dtr_repo: "dtr"
ucp_version: "3.1.0"
ucp_version: "3.1.1"
dtr_version: "2.6.0"
dtr_latest_image: "docker/dtr:2.6.0"
- scope:
@ -164,7 +164,7 @@ defaults:
hide_from_sitemap: true
ucp_org: "docker"
ucp_repo: "ucp"
ucp_version: "3.0.6"
ucp_version: "3.0.7"
- scope:
path: "datacenter/ucp/2.2"
values:

View File

@ -6,6 +6,14 @@
- product: "ucp"
version: "3.1"
tar-files:
- description: "3.1.1 Linux"
url: https://packages.docker.com/caas/ucp_images_3.1.1.tar.gz
- description: "3.1.1 Windows Server 2016 LTSC"
url: https://packages.docker.com/caas/ucp_images_win_2016_3.1.1.tar.gz
- description: "3.1.1 Windows Server 1709"
url: https://packages.docker.com/caas/ucp_images_win_1709_3.1.1.tar.gz
- description: "3.1.1 Windows Server 1803"
url: https://packages.docker.com/caas/ucp_images_win_1803_3.1.1.tar.gz
- description: "3.1.0 Linux"
url: https://packages.docker.com/caas/ucp_images_3.1.0.tar.gz
- description: "3.1.0 Windows Server 2016 LTSC"
@ -17,6 +25,14 @@
- product: "ucp"
version: "3.0"
tar-files:
- description: "3.0.7 Linux"
url: https://packages.docker.com/caas/ucp_images_3.0.7.tar.gz
- description: "3.0.7 Windows Server 2016 LTSC"
url: https://packages.docker.com/caas/ucp_images_win_2016_3.0.7.tar.gz
- description: "3.0.7 Windows Server 1709"
url: https://packages.docker.com/caas/ucp_images_win_1709_3.0.7.tar.gz
- description: "3.0.7 Windows Server 1803"
url: https://packages.docker.com/caas/ucp_images_win_1803_3.0.7.tar.gz
- description: "3.0.6 Linux"
url: https://packages.docker.com/caas/ucp_images_3.0.6.tar.gz
- description: "3.0.6 IBM Z"

View File

@ -1,6 +1,8 @@
| Docker version | Maximum API version | Change log |
|:---------------|:---------------------------|:---------------------------------------------------------|
| 18.09 | [1.39](/engine/api/v1.39/) | [changes](/engine/api/version-history/#v139-api-changes) |
| 18.06 | [1.38](/engine/api/v1.38/) | [changes](/engine/api/version-history/#v138-api-changes) |
| 18.05 | [1.37](/engine/api/v1.37/) | [changes](/engine/api/version-history/#v137-api-changes) |
| 18.04 | [1.37](/engine/api/v1.37/) | [changes](/engine/api/version-history/#v137-api-changes) |
| 18.03 | [1.37](/engine/api/v1.37/) | [changes](/engine/api/version-history/#v137-api-changes) |
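
As a quick check, you can ask the engine which maximum API version it reports; the command below is a small sketch, and the output shown assumes an 18.09 engine.

```bash
# Print the maximum API version supported by the daemon (output shown for an 18.09 engine)
$ docker version --format '{{.Server.APIVersion}}'
1.39
```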

View File

@ -60,7 +60,7 @@ Compose to set up and run WordPress. Before starting, make sure you have
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: wordpress
volumes:
db_data:
db_data: {}
```
> **Notes**:

View File

@ -18,7 +18,6 @@ unless you configure it to use a different logging driver.
In addition to using the logging drivers included with Docker, you can also
implement and use [logging driver plugins](/engine/admin/logging/plugins.md).
Logging driver plugins are available in Docker 17.05 and higher.
## Configure the default logging driver
@ -44,24 +43,29 @@ example sets two configurable options on the `json-file` logging driver:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3",
"labels": "production_status",
"env": "os,customer"
}
}
```
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
If you do not specify a logging driver, the default is `json-file`. Thus,
the default output for commands such as `docker inspect <CONTAINER>` is JSON.
To find the current default logging driver for the Docker daemon, run
`docker info` and search for `Logging Driver`. You can use the following
command on Linux, macOS, or PowerShell on Windows:
command:
```bash
$ docker info | grep 'Logging Driver'
$ docker info --format '{{.LoggingDriver}}'
Logging Driver: json-file
json-file
```
## Configure the logging driver for a container

View File

@ -53,7 +53,12 @@ The following example sets the log driver to `fluentd` and sets the
}
```
Restart Docker for the changes to take effect.
Restart Docker for the changes to take effect.
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `fluentd-async-connect` or `fluentd-max-retries`) must therefore be enclosed
> in quotes (`"`).
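For example, a minimal `daemon.json` sketch that quotes these values as strings (the fluentd address and option values here are illustrative placeholders, not defaults):

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "fluentdhost:24224",
    "fluentd-async-connect": "true",
    "fluentd-max-retries": "10"
  }
}
```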
To set the logging driver for a specific container, pass the
`--log-driver` option to `docker run`:

View File

@ -60,6 +60,10 @@ To make the configuration permanent, you can configure it in `/etc/docker/daemon
}
```
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `gelf-tcp-max-reconnect`) must therefore be enclosed in quotes (`"`).
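For example, a minimal `daemon.json` sketch that quotes the numeric value (the Graylog address here is a placeholder):

```json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "tcp://graylog.example.com:12201",
    "gelf-tcp-max-reconnect": "3"
  }
}
```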
You can set the logging driver for a specific container by setting the
`--log-driver` flag when using `docker container create` or `docker run`:

View File

@ -23,17 +23,22 @@ configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
The following example sets the log driver to `json-file` and sets the `max-size`
option.
and `max-file` options.
```json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m"
"max-size": "10m",
"max-file": "3"
}
}
```
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
Restart Docker for the changes to take effect for newly created containers. Existing containers do not use the new logging configuration.
You can set the logging driver for a specific container by using the

View File

@ -36,6 +36,11 @@ The daemon.json file is located in `/etc/docker/` on Linux hosts or
configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `splunk-gzip` or `splunk-gzip-level`) must therefore be enclosed in quotes
> (`"`).
To use the `splunk` driver for a specific container, use the commandline flags
`--log-driver` and `log-opt` with `docker run`:

View File

@ -41,7 +41,8 @@ configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
The following example sets the log driver to `syslog` and sets the
`syslog-address` option.
`syslog-address` option. The `syslog-address` option supports both UDP and TCP;
this example uses UDP.
```json
{
@ -54,7 +55,9 @@ The following example sets the log driver to `syslog` and sets the
Restart Docker for the changes to take effect.
> **Note**: The syslog-address supports both UDP and TCP.
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Numeric and boolean values (such as the value
> for `syslog-tls-skip-verify`) must therefore be enclosed in quotes (`"`).
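For example, a minimal `daemon.json` sketch with the boolean value quoted (the server address is a placeholder):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp+tls://192.168.1.3:514",
    "syslog-tls-skip-verify": "true"
  }
}
```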
You can set the logging driver for a specific container by using the
`--log-driver` flag to `docker container create` or `docker run`:
@ -75,7 +78,7 @@ starting the container.
| Option | Description | Example value |
|:-------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `syslog-address` | The address of an external `syslog` server. The URI specifier may be `[tcp | udp|tcp+tls]://host:port`, `unix://path`, or `unixgram://path`. If the transport is `tcp`, `udp`, or `tcp+tls`, the default port is `514`.| `--log-opt syslog-address=tcp+tls://192.168.1.3:514`, `--log-opt syslog-address=unix:///tmp/syslog.sock` |
| `syslog-address` | The address of an external `syslog` server. The URI specifier may be `[tcp|udp|tcp+tls]://host:port`, `unix://path`, or `unixgram://path`. If the transport is `tcp`, `udp`, or `tcp+tls`, the default port is `514`. | `--log-opt syslog-address=tcp+tls://192.168.1.3:514`, `--log-opt syslog-address=unix:///tmp/syslog.sock` |
| `syslog-facility` | The `syslog` facility to use. Can be the number or name for any valid `syslog` facility. See the [syslog documentation](https://tools.ietf.org/html/rfc5424#section-6.2.1). | `--log-opt syslog-facility=daemon` |
| `syslog-tls-ca-cert` | The absolute path to the trust certificates signed by the CA. **Ignored if the address protocol is not `tcp+tls`.** | `--log-opt syslog-tls-ca-cert=/etc/ca-certificates/custom/ca.pem` |
| `syslog-tls-cert` | The absolute path to the TLS certificate file. **Ignored if the address protocol is not `tcp+tls`**. | `--log-opt syslog-tls-cert=/etc/ca-certificates/custom/cert.pem` |

View File

@ -23,9 +23,11 @@ After installing DTR, you can join additional DTR replicas using `docker/dtr joi
### Example Usage
$ docker run -it --rm docker/dtr install \
--ucp-node <UCP_NODE_HOSTNAME> \
--ucp-insecure-tls
```bash
docker run -it --rm docker/dtr install \
--ucp-node <UCP_NODE_HOSTNAME> \
--ucp-insecure-tls
```
Note: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment.
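For example, a production install then looks like the following sketch, assuming `ca.pem` holds the UCP CA certificate:

```bash
docker run -it --rm docker/dtr install \
  --ucp-node <UCP_NODE_HOSTNAME> \
  --ucp-ca "$(cat ca.pem)"
```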

View File

@ -4,7 +4,16 @@ description: Add a new replica to an existing DTR cluster
keywords: dtr, cli, join
---
Add a new replica to an existing DTR cluster
Add a new replica to an existing DTR cluster. Use SSH to log into any node that is already part of UCP.
## Usage
```bash
docker run -it --rm \
docker/dtr:2.5.6 join \
--ucp-node <ucp-node-name> \
--ucp-insecure-tls
```

View File

@ -27,7 +27,7 @@ installation will fail.
Use a computer with internet access to download the UCP package from the
following links.
{% include components/ddc_url_list_2.html product="ucp" version="2.2" %}
{% include components/ddc_url_list_2.html product="ucp" version="3.0" %}
## Download the offline package

View File

@ -0,0 +1,290 @@
---
title: Install UCP on Azure
description: Learn how to install Docker Universal Control Plane in a Microsoft Azure environment.
keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
---
Docker UCP closely integrates with Microsoft Azure for its Kubernetes networking
and persistent storage feature set. UCP deploys the Calico CNI provider. In Azure,
the Calico CNI leverages the Azure networking infrastructure for data path
networking and the Azure IPAM for IP address management. Several
infrastructure prerequisites must be met prior to UCP installation for the
Calico / Azure integration.
## Docker UCP Networking
Docker UCP configures the Azure IPAM module for Kubernetes to allocate
IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
VM that's part of the Kubernetes cluster to be configured with a pool of
IP addresses.
You have two options for deploying the VMs for the Kubernetes cluster on Azure:
- Install the cluster on Azure stand-alone virtual machines. Docker UCP provides
an [automated mechanism](#configure-ip-pools-for-azure-stand-alone-vms)
to configure and maintain IP pools for stand-alone Azure VMs.
- Install the cluster on an Azure virtual machine scale set. Configure the
IP pools by using an ARM template like
[this one](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).
The steps for setting up IP address management are different in the two
environments. If you're using a scale set, you set up `ipConfigurations`
in an ARM template. If you're using stand-alone VMs, you set up IP pools
for each VM by using a utility container that's configured to run as a
global Swarm service, which Docker provides.
## Azure Prerequisites
The following infrastructure prerequisites must be met in order
to successfully deploy Docker UCP on Azure:
- All UCP Nodes (Managers and Workers) need to be deployed into the same
Azure Resource Group. The Azure Networking (Vnets, Subnets, Security Groups)
components could be deployed in a second Azure Resource Group.
- All UCP Nodes (Managers and Workers) need to be attached to the same
Azure Subnet.
- All UCP nodes (managers and workers) need to be tagged in Azure with the
`Orchestrator` tag. The value for this tag is the Kubernetes version number
in the format `Orchestrator=Kubernetes:x.y.z`. This value may change with each
UCP release. To find the relevant version, see the UCP
[Release Notes](../../release-notes). For example, for UCP 3.0.6 the tag
would be `Orchestrator=Kubernetes:1.8.15`. A sketch of applying this tag with the Azure CLI follows this list.
- The Azure Computer Name needs to match the node operating system's hostname.
Note that this applies to the FQDN of the host, including domain names.
- An Azure Service Principal with `Contributor` access to the Azure Resource
Group hosting the UCP nodes. Note: if using a separate networking Resource
Group, the same Service Principal needs `Network Contributor` access to that
Resource Group.
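As referenced in the list above, here is a sketch of applying the `Orchestrator` tag with the Azure CLI; the resource group and VM names are placeholders, and the Kubernetes version must match your UCP release:

```bash
# Apply the Orchestrator tag to an existing VM (names are placeholders)
az vm update \
  --resource-group <resource-group-name> \
  --name <vm-name> \
  --set tags.Orchestrator=Kubernetes:1.8.15
```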
The following information will be required for the installation:
- `subscriptionId` - The Azure Subscription ID in which the UCP
objects are being deployed.
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
objects are being deployed.
- `aadClientId` - The Azure Service Principal ID
- `aadClientSecret` - The Azure Service Principal Secret Key
### Azure Configuration File
For Docker UCP to integrate with Microsoft Azure, an Azure configuration file
must be placed on each UCP node in your cluster, at `/etc/kubernetes/azure.json`.
See the template below. Note that entries which do not contain `****` should not be
changed.
```
{
"cloud":"AzurePublicCloud",
"tenantId": "***",
"subscriptionId": "***",
"aadClientId": "***",
"aadClientSecret": "***",
"resourceGroup": "***",
"location": "****",
"subnetName": "/****",
"securityGroupName": "****",
"vnetName": "****",
"cloudProviderBackoff": false,
"cloudProviderBackoffRetries": 0,
"cloudProviderBackoffExponent": 0,
"cloudProviderBackoffDuration": 0,
"cloudProviderBackoffJitter": 0,
"cloudProviderRatelimit": false,
"cloudProviderRateLimitQPS": 0,
"cloudProviderRateLimitBucket": 0,
"useManagedIdentityExtension": false,
"useInstanceMetadata": true
}
```
There are some optional values for Azure deployments:
- `"primaryAvailabilitySetName": "****",` - The Worker Nodes availability set.
- `"vnetResourceGroup": "****",` - If your Azure Network objects live in a
seperate resource group.
- `"routeTableName": "****",` - If you have defined multiple Route tables within
an Azure subnet.
More details on this configuration file can be found
[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go).
## Considerations for IPAM Configuration
The subnet and the virtual network associated with the primary interface of
the Azure VMs need to be configured with a large enough address prefix/range.
The number of required IP addresses depends on the number of pods running
on each node and the number of nodes in the cluster.
For example, in a cluster of 256 nodes, to run a maximum of 128 pods
concurrently on a node, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 IP addresses, _in addition to_
initial IP allocations to VM NICs during Azure resource creation.
Accounting for IP addresses that are allocated to NICs during VM bring-up, set
the address space of the subnet and virtual network to 10.0.0.0/16. This
ensures that the network can dynamically allocate at least 32768 addresses,
plus a buffer for initial allocations for primary IP addresses.
> Azure IPAM, UCP, and Kubernetes
>
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
> a list of IP addresses that are assigned to the virtual machine's NICs. The
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
> IP addresses as `ipConfigurations` in the NICs associated with a virtual machine
> or scale set member, so that Azure IPAM can provide them to Kubernetes when
> requested.
{: .important}
#### Additional Notes
- The `IP_COUNT` variable defines the subnet size for each node's pod IPs. This subnet size is the same for all hosts.
- The Kubernetes `pod-cidr` must match the Azure VNet of the hosts.
## Configure IP pools for Azure stand-alone VMs
Follow these steps once the underlying infrastructure has been provisioned.
### Configure multiple IP addresses per VM NIC
Follow the steps below to configure multiple IP addresses per VM NIC.
1. Initialize a swarm cluster comprising the virtual machines you created
earlier. On one of the nodes of the cluster, run:
```bash
docker swarm init
```
2. Note the tokens for managers and workers. You may retrieve the join tokens
at any time by running `$ docker swarm join-token manager` or `$ docker swarm
join-token worker` on the manager node.
3. Join two other nodes on the cluster as manager (recommended for HA) by running:
```bash
docker swarm join --token <manager-token>
```
4. Join remaining nodes on the cluster as workers:
```bash
docker swarm join --token <worker-token>
```
5. Create a file named `azure_ucp_admin.toml` that contains the values from the
Service Principal you created.
```
$ cat > azure_ucp_admin.toml <<EOF
AZURE_CLIENT_ID = "<AD App ID field from Step 1>"
AZURE_TENANT_ID = "<AD Tenant ID field from Step 1>"
AZURE_SUBSCRIPTION_ID = "<Azure subscription ID>"
AZURE_CLIENT_SECRET = "<AD App Secret field from Step 1>"
EOF
```
6. Create a Docker Swarm secret based on the "azure_ucp_admin.toml" file.
```bash
docker secret create azure_ucp_admin.toml azure_ucp_admin.toml
```
7. Create a global swarm service using the [docker4x/az-nic-ips](https://hub.docker.com/r/docker4x/az-nic-ips/)
image on Docker Hub. Use the Swarm secret to prepopulate the virtual machines
with the desired number of IP addresses per VM from the VNET pool. Set the
number of IPs to allocate to each VM through the IP_COUNT environment variable.
For example, to configure 128 IP addresses per VM, run the following command:
```bash
docker service create \
--mode=global \
--secret=azure_ucp_admin.toml \
--log-driver json-file \
--log-opt max-size=1m \
--env IP_COUNT=128 \
--name ipallocator \
--constraint "node.platform.os == linux" \
docker4x/az-nic-ips
```
## Set up IP configurations on an Azure virtual machine scale set
Configure IP Pools for each member of the VM scale set during provisioning by
associating multiple `ipConfigurations` with the scale set's
`networkInterfaceConfigurations`. Here's an example `networkProfile`
configuration for an ARM template that configures pools of 32 IP addresses
for each VM in the VM scale set.
```json
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[variables('nicName')]",
"properties": {
"ipConfigurations": [
{
"name": "[variables('ipConfigName1')]",
"properties": {
"primary": "true",
"subnet": {
"id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
},
"loadBalancerBackendAddressPools": [
{
"id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/backendAddressPools/', variables('bePoolName'))]"
}
],
"loadBalancerInboundNatPools": [
{
"id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('loadBalancerName'), '/inboundNatPools/', variables('natPoolName'))]"
}
]
}
},
{
"name": "[variables('ipConfigName2')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
}
}
}
.
.
.
{
"name": "[variables('ipConfigName32')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
}
}
}
],
"primary": "true"
}
}
]
}
```
## Install UCP
Use the following command to install UCP on the manager node.
The `--pod-cidr` option maps to the IP address range that you configured for
the subnets in the previous sections, and the `--host-address` maps to the
IP address of the master node.
```bash
docker container run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
{{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
--host-address <ucp-ip> \
--interactive \
--swarm-port 3376 \
--pod-cidr <ip-address-range> \
--cloud-provider Azure
```

View File

@ -17,7 +17,7 @@ copy this package to the host where you upgrade UCP.
Use a computer with internet access to download the UCP package from the
following links.
{% include components/ddc_url_list_2.html product="ucp" version="2.2" %}
{% include components/ddc_url_list_2.html product="ucp" version="3.0" %}
## Download the offline package

View File

@ -19,9 +19,13 @@ in each node of the swarm to version 17.06 Enterprise Edition. Plan for the
upgrade to take place outside of business hours, to ensure there's minimal
impact to your users.
Also, don't make changes to UCP configurations while you're upgrading it.
Don't make changes to UCP configurations while you're upgrading.
This can lead to misconfigurations that are difficult to troubleshoot.
> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
> Azure, ensure all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites)
> are met.
## Back up your swarm
Before starting an upgrade, make sure that your swarm is healthy. If a problem
@ -48,7 +52,7 @@ Starting with the manager nodes, and then worker nodes:
2. Upgrade the Docker Engine to version 17.06 or higher.
3. Make sure the node is healthy.
In your browser, navigate to the **Nodes** page in the UCP web UI,
In your browser, navigate to the **Nodes** page in the UCP web interface,
and check that the node is healthy and is part of the swarm.
> Swarm mode
@ -58,9 +62,9 @@ Starting with the manager nodes, and then worker nodes:
## Upgrade UCP
You can upgrade UCP from the web UI or the CLI.
You can upgrade UCP from the web interface or the CLI.
### Use the UI to perform an upgrade
### Use the web interface to perform an upgrade
When an upgrade is available for a UCP installation, a banner appears.
@ -76,13 +80,13 @@ Select a version to upgrade to using the **Available UCP Versions** dropdown,
then click to upgrade.
Before the upgrade happens, a confirmation dialog along with important
information regarding swarm and UI availability is displayed.
information regarding swarm and interface availability is displayed.
![](../../images/upgrade-ucp-3.png){: .with-border}
During the upgrade, the UI is unavailable, so wait until the upgrade is complete
before trying to use the UI. When the upgrade completes, a notification alerts
you that a newer version of the UI is available, and you can see the new UI
During the upgrade, the interface is unavailable, so wait until the upgrade is complete
before trying to use the interface. When the upgrade completes, a notification alerts
you that a newer version of the interface is available, and you can see the new interface
after you refresh your browser.
### Use the CLI to perform an upgrade
@ -103,7 +107,7 @@ $ docker container run --rm -it \
This runs the upgrade command in interactive mode, so that you are prompted
for any necessary configuration values.
Once the upgrade finishes, navigate to the UCP web UI and make sure that
Once the upgrade finishes, navigate to the UCP web interface and make sure that
all the nodes managed by UCP are healthy.
## Recommended upgrade paths

View File

@ -99,7 +99,8 @@ $ docker build --progress=plain .
## Overriding default frontends
To override the default frontend, set the first line of the Dockerfile as a comment with a specific frontend image:
The new syntax features in `Dockerfile` are available if you override the default frontend. To override
the default frontend, set the first line of the `Dockerfile` as a comment with a specific frontend image:
```
# syntax = <frontend image>, e.g. # syntax = docker/dockerfile:1.0-experimental
```
@ -151,3 +152,40 @@ $ docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt
#9 duration: 1.470401133s
...
```
## Using SSH to access private data in builds
> **Acknowledgment**:
> Please see [Build secrets and SSH forwarding in Docker 18.09](https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066)
> for more information and examples.
The `docker build` command has a `--ssh` option that allows the Docker Engine to forward SSH agent connections. For more information
on the SSH agent, see the [OpenSSH man page](https://man.openbsd.org/ssh-agent).
Only the commands in the `Dockerfile` that have explicitly requested SSH access by defining a `type=ssh` mount have
access to SSH agent connections. The other commands have no knowledge of any SSH agent being available.
To request SSH access for a `RUN` command in the `Dockerfile`, define a mount with type `ssh`. This sets up the
`SSH_AUTH_SOCK` environment variable so that programs relying on SSH automatically use that socket.
Here is an example Dockerfile using SSH in the container:
```Dockerfile
# syntax=docker/dockerfile:experimental
FROM alpine
# Install ssh client and git
RUN apk add --no-cache openssh-client git
# Download public key for github.com
RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Clone private repository
RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
```
Once the `Dockerfile` is created, use the `--ssh` option for connectivity with the SSH agent.
```bash
$ docker build --ssh default .
```
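If no agent is running, the `--ssh` value can also point at a key file directly; the sketch below assumes an unencrypted key at `~/.ssh/id_rsa`:

```bash
$ docker build --ssh default=$HOME/.ssh/id_rsa .
```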

View File

@ -129,7 +129,7 @@ EOF
```
docker build -t foo https://github.com/thajeztah/pgadmin4-docker.git -f-<<EOF
FROM busybox
COPY LICENSE config_local.py /usr/local/lib/python2.7/site-packages/pgadmin4/
COPY LICENSE config_distro.py /usr/local/lib/python2.7/site-packages/pgadmin4/
EOF
```

View File

@ -246,3 +246,8 @@ Console](https://aws.amazon.com/){: target="_blank" class="_"}, navigate to
the Docker stack you want to remove.
![uninstall](img/aws-delete-stack.png)
Stack removal does not remove EBS and EFS volumes created by the cloudstor
volume plugin or the S3 bucket associated with DTR. Those resources must be
removed manually. See the [cloudstor](/docker-for-aws/persistent-data-volumes/#list-or-remove-volumes-created-by-cloudstor)
docs for instructions on removing volumes.
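As a sketch, assuming the plugin is installed under the name `cloudstor:aws`, you can list the leftover volumes and remove the ones you no longer need:

```bash
# List volumes created by the cloudstor plugin
docker volume ls --filter driver=cloudstor:aws

# Remove a volume that is no longer needed
docker volume rm <volume-name>
```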

View File

@ -62,6 +62,9 @@ With Docker for Mac, you only get (and only usually need) one VM, managed by Doc
for Mac. Docker for Mac automatically upgrades the Docker client and
daemon when updates are available.
Also note that Docker for Mac can't route traffic to containers, so you can't
directly access an exposed port on a running container from the hosting machine.
If you do need multiple VMs, such as when testing multi-node swarms, you can
continue to use Docker Machine, which operates outside the scope of Docker for
Mac. See [Docker Toolbox and Docker for Mac

View File

@ -18,6 +18,14 @@ for Mac](install.md#download-docker-for-mac).
## Edge Releases of 2018
### Docker Community Edition 2.0.0.0-mac82 2018-12-07
[Download](https://download.docker.com/mac/edge/29268/Docker.dmg)
* Upgrades
- [Docker Compose 1.23.2](https://github.com/docker/compose/releases/tag/1.23.2)
- [Docker Machine 0.16.0](https://github.com/docker/machine/releases/tag/v0.16.0)
### Docker Community Edition 2.0.0.0-mac77 2018-11-14
[Download](https://download.docker.com/mac/edge/28700/Docker.dmg)

View File

@ -20,6 +20,13 @@ for Mac](install.md#download-docker-for-mac).
## Stable Releases of 2018
### Docker Community Edition 2.0.0.0-mac81 2018-12-07
[Download](https://download.docker.com/mac/stable/29211/Docker.dmg)
* Upgrades
- [Docker Compose 1.23.2](https://github.com/docker/compose/releases/tag/1.23.2)
### Docker Community Edition 2.0.0.0-mac78 2018-11-19
[Download](https://download.docker.com/mac/stable/28905/Docker.dmg)

View File

@ -18,6 +18,16 @@ for Windows](install.md#download-docker-for-windows).
## Edge Releases of 2018
### Docker Community Edition 2.0.0.0-win82 2018-12-07
[Download](https://download.docker.com/win/edge/29268/Docker%20for%20Windows%20Installer.exe)
* Upgrades
- [Docker Compose 1.23.2](https://github.com/docker/compose/releases/tag/1.23.2)
* Bug fixes and minor changes
- Compose: Fixed a bug where build context URLs would fail to build on Windows. Fixes [docker/for-win#2918](https://github.com/docker/for-win/issues/2918)
### Docker Community Edition 2.0.0.0-win77 2018-11-14
[Download](https://download.docker.com/win/edge/28777/Docker%20for%20Windows%20Installer.exe)

View File

@ -20,6 +20,16 @@ for Windows](install.md#download-docker-for-windows).
## Stable Releases of 2018
### Docker Community Edition 2.0.0.0-win81 2018-12-07
[Download](https://download.docker.com/win/stable/29211/Docker%20for%20Windows%20Installer.exe)
* Upgrades
- [Docker Compose 1.23.2](https://github.com/docker/compose/releases/tag/1.23.2)
* Bug fixes and minor changes
- Compose: Fixed a bug where build context URLs would fail to build on Windows. Fixes [docker/for-win#2918](https://github.com/docker/for-win/issues/2918)
### Docker Community Edition 2.0.0.0-win78 2018-11-19
[Download](https://download.docker.com/win/stable/28905/Docker%20for%20Windows%20Installer.exe)

View File

@ -41,7 +41,7 @@ it (for example `docs/base:testing`). If it's not specified, the tag defaults to
You can name your local images either when you build them, using
`docker build -t <hub-user>/<repo-name>[:<tag>]`,
by re-tagging an existing local image `docker tag <existing-image> <hub-user>/<repo-name>[:<tag>]`,
or by using `docker commit <exiting-container> <hub-user>/<repo-name>[:<tag>]` to commit
or by using `docker commit <existing-container> <hub-user>/<repo-name>[:<tag>]` to commit
changes.
Now you can push this repository to the registry designated by its name or tag.
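For example, a typical tag-and-push sequence looks like the following sketch; the local image and repository names are placeholders:

```bash
$ docker tag my-local-image:latest <hub-user>/<repo-name>:mytag
$ docker push <hub-user>/<repo-name>:mytag
```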

View File

@ -49,7 +49,7 @@ Docker Hub Webhook payloads have the following payload JSON format:
"comment_count": 0,
"date_created": 1.417494799e+09,
"description": "",
"dockerfile": "#\n# BUILD\u0009\u0009docker build -t svendowideit/apt-cacher .\n# RUN\u0009\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# \u0009\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\n#\nFROM\u0009\u0009ubuntu\n\n\nVOLUME\u0009\u0009[\/var/cache/apt-cacher-ng\]\nRUN\u0009\u0009apt-get update ; apt-get install -yq apt-cacher-ng\n\nEXPOSE \u0009\u00093142\nCMD\u0009\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\n",
"dockerfile": "#\n# BUILD\u0009\u0009docker build -t svendowideit/apt-cacher .\n# RUN\u0009\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# \u0009\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\n#\nFROM\u0009\u0009ubuntu\n\n\nVOLUME\u0009\u0009[/var/cache/apt-cacher-ng]\nRUN\u0009\u0009apt-get update ; apt-get install -yq apt-cacher-ng\n\nEXPOSE \u0009\u00093142\nCMD\u0009\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\n",
"full_description": "Docker Hub based automated build from a GitHub repo",
"is_official": false,
"is_private": true,

Binary files not shown (12 images updated; file sizes reduced).
View File

@ -4,21 +4,18 @@ description: Learn how to delete images from Docker Trusted Registry.
keywords: registry, delete
---
To delete an image, go to the **DTR web UI**, and navigate to the image
**repository** you want to delete. In the **Tags** tab, select all the image
To delete an image, navigate to the **Tags** tab of the repository page on the DTR web interface.
In the **Tags** tab, select all the image
tags you want to delete, and click the **Delete** button.
![](../../images/delete-images-1.png){: .with-border}
You can also delete all image versions, by deleting the repository. For that,
in the image **repository**, navigate to the **Settings** tab, and click the
**Delete** button.
You can also delete all image versions by deleting the repository. To delete a repository, navigate to **Settings** and click **Delete** under "Delete Repository".
## Delete signed images
DTR only allows deleting images if that image hasn't been signed. You first
need to delete all the trust data associated with the image. Then you'll
be able to delete it.
DTR only allows deleting images if the image has not been signed. You first
need to delete all the trust data associated with the image before you are able to delete the image.
![](../../images/delete-images-2.png){: .with-border}
@ -33,11 +30,11 @@ There are three steps to delete a signed image:
To find which roles signed an image, you first need to learn which roles
are trusted to sign the image.
[Set up your Notary client](../../access-dtr/configure-your-notary-client.md),
[Set up your Notary client](/ee/dtr/user/manage-images/sign-images/#configure-your-notary-client),
and run:
```
notary delegation list dtr.example.org/library/wordpress
notary delegation list dtr-example.com/library/wordpress
```
In this example, the repository owner delegated trust to the
@ -55,10 +52,10 @@ you can learn which roles actually signed it:
```
# Check if the image was signed by the "targets" role
notary list dtr.example.org/library/wordpress
notary list dtr-example.com/library/wordpress
# Check if the image was signed by a specific role
notary list dtr.example.org/library/wordpress --roles <role-name>
notary list dtr-example.com/library/wordpress --roles <role-name>
```
In this example the image was signed by three roles: `targets`,
@ -73,7 +70,7 @@ to do this operation.
For each role that signed the image, run:
```
notary remove dtr.example.org/library/wordpress <tag> \
notary remove dtr-example.com/library/wordpress <tag> \
--roles <role-name> --publish
```

View File

@ -44,7 +44,7 @@ name of our repository will be `dtr-example.com/test-user-1/wordpress`.
> Immutable Tags and Tag Limit
>
> Starting in DTR 2.6, repository admins can enable tag pruning by [setting a tag limit](tag-pruning/#set-a-tag-limit). This can only be set if you turn off **Immutability** and allow your repository tags to be overwritten.
> Starting in DTR 2.6, repository admins can enable tag pruning by [setting a tag limit](../tag-pruning/#set-a-tag-limit). This can only be set if you turn off **Immutability** and allow your repository tags to be overwritten.
> Image name size for DTR
>

View File

DTR scans your images for vulnerabilities, but sometimes it can report that
your image has vulnerabilities you know have been fixed. If that happens, you
can dismiss the warning.
In the **DTR web UI**, navigate to the repository that has been scanned.
In the **DTR web interface**, navigate to the repository that has been scanned.
![Tag list](../../images/override-vulnerability-1.png){: .with-border}
![](../../images/scan-images-for-vulns-3.png){: .with-border}
Click **View details** for the image you want to see the scan results, and
Click **View details** to review the image scan results, and
choose **Components** to see the vulnerabilities for each component packaged
in the image.
@ -23,7 +23,7 @@ vulnerability, and click **hide**.
![Vulnerability list](../../images/override-vulnerability-2.png){: .with-border}
The vulnerability is hidden system-wide and will no longer be reported as a vulnerability
on other affected images with the same layer IDs or digests.
on affected images with the same layer IDs or digests.
After dismissing a vulnerability, DTR will not reevaluate the promotion policies
you have set up for the repository.

View File

@ -6,7 +6,7 @@ redirect_from:
- /datacenter/dtr/2.5/guides/user/manage-images/pull-and-push-images/
---
{% assign domain="dtr.example.org" %}
{% assign domain="dtr-example.com" %}
{% assign org="library" %}
{% assign repo="wordpress" %}
{% assign tag="latest" %}
@ -25,11 +25,11 @@ from Docker Hub or any other registry. Since DTR is secure by default, you
always need to authenticate before pulling images.
In this example, DTR can be accessed at {{ domain }}, and the user
was granted permissions to access the NGINX, and Wordpress repositories.
was granted permissions to access the `nginx` and `wordpress` repositories in the `library` organization.
![](../../images/pull-push-images-1.png){: .with-border}
Click on the repository to see its details.
Click on the repository name to see its details.
![](../../images/pull-push-images-2.png){: .with-border}
@ -70,7 +70,7 @@ docker login {{ domain }}
docker push {{ domain }}/{{ org }}/{{ repo }}:{{ tag }}
```
Go back to the **DTR web UI** to validate that the tag was successfully pushed.
On the web interface, navigate to the **Tags** tab on the repository page to confirm that the tag was successfully pushed.
![](../../images/pull-push-images-3.png){: .with-border}

View File

@ -8,7 +8,7 @@ keywords: registry, scan, vulnerability
Docker Trusted Registry can scan images in your repositories to verify that they
are free from known security vulnerabilities or exposures, using Docker Security
Scanning. The results of these scans are reported for each image tag.
Scanning. The results of these scans are reported for each image tag in a repository.
Docker Security Scanning is available as an add-on to Docker Trusted Registry,
and an administrator configures it for your DTR instance. If you do not see
@ -22,7 +22,7 @@ a new scan.
## The Docker Security Scan process
Scans run either on demand when a user clicks the **Start a Scan** links or
Scans run either on demand when you click the **Start a Scan** link or
**Scan** button (see [Manual scanning](#manual-scanning) below), or automatically
on any `docker push` to the repository.
@ -30,7 +30,7 @@ First the scanner performs a binary scan on each layer of the image, identifies
the software components in each layer, and indexes the SHA of each component in
a bill-of-materials. A binary scan evaluates the components on a bit-by-bit
level, so vulnerable components are discovered even if they are
statically-linked or under a different name.
statically linked or under a different name.
The scan then compares the SHA of each component against the US National
Vulnerability Database that is installed on your DTR instance. When
@ -49,15 +49,15 @@ image repository.
If your DTR instance is configured in this way, you do not need to do anything
once your `docker push` completes. The scan runs automatically, and the results
are reported in the repository's **Images** tab after the scan finishes.
are reported in the repository's **Tags** tab after the scan finishes.
## Manual scanning
If your repository owner enabled Docker Security Scanning but disabled automatic
scanning, you can manually start a scan for images in repositories to which you
have `write` access.
scanning, you can manually start a scan for images in repositories you
have `write` access to.
To start a security scan, navigate to the **tag details**, and click the **Scan** button.
To start a security scan, navigate to the repository **Tags** tab on the web interface, click "View details" next to the relevant tag, and click **Scan**.
![](../../images/scan-images-for-vulns-1.png){: .with-border}
@ -85,33 +85,33 @@ To change the repository scanning mode:
Once DTR has run a security scan for an image, you can view the results.
The **Images** tab for each repository includes a summary of the most recent
The **Tags** tab for each repository includes a summary of the most recent
scan results for each image.
![](../../images/scan-images-for-vulns-3.png){: .with-border}
- A green shield icon with a check mark indicates that the scan did not find
- The text "Clean" in green indicates that the scan did not find
any vulnerabilities.
- A red or orange shield icon indicates that vulnerabilities were found, and
the number of vulnerabilities is included on that same line.
- Red or orange text indicates that vulnerabilities were found, and
the number of vulnerabilities is listed on that same line by severity: ***Critical***, ***Major***, ***Minor***.
If the vulnerability scan can't detect the version of a component, it reports
If the vulnerability scan could not detect the version of a component, it reports
the vulnerabilities for all versions of that component.
From the **Images** tab you can click **View details** for a specific tag to see
From the repository **Tags** tab, you can click **View details** for a specific tag to see
the full scan results. The top of the page also includes metadata about the
image, including the SHA, image size, date last pushed and user who last pushed,
image, including the SHA, image size, last push date, user who initiated the push,
the security scan summary, and the security scan progress.
The scan results for each image include two different modes so you can quickly
view details about the image, its components, and any vulnerabilities found.
- The **Layers** view lists the layers of the image in order as they are built
by the Dockerfile.
- The **Layers** view lists the layers of the image in the order that they are built
by the Dockerfile.
This view can help you find exactly which command in the build introduced
the vulnerabilities, and which components are associated with that single
command. Click a layer to see a summary of its components. You can then
click on a component to switch to the Component view and get more details
click on a component to switch to the **Component** view and get more details
about the specific item.
> **Tip**: The layers view can be long, so be sure
@ -120,8 +120,7 @@ by the Dockerfile.
![](../../images/scan-images-for-vulns-4.png){: .with-border}
- The **Components** view lists the individual component libraries indexed by
the scanning system, in order of severity and number of vulnerabilities found,
most vulnerable first.
the scanning system, in order of severity and number of vulnerabilities found, with the most vulnerable library listed first.
Click on an individual component to view details about the vulnerability it
introduces, including a short summary and a link to the official CVE
@ -139,18 +138,17 @@ vulnerability and decide what to do.
If you discover vulnerable components, you should check if there is an updated
version available where the security vulnerability has been addressed. If
necessary, you might contact the component's maintainers to ensure that the
vulnerability is being addressed in a future version or patch update.
necessary, you can contact the component's maintainers to ensure that the
vulnerability is being addressed in a future version or a patch update.
If the vulnerability is in a `base layer` (such as an operating system) you
might not be able to correct the issue in the image. In this case, you might
switch to a different version of the base layer, or you might find an
equivalent, less vulnerable base layer. You might also decide that the
vulnerability or exposure is acceptable.
might not be able to correct the issue in the image. In this case, you can
switch to a different version of the base layer, or you can find an
equivalent, less vulnerable base layer.
Address vulnerabilities in your repositories by updating the images to use
updated and corrected versions of vulnerable components, or by using a different
components that provide the same functionality. When you have updated the source
component offering the same functionality. When you have updated the source
code, run a build to create a new image, tag the image, and push the updated
image to your DTR instance. You can then re-scan the image to confirm that you
have addressed the vulnerabilities.

View File

@ -176,7 +176,7 @@ components. Assigning these values overrides the settings in a container's
| `profiling_enabled` | no | Set to `true` to enable specialized debugging endpoints for profiling UCP performance. The default is `false`. |
| `kv_timeout` | no | Sets the key-value store timeout setting, in milliseconds. The default is `5000`. |
| `kv_snapshot_count` | no | Sets the key-value store snapshot count setting. The default is `20000`. |
| `external_service_lb` | no | Specifies an optional external load balancer for default links to services with exposed ports in the web interface. |
| `external_service_lb` | no | Specifies an optional external load balancer for default links to services with exposed ports in the web interface. |
| `cni_installer_url` | no | Specifies the URL of a Kubernetes YAML file to be used for installing a CNI plugin. Applies only during initial installation. If empty, the default CNI plugin is used. |
| `metrics_retention_time` | no | Adjusts the metrics retention time. |
| `metrics_scrape_interval` | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster. |
@ -184,11 +184,14 @@ components. Assigning these values overrides the settings in a container's
| `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 512MB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. |
| `cloud_provider` | no | Set the cloud provider for the kubernetes cluster. |
| `pod_cidr` | yes | Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. Default is `192.168.0.0/16`. |
| `calico_mtu` | no | Set the MTU (maximum transmission unit) size for the Calico plugin. |
| `ipip_mtu` | no | Set the IPIP MTU size for the Calico IPIP tunnel interface. |
| `azure_ip_count` | no | Set the number of IP addresses for the Azure allocator to provision per Azure virtual machine. |
| `nodeport_range` | yes | Set the port range in which Kubernetes services of type NodePort can be exposed. Default is `32768-35535`. |
| `custom_kube_api_server_flags` | no | Set the configuration options for the Kubernetes API server. |
| `custom_kube_controller_manager_flags` | no | Set the configuration options for the Kubernetes controller manager |
| `custom_kubelet_flags` | no | Set the configuration options for Kubelets |
| `custom_kube_scheduler_flags` | no | Set the configuration options for the Kubernetes scheduler |
| `custom_kube_controller_manager_flags` | no | Set the configuration options for the Kubernetes controller manager. |
| `custom_kubelet_flags` | no | Set the configuration options for Kubelets. |
| `custom_kube_scheduler_flags` | no | Set the configuration options for the Kubernetes scheduler. |
| `local_volume_collection_mapping` | no | Store data about collections for volumes in UCP's local KV store instead of on the volume labels. This is used for enforcing access control on volumes. |
| `manager_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on manager nodes. |
| `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
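
As a rough sketch only (assuming these options live under the `[cluster_config]` table and take string values like the other options in the file; the values shown are illustrative, not authoritative defaults), the Kubernetes-related settings might look like this in the configuration file:

```toml
[cluster_config]
  # Illustrative values; adjust for your environment
  cloud_provider = "Azure"
  pod_cidr = "192.168.0.0/16"
  nodeport_range = "32768-35535"
  calico_mtu = "1480"
  azure_ip_count = "128"
```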

View File

@ -86,7 +86,7 @@ To install UCP:
This runs the install command in interactive mode, so that you're
prompted for any necessary configuration values.
To find what other options are available in the install command, check the
[reference documentation](/reference/ucp/3.0/cli/install.md).
[reference documentation](/reference/ucp/3.1/cli/install.md).
> Custom CNI plugins
>

View File

@ -4,37 +4,125 @@ description: Learn how to install Docker Universal Control Plane in a Microsoft
keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
---
Docker UCP closely integrates with Microsoft Azure for its Kubernetes networking
and persistent storage feature set. UCP deploys the Calico CNI provider. In Azure,
the Calico CNI leverages the Azure networking infrastructure for data path
networking and the Azure IPAM for IP address management. Several
infrastructure prerequisites must be met prior to UCP installation for the
Calico / Azure integration.
## Docker UCP Networking
Docker UCP configures the Azure IPAM module for Kubernetes to allocate
IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
VM that's part of the Kubernetes cluster to be configured with a pool of
IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
virtual machine that's part of the Kubernetes cluster to be configured with a pool of
IP addresses.
You have two options for deploying the VMs for the Kubernetes cluster on Azure:
- Install the cluster on Azure stand-alone virtual machines. Docker UCP provides
an [automated mechanism](#configure-ip-pools-for-azure-stand-alone-vms)
to configure and maintain IP pools for stand-alone Azure VMs.
- Install the cluster on an Azure virtual machine scale set. Configure the
IP pools by using an ARM template like [this one](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).
There are two options for provisioning IPs for the Kubernetes cluster on Azure:
- Docker UCP provides an automated mechanism to configure and maintain IP pools
for standalone Azure virtual machines. This service runs within the `calico-node` daemonset
and by default provisions 128 IP addresses for each node. This value can be
configured through the `azure_ip_count` option in the UCP
[configuration file](../configure/ucp-configuration-file) before or after the
UCP installation. Note that if this value is reduced post-installation, existing
virtual machines are not reconciled, and you have to manually edit the IP count
in Azure.
- Manually provision additional IP addresses for each Azure virtual machine. This can be done
as part of an Azure Virtual Machine Scale Set through an ARM template. You can find an example [here](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).
Note that the `azure_ip_count` value in the UCP
[configuration file](../configure/ucp-configuration-file) must be set
to 0, otherwise UCP's IP allocator service provisions IP addresses on top of
those you have already provisioned.
The steps for setting up IP address management are different in the two
environments. If you're using a scale set, you set up `ipConfigurations`
in an ARM template. If you're using stand-alone VMs, you set up IP pools
for each VM by using a utility container that's configured to run as a
global Swarm service, which Docker provides.
## Azure Prerequisites
## Considerations for size of IP pools
You must meet the following infrastructure prerequisites in order
to successfully deploy Docker UCP on Azure:
- All UCP Nodes (Managers and Workers) need to be deployed into the same
Azure Resource Group. The Azure Networking (Vnets, Subnets, Security Groups)
components could be deployed in a second Azure Resource Group.
- All UCP Nodes (Managers and Workers) need to be attached to the same
Azure Subnet.
- All UCP nodes (managers and workers) need to be tagged in Azure with the
`Orchestrator` tag. The value for this tag is the Kubernetes version number
in the format `Orchestrator=Kubernetes:x.y.z`. This value may change with each
UCP release. To find the relevant version, see the UCP
[Release Notes](../../release-notes). For example, for UCP 3.1.0 the tag
would be `Orchestrator=Kubernetes:1.11.2`.
- The Azure Computer Name needs to match the node operating system's hostname.
Note that this applies to the FQDN of the host, including domain names.
- An Azure Service Principal with `Contributor` access to the Azure Resource
Group hosting the UCP nodes. Note: if using a separate networking Resource
Group, the same Service Principal needs `Network Contributor` access to that
Resource Group.
UCP requires the following information for the installation:
- `subscriptionId` - The Azure Subscription ID in which the UCP
objects are being deployed.
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
objects are being deployed.
- `aadClientId` - The Azure Service Principal ID
- `aadClientSecret` - The Azure Service Principal Secret Key
### Azure Configuration File
For Docker UCP to integrate with Microsoft Azure, you need to place an Azure configuration file
on each UCP node in your cluster, at `/etc/kubernetes/azure.json`.
See the template below. Note that entries which do not contain `****` should not be
changed.
```
{
"cloud":"AzurePublicCloud",
"tenantId": "***",
"subscriptionId": "***",
"aadClientId": "***",
"aadClientSecret": "***",
"resourceGroup": "***",
"location": "****",
"subnetName": "/****",
"securityGroupName": "****",
"vnetName": "****",
"cloudProviderBackoff": false,
"cloudProviderBackoffRetries": 0,
"cloudProviderBackoffExponent": 0,
"cloudProviderBackoffDuration": 0,
"cloudProviderBackoffJitter": 0,
"cloudProviderRatelimit": false,
"cloudProviderRateLimitQPS": 0,
"cloudProviderRateLimitBucket": 0,
"useManagedIdentityExtension": false,
"useInstanceMetadata": true
}
```
There are some optional values for Azure deployments:
- `"primaryAvailabilitySetName": "****",` - The Worker Nodes availability set.
- `"vnetResourceGroup": "****",` - If your Azure Network objects live in a
seperate resource group.
- `"routeTableName": "****",` - If you have defined multiple Route tables within
an Azure subnet.
More details on this configuration file can be found
[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go).
## Considerations for IPAM Configuration
The subnet and the virtual network associated with the primary interface of
the Azure VMs need to be configured with a large enough address prefix/range.
the Azure virtual machines need to be configured with a large enough address prefix/range.
The number of required IP addresses depends on the number of pods running
on each node and the number of nodes in the cluster.
For example, in a cluster of 256 nodes, to run a maximum of 128 pods
concurrently on a node, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 IP addresses, _in addition to_
initial IP allocations to VM NICs during Azure resource creation.
initial IP allocations to virtual machine NICs during Azure resource creation.
Accounting for IP addresses that are allocated to NICs during VM bring-up, set
Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set
the address space of the subnet and virtual network to 10.0.0.0/16. This
ensures that the network can dynamically allocate at least 32768 addresses,
plus a buffer for initial allocations for primary IP addresses.
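For reference, a virtual network and subnet with that address space can be
created with the Azure CLI along the following lines. The resource group,
names, and region are placeholders, and flag names can vary slightly between
CLI versions.
```bash
# Create a virtual network and subnet large enough for per-pod IP allocation
az network vnet create \
  --resource-group my-ucp-rg \
  --location eastus \
  --name ucp-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name ucp-subnet \
  --subnet-prefix 10.0.0.0/16
```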
@ -49,98 +137,13 @@ plus a buffer for initial allocations for primary IP addresses.
> requested.
{: .important}
## Configure IP pools for Azure stand-alone VMs
Follow these steps when the cluster is deployed using stand-alone Azure VMs.
### Create an Azure resource group
Create an Azure resource group with VMs representing the nodes of the cluster
by using the Azure Portal, CLI, or ARM template.
### Configure multiple IP addresses per VM NIC
Follow the steps below to configure multiple IP addresses per VM NIC.
1. Create a Service Principal with `Contributor` access to the resource group
you just created. You can do this by using the Azure Portal or CLI.
Alternatively, you can use a utility container from Docker to create the
Service Principal. If you have the Docker Engine installed, run the
`docker4x/create-sp-azure` image. The output of `create-sp-azure` contains
the following fields near the end.
```
AD App ID: <...>
AD App Secret: <...>
AD Tenant ID: <...>
```
You'll use these field values in a later step, so make a note of them.
Also, make note of your Azure subscription ID.
2. Initialize a swarm cluster comprising the virtual machines you created
earlier. On one of the nodes of the cluster, run:
```bash
docker swarm init
```
3. Note the tokens for managers and workers.
4. Join two other nodes to the cluster as managers (recommended for HA) by running:
```bash
docker swarm join --token <manager-token> <manager-ip>:2377
```
5. Join the remaining nodes to the cluster as workers:
```bash
docker swarm join --token <worker-token> <manager-ip>:2377
```
6. Create a file named "azure_ucp_admin.toml" that contains the values from
the Service Principal you created in step 1.
```
AZURE_CLIENT_ID = "<AD App ID field from Step 1>"
AZURE_TENANT_ID = "<AD Tenant ID field from Step 1>"
AZURE_SUBSCRIPTION_ID = "<Azure subscription ID>"
AZURE_CLIENT_SECRET = "<AD App Secret field from Step 1>"
```
7. Create a Docker Swarm secret based on the "azure_ucp_admin.toml" file.
```bash
docker secret create azure_ucp_admin.toml azure_ucp_admin.toml
```
8. Create a global swarm service using the [docker4x/az-nic-ips](https://hub.docker.com/r/docker4x/az-nic-ips/)
image on Docker Hub. Use the Swarm secret to prepopulate the virtual machines
with the desired number of IP addresses per VM from the VNet pool. Set the
number of IPs to allocate to each VM through the `IP_COUNT` environment variable.
For example, to configure 128 IP addresses per VM, run the following command
(a quick verification sketch follows it):
```bash
docker service create \
--mode=global \
--secret=azure_ucp_admin.toml \
--log-driver json-file \
--log-opt max-size=1m \
--env IP_COUNT=128 \
--name ipallocator \
--constraint "node.platform.os == linux" \
docker4x/az-nic-ips
```
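Once the service is created, a quick check that the allocator is running on
every node might look like this, using the service name from the command above:
```bash
# One ipallocator task should be running per node
docker service ps ipallocator

# Inspect recent allocator output for errors
docker service logs --tail 20 ipallocator
```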
[Install UCP on the cluster](#install-ucp-on-the-cluster).
## Manually provision IP addresses as part of an Azure virtual machine scale set
Configure IP pools for each member of the virtual machine scale set during
provisioning by associating multiple `ipConfigurations` with the scale set's
`networkInterfaceConfigurations`. Here's an example `networkProfile`
configuration for an ARM template that configures pools of 32 IP addresses
for each virtual machine in the virtual machine scale set.
```json
"networkProfile": {
@ -195,7 +198,7 @@ for each VM in the VM scale set.
}
```
## Install UCP on the cluster
Use the following command to install UCP on the manager node.
The `--pod-cidr` option maps to the IP address range that you configured for
@ -213,3 +216,7 @@ docker container run --rm -it \
--pod-cidr <ip-address-range> \
--cloud-provider Azure
```
#### Additional Notes
- The Kubernetes `pod-cidr` must match the Azure VNet of the hosts.
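Putting the pieces together, a complete install invocation might look like the
following sketch. The host address, pod CIDR, and UCP version are placeholders
that need to match your environment and the VNet range discussed earlier.
```bash
# Hypothetical end-to-end install on the first manager node
docker container run --rm -it \
  --name ucp \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.1 install \
  --host-address <node-ip-address> \
  --pod-cidr <ip-address-range> \
  --cloud-provider Azure \
  --interactive
```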
View File
@ -29,6 +29,9 @@ Learn about [UCP system requirements](system-requirements.md).
Ensure that your cluster nodes meet the minimum requirements for port openings.
[Ports used](system-requirements.md/#ports-used) are documented in the UCP system requirements.
> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
> Azure, ensure that all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites)
> are met.
## Back up your cluster
View File
@ -29,7 +29,7 @@ To use kubectl, install the binary on a workstation which has access to your UCP
{: .important}
First, find which version of Kubernetes is running in your cluster. This can be found
within the Universal Control Plane dashboard or at the UCP API endpoint [version](/reference/ucp/3.1/api/).
From the UCP dashboard, click on **About Docker EE** within the **Admin** menu in the top left corner
of the dashboard. Then navigate to **Kubernetes**.
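For example, assuming a Linux amd64 workstation and a cluster that reports
Kubernetes 1.11.5, the matching `kubectl` binary could be downloaded as follows.
The release URL pattern is the upstream Kubernetes mirror and the version shown
is only an example; substitute the version your cluster reports.
```bash
# Example only: set this to the Kubernetes version reported by your cluster
k8s_version=v1.11.5
curl -LO "https://storage.googleapis.com/kubernetes-release/release/${k8s_version}/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```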
View File
@ -7,10 +7,10 @@ redirect_from:
title: Protect the Docker daemon socket
---
By default, Docker runs through a non-networked UNIX socket. It can also
optionally communicate using an HTTP socket.
If you need Docker to be reachable through the network in a safe manner, you can
enable TLS by specifying the `tlsverify` flag and pointing Docker's
`tlscacert` flag to a trusted CA certificate.
@ -73,7 +73,7 @@ to connect to Docker:
Next, we're going to sign the public key with our CA:
Since TLS connections can be made through IP address as well as DNS name, the IP addresses
need to be specified when creating the certificate. For example, to allow connections
using `10.10.10.20` and `127.0.0.1`:
@ -113,24 +113,24 @@ request:
$ openssl req -subj '/CN=client' -new -key key.pem -out client.csr
To make the key suitable for client authentication, create a new extensions
config file:
$ echo extendedKeyUsage = clientAuth > extfile-client.cnf
Now, generate the signed certificate:
$ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -extfile extfile-client.cnf
Signature ok
subject=/CN=client
Getting CA Private Key
Enter pass phrase for ca-key.pem:
After generating `cert.pem` and `server-cert.pem` you can safely remove the
two certificate signing requests and extensions config files:
$ rm -v client.csr server.csr extfile.cnf extfile-client.cnf
With a default `umask` of 022, your secret keys are *world-readable* and
writable for you and your group.
@ -180,7 +180,7 @@ certificates and trusted CA:
## Secure by default
If you want to secure your Docker client connections by default, you can move
the files to the `.docker` directory in your home directory, and set the
`DOCKER_HOST` and `DOCKER_TLS_VERIFY` variables as well (instead of passing
`-H=tcp://$HOST:2376` and `--tlsverify` on every call).
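As a sketch of that setup, with `$HOST` standing in for your Docker host name:
```bash
# Copy the client credentials to the default location the Docker client checks
mkdir -pv ~/.docker
cp -v {ca,cert,key}.pem ~/.docker

# Point the client at the remote daemon and require TLS verification
export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1
docker ps
```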
View File
@ -586,7 +586,7 @@ configuration file.
```
4. Verify that the `nginx` service is fully re-deployed, using
`docker service ps nginx`. When it is, you can remove the old `site.conf`
config.
```bash
View File
@ -84,7 +84,7 @@ services:
restart_policy:
condition: on-failure
ports:
- "4000:80"
- "80:80"
networks:
- webnet
networks:
View File
@ -152,8 +152,8 @@ to choose the best installation path for you.
| Platform | x86_64 |
|:----------------------------------------------------------------------------|:-----------------:|
| [Docker for Mac (macOS)](/docker-for-mac/install/) | {{ green-check }} |
| [Docker for Windows (Microsoft Windows 10)](/docker-for-windows/install/) | {{ green-check }} |
#### Server
@ -162,10 +162,10 @@ to choose the best installation path for you.
| Platform | x86_64 / amd64 | ARM | ARM64 / AARCH64 | IBM Power (ppc64le) | IBM Z (s390x) |
|:--------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|
| [CentOS]({{ install-prefix-ce }}/centos/) | [{{ green-check }}]({{ install-prefix-ce }}/centos/) | | [{{ green-check }}]({{ install-prefix-ce }}/centos/) | | |
| [Debian]({{ install-prefix-ce }}/debian/) | [{{ green-check }}]({{ install-prefix-ce }}/debian/) | [{{ green-check }}]({{ install-prefix-ce }}/debian/) | [{{ green-check }}]({{ install-prefix-ce }}/debian/) | | |
| [Fedora]({{ install-prefix-ce }}/fedora/) | [{{ green-check }}]({{ install-prefix-ce }}/fedora/) | | [{{ green-check }}]({{ install-prefix-ce }}/fedora/) | | |
| [Ubuntu]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) |
### Backporting
View File
@ -198,6 +198,37 @@ Install-Package -Name docker -ProviderName DockerMsftProvider -RequiredVersion 1
```
The required version must match one of the versions available in this JSON file: https://dockermsft.blob.core.windows.net/dockercontainer/DockerMsftIndex.json
## Uninstall Docker EE
Use the following commands to completely remove the Docker Engine - Enterprise from a Windows Server:
1. Leave any active Docker Swarm
```PowerShell
docker swarm leave --force
```
1. Remove all running and stopped containers
```PowerShell
docker rm -f $(docker ps --all --quiet)
```
1. Prune container data
```PowerShell
docker system prune --all --volumes
```
1. Uninstall Docker PowerShell Package and Module
```PowerShell
Uninstall-Package -Name docker -ProviderName DockerMsftProvider
Uninstall-Module -Name DockerMsftProvider
```
1. Clean up Windows Networking and file system
```PowerShell
Get-HNSNetwork | Remove-HNSNetwork
Remove-Item -Path "C:\ProgramData\Docker" -Recurse -Force
```
## Preparing a Docker EE Engine for use with UCP
View File
@ -47,7 +47,7 @@ on your network, your container appears to be physically attached to the network
my-macvlan-net
```
You can use `docker network ls` and `docker network inspect my-macvlan-net`
commands to verify that the network exists and is a `macvlan` network.
2. Start an `alpine` container and attach it to the `my-macvlan-net` network. The
@ -138,7 +138,7 @@ be physically attached to the network.
my-8021q-macvlan-net
```
You can use `docker network ls` and `docker network inspect my-8021q-macvlan-net`
commands to verify that the network exists, is a `macvlan` network, and
has parent `eth0.10`. You can use `ip addr show` on the Docker host to
verify that the interface `eth0.10` exists and has a separate IP address
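A short verification sketch, using the interface and network names from this
example:
```bash
# Confirm the 802.1q sub-interface exists on the Docker host
ip addr show eth0.10

# Confirm the network exists and uses the macvlan driver
docker network ls --filter name=my-8021q-macvlan-net
docker network inspect my-8021q-macvlan-net
```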
View File
@ -45,7 +45,7 @@ DTR replicas for high availability.
| `--existing-replica-id` | $DTR_REPLICA_ID | The ID of an existing DTR replica. To add, remove or modify DTR, you must connect to an existing healthy replica's database. |
| `--help-extended` | $DTR_EXTENDED_HELP | Display extended help text for a given command. |
| `--overlay-subnet` | $DTR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: `10.0.0.0/24`. For high-availability, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed. |
| `--prune` | $PRUNE | Delete the data volumes of all unhealthy replicas. With this option, the volume of the DTR replica you're restoring is preserved but the volumes for all other replicas are deleted. This has the same result as completely uninstalling DTR from those replicas. |
| `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https://<ucp-url>/ca, and use `--ucp-ca "$(cat ca.pem)"`. |
| `--ucp-insecure-tls` | $UCP_INSECURE_TLS | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. |
| `--ucp-password` | $UCP_PASSWORD | The UCP administrator password. |
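For reference, one way to fetch the UCP CA certificate mentioned above and pass
it to the command is sketched below; `<ucp-url>` is a placeholder and the
surrounding `docker run` invocation depends on your deployment.
```bash
# Download the UCP CA certificate so it can be passed with --ucp-ca
# instead of falling back to --ucp-insecure-tls
curl -k https://<ucp-url>/ca -o ca.pem

# ...then add the following flag to the docker/dtr command you are running:
#   --ucp-ca "$(cat ca.pem)"
```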
View File
@ -4,13 +4,19 @@ description: Add a new replica to an existing DTR cluster
keywords: dtr, cli, join
---
Add a new replica to an existing DTR cluster. Use SSH to log into any node that is already part of UCP.
## Usage
```bash
docker run -it --rm \
docker/dtr:2.6.0 join \
--ucp-node <ucp-node-name> \
--ucp-insecure-tls
```
## Description
This command creates a replica of an existing DTR on a node managed by
Docker Universal Control Plane (UCP).
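In practice the join command usually also needs to know how to reach UCP and
how to authenticate. A fuller sketch, assuming the standard `--ucp-url`,
`--ucp-username`, and `--ucp-ca` flags and placeholder values, looks like this:
```bash
docker run -it --rm \
  docker/dtr:2.6.0 join \
  --ucp-node <ucp-node-name> \
  --ucp-url https://<ucp-url> \
  --ucp-username admin \
  --ucp-ca "$(cat ca.pem)"
```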
View File
@ -152,14 +152,14 @@ With regard to Docker, the backing filesystem is the filesystem where
`/var/lib/docker/` is located. Some storage drivers only work with specific
backing filesystems.
| Storage driver        | Supported backing filesystems |
|:----------------------|:------------------------------|
| `overlay2`, `overlay` | `xfs` with ftype=1, `ext4`    |
| `aufs`                | `xfs`, `ext4`                 |
| `devicemapper`        | `direct-lvm`                  |
| `btrfs`               | `btrfs`                       |
| `zfs`                 | `zfs`                         |
| `vfs`                 | any filesystem                |
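For example, on a host whose `/var/lib/docker` sits on XFS, you can confirm
that `ftype=1` before relying on `overlay2`. This is a sketch and assumes the
default Docker data root.
```bash
# d_type support (ftype=1) is required for overlay2/overlay on xfs
xfs_info /var/lib/docker | grep ftype

# Show the storage driver currently in use
docker info --format '{{.Driver}}'
```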
## Other considerations