diff --git a/_config.yml b/_config.yml index d10163795a..e1c76e210b 100644 --- a/_config.yml +++ b/_config.yml @@ -155,7 +155,7 @@ defaults: ucp_org: "docker" ucp_repo: "ucp" dtr_repo: "dtr" - ucp_version: "3.1.0" + ucp_version: "3.1.1" dtr_version: "2.6.0" dtr_latest_image: "docker/dtr:2.6.0" - scope: @@ -164,7 +164,7 @@ defaults: hide_from_sitemap: true ucp_org: "docker" ucp_repo: "ucp" - ucp_version: "3.0.6" + ucp_version: "3.0.7" - scope: path: "datacenter/ucp/2.2" values: diff --git a/_data/ddc_offline_files_2.yaml b/_data/ddc_offline_files_2.yaml index 8032b1e1ce..dc493531a0 100644 --- a/_data/ddc_offline_files_2.yaml +++ b/_data/ddc_offline_files_2.yaml @@ -6,6 +6,14 @@ - product: "ucp" version: "3.1" tar-files: + - description: "3.1.1 Linux" + url: https://packages.docker.com/caas/ucp_images_3.1.1.tar.gz + - description: "3.1.1 Windows Server 2016 LTSC" + url: https://packages.docker.com/caas/ucp_images_win_2016_3.1.1.tar.gz + - description: "3.1.1 Windows Server 1709" + url: https://packages.docker.com/caas/ucp_images_win_1709_3.1.1.tar.gz + - description: "3.1.1 Windows Server 1803" + url: https://packages.docker.com/caas/ucp_images_win_1803_3.1.1.tar.gz - description: "3.1.0 Linux" url: https://packages.docker.com/caas/ucp_images_3.1.0.tar.gz - description: "3.1.0 Windows Server 2016 LTSC" @@ -17,6 +25,14 @@ - product: "ucp" version: "3.0" tar-files: + - description: "3.0.7 Linux" + url: https://packages.docker.com/caas/ucp_images_3.0.7.tar.gz + - description: "3.0.7 Windows Server 2016 LTSC" + url: https://packages.docker.com/caas/ucp_images_win_2016_3.0.7.tar.gz + - description: "3.0.7 Windows Server 1709" + url: https://packages.docker.com/caas/ucp_images_win_1709_3.0.7.tar.gz + - description: "3.0.7 Windows Server 1803" + url: https://packages.docker.com/caas/ucp_images_win_1803_3.0.7.tar.gz - description: "3.0.6 Linux" url: https://packages.docker.com/caas/ucp_images_3.0.6.tar.gz - description: "3.0.6 IBM Z" diff --git a/_includes/api-version-matrix.md b/_includes/api-version-matrix.md index 89adc111a8..28cde9838f 100644 --- a/_includes/api-version-matrix.md +++ b/_includes/api-version-matrix.md @@ -1,6 +1,8 @@ | Docker version | Maximum API version | Change log | |:---------------|:---------------------------|:---------------------------------------------------------| +| 18.09 | [1.39](/engine/api/v1.39/) | [changes](/engine/api/version-history/#v139-api-changes) | +| 18.06 | [1.38](/engine/api/v1.38/) | [changes](/engine/api/version-history/#v138-api-changes) | | 18.05 | [1.37](/engine/api/v1.37/) | [changes](/engine/api/version-history/#v137-api-changes) | | 18.04 | [1.37](/engine/api/v1.37/) | [changes](/engine/api/version-history/#v137-api-changes) | | 18.03 | [1.37](/engine/api/v1.37/) | [changes](/engine/api/version-history/#v137-api-changes) | diff --git a/compose/wordpress.md b/compose/wordpress.md index 0c2113faae..77868adbb1 100644 --- a/compose/wordpress.md +++ b/compose/wordpress.md @@ -60,7 +60,7 @@ Compose to set up and run WordPress. Before starting, make sure you have WORDPRESS_DB_USER: wordpress WORDPRESS_DB_PASSWORD: wordpress volumes: - db_data: + db_data: {} ``` > **Notes**: diff --git a/config/containers/logging/configure.md b/config/containers/logging/configure.md index 80c24e5cb5..d0f9d439a8 100644 --- a/config/containers/logging/configure.md +++ b/config/containers/logging/configure.md @@ -18,7 +18,6 @@ unless you configure it to use a different logging driver. 
In addition to using the logging drivers included with Docker, you can also implement and use [logging driver plugins](/engine/admin/logging/plugins.md). -Logging driver plugins are available in Docker 17.05 and higher. ## Configure the default logging driver @@ -44,24 +43,29 @@ example sets two configurable options on the `json-file` logging driver: { "log-driver": "json-file", "log-opts": { + "max-size": "10m", + "max-file": "3", "labels": "production_status", "env": "os,customer" } } ``` +> **Note**: `log-opt` configuration options in the `daemon.json` configuration +> file must be provided as strings. Boolean and numeric values (such as the value +> for `max-file` in the example above) must therefore be enclosed in quotes (`"`). If you do not specify a logging driver, the default is `json-file`. Thus, the default output for commands such as `docker inspect ` is JSON. To find the current default logging driver for the Docker daemon, run `docker info` and search for `Logging Driver`. You can use the following -command on Linux, macOS, or PowerShell on Windows: +command: ```bash -$ docker info | grep 'Logging Driver' +$ docker info --format '{{.LoggingDriver}}' -Logging Driver: json-file +json-file ``` ## Configure the logging driver for a container diff --git a/config/containers/logging/fluentd.md b/config/containers/logging/fluentd.md index 93c5032a4a..575824e2bc 100644 --- a/config/containers/logging/fluentd.md +++ b/config/containers/logging/fluentd.md @@ -53,7 +53,12 @@ The following example sets the log driver to `fluentd` and sets the } ``` - Restart Docker for the changes to take effect. +Restart Docker for the changes to take effect. + +> **Note**: `log-opt` configuration options in the `daemon.json` configuration +> file must be provided as strings. Boolean and numeric values (such as the value +> for `fluentd-async-connect` or `fluentd-max-retries`) must therefore be enclosed +> in quotes (`"`). To set the logging driver for a specific container, pass the `--log-driver` option to `docker run`: diff --git a/config/containers/logging/gelf.md b/config/containers/logging/gelf.md index 55a7462830..c5be667ae5 100644 --- a/config/containers/logging/gelf.md +++ b/config/containers/logging/gelf.md @@ -60,6 +60,10 @@ To make the configuration permanent, you can configure it in `/etc/docker/daemon } ``` +> **Note**: `log-opt` configuration options in the `daemon.json` configuration +> file must be provided as strings. Boolean and numeric values (such as the value +> for `gelf-tcp-max-reconnect`) must therefore be enclosed in quotes (`"`). + You can set the logging driver for a specific container by setting the `--log-driver` flag when using `docker container create` or `docker run`: diff --git a/config/containers/logging/json-file.md b/config/containers/logging/json-file.md index 79810568b9..6e397885f6 100644 --- a/config/containers/logging/json-file.md +++ b/config/containers/logging/json-file.md @@ -23,17 +23,22 @@ configuring Docker using `daemon.json`, see [daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file). The following example sets the log driver to `json-file` and sets the `max-size` -option. +and `max-file` options. ```json { "log-driver": "json-file", "log-opts": { - "max-size": "10m" + "max-size": "10m", + "max-file": "3" } } ``` +> **Note**: `log-opt` configuration options in the `daemon.json` configuration +> file must be provided as strings. 
Boolean and numeric values (such as the value
> for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
+
 Restart Docker for the changes to take effect for newly created containers. Existing containers do not use the new logging configuration.

 You can set the logging driver for a specific container by using the
diff --git a/config/containers/logging/splunk.md b/config/containers/logging/splunk.md
index c279c819ff..1e110448ab 100644
--- a/config/containers/logging/splunk.md
+++ b/config/containers/logging/splunk.md
@@ -36,6 +36,11 @@ The daemon.json file is located in `/etc/docker/` on Linux hosts or
 configuring Docker using `daemon.json`, see
 [daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).

+> **Note**: `log-opt` configuration options in the `daemon.json` configuration
+> file must be provided as strings. Boolean and numeric values (such as the value
+> for `splunk-gzip` or `splunk-gzip-level`) must therefore be enclosed in quotes
+> (`"`).
+
 To use the `splunk` driver for a specific container, use the commandline flags
 `--log-driver` and `--log-opt` with `docker run`:
diff --git a/config/containers/logging/syslog.md b/config/containers/logging/syslog.md
index cc64d627f4..6355ac13c2 100644
--- a/config/containers/logging/syslog.md
+++ b/config/containers/logging/syslog.md
@@ -41,7 +41,8 @@ configuring Docker using `daemon.json`, see
 [daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).

 The following example sets the log driver to `syslog` and sets the
-`syslog-address` option.
+`syslog-address` option. The `syslog-address` option supports both UDP and TCP;
+this example uses UDP.

 ```json
 {
@@ -54,7 +55,9 @@ The following example sets the log driver to `syslog` and sets the

 Restart Docker for the changes to take effect.

-> **Note**: The syslog-address supports both UDP and TCP.
+> **Note**: `log-opt` configuration options in the `daemon.json` configuration
+> file must be provided as strings. Numeric and boolean values (such as the value
+> for `syslog-tls-skip-verify`) must therefore be enclosed in quotes (`"`).

 You can set the logging driver for a specific container by using the
 `--log-driver` flag to `docker container create` or `docker run`:
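For instance, a minimal sketch of such a command, using a placeholder syslog
server address and a throwaway container:

```bash
$ docker run --log-driver syslog --log-opt syslog-address=udp://1.2.3.4:514 \
  alpine echo hello world
```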
@@ -75,7 +78,7 @@ starting the container.
 | Option | Description | Example value |
 |:-------------------------|:------------|:--------------|
-| `syslog-address` | The address of an external `syslog` server. The URI specifier may be `[tcp | udp|tcp+tls]://host:port`, `unix://path`, or `unixgram://path`. If the transport is `tcp`, `udp`, or `tcp+tls`, the default port is `514`. | `--log-opt syslog-address=tcp+tls://192.168.1.3:514`, `--log-opt syslog-address=unix:///tmp/syslog.sock` |
+| `syslog-address` | The address of an external `syslog` server. The URI specifier may be `[tcp|udp|tcp+tls]://host:port`, `unix://path`, or `unixgram://path`. If the transport is `tcp`, `udp`, or `tcp+tls`, the default port is `514`. | `--log-opt syslog-address=tcp+tls://192.168.1.3:514`, `--log-opt syslog-address=unix:///tmp/syslog.sock` |
 | `syslog-facility` | The `syslog` facility to use. Can be the number or name for any valid `syslog` facility. See the [syslog documentation](https://tools.ietf.org/html/rfc5424#section-6.2.1). | `--log-opt syslog-facility=daemon` |
 | `syslog-tls-ca-cert` | The absolute path to the trust certificates signed by the CA. **Ignored if the address protocol is not `tcp+tls`.** | `--log-opt syslog-tls-ca-cert=/etc/ca-certificates/custom/ca.pem` |
 | `syslog-tls-cert` | The absolute path to the TLS certificate file. **Ignored if the address protocol is not `tcp+tls`**. | `--log-opt syslog-tls-cert=/etc/ca-certificates/custom/cert.pem` |
diff --git a/datacenter/dtr/2.5/reference/cli/install.md b/datacenter/dtr/2.5/reference/cli/install.md
index 4db1f660bf..d7822bc34e 100644
--- a/datacenter/dtr/2.5/reference/cli/install.md
+++ b/datacenter/dtr/2.5/reference/cli/install.md
@@ -23,9 +23,11 @@ After installing DTR, you can join additional DTR replicas using `docker/dtr join`

 ### Example Usage

-$ docker run -it --rm docker/dtr install \
-    --ucp-node <ucp-node-hostname> \
-    --ucp-insecure-tls
+```bash
+docker run -it --rm docker/dtr install \
+  --ucp-node <ucp-node-hostname> \
+  --ucp-insecure-tls
+```

 Note: Use `--ucp-ca "$(cat ca.pem)"` instead of `--ucp-insecure-tls` for a production deployment.
diff --git a/datacenter/dtr/2.5/reference/cli/join.md b/datacenter/dtr/2.5/reference/cli/join.md
index 80373de3e2..8b569affaf 100644
--- a/datacenter/dtr/2.5/reference/cli/join.md
+++ b/datacenter/dtr/2.5/reference/cli/join.md
@@ -4,7 +4,16 @@ description: Add a new replica to an existing DTR cluster
 keywords: dtr, cli, join
 ---

-Add a new replica to an existing DTR cluster
+Add a new replica to an existing DTR cluster. Use SSH to log into any node that is already part of UCP.
+
+## Usage
+
+```bash
+docker run -it --rm \
+  docker/dtr:2.5.6 join \
+  --ucp-node <ucp-node-hostname> \
+  --ucp-insecure-tls
+```
diff --git a/datacenter/ucp/3.0/guides/admin/install/install-offline.md b/datacenter/ucp/3.0/guides/admin/install/install-offline.md
index c917cae8ff..40bdf4971c 100644
--- a/datacenter/ucp/3.0/guides/admin/install/install-offline.md
+++ b/datacenter/ucp/3.0/guides/admin/install/install-offline.md
@@ -27,7 +27,7 @@ installation will fail.
 Use a computer with internet access to download the UCP package from the
 following links.

-{% include components/ddc_url_list_2.html product="ucp" version="2.2" %}
+{% include components/ddc_url_list_2.html product="ucp" version="3.0" %}

 ## Download the offline package
diff --git a/datacenter/ucp/3.0/guides/admin/install/install-on-azure.md b/datacenter/ucp/3.0/guides/admin/install/install-on-azure.md
new file mode 100644
index 0000000000..1d256bf6c8
--- /dev/null
+++ b/datacenter/ucp/3.0/guides/admin/install/install-on-azure.md
@@ -0,0 +1,290 @@
---
title: Install UCP on Azure
description: Learn how to install Docker Universal Control Plane in a Microsoft Azure environment.
keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
---

Docker UCP closely integrates with Microsoft Azure for its Kubernetes networking
and persistent storage feature set. UCP deploys the Calico CNI provider. In Azure,
the Calico CNI leverages the Azure networking infrastructure for data path
networking and the Azure IPAM for IP address management. There are
infrastructure prerequisites that must be met prior to UCP installation for the
Calico / Azure integration.
## Docker UCP Networking

Docker UCP configures the Azure IPAM module for Kubernetes to allocate
IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
VM that's part of the Kubernetes cluster to be configured with a pool of
IP addresses.

You have two options for deploying the VMs for the Kubernetes cluster on Azure:

- Install the cluster on Azure stand-alone virtual machines. Docker UCP provides
  an [automated mechanism](#configure-ip-pools-for-azure-stand-alone-vms)
  to configure and maintain IP pools for stand-alone Azure VMs.
- Install the cluster on an Azure virtual machine scale set. Configure the
  IP pools by using an ARM template like
  [this one](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).

The steps for setting up IP address management are different in the two
environments. If you're using a scale set, you set up `ipConfigurations`
in an ARM template. If you're using stand-alone VMs, you set up IP pools
for each VM by using a utility container that's configured to run as a
global Swarm service, which Docker provides.

## Azure Prerequisites

The following infrastructure prerequisites must be met in order
to successfully deploy Docker UCP on Azure:

- All UCP Nodes (Managers and Workers) need to be deployed into the same
Azure Resource Group. The Azure networking components (VNets, subnets, security
groups) can be deployed in a second Azure Resource Group.
- All UCP Nodes (Managers and Workers) need to be attached to the same
Azure subnet.
- All UCP Nodes (Managers and Workers) need to be tagged in Azure with the
`Orchestrator` tag. The value for this tag is the Kubernetes version number
in the format `Orchestrator=Kubernetes:x.y.z`. This value may change in each
UCP release. To find the relevant version, please see the UCP
[Release Notes](../../release-notes). For example, for UCP 3.0.6 the tag
would be `Orchestrator=Kubernetes:1.8.15`.
- The Azure Computer Name needs to match the node operating system's hostname.
Note this applies to the FQDN of the host, including domain names.
- An Azure Service Principal with `Contributor` access to the Azure Resource
Group hosting the UCP Nodes. Note, if using a separate networking Resource
Group, the same Service Principal will need `Network Contributor` access to this
Resource Group.

The following information is required for the installation:

- `subscriptionId` - The Azure Subscription ID in which the UCP
objects are being deployed.
- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
objects are being deployed.
- `aadClientId` - The Azure Service Principal ID.
- `aadClientSecret` - The Azure Service Principal Secret Key.
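One way to create such a Service Principal and capture these values is with the
Azure CLI; a sketch, assuming the `az` CLI is installed and logged in, with
placeholder names and scope:

```bash
$ az ad sp create-for-rbac --name ucp-service-principal \
    --role "Contributor" \
    --scopes "/subscriptions/<subscriptionId>/resourceGroups/<ucp-resource-group>"
```

The `appId`, `password`, and `tenant` fields in the output map to
`aadClientId`, `aadClientSecret`, and `tenantId` respectively.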
+ +``` +{ + "cloud":"AzurePublicCloud", + "tenantId": "***", + "subscriptionId": "***", + "aadClientId": "***", + "aadClientSecret": "***", + "resourceGroup": "***", + "location": "****", + "subnetName": "/****", + "securityGroupName": "****", + "vnetName": "****", + "cloudProviderBackoff": false, + "cloudProviderBackoffRetries": 0, + "cloudProviderBackoffExponent": 0, + "cloudProviderBackoffDuration": 0, + "cloudProviderBackoffJitter": 0, + "cloudProviderRatelimit": false, + "cloudProviderRateLimitQPS": 0, + "cloudProviderRateLimitBucket": 0, + "useManagedIdentityExtension": false, + "useInstanceMetadata": true +} +``` + +There are some optional values for Azure deployments: + +- `"primaryAvailabilitySetName": "****",` - The Worker Nodes availability set. +- `"vnetResourceGroup": "****",` - If your Azure Network objects live in a +seperate resource group. +- `"routeTableName": "****",` - If you have defined multiple Route tables within +an Azure subnet. + +More details on this configuration file can be found +[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go). + + + +## Considerations for IPAM Configuration + +The subnet and the virtual network associated with the primary interface of +the Azure VMs need to be configured with a large enough address prefix/range. +The number of required IP addresses depends on the number of pods running +on each node and the number of nodes in the cluster. + +For example, in a cluster of 256 nodes, to run a maximum of 128 pods +concurrently on a node, make sure that the address space of the subnet and the +virtual network can allocate at least 128 * 256 IP addresses, _in addition to_ +initial IP allocations to VM NICs during Azure resource creation. + +Accounting for IP addresses that are allocated to NICs during VM bring-up, set +the address space of the subnet and virtual network to 10.0.0.0/16. This +ensures that the network can dynamically allocate at least 32768 addresses, +plus a buffer for initial allocations for primary IP addresses. + +> Azure IPAM, UCP, and Kubernetes +> +> The Azure IPAM module queries an Azure virtual machine's metadata to obtain +> a list of IP addresses that are assigned to the virtual machine's NICs. The +> IPAM module allocates these IP addresses to Kubernetes pods. You configure the +> IP addresses as `ipConfigurations` in the NICs associated with a virtual machine +> or scale set member, so that Azure IPAM can provide them to Kubernetes when +> requested. +{: .important} + +#### Additional Notes + +- The `IP_COUNT` variable defines the subnet size for each node's pod IPs. This subnet size is the same for all hosts. +- The Kubernetes `pod-cidr` must match the Azure Vnet of the hosts. + +## Configure IP pools for Azure stand-alone VMs + +Follow these steps once the underlying infrastructure has been provisioned. + +### Configure multiple IP addresses per VM NIC + +Follow the steps below to configure multiple IP addresses per VM NIC. + +1. Initialize a swarm cluster comprising the virtual machines you created + earlier. On one of the nodes of the cluster, run: + + ```bash + docker swarm init + ``` + +2. Note the tokens for managers and workers. You may retrieve the join tokens + at any time by running `$ docker swarm join-token manager` or `$ docker swarm + join-token worker` on the manager node. +3. Join two other nodes on the cluster as manager (recommended for HA) by running: + + ```bash + docker swarm join --token + ``` + +4. 
Join remaining nodes on the cluster as workers:

   ```bash
   docker swarm join --token <worker-token>
   ```

5. Create a file named "azure_ucp_admin.toml" that contains contents from
   creating the Service Principal.

   ```
   $ cat > azure_ucp_admin.toml <<EOF
   AZURE_CLIENT_ID = "<AD App ID>"
   AZURE_TENANT_ID = "<AD Tenant ID>"
   AZURE_SUBSCRIPTION_ID = "<Azure subscription ID>"
   AZURE_CLIENT_SECRET = "<AD App Secret>"
   EOF
   ```

6. Create a Docker Swarm secret based on the "azure_ucp_admin.toml" file.

   ```bash
   docker secret create azure_ucp_admin.toml azure_ucp_admin.toml
   ```

7. Create a global swarm service using the [docker4x/az-nic-ips](https://hub.docker.com/r/docker4x/az-nic-ips/)
   image on Docker Hub. Use the Swarm secret to prepopulate the virtual machines
   with the desired number of IP addresses per VM from the VNET pool. Set the
   number of IPs to allocate to each VM through the `IP_COUNT` environment variable.
   For example, to configure 128 IP addresses per VM, run the following command:

   ```bash
   docker service create \
     --mode=global \
     --secret=azure_ucp_admin.toml \
     --log-driver json-file \
     --log-opt max-size=1m \
     --env IP_COUNT=128 \
     --name ipallocator \
     --constraint "node.platform.os == linux" \
     docker4x/az-nic-ips
   ```

[Install UCP on the cluster](#install-ucp-on-the-cluster).

## Set up IP configurations on an Azure virtual machine scale set

Configure IP pools for each member of the VM scale set during provisioning by
associating multiple `ipConfigurations` with the scale set's
`networkInterfaceConfigurations`. Here's an example `networkProfile`
configuration for an ARM template that configures pools of 32 IP addresses
for each VM in the VM scale set.

```json
"networkProfile": {
    ...
}
```

## Install UCP on the cluster

Use the following command to install UCP on the manager node. The `--pod-cidr`
option maps to the IP address range that you configured for the Azure subnet.

```bash
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.ucp_org }}/{{ page.ucp_repo }}:{{ page.ucp_version }} install \
  --host-address <ucp-ip> \
  --interactive \
  --swarm-port 3376 \
  --pod-cidr <ip-range> \
  --cloud-provider Azure
```
\ No newline at end of file
diff --git a/datacenter/ucp/3.0/guides/admin/install/upgrade-offline.md b/datacenter/ucp/3.0/guides/admin/install/upgrade-offline.md
index cc30953ec4..7f4920d0db 100644
--- a/datacenter/ucp/3.0/guides/admin/install/upgrade-offline.md
+++ b/datacenter/ucp/3.0/guides/admin/install/upgrade-offline.md
@@ -17,7 +17,7 @@ copy this package to the host where you upgrade UCP.
 Use a computer with internet access to download the UCP package from the
 following links.

-{% include components/ddc_url_list_2.html product="ucp" version="2.2" %}
+{% include components/ddc_url_list_2.html product="ucp" version="3.0" %}

 ## Download the offline package
diff --git a/datacenter/ucp/3.0/guides/admin/install/upgrade.md b/datacenter/ucp/3.0/guides/admin/install/upgrade.md
index 459f2604e6..243c401039 100644
--- a/datacenter/ucp/3.0/guides/admin/install/upgrade.md
+++ b/datacenter/ucp/3.0/guides/admin/install/upgrade.md
@@ -19,9 +19,13 @@ in each node of the swarm to version 17.06 Enterprise Edition.
 Plan for the upgrade to take place outside of business hours, to ensure there's
 minimal impact to your users.

-Also, don't make changes to UCP configurations while you're upgrading it.
+Don't make changes to UCP configurations while you're upgrading.
 This can lead to misconfigurations that are difficult to troubleshoot.

+> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft
+> Azure, please ensure all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites)
+> are met.
+
 ## Back up your swarm

 Before starting an upgrade, make sure that your swarm is healthy. If a problem
@@ -48,7 +52,7 @@ Starting with the manager nodes, and then worker nodes:

 2. Upgrade the Docker Engine to version 17.06 or higher.
 3. Make sure the node is healthy.

-   In your browser, navigate to the **Nodes** page in the UCP web UI,
+   In your browser, navigate to the **Nodes** page in the UCP web interface,
    and check that the node is healthy and is part of the swarm.

    > Swarm mode
@@ -58,9 +62,9 @@ Starting with the manager nodes, and then worker nodes:

 ## Upgrade UCP

-You can upgrade UCP from the web UI or the CLI.
+You can upgrade UCP from the web interface or the CLI.

-### Use the UI to perform an upgrade
+### Use the web interface to perform an upgrade

 When an upgrade is available for a UCP installation, a banner appears.

@@ -76,13 +80,13 @@ Select a version to upgrade to using the **Available UCP Versions** dropdown,
 then click to upgrade.

 Before the upgrade happens, a confirmation dialog along with important
-information regarding swarm and UI availability is displayed.
+information regarding swarm and interface availability is displayed.

 ![](../../images/upgrade-ucp-3.png){: .with-border}

-During the upgrade, the UI is unavailable, so wait until the upgrade is complete
-before trying to use the UI. When the upgrade completes, a notification alerts
-you that a newer version of the UI is available, and you can see the new UI
+During the upgrade, the interface is unavailable, so wait until the upgrade is complete
+before trying to use the interface.
When the upgrade completes, a notification alerts
+you that a newer version of the interface is available, and you can see the new
+interface after you refresh your browser.

 ### Use the CLI to perform an upgrade
@@ -103,7 +107,7 @@ $ docker container run --rm -it \

 This runs the upgrade command in interactive mode, so that you are prompted
 for any necessary configuration values.

-Once the upgrade finishes, navigate to the UCP web UI and make sure that
+Once the upgrade finishes, navigate to the UCP web interface and make sure that
 all the nodes managed by UCP are healthy.

 ## Recommended upgrade paths
diff --git a/develop/develop-images/build_enhancements.md b/develop/develop-images/build_enhancements.md
index 545344212e..5cc1712156 100644
--- a/develop/develop-images/build_enhancements.md
+++ b/develop/develop-images/build_enhancements.md
@@ -99,7 +99,8 @@ $ docker build --progress=plain .

 ## Overriding default frontends

-To override the default frontend, set the first line of the Dockerfile as a comment with a specific frontend image:
+The new syntax features in `Dockerfile` are available if you override the default frontend. To override
+the default frontend, set the first line of the `Dockerfile` as a comment with a specific frontend image:
 ```
 # syntax = <frontend image>, e.g. # syntax = docker/dockerfile:1.0-experimental
 ```
@@ -151,3 +152,40 @@ $ docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt
 #9 duration: 1.470401133s
 ...
 ```
+
+## Using SSH to access private data in builds
+
+> **Acknowledgment**:
+> Please see [Build secrets and SSH forwarding in Docker 18.09](https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066)
+> for more information and examples.
+
+The `docker build` command has a `--ssh` option that allows the Docker Engine to forward SSH agent connections. For more information
+on the SSH agent, see the [OpenSSH man page](https://man.openbsd.org/ssh-agent).
+
+Only the commands in the `Dockerfile` that have explicitly requested SSH access by defining a `type=ssh` mount have
+access to SSH agent connections. The other commands have no knowledge of any SSH agent being available.
+
+To request SSH access for a `RUN` command in the `Dockerfile`, define a mount with type `ssh`. This sets up the
+`SSH_AUTH_SOCK` environment variable so that programs relying on SSH automatically use that socket.
+
+Here is an example Dockerfile using SSH in the container:
+
+```Dockerfile
+# syntax=docker/dockerfile:experimental
+FROM alpine
+
+# Install ssh client and git
+RUN apk add --no-cache openssh-client git
+
+# Download public key for github.com
+RUN mkdir -p -m 0600 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
+
+# Clone private repository
+RUN --mount=type=ssh git clone git@github.com:myorg/myproject.git myproject
+```
+
+Once the `Dockerfile` is created, use the `--ssh` option for connectivity with the SSH agent.
+
+```bash
+$ docker build --ssh default .
+```
diff --git a/develop/develop-images/dockerfile_best-practices.md b/develop/develop-images/dockerfile_best-practices.md
index 2a456b5dec..98f7710d57 100644
--- a/develop/develop-images/dockerfile_best-practices.md
+++ b/develop/develop-images/dockerfile_best-practices.md
@@ -129,7 +129,7 @@ EOF
 ```
 docker build -t foo https://github.com/thajeztah/pgadmin4-docker.git -f-<<EOF
 ...
 `<hub-user>/<repo-name>[:<tag>]`, by re-tagging an existing local image
 `docker tag <existing-image> <hub-user>/<repo-name>[:<tag>]`,
-or by using `docker commit <existing-container> <hub-user>/<repo-name>[:<tag>]` to commit
+changes.
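For example, assuming a local image named `my-image` and the Docker Hub user
`my-user` (both placeholders), the re-tagging step might look like this:

```bash
$ docker tag my-image my-user/my-repo:1.0
```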
Now you can push this repository to the registry designated by its name or tag. diff --git a/docker-hub/webhooks.md b/docker-hub/webhooks.md index 4c9cab3e43..4ac2efe334 100644 --- a/docker-hub/webhooks.md +++ b/docker-hub/webhooks.md @@ -49,7 +49,7 @@ Docker Hub Webhook payloads have the following payload JSON format: "comment_count": 0, "date_created": 1.417494799e+09, "description": "", - "dockerfile": "#\n# BUILD\u0009\u0009docker build -t svendowideit/apt-cacher .\n# RUN\u0009\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# \u0009\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\n#\nFROM\u0009\u0009ubuntu\n\n\nVOLUME\u0009\u0009[\/var/cache/apt-cacher-ng\]\nRUN\u0009\u0009apt-get update ; apt-get install -yq apt-cacher-ng\n\nEXPOSE \u0009\u00093142\nCMD\u0009\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\n", + "dockerfile": "#\n# BUILD\u0009\u0009docker build -t svendowideit/apt-cacher .\n# RUN\u0009\u0009docker run -d -p 3142:3142 -name apt-cacher-run apt-cacher\n#\n# and then you can run containers with:\n# \u0009\u0009docker run -t -i -rm -e http_proxy http://192.168.1.2:3142/ debian bash\n#\nFROM\u0009\u0009ubuntu\n\n\nVOLUME\u0009\u0009[/var/cache/apt-cacher-ng]\nRUN\u0009\u0009apt-get update ; apt-get install -yq apt-cacher-ng\n\nEXPOSE \u0009\u00093142\nCMD\u0009\u0009chmod 777 /var/cache/apt-cacher-ng ; /etc/init.d/apt-cacher-ng start ; tail -f /var/log/apt-cacher-ng/*\n", "full_description": "Docker Hub based automated build from a GitHub repo", "is_official": false, "is_private": true, diff --git a/ee/dtr/images/delete-images-1.png b/ee/dtr/images/delete-images-1.png index 97203ec17e..292557a4e6 100644 Binary files a/ee/dtr/images/delete-images-1.png and b/ee/dtr/images/delete-images-1.png differ diff --git a/ee/dtr/images/delete-images-2.png b/ee/dtr/images/delete-images-2.png index 366ac465a5..4770be2178 100644 Binary files a/ee/dtr/images/delete-images-2.png and b/ee/dtr/images/delete-images-2.png differ diff --git a/ee/dtr/images/override-vulnerability-2.png b/ee/dtr/images/override-vulnerability-2.png index e45a808b21..326d272eca 100644 Binary files a/ee/dtr/images/override-vulnerability-2.png and b/ee/dtr/images/override-vulnerability-2.png differ diff --git a/ee/dtr/images/pull-push-images-1.png b/ee/dtr/images/pull-push-images-1.png index 9a031b29a4..87a11cab0d 100644 Binary files a/ee/dtr/images/pull-push-images-1.png and b/ee/dtr/images/pull-push-images-1.png differ diff --git a/ee/dtr/images/pull-push-images-2.png b/ee/dtr/images/pull-push-images-2.png index 2caf2ca2ca..0ce08c6ec0 100644 Binary files a/ee/dtr/images/pull-push-images-2.png and b/ee/dtr/images/pull-push-images-2.png differ diff --git a/ee/dtr/images/pull-push-images-3.png b/ee/dtr/images/pull-push-images-3.png index 11c65d496e..7b4fad6d1d 100644 Binary files a/ee/dtr/images/pull-push-images-3.png and b/ee/dtr/images/pull-push-images-3.png differ diff --git a/ee/dtr/images/scan-images-for-vulns-1.png b/ee/dtr/images/scan-images-for-vulns-1.png index b515d4c452..796a0bcb32 100644 Binary files a/ee/dtr/images/scan-images-for-vulns-1.png and b/ee/dtr/images/scan-images-for-vulns-1.png differ diff --git a/ee/dtr/images/scan-images-for-vulns-2.png b/ee/dtr/images/scan-images-for-vulns-2.png index 9748dbfa81..68d4bd0cd9 100644 Binary files a/ee/dtr/images/scan-images-for-vulns-2.png and b/ee/dtr/images/scan-images-for-vulns-2.png differ diff --git 
a/ee/dtr/images/scan-images-for-vulns-3.png b/ee/dtr/images/scan-images-for-vulns-3.png index 6d2d4f1e11..2413e6c7f7 100644 Binary files a/ee/dtr/images/scan-images-for-vulns-3.png and b/ee/dtr/images/scan-images-for-vulns-3.png differ diff --git a/ee/dtr/images/scan-images-for-vulns-4.png b/ee/dtr/images/scan-images-for-vulns-4.png index fe0a5d513c..6a273dd6d5 100644 Binary files a/ee/dtr/images/scan-images-for-vulns-4.png and b/ee/dtr/images/scan-images-for-vulns-4.png differ diff --git a/ee/dtr/images/scan-images-for-vulns-5.png b/ee/dtr/images/scan-images-for-vulns-5.png index 294889667e..50e43a4338 100644 Binary files a/ee/dtr/images/scan-images-for-vulns-5.png and b/ee/dtr/images/scan-images-for-vulns-5.png differ diff --git a/ee/dtr/images/security-scanning-setup-1.png b/ee/dtr/images/security-scanning-setup-1.png index 7db7a312c5..f2afa4a162 100644 Binary files a/ee/dtr/images/security-scanning-setup-1.png and b/ee/dtr/images/security-scanning-setup-1.png differ diff --git a/ee/dtr/user/manage-images/delete-images.md b/ee/dtr/user/manage-images/delete-images.md index 463270e3a5..0a9701da5b 100644 --- a/ee/dtr/user/manage-images/delete-images.md +++ b/ee/dtr/user/manage-images/delete-images.md @@ -4,21 +4,18 @@ description: Learn how to delete images from Docker Trusted Registry. keywords: registry, delete --- -To delete an image, go to the **DTR web UI**, and navigate to the image -**repository** you want to delete. In the **Tags** tab, select all the image +To delete an image, navigate to the **Tags** tab of the repository page on the DTR web interface. +In the **Tags** tab, select all the image tags you want to delete, and click the **Delete** button. ![](../../images/delete-images-1.png){: .with-border} -You can also delete all image versions, by deleting the repository. For that, -in the image **repository**, navigate to the **Settings** tab, and click the -**Delete** button. +You can also delete all image versions by deleting the repository. To delete a repository, navigate to **Settings** and click **Delete** under "Delete Repository". ## Delete signed images -DTR only allows deleting images if that image hasn't been signed. You first -need to delete all the trust data associated with the image. Then you'll -be able to delete it. +DTR only allows deleting images if the image has not been signed. You first +need to delete all the trust data associated with the image before you are able to delete the image. ![](../../images/delete-images-2.png){: .with-border} @@ -33,11 +30,11 @@ There are three steps to delete a signed image: To find which roles signed an image, you first need to learn which roles are trusted to sign the image. -[Set up your Notary client](../../access-dtr/configure-your-notary-client.md), +[Set up your Notary client](/ee/dtr/user/manage-images/sign-images/#configure-your-notary-client), and run: ``` -notary delegation list dtr.example.org/library/wordpress +notary delegation list dtr-example.com/library/wordpress ``` In this example, the repository owner delegated trust to the @@ -55,10 +52,10 @@ you can learn which roles actually signed it: ``` # Check if the image was signed by the "targets" role -notary list dtr.example.org/library/wordpress +notary list dtr-example.com/library/wordpress # Check if the image was signed by a specific role -notary list dtr.example.org/library/wordpress --roles +notary list dtr-example.com/library/wordpress --roles ``` In this example the image was signed by three roles: `targets`, @@ -73,7 +70,7 @@ to do this operation. 
For each role that signed the image, run: ``` -notary remove dtr.example.org/library/wordpress \ +notary remove dtr-example.com/library/wordpress \ --roles --publish ``` diff --git a/ee/dtr/user/manage-images/index.md b/ee/dtr/user/manage-images/index.md index a8cdc66321..9da818f30c 100644 --- a/ee/dtr/user/manage-images/index.md +++ b/ee/dtr/user/manage-images/index.md @@ -44,7 +44,7 @@ name of our repository will be `dtr-example.com/test-user-1/wordpress`. > Immutable Tags and Tag Limit > -> Starting in DTR 2.6, repository admins can enable tag pruning by [setting a tag limit](tag-pruning/#set-a-tag-limit). This can only be set if you turn off **Immutability** and allow your repository tags to be overwritten. +> Starting in DTR 2.6, repository admins can enable tag pruning by [setting a tag limit](../tag-pruning/#set-a-tag-limit). This can only be set if you turn off **Immutability** and allow your repository tags to be overwritten. > Image name size for DTR > diff --git a/ee/dtr/user/manage-images/override-a-vulnerability.md b/ee/dtr/user/manage-images/override-a-vulnerability.md index 917a420fb6..33f75674f9 100644 --- a/ee/dtr/user/manage-images/override-a-vulnerability.md +++ b/ee/dtr/user/manage-images/override-a-vulnerability.md @@ -9,11 +9,11 @@ DTR scans your images for vulnerabilities but sometimes it can report that your image has vulnerabilities you know have been fixed. If that happens you can dismiss the warning. -In the **DTR web UI**, navigate to the repository that has been scanned. +In the **DTR web interface**, navigate to the repository that has been scanned. -![Tag list](../../images/override-vulnerability-1.png){: .with-border} +![](../../images/scan-images-for-vulns-3.png){: .with-border} -Click **View details** for the image you want to see the scan results, and +Click **View details** to review the image scan results, and choose **Components** to see the vulnerabilities for each component packaged in the image. @@ -23,7 +23,7 @@ vulnerability, and click **hide**. ![Vulnerability list](../../images/override-vulnerability-2.png){: .with-border} The vulnerability is hidden system-wide and will no longer be reported as a vulnerability -on other affected images with the same layer IDs or digests. +on affected images with the same layer IDs or digests. After dismissing a vulnerability, DTR will not reevaluate the promotion policies you have set up for the repository. diff --git a/ee/dtr/user/manage-images/pull-and-push-images.md b/ee/dtr/user/manage-images/pull-and-push-images.md index fbe78854fc..43fc5f8b09 100644 --- a/ee/dtr/user/manage-images/pull-and-push-images.md +++ b/ee/dtr/user/manage-images/pull-and-push-images.md @@ -6,7 +6,7 @@ redirect_from: - /datacenter/dtr/2.5/guides/user/manage-images/pull-and-push-images/ --- -{% assign domain="dtr.example.org" %} +{% assign domain="dtr-example.com" %} {% assign org="library" %} {% assign repo="wordpress" %} {% assign tag="latest" %} @@ -25,11 +25,11 @@ from Docker Hub or any other registry. Since DTR is secure by default, you always need to authenticate before pulling images. In this example, DTR can be accessed at {{ domain }}, and the user -was granted permissions to access the NGINX, and Wordpress repositories. +was granted permissions to access the `nginx` and `wordpress` repositories in the `library` organization. ![](../../images/pull-push-images-1.png){: .with-border} -Click on the repository to see its details. +Click on the repository name to see its details. 
![](../../images/pull-push-images-2.png){: .with-border} @@ -70,7 +70,7 @@ docker login {{ domain }} docker push {{ domain }}/{{ org }}/{{ repo }}:{{ tag }} ``` -Go back to the **DTR web UI** to validate that the tag was successfully pushed. +On the web interface, navigate to the **Tags** tab on the repository page to confirm that the tag was successfully pushed. ![](../../images/pull-push-images-3.png){: .with-border} diff --git a/ee/dtr/user/manage-images/scan-images-for-vulnerabilities.md b/ee/dtr/user/manage-images/scan-images-for-vulnerabilities.md index 8d12d60f3e..70fa9253f8 100644 --- a/ee/dtr/user/manage-images/scan-images-for-vulnerabilities.md +++ b/ee/dtr/user/manage-images/scan-images-for-vulnerabilities.md @@ -8,7 +8,7 @@ keywords: registry, scan, vulnerability Docker Trusted Registry can scan images in your repositories to verify that they are free from known security vulnerabilities or exposures, using Docker Security -Scanning. The results of these scans are reported for each image tag. +Scanning. The results of these scans are reported for each image tag in a repository. Docker Security Scanning is available as an add-on to Docker Trusted Registry, and an administrator configures it for your DTR instance. If you do not see @@ -22,7 +22,7 @@ a new scan. ## The Docker Security Scan process -Scans run either on demand when a user clicks the **Start a Scan** links or +Scans run either on demand when you click the **Start a Scan** link or **Scan** button (see [Manual scanning](#manual-scanning) below), or automatically on any `docker push` to the repository. @@ -30,7 +30,7 @@ First the scanner performs a binary scan on each layer of the image, identifies the software components in each layer, and indexes the SHA of each component in a bill-of-materials. A binary scan evaluates the components on a bit-by-bit level, so vulnerable components are discovered even if they are -statically-linked or under a different name. +statically linked or under a different name. The scan then compares the SHA of each component against the US National Vulnerability Database that is installed on your DTR instance. When @@ -49,15 +49,15 @@ image repository. If your DTR instance is configured in this way, you do not need to do anything once your `docker push` completes. The scan runs automatically, and the results -are reported in the repository's **Images** tab after the scan finishes. +are reported in the repository's **Tags** tab after the scan finishes. ## Manual scanning If your repository owner enabled Docker Security Scanning but disabled automatic -scanning, you can manually start a scan for images in repositories to which you -have `write` access. +scanning, you can manually start a scan for images in repositories you +have `write` access to. -To start a security scan, navigate to the **tag details**, and click the **Scan** button. +To start a security scan, navigate to the repository **Tags** tab on the web interface, click "View details" next to the relevant tag, and click **Scan**. ![](../../images/scan-images-for-vulns-1.png){: .with-border} @@ -85,33 +85,33 @@ To change the repository scanning mode: Once DTR has run a security scan for an image, you can view the results. -The **Images** tab for each repository includes a summary of the most recent +The **Tags** tab for each repository includes a summary of the most recent scan results for each image. 
![](../../images/scan-images-for-vulns-3.png){: .with-border}

-- A green shield icon with a check mark indicates that the scan did not find
+- The text "Clean" in green indicates that the scan did not find
 any vulnerabilities.
-- A red or orange shield icon indicates that vulnerabilities were found, and
-the number of vulnerabilities is included on that same line.
+- Red or orange text indicates that vulnerabilities were found, and
+the number of vulnerabilities is included on that same line according to severity: ***Critical***, ***Major***, ***Minor***.

-If the vulnerability scan can't detect the version of a component, it reports
+If the vulnerability scan could not detect the version of a component, it reports
 the vulnerabilities for all versions of that component.

-From the **Images** tab you can click **View details** for a specific tag to see
+From the repository **Tags** tab, you can click **View details** for a specific tag to see
 the full scan results. The top of the page also includes metadata about the
-image, including the SHA, image size, date last pushed and user who last pushed,
+image, including the SHA, image size, last push date, user who initiated the push,
 the security scan summary, and the security scan progress.

 The scan results for each image include two different modes so you can quickly
 view details about the image, its components, and any vulnerabilities found.

-- The **Layers** view lists the layers of the image in order as they are built
-by the Dockerfile.
+- The **Layers** view lists the layers of the image in the order that they are built
+by the Dockerfile. This view can help you find exactly which command in the build
  introduced the vulnerabilities, and which components are associated with that
  single command. Click a layer to see a summary of its components. You can then
-  click on a component to switch to the Component view and get more details
+  click on a component to switch to the **Component** view and get more details
  about the specific item.

  > **Tip**: The layers view can be long, so be sure
@@ -120,8 +120,7 @@ by the Dockerfile.
 ![](../../images/scan-images-for-vulns-4.png){: .with-border}

 - The **Components** view lists the individual component libraries indexed by
-the scanning system, in order of severity and number of vulnerabilities found,
-most vulnerable first.
+the scanning system, in order of severity and number of vulnerabilities found,
+with the most vulnerable library listed first.

 Click on an individual component to view details about the vulnerability it
 introduces, including a short summary and a link to the official CVE

 vulnerability and decide what to do.

 If you discover vulnerable components, you should check if there is an updated
 version available where the security vulnerability has been addressed. If
-necessary, you might contact the component's maintainers to ensure that the
-vulnerability is being addressed in a future version or patch update.
+necessary, you can contact the component's maintainers to ensure that the
+vulnerability is being addressed in a future version or a patch update.

 If the vulnerability is in a `base layer` (such as an operating system) you
-might not be able to correct the issue in the image. In this case, you might
-switch to a different version of the base layer, or you might find an
-equivalent, less vulnerable base layer. You might also decide that the
-vulnerability or exposure is acceptable.
+might not be able to correct the issue in the image. In this case, you can
+switch to a different version of the base layer, or you can find an
+equivalent, less vulnerable base layer.

 Address vulnerabilities in your repositories by updating the images to use
 updated and corrected versions of vulnerable components, or by using a different
-components that provide the same functionality. When you have updated the source
+component offering the same functionality. When you have updated the source
 code, run a build to create a new image, tag the image, and push the updated
 image to your DTR instance. You can then re-scan the image to confirm that you
 have addressed the vulnerabilities.
diff --git a/ee/ucp/admin/configure/ucp-configuration-file.md b/ee/ucp/admin/configure/ucp-configuration-file.md
index da80635884..02f5bf16e1 100644
--- a/ee/ucp/admin/configure/ucp-configuration-file.md
+++ b/ee/ucp/admin/configure/ucp-configuration-file.md
@@ -176,7 +176,7 @@ components. Assigning these values overrides the settings in a container's
 | `profiling_enabled` | no | Set to `true` to enable specialized debugging endpoints for profiling UCP performance. The default is `false`. |
 | `kv_timeout` | no | Sets the key-value store timeout setting, in milliseconds. The default is `5000`. |
 | `kv_snapshot_count` | no | Sets the key-value store snapshot count setting. The default is `20000`. |
-| `external_service_lb` | no | Specifies an optional external load balancer for default links to services with exposed ports in the web interface. |
+| `external_service_lb` | no | Specifies an optional external load balancer for default links to services with exposed ports in the web interface. |
 | `cni_installer_url` | no | Specifies the URL of a Kubernetes YAML file to be used for installing a CNI plugin. Applies only during initial installation. If empty, the default CNI plugin is used. |
 | `metrics_retention_time` | no | Adjusts the metrics retention time. |
 | `metrics_scrape_interval` | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster. |
@@ -184,11 +184,14 @@ components. Assigning these values overrides the settings in a container's
 | `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 512MB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. |
 | `cloud_provider` | no | Set the cloud provider for the Kubernetes cluster. |
 | `pod_cidr` | yes | Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. Default is `192.168.0.0/16`. |
+| `calico_mtu` | no | Set the MTU (maximum transmission unit) size for the Calico plugin. |
+| `ipip_mtu` | no | Set the IPIP MTU size for the Calico IPIP tunnel interface. |
+| `azure_ip_count` | no | Set the number of IP addresses for the Azure allocator to allocate per Azure virtual machine. |
 | `nodeport_range` | yes | Set the port range in which Kubernetes services of type NodePort can be exposed. Default is `32768-35535`. |
 | `custom_kube_api_server_flags` | no | Set the configuration options for the Kubernetes API server. |
-| `custom_kube_controller_manager_flags` | no | Set the configuration options for the Kubernetes controller manager |
-| `custom_kubelet_flags` | no | Set the configuration options for Kubelets |
-| `custom_kube_scheduler_flags` | no | Set the configuration options for the Kubernetes scheduler |
+| `custom_kube_controller_manager_flags` | no | Set the configuration options for the Kubernetes controller manager. |
+| `custom_kubelet_flags` | no | Set the configuration options for Kubelets. |
+| `custom_kube_scheduler_flags` | no | Set the configuration options for the Kubernetes scheduler. |
 | `local_volume_collection_mapping` | no | Store data about collections for volumes in UCP's local KV store instead of on the volume labels. This is used for enforcing access control on volumes. |
 | `manager_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on manager nodes. |
 | `worker_kube_reserved_resources` | no | Reserve resources for Docker UCP and Kubernetes components which are running on worker nodes. |
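In the configuration file itself these options are expressed in TOML; a rough
sketch, assuming the keys live in the `[cluster_config]` table and using
placeholder values:

```toml
[cluster_config]
  calico_mtu = "1480"
  ipip_mtu = "1480"
  azure_ip_count = "128"
```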
diff --git a/ee/ucp/admin/install/index.md b/ee/ucp/admin/install/index.md
index a049f3533d..e754fea706 100644
--- a/ee/ucp/admin/install/index.md
+++ b/ee/ucp/admin/install/index.md
@@ -86,7 +86,7 @@ To install UCP:
    This runs the install command in interactive mode, so that you're prompted
    for any necessary configuration values. To find what other options are
    available in the install command, check the
-   [reference documentation](/reference/ucp/3.0/cli/install.md).
+   [reference documentation](/reference/ucp/3.1/cli/install.md).

    > Custom CNI plugins
    >
diff --git a/ee/ucp/admin/install/install-on-azure.md b/ee/ucp/admin/install/install-on-azure.md
index 568eda70e8..b57d2be624 100644
--- a/ee/ucp/admin/install/install-on-azure.md
+++ b/ee/ucp/admin/install/install-on-azure.md
@@ -4,37 +4,125 @@ description: Learn how to install Docker Universal Control Plane in a Microsoft
 keywords: Universal Control Plane, UCP, install, Docker EE, Azure, Kubernetes
 ---

+Docker UCP closely integrates with Microsoft Azure for its Kubernetes networking
+and persistent storage feature set. UCP deploys the Calico CNI provider. In Azure,
+the Calico CNI leverages the Azure networking infrastructure for data path
+networking and the Azure IPAM for IP address management. There are
+infrastructure prerequisites that must be met prior to UCP installation for the
+Calico / Azure integration.
+
+## Docker UCP Networking
+
 Docker UCP configures the Azure IPAM module for Kubernetes to allocate
-IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
-VM that's part of the Kubernetes cluster to be configured with a pool of
+IP addresses to Kubernetes pods. The Azure IPAM module requires each Azure
+virtual machine that's part of the Kubernetes cluster to be configured with a pool of
 IP addresses.

-You have two options for deploying the VMs for the Kubernetes cluster on Azure:
-- Install the cluster on Azure stand-alone virtual machines. Docker UCP provides
-  an [automated mechanism](#configure-ip-pools-for-azure-stand-alone-vms)
-  to configure and maintain IP pools for stand-alone Azure VMs.
-- Install the cluster on an Azure virtual machine scale set. Configure the
-  IP pools by using an ARM template like [this one](#set-up-ip-configurations-on-an-azure-virtual-machine-scale-set).
+There are two options for provisioning IPs for the Kubernetes cluster on Azure:
+- Docker UCP provides an automated mechanism to configure and maintain IP pools
+  for standalone Azure virtual machines. This service runs within the calico-node daemonset
+  and by default will provision 128 IP addresses for each node. This value can be
+  configured through the `azure_ip_count` in the UCP
+  [configuration file](../configure/ucp-configuration-file) before or after the
+  UCP installation.
Note that if this value is reduced post-installation, existing
+  virtual machines will not be reconciled, and you will have to manually edit the IP count
+  in Azure.
+- Manually provision additional IP addresses for each Azure virtual machine. This could be done
+  as part of an Azure virtual machine scale set through an ARM template. You can find an example
+  [here](#manually-provision-ip-addresses-as-part-of-an-azure-virtual-machine-scale-set).
+  Note that the `azure_ip_count` value in the UCP
+  [configuration file](../configure/ucp-configuration-file) will need to be set
+  to 0; otherwise, UCP's IP allocator service will provision IP addresses on top of
+  those you have already provisioned.

-The steps for setting up IP address management are different in the two
-environments. If you're using a scale set, you set up `ipConfigurations`
-in an ARM template. If you're using stand-alone VMs, you set up IP pools
-for each VM by using a utility container that's configured to run as a
-global Swarm service, which Docker provides.
+## Azure Prerequisites

-## Considerations for size of IP pools
+You must meet these infrastructure prerequisites in order
+to successfully deploy Docker UCP on Azure:
+
+- All UCP Nodes (Managers and Workers) need to be deployed into the same
+Azure Resource Group. The Azure networking components (VNets, subnets, security
+groups) can be deployed in a second Azure Resource Group.
+- All UCP Nodes (Managers and Workers) need to be attached to the same
+Azure subnet.
+- All UCP Nodes (Managers and Workers) need to be tagged in Azure with the
+`Orchestrator` tag. The value for this tag is the Kubernetes version number
+in the format `Orchestrator=Kubernetes:x.y.z`. This value may change in each
+UCP release. To find the relevant version, please see the UCP
+[Release Notes](../../release-notes). For example, for UCP 3.1.0 the tag
+would be `Orchestrator=Kubernetes:1.11.2`.
+- The Azure Computer Name needs to match the node operating system's hostname.
+Note this applies to the FQDN of the host, including domain names.
+- An Azure Service Principal with `Contributor` access to the Azure Resource
+Group hosting the UCP Nodes. Note, if using a separate networking Resource
+Group, the same Service Principal will need `Network Contributor` access to this
+Resource Group.
+
+UCP requires the following information for the installation:
+
+- `subscriptionId` - The Azure Subscription ID in which the UCP
+objects are being deployed.
+- `tenantId` - The Azure Active Directory Tenant ID in which the UCP
+objects are being deployed.
+- `aadClientId` - The Azure Service Principal ID.
+- `aadClientSecret` - The Azure Service Principal Secret Key.
+
+### Azure Configuration File
+
+For Docker UCP to integrate into Microsoft Azure, you need to place an Azure
+configuration file on each UCP node in your cluster, at
+`/etc/kubernetes/azure.json`.
+
+See the template below. Entries that do not contain `****` should not be
+changed.
+ +``` +{ + "cloud":"AzurePublicCloud", + "tenantId": "***", + "subscriptionId": "***", + "aadClientId": "***", + "aadClientSecret": "***", + "resourceGroup": "***", + "location": "****", + "subnetName": "/****", + "securityGroupName": "****", + "vnetName": "****", + "cloudProviderBackoff": false, + "cloudProviderBackoffRetries": 0, + "cloudProviderBackoffExponent": 0, + "cloudProviderBackoffDuration": 0, + "cloudProviderBackoffJitter": 0, + "cloudProviderRatelimit": false, + "cloudProviderRateLimitQPS": 0, + "cloudProviderRateLimitBucket": 0, + "useManagedIdentityExtension": false, + "useInstanceMetadata": true +} +``` + +There are some optional values for Azure deployments: + +- `"primaryAvailabilitySetName": "****",` - The Worker Nodes availability set. +- `"vnetResourceGroup": "****",` - If your Azure Network objects live in a +seperate resource group. +- `"routeTableName": "****",` - If you have defined multiple Route tables within +an Azure subnet. + +More details on this configuration file can be found +[here](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/azure/azure.go). + +## Considerations for IPAM Configuration The subnet and the virtual network associated with the primary interface of -the Azure VMs need to be configured with a large enough address prefix/range. +the Azure virtual machines need to be configured with a large enough address prefix/range. The number of required IP addresses depends on the number of pods running on each node and the number of nodes in the cluster. For example, in a cluster of 256 nodes, to run a maximum of 128 pods concurrently on a node, make sure that the address space of the subnet and the virtual network can allocate at least 128 * 256 IP addresses, _in addition to_ -initial IP allocations to VM NICs during Azure resource creation. +initial IP allocations to virtual machine NICs during Azure resource creation. -Accounting for IP addresses that are allocated to NICs during VM bring-up, set +Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set the address space of the subnet and virtual network to 10.0.0.0/16. This ensures that the network can dynamically allocate at least 32768 addresses, plus a buffer for initial allocations for primary IP addresses. @@ -49,98 +137,13 @@ plus a buffer for initial allocations for primary IP addresses. > requested. {: .important} -## Configure IP pools for Azure stand-alone VMs +## Manually provision IP address as part of an Azure virtual machine scale set -Follow these steps when the cluster is deployed using stand-alone Azure VMs. - -### Create an Azure resource group - -Create an Azure resource group with VMs representing the nodes of the cluster -by using the Azure Portal, CLI, or ARM template. - -### Configure multiple IP addresses per VM NIC - -Follow the steps below to configure multiple IP addresses per VM NIC. - -1. Create a Service Principal with “contributor” level access to the above - resource group you just created. You can do this by using the Azure Portal - or CLI. Also, you can also use a utility container from Docker to create a - Service Principal. If you have the Docker Engine installed, run the - `docker4x/create-sp-azure`. image. The output of `create-sp-azure` contains - the following fields near the end. - - ``` - AD App ID: <...> - AD App Secret: <...> - AD Tenant ID: <...> - ``` - - You'll use these field values in a later step, so make a note of them. - Also, make note of your Azure subscription ID. - -2. 
Initialize a swarm cluster comprising the virtual machines you created - earlier. On one of the nodes of the cluster, run: - - ```bash - docker swarm init - ``` - -3. Note the tokens for managers and workers. -4. Join two other nodes on the cluster as manager (recommended for HA) by running: - - ```bash - docker swarm join --token - ``` - -5. Join remaining nodes on the cluster as workers: - - ```bash - docker swarm join --token - ``` - -6. Create a file named "azure_ucp_admin.toml" that contains contents from - creating the Service Principal. - - ``` - AZURE_CLIENT_ID = "" - AZURE_TENANT_ID = "" - AZURE_SUBSCRIPTION_ID = "" - AZURE_CLIENT_SECRET = "" - ``` - -7. Create a Docker Swarm secret based on the "azure_ucp_admin.toml" file. - - ```bash - docker secret create azure_ucp_admin.toml azure_ucp_admin.toml - ``` - -8. Create a global swarm service using the [docker4x/az-nic-ips](https://hub.docker.com/r/docker4x/az-nic-ips/) - image on Docker Hub. Use the Swarm secret to prepopulate the virtual machines - with the desired number of IP addresses per VM from the VNET pool. Set the - number of IPs to allocate to each VM through the IPCOUNT environment variable. - For example, to configure 128 IP addresses per VM, run the following command: - - ```bash - docker service create \ - --mode=global \ - --secret=azure_ucp_admin.toml \ - --log-driver json-file \ - --log-opt max-size=1m \ - --env IP_COUNT=128 \ - --name ipallocator \ - --constraint "node.platform.os == linux" \ - docker4x/az-nic-ips - ``` - -[Install UCP on the cluster](#install-ucp-on-the-cluster). - -## Set up IP configurations on an Azure virtual machine scale set - -Configure IP Pools for each member of the VM scale set during provisioning by +Configure IP Pools for each member of the virtual machine scale set during provisioning by associating multiple `ipConfigurations` with the scale set’s `networkInterfaceConfigurations`. Here's an example `networkProfile` configuration for an ARM template that configures pools of 32 IP addresses -for each VM in the VM scale set. +for each virtual machine in the virtual machine scale set. ```json "networkProfile": { @@ -195,7 +198,7 @@ for each VM in the VM scale set. } ``` -## Install UCP on the cluster +## Install UCP Use the following command to install UCP on the manager node. The `--pod-cidr` option maps to the IP address range that you configured for @@ -213,3 +216,7 @@ docker container run --rm -it \ --pod-cidr \ --cloud-provider Azure ``` + +#### Additional Notes + +- The Kubernetes `pod-cidr` must match the address space of the hosts' Azure Vnet. diff --git a/ee/ucp/admin/install/upgrade.md b/ee/ucp/admin/install/upgrade.md index 61800c0be0..512276911e 100644 --- a/ee/ucp/admin/install/upgrade.md +++ b/ee/ucp/admin/install/upgrade.md @@ -29,6 +29,9 @@ Learn about [UCP system requirements](system-requirements.md). Ensure that your cluster nodes meet the minimum requirements for port openings. [Ports used](system-requirements.md/#ports-used) are documented in the UCP system requirements. +> Note: If you are upgrading a cluster to UCP 3.0.2 or higher on Microsoft +> Azure, ensure that all of the Azure [prerequisites](install-on-azure.md/#azure-prerequisites) +> are met.
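One quick pre-upgrade spot check for the Azure case is to confirm the `Orchestrator` tag on each node's virtual machine. A minimal sketch, assuming the Azure CLI (`az`) is installed; the resource group and VM names are placeholders:

```bash
# Expect a tag such as "Orchestrator": "Kubernetes:1.11.2"
# (the exact version depends on the UCP release).
az vm show \
  --resource-group <resource-group> \
  --name <vm-name> \
  --query tags -o json
```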
## Back up your cluster diff --git a/ee/ucp/user-access/kubectl.md b/ee/ucp/user-access/kubectl.md index f7d73a825f..eeb4171706 100644 --- a/ee/ucp/user-access/kubectl.md +++ b/ee/ucp/user-access/kubectl.md @@ -29,7 +29,7 @@ To use kubectl, install the binary on a workstation which has access to your UCP {: .important} First, find which version of Kubernetes is running in your cluster. This can be found -within the Universal Control Plane dashboard or at the UCP API endpoint [version](/reference/ucp/3.0/api/). +within the Universal Control Plane dashboard or at the UCP API endpoint [version](/reference/ucp/3.1/api/). From the UCP dashboard, click on **About Docker EE** within the **Admin** menu in the top left corner of the dashboard. Then navigate to **Kubernetes**. diff --git a/engine/security/https.md b/engine/security/https.md index 5e23fb18d9..18376b4a93 100644 --- a/engine/security/https.md +++ b/engine/security/https.md @@ -7,10 +7,10 @@ redirect_from: title: Protect the Docker daemon socket --- -By default, Docker runs via a non-networked Unix socket. It can also +By default, Docker runs through a non-networked UNIX socket. It can also optionally communicate using an HTTP socket. -If you need Docker to be reachable via the network in a safe manner, you can +If you need Docker to be reachable through the network in a safe manner, you can enable TLS by specifying the `tlsverify` flag and pointing Docker's `tlscacert` flag to a trusted CA certificate. @@ -73,7 +73,7 @@ to connect to Docker: Next, we're going to sign the public key with our CA: -Since TLS connections can be made via IP address as well as DNS name, the IP addresses +Since TLS connections can be made through IP address as well as DNS name, the IP addresses need to be specified when creating the certificate. For example, to allow connections using `10.10.10.20` and `127.0.0.1`: @@ -113,24 +113,24 @@ request: $ openssl req -subj '/CN=client' -new -key key.pem -out client.csr -To make the key suitable for client authentication, create an extensions +To make the key suitable for client authentication, create a new extensions config file: - $ echo extendedKeyUsage = clientAuth >> extfile.cnf + $ echo extendedKeyUsage = clientAuth > extfile-client.cnf Now, generate the signed certificate: $ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \ - -CAcreateserial -out cert.pem -extfile extfile.cnf + -CAcreateserial -out cert.pem -extfile extfile-client.cnf Signature ok subject=/CN=client Getting CA Private Key Enter pass phrase for ca-key.pem: After generating `cert.pem` and `server-cert.pem` you can safely remove the -two certificate signing requests: +two certificate signing requests and extensions config files: - $ rm -v client.csr server.csr + $ rm -v client.csr server.csr extfile.cnf extfile-client.cnf With a default `umask` of 022, your secret keys are *world-readable* and writable for you and your group. @@ -180,7 +180,7 @@ certificates and trusted CA: ## Secure by default If you want to secure your Docker client connections by default, you can move -the files to the `.docker` directory in your home directory -- and set the +the files to the `.docker` directory in your home directory --- and set the `DOCKER_HOST` and `DOCKER_TLS_VERIFY` variables as well (instead of passing `-H=tcp://$HOST:2376` and `--tlsverify` on every call). 
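A minimal sketch of that setup, assuming the `ca.pem`, `cert.pem`, and `key.pem` generated earlier and a `$HOST` variable holding the daemon's hostname:

```bash
$ mkdir -pv ~/.docker
$ cp -v {ca,cert,key}.pem ~/.docker

$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1

# Subsequent client commands now connect over TLS without extra flags:
$ docker ps
```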
diff --git a/engine/swarm/configs.md b/engine/swarm/configs.md index f78c1c4616..79adf7516b 100644 --- a/engine/swarm/configs.md +++ b/engine/swarm/configs.md @@ -586,7 +586,7 @@ configuration file. ``` 4. Verify that the `nginx` service is fully re-deployed, using - `docker service ls nginx`. When it is, you can remove the old `site.conf` + `docker service ps nginx`. When it is, you can remove the old `site.conf` config. ```bash diff --git a/get-started/part3.md b/get-started/part3.md index 9a1436f571..95c5132bbd 100644 --- a/get-started/part3.md +++ b/get-started/part3.md @@ -84,7 +84,7 @@ services: restart_policy: condition: on-failure ports: - - "4000:80" + - "80:80" networks: - webnet networks: diff --git a/install/index.md b/install/index.md index d9be2836fc..72cea8fa51 100644 --- a/install/index.md +++ b/install/index.md @@ -152,8 +152,8 @@ to choose the best installation path for you. | Platform | x86_64 | |:----------------------------------------------------------------------------|:-----------------:| -| [Docker for Mac (macOS)](/docker-for-mac/install.md) | {{ green-check }} | -| [Docker for Windows (Microsoft Windows 10)](/docker-for-windows/install.md) | {{ green-check }} | +| [Docker for Mac (macOS)](/docker-for-mac/install/) | {{ green-check }} | +| [Docker for Windows (Microsoft Windows 10)](/docker-for-windows/install/) | {{ green-check }} | #### Server @@ -162,10 +162,10 @@ to choose the best installation path for you. | Platform | x86_64 / amd64 | ARM | ARM64 / AARCH64 | IBM Power (ppc64le) | IBM Z (s390x) | |:--------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------|:-------------------------------------------------------| -| [CentOS]({{ install-prefix-ce }}/centos.md) | [{{ green-check }}]({{ install-prefix-ce }}/centos.md) | | [{{ green-check }}]({{ install-prefix-ce }}/centos.md) | | | -| [Debian]({{ install-prefix-ce }}/debian.md) | [{{ green-check }}]({{ install-prefix-ce }}/debian.md) | [{{ green-check }}]({{ install-prefix-ce }}/debian.md) | [{{ green-check }}]({{ install-prefix-ce }}/debian.md) | | | -| [Fedora]({{ install-prefix-ce }}/fedora.md) | [{{ green-check }}]({{ install-prefix-ce }}/fedora.md) | | | | | -| [Ubuntu]({{ install-prefix-ce }}/ubuntu.md) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu.md) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu.md) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu.md) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu.md) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu.md) | +| [CentOS]({{ install-prefix-ce }}/centos/) | [{{ green-check }}]({{ install-prefix-ce }}/centos/) | | [{{ green-check }}]({{ install-prefix-ce }}/centos/) | | | +| [Debian]({{ install-prefix-ce }}/debian/) | [{{ green-check }}]({{ install-prefix-ce }}/debian/) | [{{ green-check }}]({{ install-prefix-ce }}/debian/) | [{{ green-check }}]({{ install-prefix-ce }}/debian/) | | | +| [Fedora]({{ install-prefix-ce }}/fedora/) | [{{ green-check }}]({{ install-prefix-ce }}/fedora/) | | [{{ green-check }}]({{ install-prefix-ce }}/fedora/) | | | +| [Ubuntu]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check 
}}]({{ install-prefix-ce }}/ubuntu/) | ### Backporting diff --git a/install/windows/docker-ee.md b/install/windows/docker-ee.md index df05f577d8..5eec795c98 100644 --- a/install/windows/docker-ee.md +++ b/install/windows/docker-ee.md @@ -198,6 +198,37 @@ Install-Package -Name docker -ProviderName DockerMsftProvider -RequiredVersion 1 ``` The required version must match any of the versions available in this json file: https://dockermsft.blob.core.windows.net/dockercontainer/DockerMsftIndex.json +## Uninstall Docker EE + + Use the following commands to completely remove the Docker Engine - Enterprise from a Windows Server: + +1. Leave any active Docker Swarm + ```PowerShell + docker swarm leave --force + ``` + +1. Remove all running and stopped containers + + ```PowerShell + docker rm -f $(docker ps --all --quiet) + ``` + +1. Prune container data + ```PowerShell + docker system prune --all --volumes + ``` + +1. Uninstall Docker PowerShell Package and Module + ```PowerShell + Uninstall-Package -Name docker -ProviderName DockerMsftProvider + Uninstall-Module -Name DockerMsftProvider + ``` + +1. Clean up Windows Networking and file system + ```PowerShell + Get-HNSNetwork | Remove-HNSNetwork + Remove-Item -Path "C:\ProgramData\Docker" -Recurse -Force + ``` ## Preparing a Docker EE Engine for use with UCP diff --git a/network/network-tutorial-macvlan.md b/network/network-tutorial-macvlan.md index 79280842bc..31000e99b7 100644 --- a/network/network-tutorial-macvlan.md +++ b/network/network-tutorial-macvlan.md @@ -47,7 +47,7 @@ on your network, your container appears to be physically attached to the network my-macvlan-net ``` - You can use `docker network ls` and `docker network inspect pub_net` + You can use `docker network ls` and `docker network inspect my-macvlan-net` commands to verify that the network exists and is a `macvlan` network. 2. Start an `alpine` container and attach it to the `my-macvlan-net` network. The @@ -138,7 +138,7 @@ be physically attached to the network. my-8021q-macvlan-net ``` - You can use `docker network ls` and `docker network inspect pub_net` + You can use `docker network ls` and `docker network inspect my-8021q-macvlan-net` commands to verify that the network exists, is a `macvlan` network, and has parent `eth0.10`. You can use `ip addr show` on the Docker host to verify that the interface `eth0.10` exists and has a separate IP address diff --git a/reference/dtr/2.6/cli/emergency-repair.md b/reference/dtr/2.6/cli/emergency-repair.md index be8b936349..601beee374 100644 --- a/reference/dtr/2.6/cli/emergency-repair.md +++ b/reference/dtr/2.6/cli/emergency-repair.md @@ -45,7 +45,7 @@ DTR replicas for high availability. | `--existing-replica-id` | $DTR_REPLICA_ID | The ID of an existing DTR replica. To add, remove or modify DTR, you must connect to an existing healthy replica's database. | | `--help-extended` | $DTR_EXTENDED_HELP | Display extended help text for a given command. | | `--overlay-subnet` | $DTR_OVERLAY_SUBNET | The subnet used by the dtr-ol overlay network. Example: `10.0.0.0/24`. For high-availability, DTR creates an overlay network between UCP nodes. This flag allows you to choose the subnet for that network. Make sure the subnet you choose is not used on any machine where DTR replicas are deployed. | -| `--prune` | $PRUNE | Delete the data volumes of all unhealthy replicas. With this option, the volume of the DTR replica you`re restoring is preserved but the volumes for all other replicas are deleted. 
This has the same result as completely uninstalling DTR from those replicas. | +| `--prune` | $PRUNE | Delete the data volumes of all unhealthy replicas. With this option, the volume of the DTR replica you're restoring is preserved but the volumes for all other replicas are deleted. This has the same result as completely uninstalling DTR from those replicas. | | `--ucp-ca` | $UCP_CA | Use a PEM-encoded TLS CA certificate for UCP. Download the UCP TLS CA certificate from https:///ca, and use `--ucp-ca "$(cat ca.pem)"`. | | `--ucp-insecure-tls` | $UCP_INSECURE_TLS | Disable TLS verification for UCP. The installation uses TLS but always trusts the TLS certificate used by UCP, which can lead to MITM (man-in-the-middle) attacks. For production deployments, use `--ucp-ca "$(cat ca.pem)"` instead. | | `--ucp-password` | $UCP_PASSWORD | The UCP administrator password. | diff --git a/reference/dtr/2.6/cli/join.md b/reference/dtr/2.6/cli/join.md index 80373de3e2..d518a58877 100644 --- a/reference/dtr/2.6/cli/join.md +++ b/reference/dtr/2.6/cli/join.md @@ -4,13 +4,19 @@ description: Add a new replica to an existing DTR cluster keywords: dtr, cli, join --- -Add a new replica to an existing DTR cluster +Add a new replica to an existing DTR cluster. Use SSH to log into any node that is already part of UCP. +## Usage +```bash +docker run -it --rm \ + docker/dtr:2.6.0 join \ + --ucp-node \ + --ucp-insecure-tls +``` ## Description - This command creates a replica of an existing DTR on a node managed by Docker Universal Control Plane (UCP). diff --git a/storage/storagedriver/select-storage-driver.md b/storage/storagedriver/select-storage-driver.md index f6655101be..4df14fcc71 100644 --- a/storage/storagedriver/select-storage-driver.md +++ b/storage/storagedriver/select-storage-driver.md @@ -152,14 +152,14 @@ With regard to Docker, the backing filesystem is the filesystem where `/var/lib/docker/` is located. Some storage drivers only work with specific backing filesystems. -| Storage driver | Supported backing filesystems | +| Storage driver | Supported backing filesystems | |:----------------------|:------------------------------| -| `overlay2`, `overlay` | `xfs` with fstype=1, `ext4` | +| `overlay2`, `overlay` | `xfs` with ftype=1, `ext4` | | `aufs` | `xfs`, `ext4` | | `devicemapper` | `direct-lvm` | | `btrfs` | `btrfs` | | `zfs` | `zfs` | -| `vfs` | any filesystem | +| `vfs` | any filesystem | ## Other considerations
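The `ftype=1` requirement above is worth verifying before choosing `overlay2` or `overlay` on `xfs`. A minimal check, assuming `xfs_info` (from `xfsprogs`) is available and `/var/lib/docker` lives on the `xfs` filesystem in question:

```bash
# Look for "ftype=1" in the naming section of the output. If /var/lib/docker
# is not itself a mount point, run xfs_info against the mount that contains it.
$ xfs_info /var/lib/docker

# Docker reports the same capability as "Supports d_type: true":
$ docker info
```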