Merge branch 'master' into newsite-tabs
README.md
@ -136,23 +136,23 @@ You have three options:
   docker-compose down
   ```

2. Use Jekyll directly.

   a. Clone this repo by running:

      ```bash
      git clone https://github.com/docker/docker.github.io.git
      ```

   b. Install Ruby 2.3 or later as described in
      [Installing Ruby](https://www.ruby-lang.org/en/documentation/installation/).

   c. Install Bundler:

      ```bash
      gem install bundler
      ```

   d. If you use Ubuntu, install packages required for the Nokogiri HTML
      parser:

@ -165,7 +165,7 @@ You have three options:

      ```bash
      bundle install
      ```

      >**Note**: You may have to install some packages manually.

   f. Change the directory to `docker.github.io`.

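The remaining steps fall outside this diff hunk. Assuming the standard Jekyll workflow this README describes, finishing up would look roughly like the following (a sketch, not the authoritative instructions):

```bash
cd docker.github.io
bundle exec jekyll serve
```

By default, Jekyll serves the site at `http://localhost:4000/`.
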
@ -203,12 +203,43 @@ guidance about grammar, syntax, formatting, styling, language, or tone. If
something isn't clear in the guide, please submit an issue to let us know or
submit a pull request to help us improve it.

### Generate the man pages
### Per-page front-matter

For information on generating man pages (short for manual page), see the README.md
document in [the man page directory](https://github.com/docker/docker/tree/master/man)
in this project.

The front-matter of a given page is in a section at the top of the Markdown
file that starts and ends with three hyphens. It includes YAML content. The
following keys are supported. The title, description, and keywords are required.

| Key                     | Required  | Description                              |
|-------------------------|-----------|------------------------------------------|
| title                   | yes       | The page title. This is added to the HTML output as a `<h1>` level header. |
| description             | yes       | A sentence that describes the page contents. This is added to the HTML metadata. |
| keywords                | yes       | A comma-separated list of keywords. These are added to the HTML metadata. |
| redirect_from           | no        | A YAML list of pages which should redirect to THIS page. At build time, each page listed here is created as an HTML stub containing a 302 redirect to this page. |
| notoc                   | no        | Either `true` or `false`. If `true`, no in-page TOC is generated for the HTML output of this page. Defaults to `false`. Appropriate for some landing pages that have no in-page headings. |
| toc_min                 | no        | Ignored if `notoc` is set to `true`. The minimum heading level included in the in-page TOC. Defaults to `2`, to show `<h2>` headings as the minimum. |
| toc_max                 | no        | Ignored if `notoc` is set to `true`. The maximum heading level included in the in-page TOC. Defaults to `3`, to show `<h3>` headings. Set to the same as `toc_min` to only show `toc_min` level of headings. |
| tree                    | no        | Either `true` or `false`. Set to `false` to disable the left-hand site-wide navigation for this page. Appropriate for some pages like the search page or the 404 page. |
| no_ratings              | no        | Either `true` or `false`. Set to `true` to disable the page-ratings applet for this page. Defaults to `false`. |

The following is an example of valid (but contrived) page metadata. The order of
the metadata elements in the front-matter is not important.

```liquid
---
description: Instructions for installing Docker on Ubuntu
keywords: requirements, apt, installation, ubuntu, install, uninstall, upgrade, update
redirect_from:
- /engine/installation/ubuntulinux/
- /installation/ubuntulinux/
- /engine/installation/linux/ubuntulinux/
title: Get Docker for Ubuntu
toc_min: 1
toc_max: 6
tree: false
no_ratings: true
---
```

## Copyright and license

Code and documentation copyright 2016 Docker, Inc., released under the Apache 2.0 license.
Code and documentation copyright 2017 Docker, Inc., released under the Apache 2.0 license.

@ -16,6 +16,7 @@ keep_files: ["v1.4", "v1.5", "v1.6", "v1.7", "v1.8", "v1.9", "v1.10", "v1.11", "
gems:
  - jekyll-redirect-from
  - jekyll-seo-tag
  - jekyll-relative-links

webrick:
  headers:

@ -98,14 +99,14 @@ defaults:
      path: "datacenter"
      values:
        ucp_latest_image: "docker/ucp:2.1.0"
        dtr_latest_image: "docker/dtr:2.2.0"
        dtr_latest_image: "docker/dtr:2.2.2"
  -
    scope:
      path: "datacenter/dtr/2.2"
      values:
        ucp_version: "2.1"
        dtr_version: "2.2"
        docker_image: "docker/dtr:2.2.1"
        docker_image: "docker/dtr:2.2.2"
  -
    scope:
      path: "datacenter/dtr/2.1"

@ -134,6 +135,7 @@ defaults:
        hide_from_sitemap: true
        ucp_version: "2.0"
        dtr_version: "2.1"
        docker_image: "docker/ucp:2.0.3"
  -
    scope:
      path: "datacenter/ucp/1.1"

@ -8,6 +8,8 @@
tar-files:
  - description: "UCP 2.1.0"
    url: https://packages.docker.com/caas/ucp_images_2.1.0.tar.gz
  - description: "DTR 2.2.2"
    url: https://packages.docker.com/caas/dtr-2.2.2.tar.gz
  - description: "DTR 2.2.1"
    url: https://packages.docker.com/caas/dtr-2.2.1.tar.gz
  - description: "DTR 2.2.0"

@ -331,6 +331,7 @@ examples: |-
    ```bash
    $ docker build -t whenry/fedora-jboss:latest -t whenry/fedora-jboss:v2.1 .
    ```

    ### Specify a Dockerfile (-f)

    ```bash

@ -104,12 +104,12 @@ examples: |-

    ```bash
    {% raw %}
    $ docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
    $ docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

    CONTAINER           CPU %               PRIV WORKING SET
    1285939c1fd3        0.07%               796 KiB / 64 MiB
    9c76f7834ae2        0.07%               2.746 MiB / 64 MiB
    d1ea048f04e4        0.03%               4.583 MiB / 64 MiB
    NAME                CPU %               PRIV WORKING SET
    fervent_panini      0.07%               796 KiB / 64 MiB
    ecstatic_kilby      0.07%               2.746 MiB / 64 MiB
    quizzical_nobel     0.03%               4.583 MiB / 64 MiB
    {% endraw %}
    ```

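The same placeholder mechanism works for any combination of columns. For instance (an illustrative sketch, not part of this change; `{{.Name}}` and `{{.MemPerc}}` are standard `docker stats --format` placeholders):

    ```bash
    {% raw %}
    $ docker stats --format "{{.Name}}: {{.MemPerc}}"
    {% endraw %}
    ```
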
@ -60,6 +60,8 @@ guides:
      title: Upgrading
    - path: /docker-for-aws/deploy/
      title: Deploy your app
    - path: /docker-for-aws/persistent-data-volumes/
      title: Persistent data volumes
    - path: /docker-for-aws/faqs/
      title: FAQs
    - path: /docker-for-aws/opensource/

@ -76,6 +78,8 @@ guides:
      title: Upgrading
    - path: /docker-for-azure/deploy/
      title: Deploy your app
    - path: /docker-for-azure/persistent-data-volumes/
      title: Persistent data volumes
    - path: /docker-for-azure/faqs/
      title: FAQs
    - path: /docker-for-azure/opensource/

@ -144,6 +148,8 @@ guides:
      title: Try out the voting app
    - path: /engine/getstarted-voting-app/customize-app/
      title: Customize the app and redeploy
    - path: /engine/getstarted-voting-app/cleanup/
      title: Graceful shutdown, reboot, and clean-up
- sectiontitle: Learn by example
  section:
  - path: /engine/tutorials/networkingcontainers/

@ -875,8 +881,8 @@ manuals:
      title: Release notes
- sectiontitle: 1.12
  section:
  - path: /cs-engine/1.12/install/
    title: Install CS Docker Engine
  - path: /cs-engine/1.12/
    title: Install
  - path: /cs-engine/1.12/upgrade/
    title: Upgrade
  - sectiontitle: Release notes

@ -1085,18 +1091,18 @@ manuals:
      title: unpause
    - path: /compose/reference/up/
      title: up
- sectiontitle: Compose File Reference
  section:
  - path: /compose/compose-file/
    title: Version 3
  - path: /compose/compose-file/compose-file-v2/
    title: Version 2
  - path: /compose/compose-file/compose-file-v1/
    title: Version 1
  - path: /compose/compose-file/compose-versioning/
    title: About versions and upgrading
- path: /compose/faq/
  title: Frequently Asked Questions
- path: /compose/bundles/
  title: Docker Stacks and Distributed Application Bundles
- path: /compose/swarm/

@ -1163,6 +1169,8 @@ manuals:
      title: Use a load balancer
    - path: /datacenter/ucp/2.1/guides/admin/configure/add-labels-to-cluster-nodes/
      title: Add labels to cluster nodes
    - path: /datacenter/ucp/2.1/guides/admin/configure/add-sans-to-cluster/
      title: Add SANs to cluster certificates
    - path: /datacenter/ucp/2.1/guides/admin/configure/store-logs-in-an-external-system/
      title: Store logs in an external system
    - path: /datacenter/ucp/2.1/guides/admin/configure/restrict-services-to-worker-nodes/

@ -1191,6 +1199,8 @@ manuals:
  section:
  - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/
    title: Monitor the cluster status
  - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-node-messages/
    title: Troubleshoot node messages
  - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs/
    title: Troubleshoot with logs
  - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations/

@ -1297,6 +1307,8 @@ manuals:
      title: Set up vulnerability scans
    - path: /datacenter/dtr/2.2/guides/admin/configure/deploy-a-cache/
      title: Deploy a cache
    - path: /datacenter/dtr/2.2/guides/admin/configure/garbage-collection/
      title: Garbage collection
    - sectiontitle: Manage users
      section:
      - path: /datacenter/dtr/2.2/guides/admin/manage-users/

@ -1,19 +1,25 @@
{% capture aws_button_latest %}
<a href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker.tmpl" data-rel="Stable-1" target="blank" class="aws-deploy"></a>
<a href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker.tmpl" data-rel="Stable-2" target="blank" class="aws-deploy"></a>
{% endcapture %}
{% capture aws_blue_latest %}
<a class="button primary-btn aws-deploy" href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker.tmpl" data-rel="Stable-1" target="blank">Deploy Docker for AWS (stable)</a>
<a class="button darkblue-btn aws-deploy" href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker.tmpl" data-rel="Stable-2" target="blank">Deploy Docker for AWS (stable)</a>
{% endcapture %}
{% capture aws_blue_beta %}
<a class="button primary-btn aws-deploy" href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/edge/Docker.tmpl" data-rel="Beta-14" target="blank">Deploy Docker for AWS (beta)</a>
{% capture aws_blue_edge %}
<a class="button darkblue-btn aws-deploy" href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/edge/Docker.tmpl" data-rel="Beta-18" target="blank">Deploy Docker for AWS (beta)</a>
{% endcapture %}
{% capture aws_blue_vpc_latest %}
<a class="button darkblue-btn aws-deploy" href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/stable/Docker-no-vpc.tmpl" data-rel="Stable-1" target="blank">Deploy Docker for AWS (stable)<br/><small>uses your existing VPC</small></a>
{% endcapture %}
{% capture aws_blue_vpc_edge %}
<a class="button darkblue-btn aws-deploy" href="https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=Docker&templateURL=https://editions-us-east-1.s3.amazonaws.com/aws/edge/Docker-no-vpc.tmpl" data-rel="Beta-18" target="blank">Deploy Docker for AWS (beta)<br/><small>uses your existing VPC</small></a>
{% endcapture %}

{% capture azure_blue_latest %}
<a class="button darkblue-btn azure-deploy" href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fstable%2FDocker.tmpl" data-rel="Stable-1" target="blank">Deploy Docker for Azure (stable)</a>
<a class="button darkblue-btn azure-deploy" href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fstable%2FDocker.tmpl" data-rel="Stable-2" target="blank">Deploy Docker for Azure (stable)</a>
{% endcapture %}
{% capture azure_blue_beta %}
<a class="button darkblue-btn azure-deploy" href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fedge%2FDocker.tmpl" data-rel="Beta-14" target="blank">Deploy Docker for Azure (beta)</a>
{% capture azure_blue_edge %}
<a class="button darkblue-btn azure-deploy" href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fedge%2FDocker.tmpl" data-rel="Beta-18" target="blank">Deploy Docker for Azure (beta)</a>
{% endcapture %}
{% capture azure_button_latest %}
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fstable%2FDocker.tmpl" data-rel="Stable-1" target="_blank" class="azure-deploy"></a>
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fdownload.docker.com%2Fazure%2Fstable%2FDocker.tmpl" data-rel="Stable-2" target="_blank" class="azure-deploy"></a>
{% endcapture %}

@ -45,7 +45,7 @@
{% assign html_id = _idWorkspace[1] %}

{% capture _hAttrToStrip %}{{ headerLevel }} id="{{ html_id }}">{% endcapture %}
{% assign header = _workspace[0] | replace: _hAttrToStrip, '' %}
{% assign header = _workspace[0] | replace: _hAttrToStrip, '' | remove_first: "1>" %}

{% assign space = '' %}
{% for i in (1..indentAmount) %}

@ -4,6 +4,8 @@ keywords: fig, composition, compose version 1, docker
redirect_from:
- /compose/yml
title: Compose file version 1 reference
toc_max: 4
toc_min: 1
---

These topics describe version 1 of the Compose file format. This is the oldest

@ -473,7 +475,7 @@ is specified, then read-write will be used.
    - service_name
    - service_name:ro

### cpu\_shares, cpu\_quota, cpuset, domainname, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, oom_score_adj, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir
### cpu\_shares, cpu\_quota, cpuset, domainname, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir

Each of these is a single value, analogous to its
[docker run](/engine/reference/run.md) counterpart.

@ -4,6 +4,8 @@ keywords: fig, composition, compose version 3, docker
redirect_from:
- /compose/yml
title: Compose file version 2 reference
toc_max: 4
toc_min: 1
---

These topics describe version 2 of the Compose file format.

@ -911,9 +913,6 @@ Each of these is a single value, analogous to its
    stdin_open: true
    tty: true

> **Note:** The following options are only available for
> [version 2](compose-versioning.md#version-2) and up:
> `oom_score_adj`

## Specifying durations

@ -227,6 +227,8 @@ several options have been removed:
  `deploy`. Note that `deploy` configuration only takes effect when using
  `docker stack deploy`, and is ignored by `docker-compose`.

- `extends`: This option has been removed for `version: "3.x"` Compose files.

### Version 1 to 2.x

In the majority of cases, moving from version 1 to 2 is a very simple process:

@ -5,6 +5,8 @@ redirect_from:
- /compose/yml
- /compose/compose-file-v3.md
title: Compose file version 3 reference
toc_max: 4
toc_min: 1
---

These topics describe version 3 of the Compose file format. This is the newest

@ -34,8 +36,8 @@ As with `docker run`, options specified in the Dockerfile (e.g., `CMD`,
specify them again in `docker-compose.yml`.

You can use environment variables in configuration values with a Bash-like
`${VARIABLE}` syntax - see [variable
substitution](compose-file.md#variable-substitution) for full details.
`${VARIABLE}` syntax - see
[variable substitution](#variable-substitution) for full details.

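As a quick illustration of the substitution syntax (a sketch, not part of this change; `POSTGRES_VERSION` is an assumed variable name):

```none
version: "3"
services:
  db:
    image: "postgres:${POSTGRES_VERSION}"
```

If `POSTGRES_VERSION` is not set in the shell environment, the value resolves to an empty string.
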
This section contains a list of all configuration options supported by a service
definition in version 3.

@ -45,8 +47,8 @@ definition in version 3.
Configuration options that are applied at build time.

`build` can be specified either as a string containing a path to the build
context, or an object with the path specified under [context](compose-file.md#context) and
optionally [dockerfile](compose-file.md#dockerfile) and [args](compose-file.md#args).
context, or an object with the path specified under [context](#context) and
optionally [dockerfile](#dockerfile) and [args](#args).

    build: ./dir

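For reference, the object form would look roughly like this (a sketch built from the `context`, `dockerfile`, and `args` keys named above; the alternate file name and build arg are assumptions):

```none
build:
  context: ./dir
  dockerfile: Dockerfile-alternate
  args:
    buildno: 1
```
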
@ -263,15 +265,15 @@ resources:
#### restart_policy

Configures if and how to restart containers when they exit. Replaces
[`restart`](compose-file.md#restart).
[`restart`](compose-file-v2.md#cpushares-cpuquota-cpuset-domainname-hostname-ipc-macaddress-memlimit-memswaplimit-oomscoreadj-privileged-readonly-restart-shmsize-stdinopen-tty-user-workingdir).

- `condition`: One of `none`, `on-failure` or `any` (default: `any`).
- `delay`: How long to wait between restart attempts, specified as a
  [duration](compose-file.md#specifying-durations) (default: 0).
  [duration](#specifying-durations) (default: 0).
- `max_attempts`: How many times to attempt to restart a container before giving
  up (default: never give up).
- `window`: How long to wait before deciding if a restart has succeeded,
  specified as a [duration](compose-file.md#specifying-durations) (default:
  specified as a [duration](#specifying-durations) (default:
  decide immediately).

```

@ -459,9 +461,9 @@ beginning with `#` (i.e. comments) are ignored, as are blank lines.
    # Set Rails/Rack environment
    RACK_ENV=development

> **Note:** If your service specifies a [build](compose-file.md#build) option, variables
> **Note:** If your service specifies a [build](#build) option, variables
> defined in environment files will _not_ be automatically visible during the
> build. Use the [args](compose-file.md#args) sub-option of `build` to define build-time
> build. Use the [args](#args) sub-option of `build` to define build-time
> environment variables.

The value of `VAL` is used as is and not modified at all. For example if the value is

@ -487,9 +489,9 @@ machine Compose is running on, which can be helpful for secret or host-specific
    - SHOW=true
    - SESSION_SECRET

> **Note:** If your service specifies a [build](compose-file.md#build) option, variables
> **Note:** If your service specifies a [build](#build) option, variables
> defined in `environment` will _not_ be automatically visible during the
> build. Use the [args](compose-file.md#args) sub-option of `build` to define build-time
> build. Use the [args](#args) sub-option of `build` to define build-time
> environment variables.

### expose

@ -501,40 +503,6 @@ accessible to linked services. Only the internal port can be specified.
    - "3000"
    - "8000"

### extends

Extend another service, in the current file or another, optionally overriding
configuration.

You can use `extends` on any service together with other configuration keys.
The `extends` value must be a dictionary defined with a required `service`
and an optional `file` key.

    extends:
      file: common.yml
      service: webapp

The `service` is the name of the service being extended, for example
`web` or `database`. The `file` is the location of a Compose configuration
file defining that service.

If you omit the `file`, Compose looks for the service configuration in the
current file. The `file` value can be an absolute or relative path. If you
specify a relative path, Compose treats it as relative to the location of the
current file.

You can extend a service that itself extends another. You can extend
indefinitely. Compose does not support circular references and `docker-compose`
returns an error if it encounters one.

For more on `extends`, see
[the extends documentation](../extends.md#extending-services).

> **Note:** This option is not yet supported when
> [deploying a stack in swarm mode](/engine/reference/commandline/stack_deploy.md)
> with a (version 3) Compose file. Use `docker-compose config` to generate a
> configuration with all `extends` options resolved, and deploy from that.

### external_links

Link to containers started outside this `docker-compose.yml` or even outside

@ -595,7 +563,7 @@ used.

### healthcheck

> [Version 2.1 file format](compose-file.md#version-21) and up.
> [Version 2.1 file format](compose-versioning.md#version-21) and up.

Configure a check that's run to determine whether or not containers for this
service are "healthy". See the docs for the

@ -609,7 +577,7 @@ for details on how healthchecks work.
      retries: 3

`interval` and `timeout` are specified as
[durations](compose-file.md#specifying-durations).
[durations](#specifying-durations).

`test` must be either a string or a list. If it's a list, the first item must be
either `NONE`, `CMD` or `CMD-SHELL`. If it's a string, it's equivalent to

@ -640,7 +608,7 @@ a partial image ID.
    image: a4bc65fd

If the image does not exist, Compose attempts to pull it, unless you have also
specified [build](compose-file.md#build), in which case it builds it using the specified
specified [build](#build), in which case it builds it using the specified
options and tags it with the specified tag.

### isolation

@ -682,9 +650,9 @@ Containers for the linked service will be reachable at a hostname identical to
the alias, or the service name if no alias was specified.

Links also express dependency between services in the same way as
[depends_on](compose-file.md#dependson), so they determine the order of service startup.
[depends_on](#dependson), so they determine the order of service startup.

> **Note:** If you define both links and [networks](compose-file.md#networks), services with
> **Note:** If you define both links and [networks](#networks), services with
> links between them must share at least one network in common in order to
> communicate.

@ -741,7 +709,7 @@ the special form `service:[service name]`.
### networks

Networks to join, referencing entries under the
[top-level `networks` key](compose-file.md#network-configuration-reference).
[top-level `networks` key](#network-configuration-reference).

    services:
      some-service:

@ -803,7 +771,7 @@ In the example below, three services are provided (`web`, `worker`, and `db`), a

Specify a static IP address for containers for this service when joining the network.

The corresponding network configuration in the [top-level networks section](compose-file.md#network-configuration-reference) must have an `ipam` block with subnet configurations covering each static address. If IPv6 addressing is desired, the [`enable_ipv6`](compose-file.md#enableipv6) option must be set.
The corresponding network configuration in the [top-level networks section](#network-configuration-reference) must have an `ipam` block with subnet configurations covering each static address. If IPv6 addressing is desired, the [`enable_ipv6`](#enableipv6) option must be set.

An example:

@ -882,6 +850,102 @@ port (a random host port will be chosen).
    - "127.0.0.1:5000-5010:5000-5010"
    - "6060:6060/udp"

### secrets

Grant access to secrets on a per-service basis using the per-service `secrets`
configuration. Two different syntax variants are supported.

> **Note**: The secret must already exist or be
> [defined in the top-level `secrets` configuration](#secrets-configuration-reference)
> of this stack file, or stack deployment will fail.

||||
#### Short syntax
|
||||
|
||||
The short syntax variant only specifies the secret name. This grants the
|
||||
container access to the secret and mounts it at `/run/secrets/<secret_name>`
|
||||
within the container. The source name and destination mountpoint are both set
|
||||
to the secret name.
|
||||
|
||||
> **Warning**: Due to a bug in Docker 1.13.1, using the short syntax currently
|
||||
> mounts the secret with permissions `000`, which means secrets defined using
|
||||
> the short syntax are unreadable within the container if the command does not
|
||||
> run as the `root` user. The workaround is to use the long syntax instead if
|
||||
> you use Docker 1.13.1 and the secret must be read by a non-`root` user.
|
||||
|
||||
The following example uses the short syntax to grant the `redis` service
|
||||
access to the `my_secret` and `my_other_secret` secrets. The value of
|
||||
`my_secret` is set to the contents of the file `./my_secret.txt`, and
|
||||
`my_other_secret` is defined as an external resource, which means that it has
|
||||
already been defined in Docker, either by running the `docker secret create`
|
||||
command or by another stack deployment. If the external secret does not exist,
|
||||
the stack deployment fails with a `secret not found` error.
|
||||
|
||||
```none
version: "3.1"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - my_secret
      - my_other_secret
secrets:
  my_secret:
    file: ./my_secret.txt
  my_other_secret:
    external: true
```

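The external secret referenced above could have been created ahead of time with something like the following (a sketch; the file name is an assumption):

```bash
$ docker secret create my_other_secret ./my_other_secret.txt
```
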
#### Long syntax

The long syntax provides more granularity in how the secret is created within
the service's task containers.

- `source`: The name of the secret as it exists in Docker.
- `target`: The name of the file that will be mounted in `/run/secrets/` in the
  service's task containers. Defaults to `source` if not specified.
- `uid` and `gid`: The numeric UID or GID which will own the file within
  `/run/secrets/` in the service's task containers. Both default to `0` if not
  specified.
- `mode`: The permissions for the file that will be mounted in `/run/secrets/`
  in the service's task containers, in octal notation. For instance, `0444`
  represents world-readable. The default in Docker 1.13.1 is `0000`, but will
  be `0444` in the future. Secrets cannot be writable because they are mounted
  in a temporary filesystem, so if you set the writable bit, it is ignored. The
  executable bit can be set. If you aren't familiar with UNIX file permission
  modes, you may find this
  [permissions calculator](http://permissions-calculator.org/){: target="_blank" class="_" }
  useful.

The following example sets the name of the `my_secret` secret to `redis_secret`
within the container, sets the mode to `0440` (group-readable) and sets the
user and group to `103`. The `redis` service does not have access to the
`my_other_secret` secret.

```none
version: "3.1"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - source: my_secret
        target: redis_secret
        uid: '103'
        gid: '103'
        mode: 0440
secrets:
  my_secret:
    file: ./my_secret.txt
  my_other_secret:
    external: true
```

You can grant a service access to multiple secrets and you can mix long and
short syntax. Defining a secret does not imply granting a service access to it.

### security_opt

Override the default labeling scheme for each container.

@ -898,8 +962,8 @@ Override the default labeling scheme for each container.

Specify how long to wait when attempting to stop a container if it doesn't
handle SIGTERM (or whatever stop signal has been specified with
[`stop_signal`](compose-file.md#stopsignal)), before sending SIGKILL. Specified
as a [duration](compose-file.md#specifying-durations).
[`stop_signal`](#stopsignal)), before sending SIGKILL. Specified
as a [duration](#specifying-durations).

    stop_grace_period: 1s
    stop_grace_period: 1m30s

@ -963,7 +1027,7 @@ more information.
### volumes, volume\_driver

> **Note:** The top-level
> [`volumes` option](compose-file.md#volume-configuration-reference) defines
> [`volumes` option](#volume-configuration-reference) defines
> a named volume and references it from each service's `volumes` list. This replaces `volumes_from` in earlier versions of the Compose file format.

Mount paths or named volumes, optionally specifying a path on the host machine

@ -1013,10 +1077,33 @@ There are several things to note, depending on which
See [Docker Volumes](/engine/userguide/dockervolumes.md) and
[Volume Plugins](/engine/extend/plugins_volume.md) for more information.

### domainname, hostname, ipc, mac\_address, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir

Each of these is a single value, analogous to its
[docker run](/engine/reference/run.md) counterpart.

    user: postgresql
    working_dir: /code

    domainname: foo.com
    hostname: foo
    ipc: host
    mac_address: 02:42:ac:11:65:43

    privileged: true

    restart: always

    read_only: true
    shm_size: 64M
    stdin_open: true
    tty: true

## Specifying durations

Some configuration options, such as the `interval` and `timeout` sub-options for
[`healthcheck`](compose-file.md#healthcheck), accept a duration as a string in a
[`healthcheck`](#healthcheck), accept a duration as a string in a
format that looks like this:

    2.5s

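Strings of the same form can combine several units. Assuming the duration format Compose shares with the engine, values such as the following are also valid:

```none
10s
1m30s
2h32m
5h34m56s
```
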
@ -1136,7 +1223,7 @@ conflicting with those used by other software.

The top-level `networks` key lets you specify networks to be created. For a full
explanation of Compose's use of Docker networking features, see the
[Networking guide](networking.md).
[Networking guide](../networking.md).

### driver

@ -1245,6 +1332,33 @@ refer to it within the Compose file:
      external:
        name: actual-name-of-network

## secrets configuration reference

The top-level `secrets` declaration defines or references
[secrets](/engine/swarm/secrets.md) which can be granted to the services in this
stack. The source of the secret is either `file` or `external`.

- `file`: The secret is created with the contents of the file at the specified
  path.
- `external`: If set to true, specifies that this secret has already been
  created. Docker will not attempt to create it, and if it does not exist, a
  `secret not found` error occurs.

In this example, `my_first_secret` will be created (as
`<stack_name>_my_first_secret`) when the stack is deployed,
and `my_second_secret` already exists in Docker.

```none
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
```

You still need to [grant access to the secrets](#secrets) to each service in the
stack.

## Variable substitution

{% include content/compose-var-sub.md %}

@ -2,6 +2,7 @@
description: Declare default environment variables in a file
keywords: fig, composition, compose, docker, orchestration, environment, env file
title: Declare default environment variables in file
notoc: true
---

Compose supports declaring default environment variables in an environment

@ -34,4 +35,4 @@ file, but can also be used to define the following

- [User guide](index.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

@ -159,6 +159,8 @@ backup, include the `docker-compose.admin.yml` as well.

## Extending services

> **Note:** `extends` is supported in Compose file formats up to version 2.1; version 3.x does not support it yet.

Docker Compose's `extends` keyword enables sharing of common configurations
among different files, or even different projects entirely. Extending services
is useful if you have several services that reuse a common set of configuration

@ -2,6 +2,7 @@
description: Introduction and Overview of Compose
keywords: documentation, docs, docker, compose, orchestration, containers
title: Docker Compose
notoc: true
---

Compose is a tool for defining and running multi-container Docker applications. To learn more about Compose, refer to the following documentation:

@ -34,7 +34,7 @@ To install Compose, do the following:
The following is an example command illustrating the format:

    ```bash
    $ curl -L "https://github.com/docker/compose/releases/download/1.10.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    $ curl -L "https://github.com/docker/compose/releases/download/1.11.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    ```

If you have problems installing with `curl`, see

@ -54,7 +54,7 @@ To install Compose, do the following:

    ```bash
    $ docker-compose --version

    docker-compose version: 1.10.0
    docker-compose version: 1.11.1
    ```

## Alternative install options

@ -80,7 +80,7 @@ Compose can also be run inside a container, from a small bash script wrapper.
To install Compose as a container, run:

    ```bash
    $ curl -L https://github.com/docker/compose/releases/download/1.10.0/run.sh > /usr/local/bin/docker-compose
    $ curl -L https://github.com/docker/compose/releases/download/1.11.1/run.sh > /usr/local/bin/docker-compose
    $ chmod +x /usr/local/bin/docker-compose
    ```

@ -4,6 +4,7 @@ keywords: fig, composition, compose, docker, orchestration, cli, reference
redirect_from:
- /compose/env
title: Link environment variables (superseded)
notoc: true
---

> **Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details.

@ -39,4 +40,4 @@ Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1`
- [User guide](index.md)
- [Installing Compose](install.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

@ -4,7 +4,7 @@ keywords: documentation, docs, docker, compose, orchestration, containers, netwo
title: Networking in Compose
---

> **Note:** This document only applies if you're using [version 2 of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files.
> **Note:** This document only applies if you're using [version 2 or higher of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files.

By default Compose sets up a single
[network](/engine/reference/commandline/network_create/) for your app. Each

@ -143,4 +143,4 @@ If you want your containers to join a pre-existing network, use the [`external`
      external:
        name: my-pre-existing-network

Instead of attempting to create a network called `[projectname]_default`, Compose will look for a network called `my-pre-existing-network` and connect your app's containers to it.

@ -2,6 +2,8 @@
description: docker-compose build
keywords: fig, composition, compose, docker, orchestration, cli, build
title: docker-compose build
notoc: true

---

```

@ -15,4 +17,4 @@ Options:

Services are built once and then tagged as `project_service`, e.g.,
`composetest_db`. If you change a service's Dockerfile or the contents of its
build directory, run `docker-compose build` to rebuild it.

@ -2,6 +2,7 @@
description: Create a distributed application bundle from the Compose file.
keywords: fig, composition, compose, docker, orchestration, cli, bundle
title: docker-compose bundle
notoc: true
---

```

@ -21,4 +22,4 @@ Images must have digests stored, which requires interaction with a
Docker registry. If digests aren't stored for all images, you can fetch
them with `docker-compose pull` or `docker-compose push`. To push images
automatically when bundling, pass `--push-images`. Only services with
a `build` option specified will have their images pushed.

@ -2,6 +2,7 @@
description: Validates and views the Compose file.
keywords: fig, composition, compose, docker, orchestration, cli, config
title: docker-compose config
notoc: true
---

```

@ -13,4 +14,4 @@ Options:
    --services Print the service names, one per line.
```

Validate and view the compose file.

@ -2,6 +2,7 @@
description: Creates containers for a service.
keywords: fig, composition, compose, docker, orchestration, cli, create
title: docker-compose create
notoc: true
---

```

@ -16,4 +17,4 @@ Options:
                       Incompatible with --force-recreate.
    --no-build Don't build an image, even if it's missing.
    --build Build images before creating containers.
```

@ -2,6 +2,7 @@
description: docker-compose down
keywords: fig, composition, compose, docker, orchestration, cli, down
title: docker-compose down
notoc: true
---

```

@ -28,4 +29,4 @@ By default, the only things removed are:
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used

Networks and volumes defined as `external` are never removed.

@ -2,6 +2,7 @@
description: Compose CLI environment variables
keywords: fig, composition, compose, docker, orchestration, cli, reference
title: Compose CLI environment variables
notoc: true
---

Several environment variables are available for you to configure the Docker Compose command-line behaviour.

@ -2,6 +2,7 @@
description: Receive real time events from containers.
keywords: fig, composition, compose, docker, orchestration, cli, events
title: docker-compose events
notoc: true
---

```

@ -24,4 +25,4 @@ format:
    "image": "alpine:edge",
    "time": "2015-11-20T18:01:03.615550",
}
```

@ -2,6 +2,7 @@
description: docker-compose exec
keywords: fig, composition, compose, docker, orchestration, cli, exec
title: docker-compose exec
notoc: true
---

```

@ -19,4 +20,4 @@ Options:

This is the equivalent of `docker exec`. With this subcommand you can run arbitrary
commands in your services. Commands are by default allocating a TTY, so you can
do e.g. `docker-compose exec web sh` to get an interactive prompt.

@ -2,10 +2,11 @@
description: docker-compose help
keywords: fig, composition, compose, docker, orchestration, cli, help
title: docker-compose help
notoc: true
---

```
Usage: help COMMAND
```

Displays help and usage instructions for a command.

@ -2,6 +2,7 @@
description: Compose CLI reference
keywords: fig, composition, compose, docker, orchestration, cli, reference
title: Compose command-line reference
notoc: true
---

The following pages describe the usage information for the [docker-compose](overview.md) subcommands. You can also see this information by running `docker-compose [SUBCOMMAND] --help` from the command line.

@ -35,4 +36,4 @@ The following pages describe the usage information for the [docker-compose](over
## Where to go next

* [CLI environment variables](envvars.md)
* [docker-compose Command](overview.md)

@ -2,6 +2,7 @@
description: Forces running containers to stop.
keywords: fig, composition, compose, docker, orchestration, cli, kill
title: docker-compose kill
notoc: true
---

```

@ -2,6 +2,7 @@
description: Displays log output from services.
keywords: fig, composition, compose, docker, orchestration, cli, logs
title: docker-compose logs
notoc: true
---

```

@ -15,4 +16,4 @@ Options:
                for each container.
```

Displays log output from services.

@ -4,6 +4,7 @@ keywords: fig, composition, compose, docker, orchestration, cli, docker-compose
redirect_from:
- /compose/reference/docker-compose/
title: Overview of docker-compose CLI
notoc: true
---

This page provides the usage information for the `docker-compose` Command.

@ -2,10 +2,11 @@
description: Pauses running containers for a service.
keywords: fig, composition, compose, docker, orchestration, cli, pause
title: docker-compose pause
notoc: true
---

```
Usage: pause [SERVICE...]
```

Pauses running containers of a service. They can be unpaused with `docker-compose unpause`.

@ -2,6 +2,7 @@
description: Prints the public port for a port binding.
keywords: fig, composition, compose, docker, orchestration, cli, port
title: docker-compose port
notoc: true
---

```

@ -13,4 +14,4 @@ Options:
                instances of a service [default: 1]
```

Prints the public port for a port binding.

@ -2,6 +2,7 @@
description: Lists containers.
keywords: fig, composition, compose, docker, orchestration, cli, ps
title: docker-compose ps
notoc: true
---

```none

@ -19,4 +20,4 @@ $ docker-compose ps
--------------------------------------------------------------------------------------------
mywordpress_db_1          docker-entrypoint.sh mysqld      Up           3306/tcp
mywordpress_wordpress_1   /entrypoint.sh apache2-for ...   Restarting   0.0.0.0:8000->80/tcp
```

@ -2,6 +2,7 @@
description: Pulls service images.
keywords: fig, composition, compose, docker, orchestration, cli, pull
title: docker-compose pull
notoc: true
---

```

@ -11,4 +12,4 @@ Options:
    --ignore-pull-failures Pull what it can and ignores images with pull failures.
```

Pulls service images.

@ -2,6 +2,7 @@
description: Pushes service images.
keywords: fig, composition, compose, docker, orchestration, cli, push
title: docker-compose push
notoc: true
---

```

@ -11,4 +12,4 @@ Options:
    --ignore-push-failures Push what it can and ignores images with push failures.
```

Pushes images for services.

@ -2,6 +2,7 @@
description: Restarts Docker Compose services.
keywords: fig, composition, compose, docker, orchestration, cli, restart
title: docker-compose restart
notoc: true
---

```

@ -2,6 +2,7 @@
description: Removes stopped service containers.
keywords: fig, composition, compose, docker, orchestration, cli, rm
title: docker-compose rm
notoc: true
---

```

@ -19,4 +20,4 @@ Removes stopped service containers.
By default, anonymous volumes attached to containers will not be removed. You
can override this with `-v`. To list all volumes, use `docker volume ls`.

Any data which is not in a volume will be lost.

@ -2,6 +2,7 @@
description: Runs a one-off command on a service.
keywords: fig, composition, compose, docker, orchestration, cli, run
title: docker-compose run
notoc: true
---

```

@ -2,6 +2,7 @@
description: Sets the number of containers to run for a service.
keywords: fig, composition, compose, docker, orchestration, cli, scale
title: docker-compose scale
notoc: true
---

```

@ -13,3 +14,9 @@ Sets the number of containers to run for a service.
Numbers are specified as arguments in the form `service=num`. For example:

    docker-compose scale web=2 worker=3

>**Tip:** Alternatively, in
[Compose file version 3.x](/compose/compose-file/index.md), you can specify
[`replicas`](/compose/compose-file/index.md#replicas)
under [`deploy`](/compose/compose-file/index.md#deploy) as part of the
service configuration for [Swarm mode](/engine/swarm/).

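The `deploy`-based equivalent of the `scale` command above would look roughly like this (a sketch; the service name and image are assumptions):

```none
version: "3"
services:
  worker:
    image: example/worker
    deploy:
      replicas: 3
```
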
@ -2,10 +2,11 @@
description: Starts existing containers for a service.
keywords: fig, composition, compose, docker, orchestration, cli, start
title: docker-compose start
notoc: true
---

```
Usage: start [SERVICE...]
```

Starts existing containers for a service.

@ -2,6 +2,7 @@
description: 'Stops running containers without removing them. '
keywords: fig, composition, compose, docker, orchestration, cli, stop
title: docker-compose stop
notoc: true
---

```

@ -12,4 +13,4 @@ Options:
```

Stops running containers without removing them. They can be started again with
`docker-compose start`.

@ -2,6 +2,7 @@
description: Displays the running processes.
keywords: fig, composition, compose, docker, orchestration, cli, top
title: docker-compose top
notoc: true
---

```none

@ -22,4 +23,4 @@ compose_service_b_1
PID    USER    TIME    COMMAND
----------------------------
4115   root    0:00    top
```

@ -2,10 +2,11 @@
description: Unpauses paused containers for a service.
keywords: fig, composition, compose, docker, orchestration, cli, unpause
title: docker-compose unpause
notoc: true
---

```
Usage: unpause [SERVICE...]
```

Unpauses paused containers of a service.

@ -2,6 +2,7 @@
description: Builds, (re)creates, starts, and attaches to containers for a service.
keywords: fig, composition, compose, docker, orchestration, cli, up
title: docker-compose up
notoc: true
---

```

@ -45,4 +46,4 @@ volumes). To prevent Compose from picking up changes, use the `--no-recreate`
flag.

If you want to force Compose to stop and recreate all containers, use the
`--force-recreate` flag.

@ -2,6 +2,7 @@
description: How to control service startup order in Docker Compose
keywords: documentation, docs, docker, compose, startup, order
title: Controlling startup order in Compose
notoc: true
---

You can control the order of service startup with the

@ -1,14 +1,381 @@
---
description: Learn more about the Commercially Supported Docker Engine.
keywords: docker, engine, documentation
description: Learn how to install the commercially supported version of Docker Engine.
keywords: docker, engine, dtr, install
title: Install CS Docker Engine
redirect_from:
- /docker-trusted-registry/cs-engine/
- /cs-engine/
title: Commercially Supported Docker Engine
- /cs-engine/1.12/install/
---

This section includes the following topics:
Follow these instructions to install CS Docker Engine, the commercially
supported version of Docker Engine.

* [Install CS Docker Engine](install.md)
* [Upgrade](upgrade.md)
* [Release notes](release-notes/release-notes.md)
CS Docker Engine can be installed on the following operating systems:

* [CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems)](install.md#install-on-centos-7172--rhel-707172-yum-based-systems)
* [Ubuntu 14.04 LTS](install.md#install-on-ubuntu-1404-lts)
* [SUSE Linux Enterprise 12](install.md#install-on-suse-linux-enterprise-123)

You can install CS Docker Engine using a repository or using packages.

- If you [use a repository](#install-using-a-repository), your operating system
  will notify you when updates are available and you can upgrade or downgrade
  easily, but you need an internet connection. This approach is recommended.

- If you [use packages](#install-using-packages), you can install CS Docker
  Engine on air-gapped systems that have no internet connection. However, you
  are responsible for manually checking for updates and managing upgrades.

## Prerequisites

To install CS Docker Engine, you need root or sudo privileges and you need
access to a command line on the system.

## Install using a repository

### Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)

This section explains how to install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3. Only
these versions are supported. CentOS 7.0 is **not** supported. On RHEL,
depending on your current level of updates, you may need to reboot your server
to update its RHEL kernel.

1. Add the Docker public key for CS Docker Engine packages:

   ```bash
   $ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e"
   ```

2. Install yum-utils if necessary:

   ```bash
   $ sudo yum install -y yum-utils
   ```

3. Add the Docker repository:

   ```bash
   $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/centos/7
   ```

   This adds the repository of the latest version of CS Docker Engine. You can
   customize the URL to install an older version.

4. Install Docker CS Engine:

   - **Latest version**:

     ```bash
     $ sudo yum makecache fast

     $ sudo yum install docker-engine
     ```

   - **Specific version**:

     On production systems, you should install a specific version rather than
     relying on the latest.

     1. List the available versions:

        ```bash
        $ yum list docker-engine.x86_64 --showduplicates |sort -r
        ```

        The second column represents the version.

     2. Install a specific version by adding the version after `docker-engine`,
        separated by a hyphen (`-`):

        ```bash
        $ sudo yum install docker-engine-<version>
        ```

5. Configure `devicemapper`:

   By default, the `devicemapper` graph driver does not come pre-configured in
   a production-ready state. Follow the documented step-by-step instructions to
   [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration)
   to achieve the best performance and reliability for your environment.

6. Configure the Docker daemon to start automatically when the system starts,
   and start it now:

   ```bash
   $ sudo systemctl enable docker.service
   $ sudo systemctl start docker.service
   ```

7. Confirm the Docker daemon is running:

   ```bash
   $ sudo docker info
   ```

8. Only users with `sudo` access can run `docker` commands. Optionally, add
   non-sudo access to the Docker socket by adding your user to the `docker`
   group:

   ```bash
   $ sudo usermod -a -G docker $USER
   ```

9. Log out and log back in to have your new permissions take effect.
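
   After you log back in, you can verify that the group change took effect by
   running a Docker command without `sudo`:

   ```bash
   $ docker info
   ```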
### Install on Ubuntu 14.04 LTS or 16.04 LTS

1. Install packages to allow `apt` to use a repository over HTTPS:

   ```bash
   $ sudo apt-get update

   $ sudo apt-get install --no-install-recommends \
       apt-transport-https \
       curl \
       software-properties-common
   ```

   Optionally, install additional kernel modules to add AUFS support:

   ```bash
   $ sudo apt-get install -y --no-install-recommends \
       linux-image-extra-$(uname -r) \
       linux-image-extra-virtual
   ```

2. Download and import Docker's public key for CS packages:

   ```bash
   $ curl -fsSL 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add -
   ```

3. Add the repository. In the command below, the `lsb_release -cs` sub-command
   returns the name of your Ubuntu version, like `xenial` or `trusty`.

   ```bash
   $ sudo add-apt-repository \
       "deb https://packages.docker.com/1.12/apt/repo/ \
       ubuntu-$(lsb_release -cs) \
       main"
   ```

4. Install CS Docker Engine:

   - **Latest version**:

     ```bash
     $ sudo apt-get update

     $ sudo apt-get -y install docker-engine
     ```

   - **Specific version**:

     On production systems, you should install a specific version rather than
     relying on the latest.

     1. List the available versions:

        ```bash
        $ sudo apt-get update

        $ apt-cache madison docker-engine
        ```

        The second column represents the version.

     2. Install a specific version by adding the version after `docker-engine`,
        separated by an equals sign (`=`):

        ```bash
        $ sudo apt-get install docker-engine=<version>
        ```
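
        For example, using the 1.12.6 build for Xenial from the package tables
        below (illustrative; take the exact string from the
        `apt-cache madison` output above):

        ```bash
        $ sudo apt-get install docker-engine=1.12.6~cs8-0~ubuntu-xenial
        ```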
5. Confirm the Docker daemon is running:

   ```bash
   $ sudo docker info
   ```

6. Only users with `sudo` access can run `docker` commands. Optionally, add
   non-sudo access to the Docker socket by adding your user to the `docker`
   group:

   ```bash
   $ sudo usermod -a -G docker $USER
   ```

   Log out and log back in to have your new permissions take effect.
### Install on SUSE Linux Enterprise 12.3

1. Refresh your repository:

   ```bash
   $ sudo zypper update
   ```

2. Add the Docker repository and public key:

   ```bash
   $ sudo zypper ar -t YUM https://packages.docker.com/1.12/yum/repo/main/opensuse/12.3 docker-1.12
   $ sudo rpm --import 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e'
   ```

   This adds the repository of the latest version of CS Docker Engine. You can
   customize the URL to install an older version.

3. Install CS Docker Engine:

   - **Latest version**:

     ```bash
     $ sudo zypper refresh

     $ sudo zypper install docker-engine
     ```

   - **Specific version**:

     On production systems, you should install a specific version rather than
     relying on the latest.

     1. List the available versions:

        ```bash
        $ sudo zypper refresh

        $ zypper search -s --match-exact -t package docker-engine
        ```

        The third column is the version string.

     2. Install a specific version by adding the version after `docker-engine`,
        separated by a hyphen (`-`):

        ```bash
        $ sudo zypper install docker-engine-<version>
        ```
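
        For example, using the 1.12.6 build from the package tables below
        (illustrative; take the exact string from the `zypper search` output
        above):

        ```bash
        $ sudo zypper install docker-engine-1.12.6.cs8-1
        ```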
4. Configure the Docker daemon to start automatically when the system starts,
   and start it now:

   ```bash
   $ sudo systemctl enable docker.service
   $ sudo systemctl start docker.service
   ```

5. Confirm the Docker daemon is running:

   ```bash
   $ sudo docker info
   ```

6. Only users with `sudo` access can run `docker` commands. Optionally, add
   non-sudo access to the Docker socket by adding your user to the `docker`
   group:

   ```bash
   $ sudo usermod -a -G docker $USER
   ```

   Log out and log back in to have your new permissions take effect.
## Install using packages

If you need to install Docker on an air-gapped system with no access to the
internet, use the [package download link table](#package-download-links) to
download the Docker package for your operating system, then install it using
the [appropriate command](#general-commands). You are responsible for manually
upgrading Docker when a new version is available, and also for satisfying
Docker's dependencies.

### General commands

To install Docker from packages, use the following commands:

| Operating system | Command |
|------------------|---------|
| RHEL / CentOS    | `$ sudo yum install /path/to/package.rpm` |
| SLES             | `$ sudo zypper install /path/to/package.rpm` |
| Ubuntu           | `$ sudo dpkg -i /path/to/package.deb` |
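
For example, if you downloaded the CS Docker Engine 1.12.6 package for CentOS
listed below, the command would look like this (path and filename illustrative):

```bash
$ sudo yum install /tmp/docker-engine-1.12.6.cs8-1.el7.centos.x86_64.rpm
```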
### Package download links

{% assign rpm-prefix = "https://packages.docker.com/1.12/yum/repo/main" %}
{% assign deb-prefix = "https://packages.docker.com/1.12/apt/repo/pool/main/d/docker-engine" %}

#### CS Docker Engine 1.12.6

{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %}
{% assign rpm-version = "1.12.6.cs8-1" %}
{% assign rpm-rhel-version = "1.12.6.cs8-1" %}
{% assign deb-version = "1.12.6~cs8-0" %}

| Operating system | Package links |
|------------------|---------------|
| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) |
| RHEL 7.2 (only use if you have problems with `selinux` with the packages above) | [docker-engine]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-debuginfo-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-selinux-{{ rpm-rhel-version }}.el7.centos.noarch.rpm) |
| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) |
| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) |
| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) |
| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) |
| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) |

#### CS Docker Engine 1.12.5

{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %}
{% assign rpm-version = "1.12.5.cs5-1" %}
{% assign deb-version = "1.12.5~cs5-0" %}

| Operating system | Package links |
|------------------|---------------|
| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) |
| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) |
| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) |
| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) |
| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) |
| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) |

#### CS Docker Engine 1.12.3

{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %}
{% assign rpm-version = "1.12.3.cs4-1" %}
{% assign deb-version = "1.12.3~cs4-0" %}

| Operating system | Package links |
|------------------|---------------|
| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) |
| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) |
| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) |
| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) |
| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) |
| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) |

#### CS Docker Engine 1.12.2

{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %}
{% assign rpm-version = "1.12.2.cs2-1" %}
{% assign deb-version = "1.12.2~cs2-0" %}

| Operating system | Package links |
|------------------|---------------|
| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) |
| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) |
| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) |
| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) |
| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) |
| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) |

#### CS Docker Engine 1.12.1

{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %}
{% assign rpm-version = "1.12.1.cs1-1" %}
{% assign deb-version = "1.12.1~cs1-0" %}

| Operating system | Package links |
|------------------|---------------|
| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) |
| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) |
| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) |
| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) |
| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) |
| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) |
@@ -1,190 +0,0 @@
---
description: Learn how to install the commercially supported version of Docker Engine.
keywords: docker, engine, dtr, install
redirect_from:
- /docker-trusted-registry/install/engine-ami-launch/
- /docker-trusted-registry/install/install-csengine/
- /docker-trusted-registry/cs-engine/install/
- /cs-engine/install/
title: Install Commercially Supported Docker Engine
---

Follow these instructions to install CS Docker Engine, the commercially
supported version of Docker Engine.

CS Docker Engine can be installed on the following operating systems:

* [CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems)](install.md#install-on-centos-7172--rhel-707172-yum-based-systems)
* [Ubuntu 14.04 LTS](install.md#install-on-ubuntu-1404-lts)
* [SUSE Linux Enterprise 12](install.md#install-on-suse-linux-enterprise-123)

## Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems)

This section explains how to install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2. Only
these versions are supported. CentOS 7.0 is **not** supported. On RHEL,
depending on your current level of updates, you may need to reboot your server
to update its RHEL kernel.

1. Log into the system as a user with root or sudo permissions.

2. Add the Docker public key for CS packages:

   ```bash
   $ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e"
   ```

3. Install yum-utils if necessary:

   ```bash
   $ sudo yum install -y yum-utils
   ```

4. Add the Docker repository:

   ```bash
   $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/centos/7
   ```

   This adds the repository of the latest version of CS Docker Engine. You can
   customize the URL to install an older version.

   > **Note**: For users on RHEL 7.2 who have issues with installing the selinux
   > policy, use the following command instead of the one above:

   ```bash
   $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/rhel/7.2
   ```

5. Install Docker CS Engine:

   ```bash
   $ sudo yum install docker-engine
   ```

6. Configure devicemapper:

   By default, the `devicemapper` graph driver does not come pre-configured in a production ready state. Follow the documented step by step instructions to [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration) in order to achieve the best performance and reliability for your environment.

7. Enable the Docker daemon as a service and start it.

   ```bash
   $ sudo systemctl enable docker.service
   $ sudo systemctl start docker.service
   ```

8. Confirm the Docker daemon is running:

   ```bash
   $ sudo docker info
   ```

9. Optionally, add non-sudo access to the Docker socket by adding your user
   to the `docker` group.

   ```bash
   $ sudo usermod -a -G docker $USER
   ```

10. Log out and log back in to have your new permissions take effect.

## Install on Ubuntu 14.04 LTS

1. Log into the system as a user with root or sudo permissions.

2. Add Docker's public key for CS packages:

   ```bash
   $ curl -s 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add --import
   ```

3. Install the HTTPS helper for apt (your system may already have it):

   ```bash
   $ sudo apt-get update && sudo apt-get install apt-transport-https
   ```

4. Install additional kernel modules to add AUFS support.

   ```bash
   $ sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
   ```

5. Add the repository for the new version:

   ```bash
   $ echo "deb https://packages.docker.com/1.12/apt/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
   ```

6. Run the following to install commercially supported Docker Engine and its
   dependencies:

   ```bash
   $ sudo apt-get update && sudo apt-get install docker-engine
   ```

7. Confirm the Docker daemon is running:

   ```bash
   $ sudo docker info
   ```

8. Optionally, add non-sudo access to the Docker socket by adding your
   user to the `docker` group.

   ```bash
   $ sudo usermod -a -G docker $USER
   ```

   Log out and log back in to have your new permissions take effect.

## Install on SUSE Linux Enterprise 12.3

1. Log into the system as a user with root or sudo permissions.

2. Refresh your repository so that curl commands and CA certificates
   are available:

   ```bash
   $ sudo zypper ref
   ```

3. Add the Docker repository and public key:

   ```bash
   $ sudo zypper ar -t YUM https://packages.docker.com/1.12/yum/repo/main/opensuse/12.3 docker-1.12
   $ sudo rpm --import 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e'
   ```

   This adds the repository of the latest version of CS Docker Engine. You can
   customize the URL to install an older version.

4. Install the Docker daemon package:

   ```bash
   $ sudo zypper install docker-engine
   ```

5. Enable the Docker daemon as a service and then start it:

   ```bash
   $ sudo systemctl enable docker.service
   $ sudo systemctl start docker.service
   ```

6. Confirm the Docker daemon is running:

   ```bash
   $ sudo docker info
   ```

7. Optionally, add non-sudo access to the Docker socket by adding your user
   to the `docker` group.

   ```bash
   $ sudo usermod -a -G docker $USER
   ```

8. Log out and log back in to have your new permissions take effect.
@@ -4,7 +4,7 @@ keywords: docker, documentation, about, technology, understanding, enterprise, h
redirect_from:
- /docker-trusted-registry/cse-prior-release-notes/
- /docker-trusted-registry/cs-engine/release-notes/prior-release-notes/
- - /cs-engine/release-notes/prior-release-notes/
- /cs-engine/release-notes/prior-release-notes/
title: Release notes archive for Commercially Supported Docker Engine.
---
@@ -1,7 +1,12 @@
---
title: Install CS Docker Engine 1.13
title: Install CS Docker Engine
description: Learn how to install the commercially supported version of Docker Engine.
keywords: docker, engine, install
redirect_from:
- /docker-trusted-registry/install/engine-ami-launch/
- /docker-trusted-registry/install/install-csengine/
- /docker-trusted-registry/cs-engine/install/
- /cs-engine/install/
---

Follow these instructions to install CS Docker Engine, the commercially
@@ -9,34 +14,47 @@ supported version of Docker Engine.

CS Docker Engine can be installed on the following operating systems:

* [CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)](#install-on-centos-7172--rhel-70717273-yum-based-systems)
* [Ubuntu 14.04 LTS or 16.04 LTS](#install-on-ubuntu-1404-lts-or-1604-lts)
* [SUSE Linux Enterprise 12](#install-on-suse-linux-enterprise-123)

* CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems)
* Ubuntu 14.04 LTS or 16.04 LTS
* SUSE Linux Enterprise 12
You can install CS Docker Engine using a repository or using packages.

- If you [use a repository](#install-using-a-repository), your operating system
will notify you when updates are available and you can upgrade or downgrade
easily, but you need an internet connection. This approach is recommended.

## Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)
- If you [use packages](#install-using-packages), you can install CS Docker
Engine on air-gapped systems that have no internet connection. However, you
are responsible for manually checking for updates and managing upgrades.

## Prerequisites

To install CS Docker Engine, you need root or sudo privileges and you need
access to a command line on the system.

## Install using a repository

### Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)

This section explains how to install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3. Only
these versions are supported. CentOS 7.0 is **not** supported. On RHEL,
depending on your current level of updates, you may need to reboot your server
to update its RHEL kernel.

1. Log into the system as a user with root or sudo permissions.

2. Add the Docker public key for CS packages:
1. Add the Docker public key for CS Docker Engine packages:

```bash
$ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e"
```

3. Install yum-utils if necessary:
2. Install yum-utils if necessary:

```bash
$ sudo yum install -y yum-utils
```

4. Add the Docker repository:
3. Add the Docker repository:

```bash
$ sudo yum-config-manager --add-repo https://packages.docker.com/1.13/yum/repo/main/centos/7
@@ -45,89 +63,145 @@ to update its RHEL kernel.
This adds the repository of the latest version of CS Docker Engine. You can
customize the URL to install an older version.

5. Install Docker CS Engine:
4. Install Docker CS Engine:

```bash
$ sudo yum install docker-engine
```
- **Latest version**:

6. Configure devicemapper:
```bash
$ sudo yum makecache fast

By default, the `devicemapper` graph driver does not come pre-configured in a production ready state. Follow the documented step by step instructions to [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration) in order to achieve the best performance and reliability for your environment.
$ sudo yum install docker-engine
```

7. Enable the Docker daemon as a service and start it.
- **Specific version**:

On production systems, you should install a specific version rather than
relying on the latest.

1. List the available versions:

```bash
$ yum list docker-engine.x86_64 --showduplicates | sort -r
```

The second column represents the version.

2. Install a specific version by adding the version after `docker-engine`,
separated by a hyphen (`-`):

```bash
$ sudo yum install docker-engine-<version>
```

5. Configure `devicemapper`:

By default, the `devicemapper` graph driver does not come pre-configured in
a production-ready state. Follow the documented step by step instructions to
[configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration)
to achieve the best performance and reliability for your environment.

6. Configure the Docker daemon to start automatically when the system starts,
and start it now.

```bash
$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service
```

8. Confirm the Docker daemon is running:
7. Confirm the Docker daemon is running:

```bash
$ sudo docker info
```

9. Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
8. Only users with `sudo` access will be able to run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.

```bash
$ sudo usermod -a -G docker $USER
```

10. Log out and log back in to have your new permissions take effect.
9. Log out and log back in to have your new permissions take effect.

## Install on Ubuntu 14.04 LTS or 16.04 LTS
### Install on Ubuntu 14.04 LTS or 16.04 LTS

1. Log into the system as a user with root or sudo permissions.

2. Add Docker's public key for CS packages:
1. Install packages to allow `apt` to use a repository over HTTPS:

```bash
$ curl -s 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add --import
$ sudo apt-get update

$ sudo apt-get install --no-install-recommends \
apt-transport-https \
curl \
software-properties-common
```

3. Install the HTTPS helper for apt (your system may already have it):
Optionally, install additional kernel modules to add AUFS support.

```bash
$ sudo apt-get update && sudo apt-get install apt-transport-https
$ sudo apt-get install -y --no-install-recommends \
linux-image-extra-$(uname -r) \
linux-image-extra-virtual
```

4. Install additional kernel modules to add AUFS support.
2. Download and import Docker's public key for CS packages:

```bash
$ sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual
$ curl -fsSL 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add -
```

5. Add the repository for the new version:

for 14.04:
3. Add the repository. In the command below, the `lsb_release -cs` sub-command
returns the name of your Ubuntu version, like `xenial` or `trusty`.

```bash
$ echo "deb https://packages.docker.com/1.13/apt/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
```

for 16.04:

```bash
$ echo "deb https://packages.docker.com/1.13/apt/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo add-apt-repository \
"deb https://packages.docker.com/1.13/apt/repo/ \
ubuntu-$(lsb_release -cs) \
main"
```

6. Run the following to install commercially supported Docker Engine and its
dependencies:
4. Install CS Docker Engine:

```bash
$ sudo apt-get update && sudo apt-get install docker-engine
```
- **Latest version**:

7. Confirm the Docker daemon is running:
```bash
$ sudo apt-get update

$ sudo apt-get -y install docker-engine
```

- **Specific version**:

On production systems, you should install a specific version rather than
relying on the latest.

1. List the available versions:

```bash
$ sudo apt-get update

$ apt-cache madison docker-engine
```

The second column represents the version.

2. Install a specific version by adding the version after `docker-engine`,
separated by an equals sign (`=`):

```bash
$ sudo apt-get install docker-engine=<version>
```

5. Confirm the Docker daemon is running:

```bash
$ sudo docker info
```

8. Optionally, add non-sudo access to the Docker socket by adding your
user to the `docker` group.
6. Only users with `sudo` access will be able to run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.

```bash
$ sudo usermod -a -G docker $USER
@@ -135,19 +209,15 @@ user to the `docker` group.

Log out and log back in to have your new permissions take effect.

### Install on SUSE Linux Enterprise 12.3

## Install on SUSE Linux Enterprise 12.3

1. Log into the system as a user with root or sudo permissions.

2. Refresh your repository so that curl commands and CA certificates
are available:
1. Refresh your repository:

```bash
$ sudo zypper ref
$ sudo zypper update
```

3. Add the Docker repository and public key:
2. Add the Docker repository and public key:

```bash
$ sudo zypper ar -t YUM https://packages.docker.com/1.13/yum/repo/main/opensuse/12.3 docker-1.13
@@ -157,30 +227,97 @@ are available:
This adds the repository of the latest version of CS Docker Engine. You can
customize the URL to install an older version.

4. Install the Docker daemon package:
3. Install CS Docker Engine.

```bash
$ sudo zypper install docker-engine
```
- **Latest version**:

5. Enable the Docker daemon as a service and then start it:
```bash
$ sudo zypper refresh

$ sudo zypper install docker-engine
```

- **Specific version**:

On production systems, you should install a specific version rather than
relying on the latest.

1. List the available versions:

```bash
$ sudo zypper refresh

$ zypper search -s --match-exact -t package docker-engine
```

The third column is the version string.

2. Install a specific version by adding the version after `docker-engine`,
separated by a hyphen (`-`):

```bash
$ sudo zypper install docker-engine-<version>
```

4. Configure the Docker daemon to start automatically when the system starts,
and start it now.

```bash
$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service
```

6. Confirm the Docker daemon is running:
5. Confirm the Docker daemon is running:

```bash
$ sudo docker info
```

7. Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.
6. Only users with `sudo` access will be able to run `docker` commands.
Optionally, add non-sudo access to the Docker socket by adding your user
to the `docker` group.

```bash
$ sudo usermod -a -G docker $USER
```

8. Log out and log back in to have your new permissions take effect.
Log out and log back in to have your new permissions take effect.

## Install using packages

If you need to install Docker on an air-gapped system with no access to the
internet, use the [package download link table](#package-download-links) to
download the Docker package for your operating system, then install it using the
[appropriate command](#general-commands). You are responsible for manually
upgrading Docker when a new version is available, and also for satisfying
Docker's dependencies.

### General commands

To install Docker from packages, use the following commands:

| Operating system | Command |
|------------------|---------|
| RHEL / CentOS    | `$ sudo yum install /path/to/package.rpm` |
| SLES             | `$ sudo zypper install /path/to/package.rpm` |
| Ubuntu           | `$ sudo dpkg -i /path/to/package.deb` |

### Package download links

{% assign rpm-prefix = "https://packages.docker.com/1.13/yum/repo/main" %}
{% assign deb-prefix = "https://packages.docker.com/1.13/apt/repo/pool/main/d/docker-engine" %}

#### CS Docker Engine 1.13.1

{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %}
{% assign rpm-version = "1.13.1.cs2-1" %}
{% assign rpm-rhel-version = "1.13.1.cs2-1" %}
{% assign deb-version = "1.13.1~cs2-0" %}

| Operating system | Package links |
|------------------|---------------|
| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) |
| RHEL 7.2 (only use if you have problems with `selinux` with the packages above) | [docker-engine]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-debuginfo-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-selinux-{{ rpm-rhel-version }}.el7.centos.noarch.rpm) |
| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) |
| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) |
| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) |
| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) |
@@ -14,6 +14,49 @@ back-ported fixes (security-related and priority defects) from the open source.
It incorporates defect fixes that you can use in environments where new features
cannot be adopted as quickly for consistency and compatibility reasons.

## CS Engine 1.13.1-cs2

(23 Feb 2017)

### Client

* Fix panic in `docker stats --format` [#30776](https://github.com/docker/docker/pull/30776)

### Contrib

* Update various `bash` and `zsh` completion scripts [#30823](https://github.com/docker/docker/pull/30823), [#30945](https://github.com/docker/docker/pull/30945) and more...
* Block obsolete socket families in default seccomp profile - mitigates unpatched kernels' CVE-2017-6074 [#29076](https://github.com/docker/docker/pull/29076)

### Networking

* Fix bug on overlay encryption keys rotation in cross-datacenter swarm [#30727](https://github.com/docker/docker/pull/30727)
* Fix side effect panic in overlay encryption and network control plane communication failure ("No installed keys could decrypt the message") on frequent swarm leader re-election [#25608](https://github.com/docker/docker/pull/25608)
* Several fixes around system responsiveness and datapath programming when using overlay network with external kv-store [docker/libnetwork#1639](https://github.com/docker/libnetwork/pull/1639), [docker/libnetwork#1632](https://github.com/docker/libnetwork/pull/1632) and more...
* Discard incoming plain vxlan packets for encrypted overlay network [#31170](https://github.com/docker/docker/pull/31170)
* Release the network attachment on allocation failure [#31073](https://github.com/docker/docker/pull/31073)
* Fix port allocation when multiple published ports map to the same target port [docker/swarmkit#1835](https://github.com/docker/swarmkit/pull/1835)

### Runtime

* Fix a deadlock in docker logs [#30223](https://github.com/docker/docker/pull/30223)
* Fix CPU spin waiting for log write events [#31070](https://github.com/docker/docker/pull/31070)
* Fix a possible crash when using journald [#31231](https://github.com/docker/docker/pull/31231) [#31263](https://github.com/docker/docker/pull/31231)
* Fix a panic on close of nil channel [#31274](https://github.com/docker/docker/pull/31274)
* Fix duplicate mount point for `--volumes-from` in `docker run` [#29563](https://github.com/docker/docker/pull/29563)
* Fix `--cache-from` not caching the last step [#31189](https://github.com/docker/docker/pull/31189)
* Fix issue with lock contention while performing container size calculation [#31159](https://github.com/docker/docker/pull/31159)

### Swarm Mode

* Fix shutdown leaking an error when the container was never started [#31279](https://github.com/docker/docker/pull/31279)
* Fix possibility of tasks getting stuck in the "NEW" state during a leader failover [docker/swarmkit#1938](https://github.com/docker/swarmkit/pull/1938)
* Fix extraneous task creations for global services that led to confusing replica counts in `docker service ls` [docker/swarmkit#1957](https://github.com/docker/swarmkit/pull/1957)
* Fix problem that made rolling updates slow when `task-history-limit` was set to 1 [docker/swarmkit#1948](https://github.com/docker/swarmkit/pull/1948)
* Restart tasks elsewhere, if appropriate, when they are shut down as a result of nodes no longer satisfying constraints [docker/swarmkit#1958](https://github.com/docker/swarmkit/pull/1958)

## CS Engine 1.13.1-cs1

(08 Feb 2017)
@@ -1,6 +1,10 @@
---
description: Learn how to install, configure, and use Docker Trusted Registry.
keywords: docker, registry, repository, images
redirect_from:
- /docker-hub-enterprise/
- /docker-trusted-registry/overview/
- /docker-trusted-registry/
title: Docker Trusted Registry overview
---

@@ -35,4 +39,4 @@ access to your Docker images.
## Where to go next

* [DTR architecture](architecture.md)
* [Install DTR](install/index.md)
@@ -49,7 +49,16 @@ Where the `--ucp-node` is the hostname of the UCP node where you want to deploy
DTR. `--ucp-insecure-tls` tells the installer to trust the certificates used
by UCP.

The install command has other flags for customizing DTR at install time.
By default, the install command runs in interactive mode and prompts for
additional information like:

* DTR external url: the url clients use to reach DTR. If you're using a load
balancer for DTR, this is the IP address or DNS name of the load balancer
* UCP url: the url clients use to reach UCP
* UCP username and password: administrator credentials for UCP

You can also provide this information to the installer command so that it
runs without prompting.
Check the [reference documentation to learn more](../../reference/cli/install.md).
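
As a minimal sketch of what a non-interactive install could look like (the
`--dtr-external-url` flag name is an assumption here; verify every flag against
the reference documentation for your DTR version):

```bash
$ docker run -it --rm docker/dtr install \
    --ucp-node <ucp-node> \
    --ucp-url <ucp-url> \
    --ucp-username <ucp-admin> \
    --ucp-password <password> \
    --ucp-insecure-tls \
    --dtr-external-url <load-balancer-or-dns-name>
```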
@@ -4,10 +4,10 @@ keywords: docker, registry, high-availability, backup, recovery
title: DTR backups and recovery
---

When you use Docker Trusted Registry in a production setting, you should first
configure it for high availability.

The next step is creating a backup policy and disaster recovery plan.
DTR replicas rely on having a majority available at any given time for writes to
succeed. Therefore, if a majority of replicas are permanently lost, the only way
to restore DTR to a working state is to recover from backups. This is why it's
very important to perform periodic backups.

## DTR data persistence
@@ -17,6 +17,9 @@ Docker Trusted Registry persists:
  that is replicated through all DTR replicas.
* **Repository metadata**: the information about the repositories and
  images deployed. This information is replicated through all DTR replicas.
* **Access control**: permissions for teams and repos.
* **Notary data**: notary tags and signatures.
* **Scan results**: security scanning results for images.
* **Certificates and keys**: the certificates, public keys, and private keys
  that are used for mutual TLS communication.
@@ -33,21 +36,38 @@ command creates a backup of DTR:

* Configurations,
* Repository metadata,
* Access control,
* Notary data,
* Scan results,
* Certificates and keys used by DTR.

These files are added to a tar archive, and the result is streamed to stdout.
This data is added to a tar archive, and the result is streamed to stdout. This
is done while DTR is running, without shutting down any containers.

Things DTR's backup command doesn't back up:

* The vulnerability database (if using image scanning)
* Image contents
* Users, orgs, teams

There is no way to back up the vulnerability database. You can re-download it
after restoring, or re-apply your offline tar update if you are offline.

The backup command does not create a backup of Docker images. You should
implement a separate backup policy for the Docker images, taking into
consideration whether your DTR installation is configured to store images on the
filesystem or using a cloud provider.
filesystem or using a cloud provider. During restore, you need to separately
restore the image contents.

The backup command also doesn't create a backup of the users and organizations.
That data is managed by UCP, so when you create a UCP backup you're creating
a backup of the users and organizations metadata.
a backup of the users and organizations. For this reason, when restoring DTR,
you must do it on the same UCP cluster (or one created by restoring from
backups) or else all DTR resources such as repos will be owned by non-existent
users and will not be usable despite technically existing in the database.

When creating a backup, the resulting .tar file contains sensitive information
like private keys. You should ensure the backups are stored securely.
such as private keys. You should ensure the backups are stored securely.

You can check the
[reference documentation](../../reference/cli/backup.md) for the
@@ -71,6 +91,23 @@ Where:
* `--existing-replica-id` is the ID of the replica to back up,
* `--ucp-username` and `--ucp-password` are the credentials of a UCP administrator.

To avoid having to pass the password as a command-line parameter, you may
instead use the following approach in bash:

```none
$ read -sp 'ucp password: ' PASS; UCP_PASSWORD=$PASS docker run -i --rm -e UCP_PASSWORD docker/dtr backup \
  --ucp-url <ucp-url> \
  --ucp-insecure-tls \
  --existing-replica-id <replica-id> \
  --ucp-username <ucp-admin> > /tmp/backup.tar
```

This puts the password into a shell variable, which is then passed to the
docker client command with the `-e` flag, which in turn relays the password to
the DTR bootstrapper.

## Testing backups

To validate that the backup was correctly performed, you can print the contents
of the tar file created:
@@ -78,27 +115,59 @@ of the tar file created:
$ tar -tf /tmp/backup.tar
```

The structure of the archive should look something like this:

```none
dtr-backup-v2.2.1/
dtr-backup-v2.2.1/rethink/
dtr-backup-v2.2.1/rethink/properties/
dtr-backup-v2.2.1/rethink/properties/0
...
```

To really test that the backup works, you must make a copy of your UCP cluster
by backing it up and restoring it onto separate machines. Then you can restore
DTR there from your backup and verify that it has all the data you expect to
see.

## Restore DTR data

You can restore a DTR node from a backup using the `restore`
command.
This command performs a fresh installation of DTR, and reconfigures it with
the configuration created during a backup.
You can restore a DTR node from a backup using the `restore` command.

The command starts by installing DTR, restores the configurations stored on
etcd, and then restores the repository metadata stored on RethinkDB. You
can use the `--config-only` option to restore only the configurations stored
on etcd.
Note that backups are tied to specific DTR versions and are guaranteed to work
only with those DTR versions. You can backup/restore across patch versions
at your own risk, but not across minor versions, as those require more complex
migrations.

This command does not restore Docker images. You should implement a separate
restore procedure for the Docker images stored in your registry, taking into
consideration whether your DTR installation is configured to store images on
the filesystem or using a cloud provider.
Before restoring DTR, make sure that you are restoring it on the same UCP
cluster or you've also restored UCP using its restore command. DTR does not
manage users, orgs, or teams, so if you try to
restore DTR on a cluster other than the one it was backed up on, DTR
repositories will be associated with users that don't exist and it will appear
as if the restore operation didn't work.

Note that to restore DTR, you must first remove any leftover containers from
the previous installation. To do this, see the [uninstall
documentation](../install/uninstall.md).

The restore command performs a fresh installation of DTR, and reconfigures it with
the configuration created during a backup. The command starts by installing DTR.
Then it restores the configurations from the backup and then restores the
repository metadata. Finally, it applies all of the configs specified as flags to
the restore command.

After restoring DTR, you must make sure that it's configured to use the same
storage backend where it can find the image data. If the image data was backed
up separately, you must restore it now.

Finally, if you are using security scanning, you must re-fetch the security
scanning database through the online update or by uploading the offline tar. See
the [security scanning configuration](../admin/configure/set-up-vulnerability-scans.md)
for more detail.

You can check the
[reference documentation](../../reference/cli/backup.md) for the
backup command to learn about all the available flags.
[reference documentation](../../reference/cli/restore.md) for the
restore command to learn about all the available flags.

As an example, to install DTR on the host and restore its
state from an existing backup:
@@ -121,6 +190,15 @@ Where:
* `--ucp-username` and `--ucp-password` are the credentials of a UCP administrator,
* `--dtr-load-balancer` is the domain name or IP where DTR can be reached.

Note that if you want to avoid typing your password into the terminal, you must
pass it in as an environment variable using the same approach as for the backup
command:

```none
$ read -sp 'ucp password: ' PASS; UCP_PASSWORD=$PASS docker run -i --rm -e UCP_PASSWORD docker/dtr restore ...
```

After you successfully restore DTR, you can join new replicas the same way you
would after a fresh installation.

## Where to go next
|
@ -4,8 +4,6 @@ description: Learn how to integrate Docker Trusted Registry with NFS
|
|||
keywords: registry, dtr, storage, nfs
|
||||
---
|
||||
|
||||
<!-- TODO: review page for v2.2 -->
|
||||
|
||||
You can configure DTR to store Docker images in an NFS directory.
|
||||
|
||||
Before installing or configuring DTR to use an NFS directory, make sure that:
|
||||
|
@ -70,4 +68,4 @@ add it back again.
|
|||
|
||||
## Where to go next
|
||||
|
||||
* [Configure where images are stored](configure-storage.md)
|
||||
* [Configure where images are stored](index.md)
|
||||
|
|
|
@@ -0,0 +1,101 @@
---
description: Configure garbage collection in Docker Trusted Registry
keywords: docker, registry, garbage collection, gc, space, disk space
title: Docker Trusted Registry 2.2 Garbage Collection
---

#### TL;DR

1. Garbage Collection (GC) reclaims disk space from your storage by deleting
   unused layers.
2. GC can be configured to run automatically on a cron schedule, and can also
   be run manually. Only admins can configure these settings.
3. When GC runs, DTR is placed in read-only mode. Pulls will work but
   pushes will fail.
4. The UI will show when GC is running, and an admin can stop GC within the UI.

**Important notes**

The GC cron schedule is set to run in **UTC time**. Containers typically run in
UTC time (unless the system time is mounted), so remember, when configuring the
schedule, that it runs based on UTC time.

GC puts DTR into read-only mode; pulls succeed while pushes fail. Pushing an
image while GC runs may lead to undefined behaviour and data loss, so this is
disabled for safety. For this reason, it's generally best practice to ensure GC
runs in the early morning on a Saturday or Sunday.

## Setting up garbage collection

You can set up GC if you're an admin by hitting "Settings" in the UI, then
choosing "Garbage Collection". By default, GC is disabled, showing this
screen:

{: .with-border}

Here you can configure GC to run **until it's done** or **with a timeout**.
The timeout ensures that your registry will be in read-only mode for a maximum
amount of time.

Select an option (either "Until done" or "For N minutes") and you'll have the
option to configure GC to run via a cron job, with several default crons
provided:

{: .with-border}

You can also choose "Do not repeat" to disable the cron schedule entirely.

Once the cron schedule has been configured (or disabled), you have the option
to save the schedule ("Save") or save the schedule *and* start GC immediately
("Save & Start").

## Stopping GC while it's running

When GC runs, the garbage collection settings page looks as follows:

{: .with-border}

Note the global banner visible to all users, ensuring everyone knows that GC is
running.

An admin can stop the current GC process by hitting "Stop". This safely shuts
down the running GC job and moves the registry into read-write mode, ensuring
pushes work as expected.

## How does garbage collection work?

### Background: how images are stored

Each image stored in DTR is made up of multiple files:

- A list of "layers", which represent the image's filesystem
- The "config" file, which dictates the OS, architecture, and other image
  metadata
- The "manifest", which is pulled first and lists all layers and the config
  file for the image

All of these files are stored in a content-addressable manner. We take the
sha256 hash of the file's content and use the hash as the filename. This means
that if the tags `example.com/user/blog:1.11.0` and `example.com/user/blog:latest`
use the same layers, we only store them once.

### How this impacts GC

Let's continue from the above example, where `example.com/user/blog:latest` and
`example.com/user/blog:1.11.0` point to the same image and use the same layers.
If we delete `example.com/user/blog:latest` but *not*
`example.com/user/blog:1.11.0`, we expect that `example.com/user/blog:1.11.0`
can still be pulled.

This means that we can't delete layers when tags or manifests are deleted.
Instead, we need to pause writing and take reference counts to see how many
times a file is used. Only if a file is never used is it safe to delete it.

This is the basis of our "mark and sweep" collection:

1. Iterate over all manifests in the registry and record all files that are
   referenced.
2. Iterate over all files stored and check whether each file is referenced by
   any manifest.
3. If a file is *not* referenced, delete it.
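
To make the mark and sweep phases concrete, here is a minimal bash sketch over
a hypothetical content-addressed layout (each manifest lists one referenced
digest per line; blobs are stored under their digest). It illustrates the
algorithm only, not DTR's actual storage format:

```bash
#!/usr/bin/env bash
# Hypothetical layout: /registry/manifests/* list referenced digests,
# /registry/blobs/* are files named by their digest.
shopt -s nullglob
declare -A referenced

# Mark phase: record every digest referenced by any manifest.
for manifest in /registry/manifests/*; do
  while IFS= read -r digest; do
    referenced["$digest"]=1
  done < "$manifest"
done

# Sweep phase: delete every blob that no manifest references.
for blob in /registry/blobs/*; do
  digest=$(basename "$blob")
  if [[ -z "${referenced[$digest]:-}" ]]; then
    rm -- "$blob"
  fi
done
```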
@@ -4,8 +4,9 @@ description: Learn how to configure a load balancer to balance user requests acr
keywords: docker, dtr, load balancer
---

Once you’ve joined multiple DTR replica nodes for high-availability, you can
configure your own load balancer to balance user requests across all replicas.
Once you’ve joined multiple DTR replica nodes for
[high-availability](set-up-high-availability.md), you can configure your own
load balancer to balance user requests across all replicas.

@@ -14,18 +15,75 @@ This allows users to access DTR using a centralized domain name. If a replica
goes down, the load balancer can detect that and stop forwarding requests to
it, so that the failure goes unnoticed by users.

## Load-balancing DTR
## Load balancing DTR

DTR does not provide a load balancing service. You can use an on-premises
or cloud-based load balancer to balance requests across multiple DTR replicas.

Make sure you configure your load balancer to:

* Load-balance TCP traffic on ports 80 and 443
* Load balance TCP traffic on ports 80 and 443
* Not terminate HTTPS connections
* Use the `/health` endpoint on each DTR replica, to check if
the replica is healthy and if it should remain on the load balancing pool or
not
* Use the unauthenticated `/health` endpoint (note the lack of an `/api/v0/` in
the path) on each DTR replica, to check if the replica is healthy and if it
should remain in the load balancing pool or not

## Health check endpoints

The `/health` endpoint returns a JSON object for the replica being queried of
the form:

```json
{
  "Error": "error message",
  "Healthy": true
}
```

A response of `"Healthy": true` means the replica is suitable for taking
requests. It is also sufficient to check whether the HTTP status code is 200.
|
||||
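For example, a health probe from a shell can be as simple as the following.
Replace `dtr.example.org` with your replica's address; `-k` skips certificate
verification and is only appropriate for test setups with self-signed
certificates:

```bash
$ curl -ksf https://dtr.example.org/health && echo healthy || echo unhealthy
```
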
An unhealthy replica will return 503 as the status code and populate `"Error"`
with more details on any one of these services:

* Storage container (registry)
* Authorization (garant)
* Metadata persistence (rethinkdb)
* Content trust (notary)

Note that this endpoint is for checking the health of a *single* replica. To get
the health of every replica in a cluster, querying each replica individually is
the preferred way to do it in real time.

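A quick way to do that from a shell is to loop over the replicas (the three
hostnames below are placeholders for your own replica addresses):

```bash
for replica in dtr1.example.org dtr2.example.org dtr3.example.org; do
  curl -kso /dev/null -w "$replica: %{http_code}\n" "https://$replica/health"
done
```
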
The `/api/v0/meta/cluster_status`
[endpoint](https://docs.docker.com/datacenter/dtr/2.2/reference/api/)
returns a JSON object for the entire cluster *as observed* by the replica being
queried, and it takes the form:

```json
{
  "replica_health": {
    "replica id": "OK",
    "another replica id": "error message"
  },
  "replica_timestamp": {
    "replica id": "2006-01-02T15:04:05Z07:00",
    "another replica id": "2006-01-02T15:04:05Z07:00"
  },
  // other fields
}
```

Health statuses for the replicas are available in the `"replica_health"` object.
These statuses are taken from a cache which is last updated by each replica
individually at the time specified in the `"replica_timestamp"` object.

The response also contains information about the internal DTR storage state,
which is around 45 KB of data. This, combined with the fact that the endpoint
requires admin credentials, means it is not particularly appropriate for load
balancer checks. Use `/health` instead for those kinds of checks.

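If you do want to inspect the cluster status manually, one way is to pass admin
credentials with basic authentication. This is a hedged example; adjust the
hostname and credentials, and again note that `-k` is only appropriate for
self-signed test setups:

```bash
$ curl -ks -u admin:<password> \
    https://dtr.example.org/api/v0/meta/cluster_status
```
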
## Where to go next

Where the `--ucp-node` is the hostname of the UCP node where you want to deploy
DTR. `--ucp-insecure-tls` tells the installer to trust the TLS certificates used
by UCP.

The install command has other flags for customizing DTR at install time.
By default the install command runs in interactive mode and prompts for
additional information like:

* DTR external url: the url clients use to reach DTR. If you're using a load
  balancer for DTR, this is the IP address or DNS name of the load balancer
* UCP url: the url clients use to reach UCP
* UCP username and password: administrator credentials for UCP

You can also provide this information to the installer command so that it
runs without prompting.
Check the [reference documentation to learn more](../../../reference/cli/install.md).

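For instance, a non-interactive install could look like the following sketch;
treat the exact flag names as assumptions to verify against the install
reference linked above:

```bash
docker run -it --rm \
  {{ page.docker_image }} install \
  --ucp-node <ucp-node-name> \
  --ucp-url https://<ucp-url> \
  --ucp-username admin \
  --ucp-password <password> \
  --dtr-external-url <load-balancer-or-node-address> \
  --ucp-insecure-tls
```
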
## Step 4. Check that DTR is running

Now that the offline hosts have all the images needed to install DTR,
you can [install DTR on that host](index.md).

### Preventing outgoing connections

DTR makes outgoing connections to:

* report analytics,
* check for new versions,
* check online licenses,
* update the vulnerability scanning database

All of these uses of online connections are optional. You can choose to
disable or not use any or all of these features on the admin settings page.

## Where to go next

keywords: docker, dtr, install, uninstall
title: Uninstall Docker Trusted Registry
---

Uninstalling DTR can be done by simply removing all data associated with each
replica. To do that, you just run the destroy command once per replica:

```none
docker run -it --rm \
  {{ page.docker_image }} destroy \
  --ucp-insecure-tls
```

You will be prompted for the UCP URL, UCP credentials and which replica to
destroy.

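To script the removal, you can pass the same information as flags instead of
answering prompts. This is a hedged sketch; the exact flag names, including
`--replica-id`, are assumptions to verify against the destroy reference linked
below:

```bash
docker run -it --rm \
  {{ page.docker_image }} destroy \
  --ucp-url https://<ucp-url> \
  --ucp-username admin \
  --ucp-password <password> \
  --replica-id <replica-id> \
  --ucp-insecure-tls
```
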
To see what options are available in the destroy command, check the
[destroy command reference documentation](../../../reference/cli/destroy.md).

keywords: docker, registry, monitor, troubleshoot
title: Troubleshoot Docker Trusted Registry
---

<!-- TODO: review page for v2.2 -->
This guide contains tips and tricks for troubleshooting DTR problems.

## Troubleshoot overlay networks

High availability in DTR depends on having overlay networking working in UCP.
One way to test if overlay networks are working correctly is to deploy
containers on different nodes, attach them to the same overlay network,
and see if they can ping one another.

Use SSH to log into a UCP node, and run:

```none
docker run -it --rm \
  --net dtr-ol --name overlay-test1 \
  --entrypoint sh docker/dtr
```

Then use SSH to log into another UCP node and run:

```none
docker run -it --rm \
  --net dtr-ol --name overlay-test2 \
  --entrypoint ping docker/dtr -c 3 overlay-test1
```

If the second command succeeds, it means that overlay networking is working
correctly.

You can run this test with any overlay network, and any Docker image that has
`sh` and `ping`.

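If you prefer not to reuse DTR's `dtr-ol` network, you can create a new overlay
network just for this test:

```bash
docker network create -d overlay network-name
```
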
## Access RethinkDB directly

DTR uses RethinkDB for persisting data and replicating it across replicas.
It might be helpful to connect directly to the RethinkDB instance running on a
DTR replica to check the DTR internal state.

Use SSH to log into a node that is running a DTR replica, and run the following
command, replacing `$REPLICA_ID` by the id of the DTR replica running on that
node:

```none
docker run -it --rm \
  --net dtr-ol \
  -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.2.0 \
  $REPLICA_ID
```

This starts an interactive prompt where you can run RethinkDB queries like:

```none
> r.db('dtr2').table('repositories')
```

[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/).

## Recover from an unhealthy replica

When a DTR replica is unhealthy or down, the DTR web UI displays a warning:

```none
Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy.
```

To fix this, you should remove the unhealthy replica from the DTR cluster,
and join a new one. Start by running:

```none
docker run -it --rm \
  {{ page.docker_image }} remove \
  --ucp-insecure-tls
```

And then:

```none
docker run -it --rm \
  {{ page.docker_image }} join \
  --ucp-node <ucp-node-name> \
  --ucp-insecure-tls
```

keywords: docker, dtr, upgrade, install
title: Upgrade DTR
---

DTR uses [semantic versioning](http://semver.org/) and we aim to achieve specific
guarantees while upgrading between versions. We never support downgrading. We
support upgrades according to the following rules:

* When upgrading from one patch version to another you can skip patch versions
  because no data migration is done for patch versions.
* When upgrading between minor versions, you can't skip versions, but you can
  upgrade from any patch version of the previous minor version to any patch
  version of the current minor version.
* When upgrading between major versions you also have to upgrade one major
  version at a time, but you have to upgrade to the earliest available minor
  version. We also strongly recommend upgrading to the latest minor/patch
  version for your major version first.

| From  | To    | Description                          | Supported |
|:------|:------|:-------------------------------------|:----------|
| 2.2.0 | 2.2.1 | patch upgrade                        | yes       |
| 2.2.0 | 2.2.2 | skip patch version                   | yes       |
| 2.2.2 | 2.2.1 | patch downgrade                      | no        |
| 2.1.0 | 2.2.0 | minor upgrade                        | yes       |
| 2.1.1 | 2.2.0 | minor upgrade                        | yes       |
| 2.1.2 | 2.2.2 | minor upgrade                        | yes       |
| 2.0.1 | 2.2.0 | skip minor version                   | no        |
| 2.2.0 | 2.1.0 | minor downgrade                      | no        |
| 1.4.3 | 2.0.0 | major upgrade                        | yes       |
| 1.4.3 | 2.0.3 | major upgrade                        | yes       |
| 1.4.3 | 3.0.0 | skip major version                   | no        |
| 1.4.1 | 2.0.3 | major upgrade from an old version    | no        |
| 1.4.3 | 2.1.0 | major upgrade skipping minor version | no        |
| 2.0.0 | 1.4.3 | major downgrade                      | no        |

There may be at most a few seconds of interruption during the upgrade of a
DTR cluster. Schedule the upgrade to take place outside business peak hours
to ensure the impact on your business is close to none.

## Minor upgrade

Before starting your upgrade planning, make sure that the version of UCP you are
using is supported by the version of DTR you are trying to upgrade to. <!--(TODO:
link to the compatibility matrix)-->

### Step 1. Upgrade DTR to 2.1 if necessary

Make sure you're running DTR 2.1. If that's not the case, [upgrade your
installation to the 2.1 version](/datacenter/dtr/2.1/guides/install/upgrade/).

### Step 2. Upgrade DTR

To upgrade DTR, **log in with ssh** into a node that's part of the UCP cluster.
Then pull the latest version of DTR:

```none
$ docker pull {{ page.docker_image }}
```

If the node you're upgrading doesn't have access to the internet, you can
follow the [offline installation documentation](../install/install-offline.md)
to get the images.

Once you have the latest image on your machine (and the images on the target
nodes if upgrading offline), run the upgrade command:

```none
$ docker run -it --rm \
  {{ page.docker_image }} upgrade \
  --ucp-insecure-tls
```

By default the upgrade command runs in interactive mode and prompts you for
any necessary information. You can also check the
[reference documentation](../../../reference/cli/index.md) for other existing flags.

The upgrade command will start replacing every container in your DTR cluster,
one replica at a time. It will also perform certain data migrations. If anything
fails or the upgrade is interrupted for any reason, you can re-run the upgrade
command and it will resume from where it left off.

## Patch upgrade

A patch upgrade changes only the DTR containers and is always safer than a minor
upgrade. The command is the same as for a minor upgrade.

## Where to go next

* [Release notes](release-notes.md)

You can then use [the upgrade instructions](index.md)
to upgrade your installation to the latest release.

## DTR 2.2.2

(27 Feb 2017)

**New features**

* The web UI now displays a banner to administrators when a tag migration job
  is running

  

**General improvements**

* Upgraded DTR security scanner
* Security scanner now generates less verbose logs
* Made `docker/dtr join` more resilient when using an NFS storage backend
* Made tag migrations more stable
* Made updates to the vulnerability database more stable

**Bugs fixed**

* Fixed a problem when trying to use Scality as a storage backend. This problem
  affected DTR 2.2.0 and 2.2.1
* You can now use the web UI to create and manage teams that have slashes in
  their name
* Fixed an issue causing RethinkDB to not start due to DNS errors when
  the RethinkDB containers were not restarted at the same time
* The web UI now shows the security scanning button if the vulnerability database
  or security scanner have been updated

## DTR 2.2.1

(9 Feb 2017)

description: Learn how to install, configure, and use Docker Trusted Registry.
keywords: docker, registry, repository, images
title: Docker Trusted Registry 2.2 overview
---

Docker Trusted Registry (DTR) is the enterprise-grade image storage solution
from Docker. You can install it behind your firewall so that you can securely
store and manage the Docker images you use in your applications.

## Image management

DTR can be installed on-premises, or on a virtual private
cloud. And with it, you can store your Docker images securely, behind your
firewall.



You can use DTR as part of your continuous integration, and continuous
delivery processes to build, ship and run your applications.

DTR has a web based user interface that allows authorized users in your
organization to browse docker images. It provides information about
who pushed what image at what time. It even allows you to see what dockerfile
lines were used to produce the image and, if security scanning is enabled, to
see a list of all of the software installed in your images.

## Built-in access control

DTR uses the same authentication mechanism as Docker Universal Control Plane.
Users can be managed manually or synced from LDAP or Active Directory. DTR
uses [Role Based Access Control](admin/manage-users/index.md) (RBAC) to allow
you to implement fine-grained access control policies for who has access to
your Docker images.

## Security scanning

DTR has a built-in security scanner that can be used to discover what versions
of software are used in your images. It scans each layer and aggregates the
results to give you a complete picture of what you are shipping as a part of
your stack. Most importantly, it correlates this information with a
vulnerability database that is kept up to date through [periodic
updates](admin/configure/set-up-vulnerability-scans.md). This
gives you [unprecedented insight into your exposure to known security
threats](user/manage-images/scan-images-for-vulnerabilities.md).

## Image signing

DTR ships with [Notary](../../../notary/getting_started/)
built in so that you can use
[Docker Content Trust](../../../engine/security/trust/content_trust/) to sign
and verify images. For more information about managing Notary data in DTR see
the [DTR-specific notary documentation](user/manage-images/manage-trusted-repositories.md).

## Where to go next

The results of these scans are reported for each image tag.

Docker Security Scanning is available as an add-on to Docker Trusted Registry,
and an administrator configures it for your DTR instance. If you do not see
security scan results available on your repositories, your organization may not
have purchased the Security Scanning feature or it may be disabled. See [Set up
Security Scanning in DTR](../../admin/configure/set-up-vulnerability-scans.md) for more details.

> **Tip**: Only users with write access to a repository can manually start a
scan. Users with read-only access can view the scan results, but cannot start
a new scan.

## The Docker Security Scan process

Scans run either on demand when a user clicks the **Start a Scan** links or
**Scan** button (see [Manual scanning](#manual-scanning) below), or automatically
on any `docker push` to the repository.

First the scanner performs a binary scan on each layer of the image, identifies
the software components in each layer, and indexes the SHA of each component in a
bill-of-materials. A binary scan evaluates the components on a bit-by-bit level,
so vulnerable components are discovered even if they are statically linked or
under a different name.

[//]: # (Placeholder for DSS workflow. @sarahpark is working on the diagram.)

The scan then compares the SHA of each component against the US National
Vulnerability Database that is installed on your DTR instance. When
this database is updated, DTR reviews the indexed components for newly
discovered vulnerabilities.

If you have subscribed to a webhook (see [Manage webhooks](../create-and-manage-webhooks.md))
for scan completed/scan failed, then you will receive the results of the scan
as JSON to the specified endpoint.

Most scans complete within an hour; however, larger repositories may take longer
to scan depending on your system resources.

To start a security scan:

2. Click the **Images** tab.
3. Locate the image tag that you want to scan.
4. In the **Vulnerabilities** column, click **Start a scan**.
{: .with-border}

You can also start a scan from the image details screen:

1. Click **View Details** on the desired image tag.
2. Click **Scan** on the right-hand side, above the layers table.
{: .with-border}

DTR begins the scanning process. You will need to refresh the page to see the
results once the scan is complete.

## Change the scanning mode

To change the repository scanning mode:

1. Navigate to the repository, and click the **Settings** tab.
2. Scroll down to the **Image scanning** section.
3. Select the desired scanning mode.
{: .with-border}

## View security scan results

Once DTR has run a security scan for an image, you can view the results.

The **Images** tab for each repository includes a summary of the most recent
scan results for each image.

{: .with-border}
- A green shield icon with a check mark indicates that the scan did not find
any vulnerabilities.
- A red or orange shield icon indicates that vulnerabilities were found, and

> **Tip**: The layers view can be long, so be sure
to scroll down if you don't immediately see the reported vulnerabilities.

{: .with-border}

- The **Components** view lists the individual component libraries indexed by
the scanning system, in order of severity and number of vulnerabilities found,
most vulnerable first.

the scan report provides details on each one. The component details also
include the license type used by the component, and the filepath to the
component in the image.
{: .with-border}

### What do I do next?

{% assign launch_url = "https://console.aws.amazon.com/cloudformation/home?#/stacks/new?templateURL=" %}
{% assign template_url = "https://s3.amazonaws.com/packages.docker.com/caas/docker/docker_for_aws_ddc_2.1.0.json" %}

Docker Datacenter on Docker for Amazon AWS is a one-click deployment of DDC on
AWS. It deploys multiple nodes with Docker CS Engine, and then installs
highly available versions of Universal Control Plane and Docker Trusted
Registry.



Docker Datacenter License in JSON format or an S3 URL to download it. You can
get a trial license [here](https://store.docker.com/bundles/docker-datacenter).

**EnableSystemPrune**

Enable if you want Docker for AWS to automatically clean up unused space on your swarm nodes.

Pruning removes the following:

- All dangling images
- All unused networks

**WorkerDiskSize**
Size of the worker's ephemeral storage volume in GiB (20 - 1024).

**WorkerDiskType**
Worker ephemeral storage volume type ("standard", "gp2").

**ManagerDiskSize**
Size of the manager's ephemeral storage volume in GiB (20 - 1024).

**ManagerDiskType**
Manager ephemeral storage volume type ("standard", "gp2").

- Click on **Launch Stack** below. This link will take you to the AWS CloudFormation portal.

  []({{ launch_url }}{{ template_url }}){: .with-border}

- Confirm the AWS Region that you'd like to launch this stack in (top right corner)
- Provide the required parameters and click **Next** (see below)

  {: .with-border}

- **Confirm** and **Launch**
- Once the stack is successfully created (it takes between 10-15 mins), click on the **Output** tab to see the URLs of UCP and DTR.

Once the stack is successfully created, you can access the UCP and DTR URLs in the
output tab as follows:

{: .with-border}

When accessing UCP and DTR, log in using the username and password that you
provided when you launched the cloudformation stack. You should see the below
landing pages:

{: .with-border}

{: .with-border}

> Note: During the installation process, a self-signed certificate is generated
for both UCP and DTR. You can replace these certificates with your own

b. Notice the updated ELB configuration:

   {: .with-border}

c. Access your application using the **DefaultExternalTarget** DNS and published port:

   {: .with-border}

2. **Swarm Mode Routing Mesh**

   ```none
   docker service create -p 8080 \
     --network ucp-hrm \
     --name demo-hrm-app \
     --label com.docker.ucp.mesh.http.8080=external_route=http://foo.example.com,internal_port=8080 \
     ehazlett/docker-demo:dcus
   ```

Once you find it, click the checkbox next to the name. Then click on the
"Edit" button on the lower detail pane.

{: .with-border}

Change the "Desired" field to the size of the worker pool that you would like,
and hit "Save".

{: .with-border}

This will take a few minutes and add the new workers to your swarm
automatically. To lower the number of workers back down, you just need to

Go to the CloudFormation management page, and click the checkbox next to the
stack you want to update. Then click on the action button at the top, and
select "Update Stack".

{: .with-border}

Pick "Use current template", and then click "Next". Fill out the same parameters
you have specified before, but this time, change your worker count to the new

description: Learn about Docker Universal Control Plane, the enterprise-grade cluster
management solution from Docker.
keywords: docker, ucp, overview, orchestration, clustering
redirect_from:
- /ucp/overview/
title: Universal Control Plane overview
---

## Where to go next

* [Get started with UCP](install-sandbox.md)
* [UCP architecture](architecture.md)

title: Integrate with LDAP

Docker UCP integrates with LDAP services, so that you can manage users from a
single place.

When you switch from built-in authentication to LDAP authentication,
all manually created users whose usernames do not match any LDAP search results
become inactive, with the exception of the recovery admin user, which can still
log in with the recovery admin password.

## Configure the LDAP integration

To configure UCP to authenticate users using an LDAP service, go to

You can also manually synchronize users by clicking the **Sync Now** button.

When a user is removed from LDAP, that user becomes inactive after the LDAP
synchronization runs.

If you need help, you can file a ticket via:

Be sure to use your company email when filing tickets.

## Download a support dump

Docker Support engineers may ask you to provide a UCP support dump, which is an
archive that contains UCP system logs and diagnostic information. To obtain a
support dump:

## From the UI

1. Log into the UCP UI with an administrator account.
2. On the top-right menu, **click your username**, and choose **Support Dump**.

   {: .with-border}

## From the CLI

To get the support dump from the CLI, use SSH to log into a UCP manager node
and run:

```none
docker run --rm \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} \
  support > docker-support.tgz
```

## Team permission levels

Teams and labels give the administrator fine-grained control over permissions.
Each team can have multiple labels. Each label has a key of
`com.docker.ucp.access.label`. The label is then applied to the containers,
services, networks, secrets and volumes. Labels are not currently available
for nodes and images. DTR has its own permissions.

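As a hedged example (the label value `blog-team` and the image are made-up
placeholders), applying such a label when creating a service looks like this:

```bash
# Create a service carrying an access label; the team whose permissions are
# tied to this label value then governs access to the service.
docker service create --name web \
  --label com.docker.ucp.access.label=blog-team \
  nginx:alpine
```
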
There are four permission levels:

The next step is creating a backup policy and disaster recovery plan.

## Backup policy

As part of your backup policy you should regularly create backups of UCP.

To create a UCP backup, you can run the `{{ page.docker_image }} backup` command
on a single UCP manager. This command creates a tar archive with the
contents of all the [volumes used by UCP](../architecture.md) to persist data
and streams it to stdout.

You only need to run the backup command on a single UCP manager node. Since UCP
stores the same data on all manager nodes, you only need to take periodic
backups of a single manager node.

To create a consistent backup, the backup command temporarily stops the UCP
containers running on the node where the backup is being performed. User
resources, such as services, containers and stacks are not affected by this
operation and will continue operating as expected. Any long-lasting `exec`,
`logs`, `events` or `attach` operations on the affected manager node will
be disconnected.

Additionally, if UCP is not configured for high availability, you will be
temporarily unable to:

* Log in to the UCP Web UI
* Perform CLI operations using existing client bundles

To minimize the impact of the backup policy on your business, you should:

* Configure UCP for high availability. This allows load-balancing user requests
  across multiple UCP manager nodes.
* Schedule the backup to take place outside business hours.

## Backup command

The example below shows how to create a backup of a UCP manager node and
verify its contents:

```none
# Create a backup and store it on /tmp/backup.tar
$ docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} backup --interactive > /tmp/backup.tar

# Ensure the backup is a valid tar and list its contents
# In a valid backup file, over 100 files should appear in the list
# and the `./ucp-node-certs/key.pem` file should be present
$ tar --list -f /tmp/backup.tar
```

A backup file may optionally be encrypted using a passphrase, as in the
following example:

```none
# Create a backup, encrypt it, and store it on /tmp/backup.tar
$ docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} backup --interactive \
  --passphrase "secret" > /tmp/backup.tar

# Decrypt the backup and list its contents
$ gpg --decrypt /tmp/backup.tar | tar --list
```

## Restore your cluster

The restore command can be used to create a new UCP cluster from a backup file.
After the restore operation is complete, the following data will be recovered
from the backup file:

* Users, teams and permissions.
* All UCP configuration options available under `Admin Settings`, such as the
  DDC subscription license, scheduling options, Content Trust and authentication
  backends.

There are two ways to restore a UCP cluster:

* On a manager node of an existing swarm, which is not part of a UCP
  installation. In this case, a UCP cluster will be restored from the backup.
* On a docker engine that is not participating in a swarm. In this case, a new
  swarm will be created and UCP will be restored on top.

In order to restore an existing UCP installation from a backup, you will need to
first uninstall UCP from the cluster by using the `uninstall-ucp` command.

The example below shows how to restore a UCP cluster from an existing backup
file, presumed to be located at `/tmp/backup.tar`:

```none
$ docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} restore < /tmp/backup.tar
```

If the backup file is encrypted with a passphrase, you will need to provide the
passphrase to the restore operation:

```none
$ docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  {{ page.docker_image }} restore --passphrase "secret" < /tmp/backup.tar
```

The restore command may also be invoked in interactive mode, in which case the
backup file should be mounted to the container rather than streamed through
stdin:

```none
$ docker run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/backup.tar:/config/backup.tar \
  {{ page.docker_image }} restore -i
```

## Disaster recovery

In the event where half or more manager nodes are lost and cannot be recovered
to a healthy state, the system is considered to have lost quorum and can only be
restored through the following disaster recovery procedure.

It is important to note that this procedure is not guaranteed to succeed with
no loss of running services or configuration data. To properly protect against
manager failures, the system should be configured for [high availability](configure/set-up-high-availability.md).

1. On one of the remaining manager nodes, perform `docker swarm init
   --force-new-cluster`. You may also need to specify an
   `--advertise-addr` parameter which is equivalent to the `--host-address`
   parameter of the `docker/ucp install` operation. This will instantiate a new
   single-manager swarm by recovering as much state as possible from the
   existing manager. This is a disruptive operation and existing tasks may be
   either terminated or suspended (see the example after this list).
2. Obtain a backup of one of the remaining manager nodes if one is not already
   available.
3. If UCP is still installed on the cluster, uninstall UCP using the
   `uninstall-ucp` command.
4. Perform a restore operation on the recovered swarm manager node.
5. Log in to UCP and browse to the nodes page, or use the CLI `docker node ls`
   command.
6. If any nodes are listed as `down`, you'll have to manually [remove these
   nodes](../configure/scale-your-cluster.md) from the cluster and then re-join
   them using a `docker swarm join` operation with the cluster's new join-token.

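As an example, step 1 on the surviving manager might look like the following;
the address is a placeholder for that node's own IP:

```bash
# Re-create a single-manager swarm from this node's local state.
# 192.0.2.10 is a placeholder; use the node's own address.
docker swarm init --force-new-cluster --advertise-addr 192.0.2.10:2377
```
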
## Where to go next

---
title: Add SANs to cluster certificates
description: Learn how to add new SANs to cluster nodes, allowing you to connect to UCP with a different hostname
keywords: Docker, cluster, nodes, labels, certificates, SANs
---

UCP always runs with HTTPS enabled. When you connect to UCP, you need to make
sure that the hostname that you use to connect is recognized by UCP's
certificates. If, for instance, you put UCP behind a load balancer that
forwards its traffic to your UCP instance, your requests will be for the load
balancer's hostname or IP address, not UCP's. UCP will reject these requests
unless you include the load balancer's address as a Subject Alternative Name
(or SAN) in UCP's certificates.

If you [use your own TLS certificates](use-your-own-tls-certificates.md), you
need to make sure that they have the correct SAN values. You can learn more
at the above link.

If you want to use the self-signed certificate that UCP has out of the box, you
can set up the SANs when you install UCP with the `--san` argument. You can
also add them after installation.

## Add new SANs to UCP after installation

Log in with administrator credentials in the **UCP web UI**, navigate to the
**Nodes** page, and choose a node.

Click the **Add SAN** button, and add one or more SANs to the node.

{: .with-border}

Once you're done, click **Save Changes**.

You will have to do this on every manager node in the cluster, but once you
have done so, the SANs will be automatically applied to any new manager nodes
that join the cluster.

You can also do this from the CLI by first running:

```bash
{% raw %}
$ docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
default-cs,127.0.0.1,172.17.0.1
{% endraw %}
```

This will get the current set of SANs for the given manager node. Append your
desired SAN to this list (for example, `default-cs,127.0.0.1,172.17.0.1,example.com`)
and then run:

```bash
$ docker node update --label-add com.docker.ucp.SANs=<SANs-list> <node-id>
```

`<SANs-list>` is the list of SANs with your new SAN appended at the end. As in
the web UI, you must do this for every manager node.

@ -1,63 +1,109 @@
|
|||
---
|
||||
description: Learn how to integrate UCP with an LDAP service, so that you can manage
|
||||
users from a single place.
|
||||
keywords: LDAP, authentication, user management
|
||||
title: Integrate with LDAP
|
||||
description: Learn how to integrate UCP with an LDAP service, so that you can
|
||||
manage users from a single place.
|
||||
keywords: LDAP, directory, authentication, user management
|
||||
title: Integrate with an LDAP Directory
|
||||
---
|
||||
|
||||
Docker UCP integrates with LDAP services, so that you can manage users from a
|
||||
single place.
|
||||
Docker UCP integrates with LDAP directory services, so that you can manage
|
||||
users and groups from your organization's directory and it will automatically
|
||||
propagate that information to UCP and DTR.
|
||||
|
||||
When you switch from built-in authentication to LDAP authentication,
|
||||
all manually created users whose usernames do not match any LDAP search results
|
||||
become inactive with the exception of the recovery admin user which can still
|
||||
login with the recovery admin password.
|
||||
|
||||
## Configure the LDAP integration
|
||||
|
||||
To configure UCP to authenticate users using an LDAP service, go to
|
||||
the **UCP web UI**, navigate to the **Settings** page, and click the **Auth**
|
||||
tab.
|
||||
To configure UCP to create and authenticate users using an LDAP directory,
|
||||
go to the **UCP web UI**, navigate to the **Settings** page, and click the
|
||||
**Auth** tab.
|
||||
|
||||
{: .with-border}
|
||||
|
||||
Then configure your LDAP integration.
|
||||
Then configure your LDAP directory integration.
|
||||
|
||||
**Authentication**
|
||||
|
||||
| Field | Description |
|
||||
|:-------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| Method | The method used to authenticate users. Managed authentication uses the UCP built-in authentication mechanism. LDAP uses an LDAP service to authenticate users. |
|
||||
| Default permission for newly discovered accounts | The permission level assigned by default to a new user. Learn more about default permission levels. |
|
||||
| Field | Description |
|
||||
|:-------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| Method | The method used to create and authenticate users. The *LDAP* method uses a remote directory server to automatically create users and all logins will be forwarded to the directory server. |
|
||||
| Default permission for newly discovered accounts | The permission level assigned by default to a new user. [Learn more about default permission levels](../../manage-users/permission-levels.md). |
|
||||
|
||||
**LDAP server configuration**
|
||||
|
||||
| Field | Description |
|
||||
|:------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| LDAP server URL | The URL where the LDAP server can be reached. |
|
||||
| Recovery admin username | The username for a recovery user that can access UCP even when the integration with LDAP is misconfigured or the LDAP server is offline. |
|
||||
| Recovery admin password | The password for the recovery user. |
|
||||
| Reader DN | The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice this should be an LDAP read-only user. |
|
||||
| Reader password | The password of the account used for searching entries in the LDAP server. |
|
||||
| Field | Description |
|
||||
|:------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| LDAP server URL | The URL where the LDAP server can be reached. |
|
||||
| Recovery admin username | The username for a recovery user that can access UCP even when the integration with LDAP is misconfigured or the LDAP server is offline. |
|
||||
| Recovery admin password | The password for the recovery user which is securely salted and hashed and stored in UCP. The recovery admin user can use this password to login if the LDAP server is misconfigured or offline. |
|
||||
| Reader DN | The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice this should be an LDAP read-only user. |
|
||||
| Reader password | The password of the account used for searching entries in the LDAP server. |
|
||||
|
||||
**LDAP security options**
|
||||
|
||||
| Field | Description |
|
||||
|:----------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| Skip verification of server certificate | Whether to verify or not the LDAP server certificate when using TLS. The connection is still encrypted, but vulnerable to man-in-the-middle attacks. |
|
||||
| Use StartTLS | Whether to connect to the LDAP server using TLS or not. If you set the LDAP Server URL field with `ldaps://`, this field is ignored. |
|
||||
| Field | Description |
|
||||
|:----------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| Skip verification of server certificate | Whether to verify the LDAP server certificate when using TLS. The connection is still encrypted, but vulnerable to man-in-the-middle attacks. |
|
||||
| Use StartTLS | Whether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with `ldaps://`, this field is ignored. |
|
||||
|
||||
**User search configurations**

| Field | Description |
|:------------------------------|:------------|
| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Username attribute | The LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: `/` `\` `[` `]` `:` `;` `\|` `=` `,` `+` `*` `?` `<` `>` `'` `"`. |
| Full name attribute | The LDAP attribute to use as the user's full name for display purposes. If left empty, UCP will not create new users with a full name value. |
| Filter | The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. |
| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
| Match group members | Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support `memberOf` search filters. |
| Iterate through group members | If `Match Group Members` is selected, this option searches for users by first iterating over the target group's membership and making a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter, or if your directory server does not support simple pagination of search results. |
| Group DN | If `Match Group Members` is selected, this specifies the distinguished name of the group from which to select users. |
| Group member attribute | If `Match Group Members` is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. |

{: .with-border}

Clicking **+ Add another user search configuration** will expand additional
sections for configuring more user search queries. This is useful in cases
where users may be found in multiple distinct subtrees of your organization's
directory. Any user entry which matches at least one of the search
configurations will be synced as a user.
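
For example, one search configuration might cover an engineering subtree with a
group-based filter. This is a hypothetical illustration; the DNs and attributes
are placeholders for your own directory:

```none
Base DN:            ou=engineering,dc=example,dc=com
Username attribute: uid
Filter:             (&(objectClass=inetOrgPerson)(memberOf=cn=docker-users,ou=groups,dc=example,dc=com))
Search scope:       Search through the full LDAP tree starting at the Base DN
```
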
**Advanced LDAP configuration**

| Field | Description |
|:----------------------------|:------------|
| No simple pagination | Select this option if your LDAP server doesn't support pagination. |
| Enable sync of admin users | Whether to import LDAP users as UCP administrators. |
| LDAP Match Method | If admin user sync is enabled, this option specifies whether to match admin user entries using a search query or by selecting them as members of a group. For the expanded options, refer to the options described below. |

**Match LDAP Group Members**

This option specifies that system admins should be synced directly with members
of a group in your organization's LDAP directory. The admins will be synced to
match the membership of the group. The configured recovery admin user will also
remain a system admin.

| Field | Description |
|:------------------------|:------------|
| Group DN | The distinguished name of the group from which to select users. |
| Group member attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. |

**Match LDAP Search Results**

This option specifies that system admins should be synced using a search query
against your organization's LDAP directory. The admins will be synced to match
the users in the search results. The configured recovery admin user will also
remain a system admin.

| Field | Description |
|:---------------|:------------|
| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope are synced as admins. |

**Sync configuration**

@ -67,10 +113,10 @@ Then configure your LDAP integration.

**Test LDAP connection**

| Field | Description |
|:----------|:------------|
| Username | The username with which the user will log in to this application. This value should correspond to the Username Attribute specified in the form above. |
| Password | The user's password used to authenticate (BIND) to the directory server. |

Before you save the configuration changes, you should test that the integration
is correctly configured. You can do this by providing the credentials of an

@ -79,9 +125,9 @@ LDAP user, and clicking the **Test** button.
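
To sanity-check the same credentials outside UCP, you can run a bind-and-search
with the OpenLDAP client tools. This is a hypothetical example; the server URL
and DNs are placeholders for your own directory:

```bash
# Bind as the test user and look up their own entry.
ldapsearch -H ldap://ldap.example.com \
  -D "uid=jdoe,ou=people,dc=example,dc=com" -w 'user-password' \
  -b "ou=people,dc=example,dc=com" "(uid=jdoe)"
```
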
## Synchronize users

Once you've configured the LDAP integration, UCP synchronizes users based on
the interval you've defined, starting at the top of the hour. When the
synchronization runs, UCP stores logs that can help you troubleshoot when
something goes wrong.

You can also manually synchronize users by clicking the **Sync Now** button.

@ -89,5 +135,20 @@ You can also manually synchronize users by clicking the **Sync Now** button.

When a user is removed from LDAP, that user becomes inactive after the LDAP
synchronization runs.

Also, when you switch from the built-in authentication to using LDAP
authentication, all manually created users whose usernames do not match any
LDAP search results become inactive, with the exception of the recovery admin
user, which can still log in with the recovery admin password.

## Data synced from your organization's LDAP directory

UCP saves a minimal amount of user data required to operate. This includes
the value of the username and full name attributes that you have specified in
the configuration, as well as the distinguished name of each synced user.
UCP does not query or store any additional data from the directory server.

## Syncing Teams

To sync teams in UCP with a search query or group in your organization's
LDAP directory, refer to [the documentation on creating and managing teams](../../manage-users/create-and-manage-teams.md).

@ -9,7 +9,7 @@ Docker UCP is designed for scaling horizontally as your applications grow in
size and usage. You can add or remove nodes from the UCP cluster to make it
scale to your needs.

Since UCP leverages the clustering functionality provided by Docker Engine,
you use the [docker swarm join](/engine/swarm/swarm-tutorial/add-nodes.md)

@ -47,7 +47,7 @@ the **Resources** page, and go to the **Nodes** section.

Click the **Add Node** button to add a new node.

{: .with-border}

Check the 'Add node as a manager' option if you want to add the node as manager.
Also, set the 'Use a custom listen address' option to specify the IP of the

@ -56,12 +56,36 @@ host that you'll be joining to the cluster.

Then you can copy the command displayed, use ssh to **log into the host** that
you want to join to the cluster, and **run the command** on that host.

{: .with-border}

After you run the join command on the node, the node appears in UCP.

## Remove nodes from the cluster

1. If the target node is a manager, you will need to first demote the node into
   a worker before proceeding with the removal:
   * From the UCP web UI, navigate to the **Resources** section and then go to
     the **Nodes** page. Select the node you wish to remove and switch its role
     to **Worker**. Wait until the operation completes and confirm that the
     node is no longer a manager.
   * From the CLI, run `docker node ls` and identify the nodeID or hostname
     of the target node. Then, run `docker node demote <nodeID or hostname>`.

2. If the status of the worker node is `Ready`, you'll need to manually force
   the node to leave the swarm. To do this, connect to the target node through
   SSH and run `docker swarm leave --force` directly against the local Docker
   Engine. Warning: do not perform this step if the node is still a manager, as
   that may cause loss of quorum.

3. Now that the status of the node is reported as `Down`, you may remove the
   node:
   * From the UCP web UI, browse to the **Nodes** page, select the node, and
     click the **Remove Node** button. You will need to click the button
     again within 5 seconds to confirm the operation.
   * From the CLI, run `docker node rm <nodeID or hostname>`. The three CLI
     steps are combined in the sketch after this list.
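
A minimal end-to-end sketch of the CLI path, using a hypothetical node named
`node2` (run the demote and remove commands on a manager, and the leave
command on the node being removed):

```bash
# On a manager: demote node2 if it is currently a manager.
docker node demote node2

# On node2 itself: force it to leave the swarm.
docker swarm leave --force

# Back on a manager: once node2 reports as Down, remove it.
docker node rm node2
```
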
## Pause and drain nodes

Once a node is part of the cluster, you can change its role, making a manager
node into a worker and vice versa. You can also configure the node availability

@ -72,12 +96,52 @@ so that it is:

* Drained: the node won't receive new tasks. Existing tasks are stopped and
  replica tasks are launched in active nodes.

{: .with-border}

If you're load-balancing user requests to UCP across multiple manager nodes,
don't forget to remove those nodes from your load-balancing pool when you
demote them to workers.

## Scale your cluster from the CLI

You can also use the command line to do all of the above operations. To get the
join token, run the following command on a manager node:

```none
$ docker swarm join-token worker
```

If you want to add a new manager node instead of a worker node, use
`docker swarm join-token manager` instead. If you want to use a custom listen
address, add the `--listen-addr` arg:

```none
docker swarm join \
    --token SWMTKN-1-2o5ra9t7022neymg4u15f3jjfh0qh3yof817nunoioxa9i7lsp-dkmt01ebwp2m0wce1u31h6lmj \
    --listen-addr 234.234.234.234 \
    192.168.99.100:2377
```

Once your node is added, you can see it by running `docker node ls` on a manager:

```none
$ docker node ls
```

To change the node's availability, use:

```none
$ docker node update --availability drain node2
```

You can set the availability to `active`, `pause`, or `drain`.

To remove the node, use:

```none
$ docker node rm <node-hostname>
```

## Where to go next

* [Use your own TLS certificates](use-your-own-tls-certificates.md)

@ -183,7 +183,7 @@ domains should be redirected to it. For example, a website that has been
renamed might use this functionality. The following labels accomplish this for
`new.example.com` and `old.example.com`:

* `com.docker.ucp.mesh.http.1=external_route=http://old.example.com,redirect=http://new.example.com`
* `com.docker.ucp.mesh.http.2=external_route=http://new.example.com`

## Troubleshoot

@ -192,4 +192,4 @@ If a service is not configured properly for use of the HTTP routing mesh, this
information is available in the UI when inspecting the service.

More logging from the HTTP routing mesh is available in the logs of the
`ucp-controller` containers on your UCP manager nodes.

@ -82,7 +82,7 @@ Now that UCP is installed, you need to license it. In your browser, navigate
to the UCP web UI, log in with your administrator credentials, and upload your
license.

{: .with-border}

If you're registered in the beta program and don't have a license yet, you
can get it from your [Docker Store subscriptions](https://store.docker.com/?overlay=subscriptions).

@ -101,11 +101,11 @@ for worker nodes to execute.

To join manager nodes to the swarm, go to the **UCP web UI**, navigate to
the **Resources** page, and go to the **Nodes** section.

{: .with-border}

Click the **Add Node** button to add a new node.

{: .with-border}

Check the 'Add node as a manager' option to turn this node into a manager and
replicate UCP for high-availability.

@ -119,7 +119,7 @@ can reach it.

For each manager node that you want to join to UCP, log in to that
node using ssh, and run the join command that is displayed on UCP.

{: .with-border}

After you run the join command on the node, the node appears in UCP.

@ -53,7 +53,7 @@ cause poor performance or even failures.

## Load balancing strategy

Docker UCP does not include a load balancer. You can configure your own
load balancer to balance user requests across all manager nodes.

If you plan on using a load balancer, you need to decide whether you are going
to add the nodes to the load balancer using their IP address, or their FQDN.
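
As an illustration, a TCP passthrough load balancer (HAProxy here, as a sketch;
the node addresses are placeholders) could forward port 443 to all managers:

```none
frontend ucp_frontend
    bind *:443
    mode tcp
    default_backend ucp_managers

backend ucp_managers
    mode tcp
    option ssl-hello-chk
    server node1 node1.company.example.org:443 check
    server node2 node2.company.example.org:443 check
    server node3 node3.company.example.org:443 check
```
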
@ -83,10 +83,10 @@ need to have a certificate bundle that has:

* A ca.pem file with the root CA public certificate,
* A cert.pem file with the server certificate and any intermediate CA public
  certificates. This certificate should also have SANs for all addresses used to
  reach the UCP manager,
* A key.pem file with the server private key.

You can have a certificate for each manager, with a common SAN. As an
example, on a three-node cluster you can have:

* node1.company.example.org with SAN ucp.company.org

@ -94,9 +94,9 @@ example, on a three node cluster you can have:
* node3.company.example.org with SAN ucp.company.org

Alternatively, you can also install UCP with a single externally-signed
certificate for all managers rather than one for each manager node.
In that case, the certificate files will automatically be copied to any new
manager nodes joining the cluster or being promoted into managers.
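
To confirm which SANs a given certificate carries, one quick check, assuming
the OpenSSL CLI is available, is:

```bash
# Print the certificate's Subject Alternative Name entries.
openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'
```
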
## Where to go next

@ -54,6 +54,14 @@ Docker Datacenter is a software subscription that includes 3 products:

[Learn more about the maintenance lifecycle for these products](http://success.docker.com/Get_Help/Compatibility_Matrix_and_Maintenance_Lifecycle).

## Version compatibility

UCP 2.1 requires minimum versions of the following Docker components:

- Docker Engine 1.13.0
- Docker Remote API 1.25
- Compose 1.9
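
As a convenience (not part of the original requirements list), you can check
the Engine and API versions a node is running with:

```bash
# Print the server version and the API version it serves.
docker version --format '{{.Server.Version}} (API {{.Server.APIVersion}})'
```
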
## Where to go next

* [UCP architecture](../../architecture.md)

@ -5,21 +5,25 @@ title: Uninstall UCP
---

Docker UCP is designed to scale as your applications grow in size and usage.
You can [add and remove nodes](../configure/scale-your-cluster.md) from the
cluster, to make it scale to your needs.

You can also uninstall Docker Universal Control Plane from your cluster. In
this case the UCP services are stopped and removed, but your Docker Engines
will continue running in swarm mode. Your applications will continue running
normally.

If you wish to remove a single node from the UCP cluster, you should instead
[remove that node from the cluster](../configure/scale-your-cluster.md).

After you uninstall UCP from the cluster, you'll no longer be able to enforce
role-based access control on the cluster, or have a centralized way to monitor
and manage the cluster.

After uninstalling UCP from the cluster, you will no longer be able to join new
nodes using `docker swarm join` unless you reinstall UCP.

To uninstall UCP, log in to a manager node using ssh, and run the following
command:

```bash
$ docker run --rm -it \

@ -29,9 +33,20 @@ $ docker run --rm -it \
```

This runs the uninstall command in interactive mode, so that you are prompted
for any necessary configuration values. Running this command on a single manager
node will uninstall UCP from the entire cluster. [Check the reference
documentation](../../../reference/cli/index.md) to learn the options available
in the `uninstall-ucp` command.

## Swarm mode CA

After uninstalling UCP, the nodes in your cluster will still be in swarm mode,
but you cannot join new nodes until you reinstall UCP, because swarm mode was
relying on UCP to provide the CA certificates that allow nodes in the cluster
to identify each other. Additionally, since swarm mode is no longer controlling
its own certificates, if the certificates expire after you uninstall UCP, the
nodes in the cluster will not be able to communicate at all. To fix this,
either reinstall UCP before the certificates expire or disable swarm mode by
running `docker swarm leave --force` on every node.

## Where to go next

@ -1,14 +1,15 @@
---
description: Learn how to create and manage user permissions, using teams in
  your Docker Universal Control Plane cluster.
keywords: authorize, authentication, users, teams, groups, sync, UCP, Docker
title: Create and manage teams
---

You can extend the user's default permissions by granting them fine-grained
permissions over resources. You do this by adding the user to a team.
A team defines the permissions users have for resources that have the label
`com.docker.ucp.access.label` applied to them. Keep in mind that a label can
be applied to multiple teams with different permission levels.

To create a new team, go to the **UCP web UI**, and navigate to the
**Users & Teams** page.

@ -27,6 +28,47 @@ Then choose the list of users that you want to add to the team.

{: .with-border}

## Sync team members with your organization's LDAP directory

If UCP is configured to sync users with your organization's LDAP directory
server, you will have the option to enable syncing the new team's members when
creating a new team or when modifying settings of an existing team.
[Learn how to configure integration with an LDAP directory](../configure/external-auth/index.md).
Enabling this option will expand the form with additional fields for
configuring the sync of team members.

{: .with-border}

There are two methods for matching group members from an LDAP directory:

**Match LDAP Group Members**

This option specifies that team members should be synced directly with members
of a group in your organization's LDAP directory. The team's membership will be
synced to match the membership of the group.

| Field | Description |
|:------------------------|:------------|
| Group DN | The distinguished name of the group from which to select users. |
| Group member attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. |

**Match LDAP Search Results**

This option specifies that team members should be synced using a search query
against your organization's LDAP directory. The team's membership will be
synced to match the users in the search results.

| Field | Description |
|:---------------|:------------|
| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. |
| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. |
| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope will be added as members of the team. |
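
As an illustration, a team could be synced from all users in a hypothetical
engineering subtree; the DN and filter below are placeholders for your own
directory:

```none
Base DN:       ou=people,dc=example,dc=com
Search scope:  Search through the full LDAP tree starting at the Base DN
Search Filter: (&(objectClass=inetOrgPerson)(departmentNumber=42))
```
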
**Immediately Sync Team Members**

Select this option to immediately run an LDAP sync operation after saving the
configuration for the team. It may take a moment before the members of the team
are fully synced.

## Manage team permissions

@ -1,6 +1,6 @@
---
description: Learn about the permission levels available in Docker Universal
  Control Plane.
keywords: authorization, authentication, users, teams, UCP
title: Permission levels
---

@ -34,13 +34,11 @@ access to full control over the resources.
| `Restricted Control` | The user can view and edit volumes, networks, and images. They can create containers, but can't see other users' containers, run `docker exec`, or run containers that require privileged access to the host. |
| `Full Control` | The user can view and edit volumes, networks, and images. They can create containers without any restriction, but can't see other users' containers. |

If a user has Restricted Control or Full Control default permissions, they can
create resources without labels, and only the user and admins can see and
access the resources. Default permissions also affect the ability of a user to
access things that can't have labels: images and nodes.

## Team permission levels

Teams and labels give the administrator fine-grained control over permissions.
Each team can have multiple labels. Each label has a key of
`com.docker.ucp.access.label`. The label is then applied to the containers,
services, networks, secrets, and volumes. Labels are not currently available
for nodes and images; DTR has its own permissions.

There are four permission levels:

@ -55,3 +53,4 @@ There are four permission levels:

* [Create and manage users](create-and-manage-users.md)
* [Create and manage teams](create-and-manage-teams.md)
* [Docker Reference Architecture: Securing Docker Datacenter and Security Best Practices](https://success.docker.com/KBase/Docker_Reference_Architecture%3A_Securing_Docker_Datacenter_and_Security_Best_Practices)

@ -1,70 +1,65 @@
---
description: Monitor your Docker Universal Control Plane installation, and
  learn how to troubleshoot it.
keywords: Docker, UCP, troubleshoot
title: Monitor your cluster
---

You can monitor the status of UCP by using the web UI or the CLI.
You can also use the `_ping` endpoint to build monitoring automation.

## Check status from the UI

The first place to check the status of UCP is the **UCP web UI**, since it
shows warnings for situations that require your immediate attention.
Administrators might see more warnings than regular users.

{: .with-border}

You can also navigate to the **Nodes** page, to see if all the nodes
managed by UCP are healthy or not.

{: .with-border}

Each node has a status message explaining any problems with the node.
[Learn more about node status](troubleshoot-node-messages.md).

If you're an administrator, you can also check the state and logs of the
UCP internal services.

To check the state of the `ucp-agent` service, navigate to the **Services** page
and toggle the **Show system services** option.

{: .with-border}

The `ucp-agent` service monitors the node where it is running, deploys other
UCP internal components, and ensures they keep running. The UCP components that
are deployed on a node depend on whether the node is a manager or worker.
[Learn more about the UCP architecture](../../architecture.md).

To check the state and logs of other UCP internal components, go to the
**Containers** page, and apply the **System containers** filter.
This can help validate that all UCP internal components are up and running.

{: .with-border}

It's normal for the `ucp-reconcile` container to be stopped. This container
only runs when the `ucp-agent` detects that a UCP internal component should be
running but for some reason it's not. In this case the `ucp-agent` starts the
`ucp-reconcile` service to start all UCP services that need to be running.
Once that is done, the `ucp-reconcile` service stops.
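
From a manager node, a rough CLI equivalent of this check (a sketch, assuming
ssh access to the node) is:

```bash
# List all UCP system containers, including stopped ones like ucp-reconcile.
docker ps -a --filter "name=ucp-" --format "table {{.Names}}\t{{.Status}}"
```
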
## Check status from the CLI

You can also monitor the status of a UCP cluster using the Docker CLI client.
[Download a UCP client certificate bundle](../../user/access-ucp/cli-based-access.md)
and then run:

```none
$ docker node ls
```

Then you can use regular Docker CLI commands to check the status and logs
of the [UCP internal services and containers](../../architecture.md).

As a rule of thumb, if the status message starts with `[Pending]`, then the
current state is transient and the node is expected to correct itself back
into a healthy state. [Learn more about node status](troubleshoot-node-messages.md).

## Monitoring automation

You can use the `https://<ucp-manager-url>/_ping` endpoint to check the health
of a single UCP manager node. When you access this endpoint, the UCP manager
validates that all its internal components are working, and returns one of the
following HTTP error codes:

* 200, if all components are healthy
* 500, if one or more components are not healthy

If an administrator client certificate is used as a TLS client certificate for
the `_ping` endpoint, a detailed error message is returned if any component is
unhealthy.

If you're accessing the `_ping` endpoint through a load balancer, you'll have no
way of knowing which UCP manager node is not healthy, since any manager node
might be serving your request. Make sure you're connecting directly to the
URL of a manager node, and not a load balancer.
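
A minimal probe against each manager directly might look like the following
sketch; the hostnames are placeholders, and `-k` skips TLS verification for
the probe:

```bash
# Print the HTTP status code returned by each manager's _ping endpoint.
for node in manager1.example.com manager2.example.com manager3.example.com; do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "https://${node}/_ping")
  echo "${node}: ${code}"
done
```
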
## Where to go next

* [Troubleshoot with logs](troubleshoot-with-logs.md)
* [Troubleshoot node states](./troubleshoot-node-messages.md)

@ -1,22 +1,26 @@
---
description: Learn how to troubleshoot your Docker Universal Control Plane cluster.
keywords: etcd, rethinkdb, key, value, store, database, ucp
title: Troubleshoot cluster configurations
---

UCP automatically tries to heal itself by monitoring its internal
components and trying to bring them to a healthy state.

In most cases, if a single UCP component is persistently in a
failed state, you should be able to restore the cluster to a healthy state by
removing the unhealthy node from the cluster and joining it again.
[Learn how to remove and join nodes](../configure/scale-your-cluster.md).

## Troubleshoot the etcd key-value store

UCP persists configuration data on an [etcd](https://coreos.com/etcd/)
key-value store and [RethinkDB](https://rethinkdb.com/) database that are
replicated on all manager nodes of the UCP cluster. These data stores are for
internal use only, and should not be used by other applications.

This article shows how you can access the key-value store and database, for
troubleshooting configuration problems in your cluster.

### With the HTTP API

In this example we'll use `curl` for making requests to the key-value
store REST API, and `jq` to process the responses.

You can install these tools on an Ubuntu distribution by running:

@ -41,19 +45,16 @@ $ curl -s \
  ${KV_URL}/v2/keys | jq "."
```

To learn more about the key-value store REST API check the
[etcd official documentation](https://coreos.com/etcd/docs/latest/).

### With the CLI client

The containers running the key-value store include `etcdctl`, a command line
client for etcd. You can run it using the `docker exec` command.

The examples below assume you are logged in with ssh into a UCP manager node.

#### Check the health of the etcd cluster

```bash
$ docker exec -it ucp-kv etcdctl \
  --endpoint https://127.0.0.1:2379 \