diff --git a/.github/CONTRIBUTING.md b/CONTRIBUTING.md similarity index 100% rename from .github/CONTRIBUTING.md rename to CONTRIBUTING.md diff --git a/README.md b/README.md index d565b279b7..c538fa1ce5 100644 --- a/README.md +++ b/README.md @@ -136,23 +136,23 @@ You have three options: docker-compose down ``` -2. Use Jekyll directly. +2. Use Jekyll directly. a. Clone this repo by running: ```bash git clone https://github.com/docker/docker.github.io.git ``` - + b. Install Ruby 2.3 or later as described in [Installing Ruby] (https://www.ruby-lang.org/en/documentation/installation/). - + c. Install Bundler: ```bash gem install bundler ``` - + d. If you use Ubuntu, install packages required for the Nokogiri HTML parser: @@ -165,7 +165,7 @@ You have three options: ```bash bundle install ``` - + >**Note**: You may have to install some packages manually. f. Change the directory to `docker.github.io`. @@ -203,12 +203,43 @@ guidance about grammar, syntax, formatting, styling, language, or tone. If something isn't clear in the guide, please submit an issue to let us know or submit a pull request to help us improve it. -### Generate the man pages +### Per-page front-matter -For information on generating man pages (short for manual page), see the README.md -document in [the man page directory](https://github.com/docker/docker/tree/master/man) -in this project. +The front-matter of a given page is in a section at the top of the Markdown +file that starts and ends with three hyphens. It includes YAML content. The +following keys are supported. The title, description, and keywords are required. + +| Key | Required | Description | +|------------------------|-----------|-----------------------------------------| +| title | yes | The page title. This is added to the HTML output as a `<h1>` level header. | +| description | yes | A sentence that describes the page contents. This is added to the HTML metadata. | +| keywords | yes | A comma-separated list of keywords. These are added to the HTML metadata. | +| redirect_from | no | A YAML list of pages which should redirect to THIS page. At build time, each page listed here is created as an HTML stub containing a 302 redirect to this page. | +| notoc | no | Either `true` or `false`. If `true`, no in-page TOC is generated for the HTML output of this page. Defaults to `false`. Appropriate for some landing pages that have no in-page headings.| +| toc_min | no | Ignored if `notoc` is set to `true`. The minimum heading level included in the in-page TOC. Defaults to `2`, to show `<h2>` headings as the minimum. | +| toc_max | no | Ignored if `notoc` is set to `true`. The maximum heading level included in the in-page TOC. Defaults to `3`, to show `<h3>` headings. Set to the same as `toc_min` to only show `toc_min` level of headings. | +| tree | no | Either `true` or `false`. Set to `false` to disable the left-hand site-wide navigation for this page. Appropriate for some pages like the search page or the 404 page. | +| no_ratings | no | Either `true` or `false`. Set to `true` to disable the page-ratings applet for this page. Defaults to `false`. | + +The following is an example of valid (but contrived) page metadata. The order of +the metadata elements in the front-matter is not important. + +```liquid +--- +description: Instructions for installing Docker on Ubuntu +keywords: requirements, apt, installation, ubuntu, install, uninstall, upgrade, update +redirect_from: +- /engine/installation/ubuntulinux/ +- /installation/ubuntulinux/ +- /engine/installation/linux/ubuntulinux/ +title: Get Docker for Ubuntu +toc_min: 1 +toc_max: 6 +tree: false +no_ratings: true +--- +``` ## Copyright and license -Code and documentation copyright 2016 Docker, inc, released under the Apache 2.0 license. +Code and documentation copyright 2017 Docker, Inc., released under the Apache 2.0 license. 
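An illustrative aside on the front-matter format documented above (not part of this changeset): because the YAML block is simply everything between the first pair of `---` lines, it can be extracted with a few lines of awk. The file name `page.md` and the helper name below are hypothetical.

```bash
# Sketch: print only the YAML front-matter of a page, i.e. the
# lines between the first pair of '---' delimiter lines.
extract_front_matter() {
  awk 'NR == 1 && $0 == "---" { inside = 1; next }
       inside && $0 == "---"  { exit }
       inside                 { print }' "$1"
}

# A page shaped like the contrived example above:
cat > page.md <<'EOF'
---
title: Get Docker for Ubuntu
description: Instructions for installing Docker on Ubuntu
keywords: requirements, apt, installation
---
Page body starts here.
EOF

extract_front_matter page.md   # prints the three key: value lines
```

The same leading-`---` convention is what Jekyll itself uses to decide that a file has front-matter and should be processed.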
diff --git a/_config.yml b/_config.yml index 2cd7f91f78..0b413d6756 100644 --- a/_config.yml +++ b/_config.yml @@ -16,6 +16,7 @@ keep_files: ["v1.4", "v1.5", "v1.6", "v1.7", "v1.8", "v1.9", "v1.10", "v1.11", " gems: - jekyll-redirect-from - jekyll-seo-tag + - jekyll-relative-links webrick: headers: @@ -98,14 +99,14 @@ defaults: path: "datacenter" values: ucp_latest_image: "docker/ucp:2.1.0" - dtr_latest_image: "docker/dtr:2.2.0" + dtr_latest_image: "docker/dtr:2.2.2" - scope: path: "datacenter/dtr/2.2" values: ucp_version: "2.1" dtr_version: "2.2" - docker_image: "docker/dtr:2.2.1" + docker_image: "docker/dtr:2.2.2" - scope: path: "datacenter/dtr/2.1" @@ -134,6 +135,7 @@ defaults: hide_from_sitemap: true ucp_version: "2.0" dtr_version: "2.1" + docker_image: "docker/ucp:2.0.3" - scope: path: "datacenter/ucp/1.1" diff --git a/_data/ddc_offline_files.yaml b/_data/ddc_offline_files.yaml index e59911416e..f99f20e9b6 100644 --- a/_data/ddc_offline_files.yaml +++ b/_data/ddc_offline_files.yaml @@ -8,6 +8,8 @@ tar-files: - description: "UCP 2.1.0" url: https://packages.docker.com/caas/ucp_images_2.1.0.tar.gz + - description: "DTR 2.2.2" + url: https://packages.docker.com/caas/dtr-2.2.2.tar.gz - description: "DTR 2.2.1" url: https://packages.docker.com/caas/dtr-2.2.1.tar.gz - description: "DTR 2.2.0" diff --git a/_data/engine-cli/docker_build.yaml b/_data/engine-cli/docker_build.yaml index 4942d317c7..733734c2a3 100644 --- a/_data/engine-cli/docker_build.yaml +++ b/_data/engine-cli/docker_build.yaml @@ -331,6 +331,7 @@ examples: |- ```bash $ docker build -t whenry/fedora-jboss:latest -t whenry/fedora-jboss:v2.1 . 
``` + ### Specify a Dockerfile (-f) ```bash diff --git a/_data/engine-cli/docker_stats.yaml b/_data/engine-cli/docker_stats.yaml index d8009ef45e..a08e550783 100644 --- a/_data/engine-cli/docker_stats.yaml +++ b/_data/engine-cli/docker_stats.yaml @@ -104,12 +104,12 @@ examples: |- ```bash {% raw %} - $ docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" + $ docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" - CONTAINER CPU % PRIV WORKING SET - 1285939c1fd3 0.07% 796 KiB / 64 MiB - 9c76f7834ae2 0.07% 2.746 MiB / 64 MiB - d1ea048f04e4 0.03% 4.583 MiB / 64 MiB + NAME CPU % PRIV WORKING SET + fervent_panini 0.07% 796 KiB / 64 MiB + ecstatic_kilby 0.07% 2.746 MiB / 64 MiB + quizzical_nobel 0.03% 4.583 MiB / 64 MiB {% endraw %} ``` diff --git a/_data/toc.yaml b/_data/toc.yaml index 6d10124e22..087c327ee4 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -60,6 +60,8 @@ guides: title: Upgrading - path: /docker-for-aws/deploy/ title: Deploy your app + - path: /docker-for-aws/persistent-data-volumes/ + title: Persistent data volumes - path: /docker-for-aws/faqs/ title: FAQs - path: /docker-for-aws/opensource/ @@ -76,6 +78,8 @@ guides: title: Upgrading - path: /docker-for-azure/deploy/ title: Deploy your app + - path: /docker-for-azure/persistent-data-volumes/ + title: Persistent data volumes - path: /docker-for-azure/faqs/ title: FAQs - path: /docker-for-azure/opensource/ @@ -144,6 +148,8 @@ guides: title: Try out the voting app - path: /engine/getstarted-voting-app/customize-app/ title: Customize the app and redeploy + - path: /engine/getstarted-voting-app/cleanup/ + title: Graceful shutdown, reboot, and clean-up - sectiontitle: Learn by example section: - path: /engine/tutorials/networkingcontainers/ @@ -875,8 +881,8 @@ manuals: title: Release notes - sectiontitle: 1.12 section: - - path: /cs-engine/1.12/install/ - title: Install CS Docker Engine + - path: /cs-engine/1.12/ + title: Install - path: /cs-engine/1.12/upgrade/ title: 
Upgrade - sectiontitle: Release notes @@ -1085,18 +1091,18 @@ manuals: title: unpause - path: /compose/reference/up/ title: up - - sectiontitle: Compose File Reference - section: - - path: /compose/compose-file/ - title: Version 3 - - path: /compose/compose-file/compose-file-v2/ - title: Version 2 - - path: /compose/compose-file/compose-file-v1/ - title: Version 1 - - path: /compose/compose-file/compose-versioning/ - title: About versions and upgrading - - path: /compose/faq/ - title: Frequently Asked Questions + - sectiontitle: Compose File Reference + section: + - path: /compose/compose-file/ + title: Version 3 + - path: /compose/compose-file/compose-file-v2/ + title: Version 2 + - path: /compose/compose-file/compose-file-v1/ + title: Version 1 + - path: /compose/compose-file/compose-versioning/ + title: About versions and upgrading + - path: /compose/faq/ + title: Frequently Asked Questions - path: /compose/bundles/ title: Docker Stacks and Distributed Application Bundles - path: /compose/swarm/ @@ -1163,6 +1169,8 @@ manuals: title: Use a load balancer - path: /datacenter/ucp/2.1/guides/admin/configure/add-labels-to-cluster-nodes/ title: Add labels to cluster nodes + - path: /datacenter/ucp/2.1/guides/admin/configure/add-sans-to-cluster/ + title: Add SANs to cluster certificates - path: /datacenter/ucp/2.1/guides/admin/configure/store-logs-in-an-external-system/ title: Store logs in an external system - path: /datacenter/ucp/2.1/guides/admin/configure/restrict-services-to-worker-nodes/ @@ -1191,6 +1199,8 @@ manuals: section: - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/ title: Monitor the cluster status + - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-node-messages/ + title: Troubleshoot node messages - path: /datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs/ title: Troubleshoot with logs - path: 
/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations/ @@ -1297,6 +1307,8 @@ manuals: title: Set up vulnerability scans - path: /datacenter/dtr/2.2/guides/admin/configure/deploy-a-cache/ title: Deploy a cache + - path: /datacenter/dtr/2.2/guides/admin/configure/garbage-collection/ + title: Garbage collection - sectiontitle: Manage users section: - path: /datacenter/dtr/2.2/guides/admin/manage-users/ diff --git a/_includes/d4a_buttons.md b/_includes/d4a_buttons.md index d6c0ca9c34..05af8c8396 100644 --- a/_includes/d4a_buttons.md +++ b/_includes/d4a_buttons.md @@ -1,19 +1,25 @@ {% capture aws_button_latest %} -![Docker for AWS](https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png) +![Docker for AWS](https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png) {% endcapture %} {% capture aws_blue_latest %} -Deploy Docker for AWS (stable) +Deploy Docker for AWS (stable) {% endcapture %} -{% capture aws_blue_beta %} -Deploy Docker for AWS (beta) +{% capture aws_blue_edge %} +Deploy Docker for AWS (beta) +{% endcapture %} +{% capture aws_blue_vpc_latest %} +Deploy Docker for AWS (stable)
uses your existing VPC
+{% endcapture %} +{% capture aws_blue_vpc_edge %} +Deploy Docker for AWS (beta)
uses your existing VPC
{% endcapture %} {% capture azure_blue_latest %} -Deploy Docker for Azure (stable) +Deploy Docker for Azure (stable) {% endcapture %} -{% capture azure_blue_beta %} -Deploy Docker for Azure (beta) +{% capture azure_blue_edge %} +Deploy Docker for Azure (beta) {% endcapture %} {% capture azure_button_latest %} -![Docker for Azure](http://azuredeploy.net/deploybutton.png) +![Docker for Azure](http://azuredeploy.net/deploybutton.png) {% endcapture %} diff --git a/_includes/toc_pure_liquid.html b/_includes/toc_pure_liquid.html index bf2b4c166d..84bd122d9c 100644 --- a/_includes/toc_pure_liquid.html +++ b/_includes/toc_pure_liquid.html @@ -45,7 +45,7 @@ {% assign html_id = _idWorkspace[1] %} {% capture _hAttrToStrip %}{{ headerLevel }} id="{{ html_id }}">{% endcapture %} - {% assign header = _workspace[0] | replace: _hAttrToStrip, '' %} + {% assign header = _workspace[0] | replace: _hAttrToStrip, '' | remove_first: "1>" %} {% assign space = '' %} {% for i in (1..indentAmount) %} diff --git a/compose/compose-file/compose-file-v1.md b/compose/compose-file/compose-file-v1.md index 6469d40a17..ba49a46a8c 100644 --- a/compose/compose-file/compose-file-v1.md +++ b/compose/compose-file/compose-file-v1.md @@ -4,6 +4,8 @@ keywords: fig, composition, compose version 1, docker redirect_from: - /compose/yml title: Compose file version 1 reference +toc_max: 4 +toc_min: 1 --- These topics describe version 1 of the Compose file format. This is the oldest @@ -473,7 +475,7 @@ is specified, then read-write will be used. 
- service_name - service_name:ro -### cpu\_shares, cpu\_quota, cpuset, domainname, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, oom_score_adj, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir +### cpu\_shares, cpu\_quota, cpuset, domainname, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir Each of these is a single value, analogous to its [docker run](/engine/reference/run.md) counterpart. diff --git a/compose/compose-file/compose-file-v2.md b/compose/compose-file/compose-file-v2.md index 19d7f8b25e..81303fe9a7 100644 --- a/compose/compose-file/compose-file-v2.md +++ b/compose/compose-file/compose-file-v2.md @@ -4,6 +4,8 @@ keywords: fig, composition, compose version 3, docker redirect_from: - /compose/yml title: Compose file version 2 reference +toc_max: 4 +toc_min: 1 --- These topics describe version 2 of the Compose file format. @@ -911,9 +913,6 @@ Each of these is a single value, analogous to its stdin_open: true tty: true -> **Note:** The following options are only available for -> [version 2](compose-versioning.md#version-2) and up: -> `oom_score_adj` ## Specifying durations diff --git a/compose/compose-file/compose-versioning.md b/compose/compose-file/compose-versioning.md index 9fc2ed401b..c2e69eec09 100644 --- a/compose/compose-file/compose-versioning.md +++ b/compose/compose-file/compose-versioning.md @@ -227,6 +227,8 @@ several options have been removed: `deploy`. Note that `deploy` configuration only takes effect when using `docker stack deploy`, and is ignored by `docker-compose`. +- `extends`: This option has been removed for `version: "3.x"` Compose files. 
+ ### Version 1 to 2.x In the majority of cases, moving from version 1 to 2 is a very simple process: diff --git a/compose/compose-file/index.md b/compose/compose-file/index.md index 702491ad71..81db83243f 100644 --- a/compose/compose-file/index.md +++ b/compose/compose-file/index.md @@ -5,6 +5,8 @@ redirect_from: - /compose/yml - /compose/compose-file-v3.md title: Compose file version 3 reference +toc_max: 4 +toc_min: 1 --- These topics describe version 3 of the Compose file format. This is the newest @@ -34,8 +36,8 @@ As with `docker run`, options specified in the Dockerfile (e.g., `CMD`, specify them again in `docker-compose.yml`. You can use environment variables in configuration values with a Bash-like -`${VARIABLE}` syntax - see [variable -substitution](compose-file.md#variable-substitution) for full details. +`${VARIABLE}` syntax - see +[variable substitution](#variable-substitution) for full details. This section contains a list of all configuration options supported by a service definition in version 3. @@ -45,8 +47,8 @@ definition in version 3. Configuration options that are applied at build time. `build` can be specified either as a string containing a path to the build -context, or an object with the path specified under [context](compose-file.md#context) and -optionally [dockerfile](compose-file.md#dockerfile) and [args](compose-file.md#args). +context, or an object with the path specified under [context](#context) and +optionally [dockerfile](#dockerfile) and [args](#args). build: ./dir @@ -263,15 +265,15 @@ resources: #### restart_policy Configures if and how to restart containers when they exit. Replaces -[`restart`](compose-file.md#restart). +[`restart`](compose-file-v2.md#cpushares-cpuquota-cpuset-domainname-hostname-ipc-macaddress-memlimit-memswaplimit-oomscoreadj-privileged-readonly-restart-shmsize-stdinopen-tty-user-workingdir). - `condition`: One of `none`, `on-failure` or `any` (default: `any`). 
- `delay`: How long to wait between restart attempts, specified as a - [duration](compose-file.md#specifying-durations) (default: 0). + [duration](#specifying-durations) (default: 0). - `max_attempts`: How many times to attempt to restart a container before giving up (default: never give up). - `window`: How long to wait before deciding if a restart has succeeded, - specified as a [duration](compose-file.md#specifying-durations) (default: + specified as a [duration](#specifying-durations) (default: decide immediately). ``` @@ -459,9 +461,9 @@ beginning with `#` (i.e. comments) are ignored, as are blank lines. # Set Rails/Rack environment RACK_ENV=development -> **Note:** If your service specifies a [build](compose-file.md#build) option, variables +> **Note:** If your service specifies a [build](#build) option, variables > defined in environment files will _not_ be automatically visible during the -> build. Use the [args](compose-file.md#args) sub-option of `build` to define build-time +> build. Use the [args](#args) sub-option of `build` to define build-time > environment variables. The value of `VAL` is used as is and not modified at all. For example if the value is @@ -487,9 +489,9 @@ machine Compose is running on, which can be helpful for secret or host-specific - SHOW=true - SESSION_SECRET -> **Note:** If your service specifies a [build](compose-file.md#build) option, variables +> **Note:** If your service specifies a [build](#build) option, variables > defined in `environment` will _not_ be automatically visible during the -> build. Use the [args](compose-file.md#args) sub-option of `build` to define build-time +> build. Use the [args](#args) sub-option of `build` to define build-time > environment variables. ### expose @@ -501,40 +503,6 @@ accessible to linked services. Only the internal port can be specified. - "3000" - "8000" -### extends - -Extend another service, in the current file or another, optionally overriding -configuration. 
- -You can use `extends` on any service together with other configuration keys. -The `extends` value must be a dictionary defined with a required `service` -and an optional `file` key. - - extends: - file: common.yml - service: webapp - -The `service` the name of the service being extended, for example -`web` or `database`. The `file` is the location of a Compose configuration -file defining that service. - -If you omit the `file` Compose looks for the service configuration in the -current file. The `file` value can be an absolute or relative path. If you -specify a relative path, Compose treats it as relative to the location of the -current file. - -You can extend a service that itself extends another. You can extend -indefinitely. Compose does not support circular references and `docker-compose` -returns an error if it encounters one. - -For more on `extends`, see the -[the extends documentation](../extends.md#extending-services). - -> **Note:** This option is not yet supported when -> [deploying a stack in swarm mode](/engine/reference/commandline/stack_deploy.md) -> with a (version 3) Compose file. Use `docker-compose config` to generate a -> configuration with all `extends` options resolved, and deploy from that. - ### external_links Link to containers started outside this `docker-compose.yml` or even outside @@ -595,7 +563,7 @@ used. ### healthcheck -> [Version 2.1 file format](compose-file.md#version-21) and up. +> [Version 2.1 file format](compose-versioning.md#version-21) and up. Configure a check that's run to determine whether or not containers for this service are "healthy". See the docs for the @@ -609,7 +577,7 @@ for details on how healthchecks work. retries: 3 `interval` and `timeout` are specified as -[durations](compose-file.md#specifying-durations). +[durations](#specifying-durations). `test` must be either a string or a list. If it's a list, the first item must be either `NONE`, `CMD` or `CMD-SHELL`. 
If it's a string, it's equivalent to @@ -640,7 +608,7 @@ a partial image ID. image: a4bc65fd If the image does not exist, Compose attempts to pull it, unless you have also -specified [build](compose-file.md#build), in which case it builds it using the specified +specified [build](#build), in which case it builds it using the specified options and tags it with the specified tag. ### isolation @@ -682,9 +650,9 @@ Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified. Links also express dependency between services in the same way as -[depends_on](compose-file.md#dependson), so they determine the order of service startup. +[depends_on](#dependson), so they determine the order of service startup. -> **Note:** If you define both links and [networks](compose-file.md#networks), services with +> **Note:** If you define both links and [networks](#networks), services with > links between them must share at least one network in common in order to > communicate. @@ -741,7 +709,7 @@ the special form `service:[service name]`. ### networks Networks to join, referencing entries under the -[top-level `networks` key](compose-file.md#network-configuration-reference). +[top-level `networks` key](#network-configuration-reference). services: some-service: @@ -803,7 +771,7 @@ In the example below, three services are provided (`web`, `worker`, and `db`), a Specify a static IP address for containers for this service when joining the network. -The corresponding network configuration in the [top-level networks section](compose-file.md#network-configuration-reference) must have an `ipam` block with subnet configurations covering each static address. If IPv6 addressing is desired, the [`enable_ipv6`](compose-file.md#enableipv6) option must be set. 
+The corresponding network configuration in the [top-level networks section](#network-configuration-reference) must have an `ipam` block with subnet configurations covering each static address. If IPv6 addressing is desired, the [`enable_ipv6`](#enableipv6) option must be set. An example: @@ -882,6 +850,102 @@ port (a random host port will be chosen). - "127.0.0.1:5000-5010:5000-5010" - "6060:6060/udp" +### secrets + +Grant access to secrets on a per-service basis using the per-service `secrets` +configuration. Two different syntax variants are supported. + +> **Note**: The secret must already exist or be +> [defined in the top-level `secrets` configuration](#secrets-configuration-reference) +> of this stack file, or stack deployment will fail. + +#### Short syntax + +The short syntax variant only specifies the secret name. This grants the +container access to the secret and mounts it at `/run/secrets/<secret_name>` +within the container. The source name and destination mountpoint are both set +to the secret name. + +> **Warning**: Due to a bug in Docker 1.13.1, using the short syntax currently +> mounts the secret with permissions `000`, which means secrets defined using +> the short syntax are unreadable within the container if the command does not +> run as the `root` user. The workaround is to use the long syntax instead if +> you use Docker 1.13.1 and the secret must be read by a non-`root` user. + +The following example uses the short syntax to grant the `redis` service +access to the `my_secret` and `my_other_secret` secrets. The value of +`my_secret` is set to the contents of the file `./my_secret.txt`, and +`my_other_secret` is defined as an external resource, which means that it has +already been defined in Docker, either by running the `docker secret create` +command or by another stack deployment. If the external secret does not exist, +the stack deployment fails with a `secret not found` error. 
+ +```none +version: "3.1" +services: + redis: + image: redis:latest + deploy: + replicas: 1 + secrets: + - my_secret + - my_other_secret +secrets: + my_secret: + file: ./my_secret.txt + my_other_secret: + external: true +``` + +#### Long syntax + +The long syntax provides more granularity in how the secret is created within +the service's task containers. + +- `source`: The name of the secret as it exists in Docker. +- `target`: The name of the file that will be mounted in `/run/secrets/` in the + service's task containers. Defaults to `source` if not specified. +- `uid` and `gid`: The numeric UID or GID which will own the file within + `/run/secrets/` in the service's task containers. Both default to `0` if not + specified. +- `mode`: The permissions for the file that will be mounted in `/run/secrets/` + in the service's task containers, in octal notation. For instance, `0444` + represents world-readable. The default in Docker 1.13.1 is `0000`, but will + be `0444` in the future. Secrets cannot be writable because they are mounted + in a temporary filesystem, so if you set the writable bit, it is ignored. The + executable bit can be set. If you aren't familiar with UNIX file permission + modes, you may find this + [permissions calculator](http://permissions-calculator.org/){: target="_blank" class="_" } + useful. + +The following example sets the name of the `my_secret` secret to `redis_secret` within the +container, sets the mode to `0440` (group-readable), and sets the user and group +to `103`. The `redis` service does not have access to the `my_other_secret` +secret. + +```none +version: "3.1" +services: + redis: + image: redis:latest + deploy: + replicas: 1 + secrets: + - source: my_secret + target: redis_secret + uid: '103' + gid: '103' + mode: 0440 +secrets: + my_secret: + file: ./my_secret.txt + my_other_secret: + external: true +``` + +You can grant a service access to multiple secrets and you can mix long and +short syntax. 
Defining a secret does not imply granting a service access to it. + ### security_opt Override the default labeling scheme for each container. @@ -898,8 +962,8 @@ Override the default labeling scheme for each container. Specify how long to wait when attempting to stop a container if it doesn't handle SIGTERM (or whatever stop signal has been specified with -[`stop_signal`](compose-file.md#stopsignal)), before sending SIGKILL. Specified -as a [duration](compose-file.md#specifying-durations). +[`stop_signal`](#stopsignal)), before sending SIGKILL. Specified +as a [duration](#specifying-durations). stop_grace_period: 1s stop_grace_period: 1m30s @@ -963,7 +1027,7 @@ more information. ### volumes, volume\_driver > **Note:** The top-level -> [`volumes` option](compose-file.md#volume-configuration-reference) defines +> [`volumes` option](#volume-configuration-reference) defines > a named volume and references it from each service's `volumes` list. This replaces `volumes_from` in earlier versions of the Compose file format. Mount paths or named volumes, optionally specifying a path on the host machine @@ -1013,10 +1077,33 @@ There are several things to note, depending on which See [Docker Volumes](/engine/userguide/dockervolumes.md) and [Volume Plugins](/engine/extend/plugins_volume.md) for more information. +### domainname, hostname, ipc, mac\_address, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir + +Each of these is a single value, analogous to its +[docker run](/engine/reference/run.md) counterpart. 
+ + user: postgresql + working_dir: /code + + domainname: foo.com + hostname: foo + ipc: host + mac_address: 02:42:ac:11:65:43 + + privileged: true + + restart: always + + read_only: true + shm_size: 64M + stdin_open: true + tty: true + + ## Specifying durations Some configuration options, such as the `interval` and `timeout` sub-options for -[`healthcheck`](compose-file.md#healthcheck), accept a duration as a string in a +[`healthcheck`](#healthcheck), accept a duration as a string in a format that looks like this: 2.5s @@ -1136,7 +1223,7 @@ conflicting with those used by other software. The top-level `networks` key lets you specify networks to be created. For a full explanation of Compose's use of Docker networking features, see the -[Networking guide](networking.md). +[Networking guide](../networking.md). ### driver @@ -1245,6 +1332,33 @@ refer to it within the Compose file: external: name: actual-name-of-network +## secrets configuration reference + +The top-level `secrets` declaration defines or references +[secrets](/engine/swarm/secrets.md) which can be granted to the services in this +stack. The source of the secret is either `file` or `external`. + +- `file`: The secret is created with the contents of the file at the specified + path. +- `external`: If set to true, specifies that this secret has already been + created. Docker will not attempt to create it, and if it does not exist, a + `secret not found` error occurs. + +In this example, `my_first_secret` will be created (as +`<stack_name>_my_first_secret`) when the stack is deployed, +and `my_second_secret` already exists in Docker. + +```none +secrets: + my_first_secret: + file: ./secret_data + my_second_secret: + external: true +``` + +You still need to [grant access to the secrets](#secrets) to each service in the +stack. 
+ ## Variable substitution {% include content/compose-var-sub.md %} diff --git a/compose/env-file.md b/compose/env-file.md index 12aac5eadb..2254803fec 100644 --- a/compose/env-file.md +++ b/compose/env-file.md @@ -2,6 +2,7 @@ description: Declare default environment variables in a file keywords: fig, composition, compose, docker, orchestration, environment, env file title: Declare default environment variables in file +notoc: true --- Compose supports declaring default environment variables in an environment @@ -34,4 +35,4 @@ file, but can also be used to define the following - [User guide](index.md) - [Command line reference](./reference/index.md) -- [Compose file reference](compose-file.md) \ No newline at end of file +- [Compose file reference](compose-file.md) diff --git a/compose/extends.md b/compose/extends.md index c8d40d0944..de60205dc9 100644 --- a/compose/extends.md +++ b/compose/extends.md @@ -159,6 +159,8 @@ backup, include the `docker-compose.admin.yml` as well. ## Extending services +> **Note:** `extends` is supported in Compose file formats up to version 2.1; version 3.x does not support `extends` yet. + Docker Compose's `extends` keyword enables sharing of common configurations among different files, or even different projects entirely. Extending services is useful if you have several services that reuse a common set of configuration diff --git a/compose/index.md b/compose/index.md index 2d07ff5812..962cc7ade4 100644 --- a/compose/index.md +++ b/compose/index.md @@ -2,6 +2,7 @@ description: Introduction and Overview of Compose keywords: documentation, docs, docker, compose, orchestration, containers title: Docker Compose +notoc: true --- Compose is a tool for defining and running multi-container Docker applications. 
To learn more about Compose refer to the following documentation: diff --git a/compose/install.md b/compose/install.md index 31fbe214c0..1e8de2c442 100644 --- a/compose/install.md +++ b/compose/install.md @@ -34,7 +34,7 @@ To install Compose, do the following: The following is an example command illustrating the format: ```bash - $ curl -L "https://github.com/docker/compose/releases/download/1.10.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose + $ curl -L "https://github.com/docker/compose/releases/download/1.11.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose ``` If you have problems installing with `curl`, see @@ -54,7 +54,7 @@ To install Compose, do the following: ```bash $ docker-compose --version - docker-compose version: 1.10.0 + docker-compose version: 1.11.1 ``` ## Alternative install options @@ -80,7 +80,7 @@ Compose can also be run inside a container, from a small bash script wrapper. To install compose as a container run: ```bash -$ curl -L https://github.com/docker/compose/releases/download/1.10.0/run.sh > /usr/local/bin/docker-compose +$ curl -L https://github.com/docker/compose/releases/download/1.11.1/run.sh > /usr/local/bin/docker-compose $ chmod +x /usr/local/bin/docker-compose ``` diff --git a/compose/link-env-deprecated.md b/compose/link-env-deprecated.md index 0cfb9583fa..0a7a162363 100644 --- a/compose/link-env-deprecated.md +++ b/compose/link-env-deprecated.md @@ -4,6 +4,7 @@ keywords: fig, composition, compose, docker, orchestration, cli, reference redirect_from: - /compose/env title: Link environment variables (superseded) +notoc: true --- > **Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details. @@ -39,4 +40,4 @@ Fully qualified container name, e.g. 
`DB_1_NAME=/myapp_web_1/myapp_db_1` - [User guide](index.md) - [Installing Compose](install.md) - [Command line reference](./reference/index.md) -- [Compose file reference](compose-file.md) \ No newline at end of file +- [Compose file reference](compose-file.md) diff --git a/compose/networking.md b/compose/networking.md index 6a4f1e7adc..d1b56b5340 100644 --- a/compose/networking.md +++ b/compose/networking.md @@ -4,7 +4,7 @@ keywords: documentation, docs, docker, compose, orchestration, containers, netwo title: Networking in Compose --- -> **Note:** This document only applies if you're using [version 2 of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files. +> **Note:** This document only applies if you're using [version 2 or higher of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files. By default Compose sets up a single [network](/engine/reference/commandline/network_create/) for your app. Each @@ -143,4 +143,4 @@ If you want your containers to join a pre-existing network, use the [`external` external: name: my-pre-existing-network -Instead of attempting to create a network called `[projectname]_default`, Compose will look for a network called `my-pre-existing-network` and connect your app's containers to it. \ No newline at end of file +Instead of attempting to create a network called `[projectname]_default`, Compose will look for a network called `my-pre-existing-network` and connect your app's containers to it. 
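As a quick illustration of the default naming described above — a sketch only, where `myapp` is a hypothetical project name (by default, Compose derives the project name from the name of the project directory):

```bash
# Compose names the app's default network "<projectname>_default".
# The project name defaults to the directory name and can be
# overridden with the -p flag or the COMPOSE_PROJECT_NAME variable.
project="myapp"
default_network="${project}_default"
echo "$default_network"   # myapp_default
```

With the `external` override shown above, Compose skips creating this default-named network and attaches containers to the pre-existing one instead.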
diff --git a/compose/reference/build.md b/compose/reference/build.md index 51703c258b..1f73082534 100644 --- a/compose/reference/build.md +++ b/compose/reference/build.md @@ -2,6 +2,8 @@ description: docker-compose build keywords: fig, composition, compose, docker, orchestration, cli, build title: docker-compose build +notoc: true + --- ``` @@ -15,4 +17,4 @@ Options: Services are built once and then tagged as `project_service`, e.g., `composetest_db`. If you change a service's Dockerfile or the contents of its -build directory, run `docker-compose build` to rebuild it. \ No newline at end of file +build directory, run `docker-compose build` to rebuild it. diff --git a/compose/reference/bundle.md b/compose/reference/bundle.md index f916481236..9b6354ac9b 100644 --- a/compose/reference/bundle.md +++ b/compose/reference/bundle.md @@ -2,6 +2,7 @@ description: Create a distributed application bundle from the Compose file. keywords: fig, composition, compose, docker, orchestration, cli, bundle title: docker-compose bundle +notoc: true --- ``` @@ -21,4 +22,4 @@ Images must have digests stored, which requires interaction with a Docker registry. If digests aren't stored for all images, you can fetch them with `docker-compose pull` or `docker-compose push`. To push images automatically when bundling, pass `--push-images`. Only services with -a `build` option specified will have their images pushed. \ No newline at end of file +a `build` option specified will have their images pushed. diff --git a/compose/reference/config.md b/compose/reference/config.md index 369a7694de..f60eae70e4 100644 --- a/compose/reference/config.md +++ b/compose/reference/config.md @@ -2,6 +2,7 @@ description: Config validates and view the compose file. keywords: fig, composition, compose, docker, orchestration, cli, config title: docker-compose config +notoc: true --- ```: @@ -13,4 +14,4 @@ Options: --services Print the service names, one per line. ``` -Validate and view the compose file. 
\ No newline at end of file +Validate and view the compose file. diff --git a/compose/reference/create.md b/compose/reference/create.md index 00c4fed197..b45f165a70 100644 --- a/compose/reference/create.md +++ b/compose/reference/create.md @@ -2,6 +2,7 @@ description: Create creates containers for a service. keywords: fig, composition, compose, docker, orchestration, cli, create title: docker-compose create +notoc: true --- ``` @@ -16,4 +17,4 @@ Options: Incompatible with --force-recreate. --no-build Don't build an image, even if it's missing. --build Build images before creating containers. -``` \ No newline at end of file +``` diff --git a/compose/reference/down.md b/compose/reference/down.md index 4a1348a7d8..638c0a8ad8 100644 --- a/compose/reference/down.md +++ b/compose/reference/down.md @@ -2,6 +2,7 @@ description: docker-compose down keywords: fig, composition, compose, docker, orchestration, cli, down title: docker-compose down +notoc: true --- ``` @@ -28,4 +29,4 @@ By default, the only things removed are: - Networks defined in the `networks` section of the Compose file - The default network, if one is used -Networks and volumes defined as `external` are never removed. \ No newline at end of file +Networks and volumes defined as `external` are never removed. diff --git a/compose/reference/envvars.md b/compose/reference/envvars.md index 12b160f559..4a00b0d386 100644 --- a/compose/reference/envvars.md +++ b/compose/reference/envvars.md @@ -2,6 +2,7 @@ description: Compose CLI environment variables keywords: fig, composition, compose, docker, orchestration, cli, reference title: Compose CLI environment variables +notoc: true --- Several environment variables are available for you to configure the Docker Compose command-line behaviour. 
diff --git a/compose/reference/events.md b/compose/reference/events.md index 4c08345092..b3558aa19f 100644 --- a/compose/reference/events.md +++ b/compose/reference/events.md @@ -2,6 +2,7 @@ description: Receive real time events from containers. keywords: fig, composition, compose, docker, orchestration, cli, events title: docker-compose events +notoc: true --- ``` @@ -24,4 +25,4 @@ format: "image": "alpine:edge", "time": "2015-11-20T18:01:03.615550", } -``` \ No newline at end of file +``` diff --git a/compose/reference/exec.md b/compose/reference/exec.md index edc7260243..e8087b1e33 100644 --- a/compose/reference/exec.md +++ b/compose/reference/exec.md @@ -2,6 +2,7 @@ description: docker-compose exec keywords: fig, composition, compose, docker, orchestration, cli, exec title: docker-compose exec +notoc: true --- ``` @@ -19,4 +20,4 @@ Options: This is equivalent of `docker exec`. With this subcommand you can run arbitrary commands in your services. Commands are by default allocating a TTY, so you can -do e.g. `docker-compose exec web sh` to get an interactive prompt. \ No newline at end of file +do e.g. `docker-compose exec web sh` to get an interactive prompt. diff --git a/compose/reference/help.md b/compose/reference/help.md index 956c295ea0..3ce82c4302 100644 --- a/compose/reference/help.md +++ b/compose/reference/help.md @@ -2,10 +2,11 @@ description: docker-compose help keywords: fig, composition, compose, docker, orchestration, cli, help title: docker-compose help +notoc: true --- ``` Usage: help COMMAND ``` -Displays help and usage instructions for a command. \ No newline at end of file +Displays help and usage instructions for a command. 
diff --git a/compose/reference/index.md b/compose/reference/index.md index 04b9f3b3f3..235ae7ef58 100644 --- a/compose/reference/index.md +++ b/compose/reference/index.md @@ -2,6 +2,7 @@ description: Compose CLI reference keywords: fig, composition, compose, docker, orchestration, cli, reference title: Compose command-line reference +notoc: true --- The following pages describe the usage information for the [docker-compose](overview.md) subcommands. You can also see this information by running `docker-compose [SUBCOMMAND] --help` from the command line. @@ -35,4 +36,4 @@ The following pages describe the usage information for the [docker-compose](over ## Where to go next * [CLI environment variables](envvars.md) -* [docker-compose Command](overview.md) \ No newline at end of file +* [docker-compose Command](overview.md) diff --git a/compose/reference/kill.md b/compose/reference/kill.md index 77396dae56..a72873d8b0 100644 --- a/compose/reference/kill.md +++ b/compose/reference/kill.md @@ -2,6 +2,7 @@ description: Forces running containers to stop. keywords: fig, composition, compose, docker, orchestration, cli, kill title: docker-compose kill +notoc: true --- ``` diff --git a/compose/reference/logs.md b/compose/reference/logs.md index ea33bcb407..297ff334e4 100644 --- a/compose/reference/logs.md +++ b/compose/reference/logs.md @@ -2,6 +2,7 @@ description: Displays log output from services. keywords: fig, composition, compose, docker, orchestration, cli, logs title: docker-compose logs +notoc: true --- ``` @@ -15,4 +16,4 @@ Options: for each container. ``` -Displays log output from services. \ No newline at end of file +Displays log output from services. 
diff --git a/compose/reference/overview.md b/compose/reference/overview.md index ca57b16027..a75433be81 100644 --- a/compose/reference/overview.md +++ b/compose/reference/overview.md @@ -4,6 +4,7 @@ keywords: fig, composition, compose, docker, orchestration, cli, docker-compose redirect_from: - /compose/reference/docker-compose/ title: Overview of docker-compose CLI +notoc: true --- This page provides the usage information for the `docker-compose` Command. diff --git a/compose/reference/pause.md b/compose/reference/pause.md index 9bf42351d9..9f0147d728 100644 --- a/compose/reference/pause.md +++ b/compose/reference/pause.md @@ -2,10 +2,11 @@ description: Pauses running containers for a service. keywords: fig, composition, compose, docker, orchestration, cli, pause title: docker-compose pause +notoc: true --- ``` Usage: pause [SERVICE...] ``` -Pauses running containers of a service. They can be unpaused with `docker-compose unpause`. \ No newline at end of file +Pauses running containers of a service. They can be unpaused with `docker-compose unpause`. diff --git a/compose/reference/port.md b/compose/reference/port.md index da70ff90f0..96051ba507 100644 --- a/compose/reference/port.md +++ b/compose/reference/port.md @@ -2,6 +2,7 @@ description: Prints the public port for a port binding.s keywords: fig, composition, compose, docker, orchestration, cli, port title: docker-compose port +notoc: true --- ``` @@ -13,4 +14,4 @@ Options: instances of a service [default: 1] ``` -Prints the public port for a port binding. \ No newline at end of file +Prints the public port for a port binding. diff --git a/compose/reference/ps.md b/compose/reference/ps.md index 9bdd1ca44b..24bd3e01ad 100644 --- a/compose/reference/ps.md +++ b/compose/reference/ps.md @@ -2,6 +2,7 @@ description: Lists containers. 
keywords: fig, composition, compose, docker, orchestration, cli, ps title: docker-compose ps +notoc: true --- ```none @@ -19,4 +20,4 @@ $ docker-compose ps -------------------------------------------------------------------------------------------- mywordpress_db_1 docker-entrypoint.sh mysqld Up 3306/tcp mywordpress_wordpress_1 /entrypoint.sh apache2-for ... Restarting 0.0.0.0:8000->80/tcp -``` \ No newline at end of file +``` diff --git a/compose/reference/pull.md b/compose/reference/pull.md index db53eb53e1..ccfe66fa54 100644 --- a/compose/reference/pull.md +++ b/compose/reference/pull.md @@ -2,6 +2,7 @@ description: Pulls service images. keywords: fig, composition, compose, docker, orchestration, cli, pull title: docker-compose pull +notoc: true --- ``` @@ -11,4 +12,4 @@ Options: --ignore-pull-failures Pull what it can and ignores images with pull failures. ``` -Pulls service images. \ No newline at end of file +Pulls service images. diff --git a/compose/reference/push.md b/compose/reference/push.md index 73f4f4da92..ab9c677b15 100644 --- a/compose/reference/push.md +++ b/compose/reference/push.md @@ -2,6 +2,7 @@ description: Pushes service images. keywords: fig, composition, compose, docker, orchestration, cli, push title: docker-compose push +notoc: true --- ``` @@ -11,4 +12,4 @@ Options: --ignore-push-failures Push what it can and ignores images with push failures. ``` -Pushes images for services. \ No newline at end of file +Pushes images for services. diff --git a/compose/reference/restart.md b/compose/reference/restart.md index f85aa739d4..e449ba546d 100644 --- a/compose/reference/restart.md +++ b/compose/reference/restart.md @@ -2,6 +2,7 @@ description: Restarts Docker Compose services. 
keywords: fig, composition, compose, docker, orchestration, cli, restart title: docker-compose restart +notoc: true --- ``` diff --git a/compose/reference/rm.md b/compose/reference/rm.md index 9646100f8f..ec33d5744b 100644 --- a/compose/reference/rm.md +++ b/compose/reference/rm.md @@ -2,6 +2,7 @@ description: Removes stopped service containers. keywords: fig, composition, compose, docker, orchestration, cli, rm title: docker-compose rm +notoc: true --- ``` @@ -19,4 +20,4 @@ Removes stopped service containers. By default, anonymous volumes attached to containers will not be removed. You can override this with `-v`. To list all volumes, use `docker volume ls`. -Any data which is not in a volume will be lost. \ No newline at end of file +Any data which is not in a volume will be lost. diff --git a/compose/reference/run.md b/compose/reference/run.md index 5717694f66..fec994f4c7 100644 --- a/compose/reference/run.md +++ b/compose/reference/run.md @@ -2,6 +2,7 @@ description: Runs a one-off command on a service. keywords: fig, composition, compose, docker, orchestration, cli, run title: docker-compose run +notoc: true --- ``` diff --git a/compose/reference/scale.md b/compose/reference/scale.md index fbf6b2b777..881756e8ff 100644 --- a/compose/reference/scale.md +++ b/compose/reference/scale.md @@ -2,6 +2,7 @@ description: Sets the number of containers to run for a service. keywords: fig, composition, compose, docker, orchestration, cli, scale title: docker-compose scale +notoc: true --- ``` @@ -13,3 +14,9 @@ Sets the number of containers to run for a service. Numbers are specified as arguments in the form `service=num`. For example: docker-compose scale web=2 worker=3 + +>**Tip:** Alternatively, in +[Compose file version 3.x](/compose/compose-file/index.md), you can specify +[`replicas`](/compose/compose-file/index.md#replicas) +under [`deploy`](/compose/compose-file/index.md#deploy) as part of the +service configuration for [Swarm mode](/engine/swarm/). 
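The tip added above can be sketched as a minimal version 3 Compose file (hypothetical service name and image; `deploy` keys, including `replicas`, only take effect when the file is deployed to Swarm mode with `docker stack deploy`):

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      # Stands in for `docker-compose scale web=2` when running in Swarm mode
      replicas: 2
```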
diff --git a/compose/reference/start.md b/compose/reference/start.md index 719c8147cc..8d9d0cd1f4 100644 --- a/compose/reference/start.md +++ b/compose/reference/start.md @@ -2,10 +2,11 @@ description: Starts existing containers for a service. keywords: fig, composition, compose, docker, orchestration, cli, start title: docker-compose start +notoc: true --- ``` Usage: start [SERVICE...] ``` -Starts existing containers for a service. \ No newline at end of file +Starts existing containers for a service. diff --git a/compose/reference/stop.md b/compose/reference/stop.md index 62f4cb5943..2da3ff2669 100644 --- a/compose/reference/stop.md +++ b/compose/reference/stop.md @@ -2,6 +2,7 @@ description: 'Stops running containers without removing them. ' keywords: fig, composition, compose, docker, orchestration, cli, stop title: docker-compose stop +notoc: true --- ``` @@ -12,4 +13,4 @@ Options: ``` Stops running containers without removing them. They can be started again with -`docker-compose start`. \ No newline at end of file +`docker-compose start`. diff --git a/compose/reference/top.md b/compose/reference/top.md index 106347e341..5f71cb44f1 100644 --- a/compose/reference/top.md +++ b/compose/reference/top.md @@ -2,6 +2,7 @@ description: Displays the running processes. keywords: fig, composition, compose, docker, orchestration, cli, top title: docker-compose top +notoc: true --- ```none @@ -22,4 +23,4 @@ compose_service_b_1 PID USER TIME COMMAND ---------------------------- 4115 root 0:00 top -``` \ No newline at end of file +``` diff --git a/compose/reference/unpause.md b/compose/reference/unpause.md index 0e33775224..83b79e6111 100644 --- a/compose/reference/unpause.md +++ b/compose/reference/unpause.md @@ -2,10 +2,11 @@ description: Unpauses paused containers for a service. keywords: fig, composition, compose, docker, orchestration, cli, unpause title: docker-compose unpause +notoc: true --- ``` Usage: unpause [SERVICE...] 
``` -Unpauses paused containers of a service. \ No newline at end of file +Unpauses paused containers of a service. diff --git a/compose/reference/up.md b/compose/reference/up.md index f24a5aa88c..3bfd8dff22 100644 --- a/compose/reference/up.md +++ b/compose/reference/up.md @@ -2,6 +2,7 @@ description: Builds, (re)creates, starts, and attaches to containers for a service. keywords: fig, composition, compose, docker, orchestration, cli, up title: docker-compose up +notoc: true --- ``` @@ -45,4 +46,4 @@ volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag. If you want to force Compose to stop and recreate all containers, use the -`--force-recreate` flag. \ No newline at end of file +`--force-recreate` flag. diff --git a/compose/startup-order.md b/compose/startup-order.md index 1df82a6c3d..55c5350f9e 100644 --- a/compose/startup-order.md +++ b/compose/startup-order.md @@ -2,6 +2,7 @@ description: How to control service startup order in Docker Compose keywords: documentation, docs, docker, compose, startup, order title: Controlling startup order in Compose +notoc: true --- You can control the order of service startup with the diff --git a/cs-engine/1.12/index.md b/cs-engine/1.12/index.md index b8e75a9c23..da4965f1e5 100644 --- a/cs-engine/1.12/index.md +++ b/cs-engine/1.12/index.md @@ -1,14 +1,381 @@ --- -description: Learn more about the Commercially Supported Docker Engine. -keywords: docker, engine, documentation +description: Learn how to install the commercially supported version of Docker Engine. +keywords: docker, engine, dtr, install +title: Install CS Docker Engine redirect_from: -- /docker-trusted-registry/cs-engine/ -- /cs-engine/ -title: Commercially Supported Docker Engine +- /cs-engine/1.12/install/ --- -This section includes the following topics: +Follow these instructions to install CS Docker Engine, the commercially +supported version of Docker Engine. 
-* [Install CS Docker Engine](install.md) -* [Upgrade](upgrade.md) -* [Release notes](release-notes/release-notes.md) +CS Docker Engine can be installed on the following operating systems: + +* [CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)](#install-on-centos-7172--rhel-70717273-yum-based-systems) +* [Ubuntu 14.04 LTS or 16.04 LTS](#install-on-ubuntu-1404-lts-or-1604-lts) +* [SUSE Linux Enterprise 12.3](#install-on-suse-linux-enterprise-123) + +You can install CS Docker Engine using a repository or using packages. + +- If you [use a repository](#install-using-a-repository), your operating system + will notify you when updates are available and you can upgrade or downgrade + easily, but you need an internet connection. This approach is recommended. + +- If you [use packages](#install-using-packages), you can install CS Docker + Engine on air-gapped systems that have no internet connection. However, you + are responsible for manually checking for updates and managing upgrades. + +## Prerequisites + +To install CS Docker Engine, you need root or sudo privileges and you need +access to a command line on the system. + +## Install using a repository + +### Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems) + +This section explains how to install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3. Only +these versions are supported. CentOS 7.0 is **not** supported. On RHEL, +depending on your current level of updates, you may need to reboot your server +to update its RHEL kernel. + +1. Add the Docker public key for CS Docker Engine packages: + + ```bash + $ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e" + ``` + +2. Install yum-utils if necessary: + + ```bash + $ sudo yum install -y yum-utils + ``` + +3.
Add the Docker repository: + + ```bash + $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/centos/7 + ``` + + This adds the repository of the latest version of CS Docker Engine. You can + customize the URL to install an older version. + +4. Install CS Docker Engine: + + - **Latest version**: + + ```bash + $ sudo yum makecache fast + + $ sudo yum install docker-engine + ``` + + - **Specific version**: + + On production systems, you should install a specific version rather than + relying on the latest. + + 1. List the available versions: + + ```bash + $ yum list docker-engine.x86_64 --showduplicates |sort -r + ``` + + The second column represents the version. + + 2. Install a specific version by adding the version after `docker-engine`, + separated by a hyphen (`-`): + + ```bash + $ sudo yum install docker-engine-<version> + ``` + +5. Configure `devicemapper`: + + By default, the `devicemapper` graph driver does not come pre-configured in + a production-ready state. Follow the documented step by step instructions to + [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration) + to achieve the best performance and reliability for your environment. + +6. Configure the Docker daemon to start automatically when the system starts, + and start it now. + + ```bash + $ sudo systemctl enable docker.service + $ sudo systemctl start docker.service + ``` + +7. Confirm the Docker daemon is running: + + ```bash + $ sudo docker info + ``` + +8. Only users with `sudo` access will be able to run `docker` commands. + Optionally, add non-sudo access to the Docker socket by adding your user + to the `docker` group. + + ```bash + $ sudo usermod -a -G docker $USER + ``` + +9. Log out and log back in to have your new permissions take effect. + +### Install on Ubuntu 14.04 LTS or 16.04 LTS + +1.
Install packages to allow `apt` to use a repository over HTTPS: + + ```bash + $ sudo apt-get update + + $ sudo apt-get install --no-install-recommends \ + apt-transport-https \ + curl \ + software-properties-common + ``` + + Optionally, install additional kernel modules to add AUFS support. + + ```bash + $ sudo apt-get install -y --no-install-recommends \ + linux-image-extra-$(uname -r) \ + linux-image-extra-virtual + ``` + +2. Download and import Docker's public key for CS packages: + + ```bash + $ curl -fsSL 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add - + ``` + +3. Add the repository. In the command below, the `lsb_release -cs` sub-command + returns the name of your Ubuntu version, like `xenial` or `trusty`. + + ```bash + $ sudo add-apt-repository \ + "deb https://packages.docker.com/1.12/apt/repo/ \ + ubuntu-$(lsb_release -cs) \ + main" + ``` + +4. Install CS Docker Engine: + + - **Latest version**: + + ```bash + $ sudo apt-get update + + $ sudo apt-get -y install docker-engine + ``` + + - **Specific version**: + + On production systems, you should install a specific version rather than + relying on the latest. + + 1. List the available versions: + + ```bash + $ sudo apt-get update + + $ apt-cache madison docker-engine + ``` + + The second column represents the version. + + 2. Install a specific version by adding the version after `docker-engine`, + separated by an equals sign (`=`): + + ```bash + $ sudo apt-get install docker-engine=<version> + ``` + +5. Confirm the Docker daemon is running: + + ```bash + $ sudo docker info + ``` + +6. Only users with `sudo` access will be able to run `docker` commands. + Optionally, add non-sudo access to the Docker socket by adding your user + to the `docker` group. + + ```bash + $ sudo usermod -a -G docker $USER + ``` + + Log out and log back in to have your new permissions take effect. + +### Install on SUSE Linux Enterprise 12.3 + +1.
Refresh your repository: + + ```bash + $ sudo zypper update + ``` + +2. Add the Docker repository and public key: + + ```bash + $ sudo zypper ar -t YUM https://packages.docker.com/1.12/yum/repo/main/opensuse/12.3 docker-1.12 + $ sudo rpm --import 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' + ``` + + This adds the repository of the latest version of CS Docker Engine. You can + customize the URL to install an older version. + +3. Install CS Docker Engine. + + - **Latest version**: + + ```bash + $ sudo zypper refresh + + $ sudo zypper install docker-engine + ``` + + - **Specific version**: + + On production systems, you should install a specific version rather than + relying on the latest. + + 1. List the available versions: + + ```bash + $ sudo zypper refresh + + $ zypper search -s --match-exact -t package docker-engine + ``` + + The third column is the version string. + + 2. Install a specific version by adding the version after `docker-engine`, + separated by a hyphen (`-`): + + ```bash + $ sudo zypper install docker-engine-<version> + ``` + +4. Configure the Docker daemon to start automatically when the system starts, + and start it now. + + ```bash + $ sudo systemctl enable docker.service + $ sudo systemctl start docker.service + ``` + +5. Confirm the Docker daemon is running: + + ```bash + $ sudo docker info + ``` + +6. Only users with `sudo` access will be able to run `docker` commands. + Optionally, add non-sudo access to the Docker socket by adding your user + to the `docker` group. + + ```bash + $ sudo usermod -a -G docker $USER + ``` + + Log out and log back in to have your new permissions take effect. + +## Install using packages + +If you need to install Docker on an air-gapped system with no access to the +internet, use the [package download link table](#package-download-links) to +download the Docker package for your operating system, then install it using the +[appropriate command](#general-commands).
You are responsible for manually +upgrading Docker when a new version is available, and also for satisfying +Docker's dependencies. + +### General commands + +To install Docker from packages, use the following commands: + +| Operating system | Command | +|-----------------------|---------| +| RHEL / CentOS / SLES | `$ sudo yum install /path/to/package.rpm` | +| Ubuntu | `$ sudo dpkg -i /path/to/package.deb` | + +### Package download links + +{% assign rpm-prefix = "https://packages.docker.com/1.12/yum/repo/main" %} +{% assign deb-prefix = "https://packages.docker.com/1.12/apt/repo/pool/main/d/docker-engine" %} + +#### CS Docker Engine 1.12.6 + +{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %} +{% assign rpm-version = "1.12.6.cs8-1" %} +{% assign rpm-rhel-version = "1.12.6.cs8-1" %} +{% assign deb-version = "1.12.6~cs8-0" %} + +| Operating system | Package links | +|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version}}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version}}1.el7.centos.noarch.rpm) | +| RHEL 7.2 (only use if you have problems with `selinux` with the packages above) | [docker-engine]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-debuginfo-{{ rpm-rhel-version 
}}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-selinux-{{ rpm-rhel-version }}.el7.centos.noarch.rpm) | +| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) | +| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) | +| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) | +| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) | +| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) | + +#### CS Docker Engine 1.12.5 + +{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %} +{% assign rpm-version = "1.12.5.cs5-1" %} +{% assign deb-version = "1.12.5~cs5-0" %} + +| Operating system | Package links | +|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version}}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version}}1.el7.centos.noarch.rpm) | +| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) | +| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) | +| Ubuntu Wily |
[docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) | +| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) | +| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) | + +#### CS Docker Engine 1.12.3 + +{% comment %} Check on the S3 bucket for packages.docker.com for the versions. {% endcomment %} +{% assign rpm-version = "1.12.3.cs4-1" %} +{% assign deb-version = "1.12.3~cs4-0" %} + +| Operating system | Package links | +|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version}}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version}}1.el7.centos.noarch.rpm) | +| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) | +| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) | +| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) | +| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) | +| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) | + +#### CS Docker Engine 1.12.2 + +{% comment %} Check on the S3 bucket for packages.docker.com for
the versions. {% endcomment %} +{% assign rpm-version = "1.12.2.cs2-1" %} +{% assign deb-version = "1.12.2~cs2-0" %} + +| Operating system | Package links | +|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version}}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version}}1.el7.centos.noarch.rpm) | +| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) | +| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) | +| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) | +| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) | +| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) | + +#### CS Docker Engine 1.12.1 + +{% comment %} Check on the S3 bucket for packages.docker.com for the versions.
{% endcomment %} +{% assign rpm-version = "1.12.1.cs1-1" %} +{% assign deb-version = "1.12.1~cs1-0" %} + +| Operating system | Package links | +|-----------------------|---------------| +| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) | +| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) | +| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) | +| Ubuntu Wily | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-wily_amd64.deb) | +| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) | +| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) | diff --git a/cs-engine/1.12/install.md b/cs-engine/1.12/install.md deleted file mode 100644 index 3dc59d49f4..0000000000 --- a/cs-engine/1.12/install.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -description: Learn how to install the commercially supported version of Docker Engine.
-keywords: docker, engine, dtr, install -redirect_from: -- /docker-trusted-registry/install/engine-ami-launch/ -- /docker-trusted-registry/install/install-csengine/ -- /docker-trusted-registry/cs-engine/install/ -- /cs-engine/install/ -title: Install Commercially Supported Docker Engine ---- - -Follow these instructions to install CS Docker Engine, the commercially -supported version of Docker Engine. - -CS Docker Engine can be installed on the following operating systems: - - -* [CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems)](install.md#install-on-centos-7172--rhel-707172-yum-based-systems) -* [Ubuntu 14.04 LTS](install.md#install-on-ubuntu-1404-lts) -* [SUSE Linux Enterprise 12](install.md#install-on-suse-linux-enterprise-123) - - -## Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems) - -This section explains how to install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2. Only -these versions are supported. CentOS 7.0 is **not** supported. On RHEL, -depending on your current level of updates, you may need to reboot your server -to update its RHEL kernel. - -1. Log into the system as a user with root or sudo permissions. - -2. Add the Docker public key for CS packages: - - ```bash - $ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e" - ``` - -3. Install yum-utils if necessary: - - ```bash - $ sudo yum install -y yum-utils - ``` - -4. Add the Docker repository: - - ```bash - $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/centos/7 - ``` - - This adds the repository of the latest version of CS Docker Engine. You can - customize the URL to install an older version. - - > **Note**: For users on RHEL 7.2 who have issues with installing the selinux - > policy, use the following command instead of the one above: - - ```bash - $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/rhel/7.2 - ``` - -5. 
Install Docker CS Engine: - - ```bash - $ sudo yum install docker-engine - ``` - -6. Configure devicemapper: - - By default, the `devicemapper` graph driver does not come pre-configured in a production ready state. Follow the documented step by step instructions to [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration) in order to achieve the best performance and reliability for your environment. - -7. Enable the Docker daemon as a service and start it. - - ```bash - $ sudo systemctl enable docker.service - $ sudo systemctl start docker.service - ``` - -8. Confirm the Docker daemon is running: - - ```bash - $ sudo docker info - ``` - -9. Optionally, add non-sudo access to the Docker socket by adding your user -to the `docker` group. - - ```bash - $ sudo usermod -a -G docker $USER - ``` - -10. Log out and log back in to have your new permissions take effect. - -## Install on Ubuntu 14.04 LTS - -1. Log into the system as a user with root or sudo permissions. - -2. Add Docker's public key for CS packages: - - ```bash - $ curl -s 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add --import - ``` - -3. Install the HTTPS helper for apt (your system may already have it): - - ```bash - $ sudo apt-get update && sudo apt-get install apt-transport-https - ``` - -4. Install additional kernel modules to add AUFS support. - - ```bash - $ sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual - ``` - -5. Add the repository for the new version: - - ```bash - $ echo "deb https://packages.docker.com/1.12/apt/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list - ``` - -6. Run the following to install commercially supported Docker Engine and its -dependencies: - - ```bash - $ sudo apt-get update && sudo apt-get install docker-engine - ``` - -7. 
Confirm the Docker daemon is running: - - ```bash - $ sudo docker info - ``` - -8. Optionally, add non-sudo access to the Docker socket by adding your -user to the `docker` group. - - ```bash - $ sudo usermod -a -G docker $USER - ``` - - Log out and log back in to have your new permissions take effect. - - -## Install on SUSE Linux Enterprise 12.3 - -1. Log into the system as a user with root or sudo permissions. - -2. Refresh your repository so that curl commands and CA certificates -are available: - - ```bash - $ sudo zypper ref - ``` - -3. Add the Docker repository and public key: - - ```bash - $ sudo zypper ar -t YUM https://packages.docker.com/1.12/yum/repo/main/opensuse/12.3 docker-1.12 - $ sudo rpm --import 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' - ``` - - This adds the repository of the latest version of CS Docker Engine. You can - customize the URL to install an older version. - -4. Install the Docker daemon package: - - ```bash - $ sudo zypper install docker-engine - ``` - -5. Enable the Docker daemon as a service and then start it: - - ```bash - $ sudo systemctl enable docker.service - $ sudo systemctl start docker.service - ``` - -6. Confirm the Docker daemon is running: - - ```bash - $ sudo docker info - ``` - -7. Optionally, add non-sudo access to the Docker socket by adding your user -to the `docker` group. - - ```bash - $ sudo usermod -a -G docker $USER - ``` - -8. Log out and log back in to have your new permissions take effect. 
diff --git a/cs-engine/1.12/release-notes/prior-release-notes.md b/cs-engine/1.12/release-notes/prior-release-notes.md index 91767fd963..2dcb185e25 100644 --- a/cs-engine/1.12/release-notes/prior-release-notes.md +++ b/cs-engine/1.12/release-notes/prior-release-notes.md @@ -4,7 +4,7 @@ keywords: docker, documentation, about, technology, understanding, enterprise, h redirect_from: - /docker-trusted-registry/cse-prior-release-notes/ - /docker-trusted-registry/cs-engine/release-notes/prior-release-notes/ -- - /cs-engine/release-notes/prior-release-notes/ +- /cs-engine/release-notes/prior-release-notes/ title: Release notes archive for Commercially Supported Docker Engine. --- diff --git a/cs-engine/1.13/index.md b/cs-engine/1.13/index.md index 310dfe13fb..2a7f50efed 100644 --- a/cs-engine/1.13/index.md +++ b/cs-engine/1.13/index.md @@ -1,7 +1,12 @@ --- -title: Install CS Docker Engine 1.13 +title: Install CS Docker Engine description: Learn how to install the commercially supported version of Docker Engine. keywords: docker, engine, install +redirect_from: +- /docker-trusted-registry/install/engine-ami-launch/ +- /docker-trusted-registry/install/install-csengine/ +- /docker-trusted-registry/cs-engine/install/ +- /cs-engine/install/ --- Follow these instructions to install CS Docker Engine, the commercially @@ -9,34 +14,47 @@ supported version of Docker Engine. CS Docker Engine can be installed on the following operating systems: +* [CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)](#install-on-centos-7172--rhel-70717273-yum-based-systems) +* [Ubuntu 14.04 LTS or 16.04 LTS](#install-on-ubuntu-1404-lts-or-1604-lts) +* [SUSE Linux Enterprise 12.3](#install-on-suse-linux-enterprise-123) -* CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2 (YUM-based systems) -* Ubuntu 14.04 LTS or 16.04 LTS -* SUSE Linux Enterprise 12 +You can install CS Docker Engine using a repository or using packages.
+- If you [use a repository](#install-using-a-repository), your operating system + will notify you when updates are available and you can upgrade or downgrade + easily, but you need an internet connection. This approach is recommended. -## Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems) +- If you [use packages](#install-using-packages), you can install CS Docker + Engine on air-gapped systems that have no internet connection. However, you + are responsible for manually checking for updates and managing upgrades. + +## Prerequisites + +To install CS Docker Engine, you need root or sudo privileges and you need +access to a command line on the system. + +## Install using a repository + +### Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems) This section explains how to install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3. Only these versions are supported. CentOS 7.0 is **not** supported. On RHEL, depending on your current level of updates, you may need to reboot your server to update its RHEL kernel. -1. Log into the system as a user with root or sudo permissions. - -2. Add the Docker public key for CS packages: +1. Add the Docker public key for CS Docker Engine packages: ```bash $ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e" ``` -3. Install yum-utils if necessary: +2. Install yum-utils if necessary: ```bash $ sudo yum install -y yum-utils ``` -4. Add the Docker repository: +3. Add the Docker repository: ```bash $ sudo yum-config-manager --add-repo https://packages.docker.com/1.13/yum/repo/main/centos/7 @@ -45,89 +63,145 @@ to update its RHEL kernel. This adds the repository of the latest version of CS Docker Engine. You can customize the URL to install an older version. -5. Install Docker CS Engine: +4. Install CS Docker Engine: - ```bash - $ sudo yum install docker-engine - ``` + - **Latest version**: -6.
Configure devicemapper: + ```bash + $ sudo yum makecache fast - By default, the `devicemapper` graph driver does not come pre-configured in a production ready state. Follow the documented step by step instructions to [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration) in order to achieve the best performance and reliability for your environment. + $ sudo yum install docker-engine + ``` -7. Enable the Docker daemon as a service and start it. + - **Specific version**: + + On production systems, you should install a specific version rather than + relying on the latest. + + 1. List the available versions: + + ```bash + $ yum list docker-engine.x86_64 --showduplicates |sort -r + ``` + + The second column represents the version. + + 2. Install a specific version by adding the version after `docker-engine`, + separated by a hyphen (`-`): + + ```bash + $ sudo yum install docker-engine- + ``` + +5. Configure `devicemapper`: + + By default, the `devicemapper` graph driver does not come pre-configured in + a production-ready state. Follow the documented step by step instructions to + [configure devicemapper with direct-lvm for production](../../engine/userguide/storagedriver/device-mapper-driver/#/for-a-direct-lvm-mode-configuration) + to achieve the best performance and reliability for your environment. + +6. Configure the Docker daemon to start automatically when the system starts, + and start it now. ```bash $ sudo systemctl enable docker.service $ sudo systemctl start docker.service ``` -8. Confirm the Docker daemon is running: +7. Confirm the Docker daemon is running: ```bash $ sudo docker info ``` -9. Optionally, add non-sudo access to the Docker socket by adding your user -to the `docker` group. +8. Only users with `sudo` access will be able to run `docker` commands. + Optionally, add non-sudo access to the Docker socket by adding your user + to the `docker` group.
```bash $ sudo usermod -a -G docker $USER ``` -10. Log out and log back in to have your new permissions take effect. +9. Log out and log back in to have your new permissions take effect. -## Install on Ubuntu 14.04 LTS or 16.04 LTS +### Install on Ubuntu 14.04 LTS or 16.04 LTS -1. Log into the system as a user with root or sudo permissions. - -2. Add Docker's public key for CS packages: +1. Install packages to allow `apt` to use a repository over HTTPS: ```bash - $ curl -s 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add --import + $ sudo apt-get update + + $ sudo apt-get install --no-install-recommends \ + apt-transport-https \ + curl \ + software-properties-common ``` -3. Install the HTTPS helper for apt (your system may already have it): + Optionally, install additional kernel modules to add AUFS support. ```bash - $ sudo apt-get update && sudo apt-get install apt-transport-https + $ sudo apt-get install -y --no-install-recommends \ + linux-image-extra-$(uname -r) \ + linux-image-extra-virtual ``` -4. Install additional kernel modules to add AUFS support. +2. Download and import Docker's public key for CS packages: ```bash - $ sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual + $ curl -fsSL 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add - ``` -5. Add the repository for the new version: - - for 14.04: +3. Add the repository. In the command below, the `lsb_release -cs` sub-command + returns the name of your Ubuntu version, like `xenial` or `trusty`. 
```bash - $ echo "deb https://packages.docker.com/1.13/apt/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list - ``` - - for 16.04: - - ```bash - $ echo "deb https://packages.docker.com/1.13/apt/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list + $ sudo add-apt-repository \ + "deb https://packages.docker.com/1.13/apt/repo/ \ + ubuntu-$(lsb_release -cs) \ + main" ``` -6. Run the following to install commercially supported Docker Engine and its -dependencies: +4. Install CS Docker Engine: - ```bash - $ sudo apt-get update && sudo apt-get install docker-engine - ``` + - **Latest version**: -7. Confirm the Docker daemon is running: + ```bash + $ sudo apt-get update + + $ sudo apt-get -y install docker-engine + ``` + + - **Specific version**: + + On production systems, you should install a specific version rather than + relying on the latest. + + 1. List the available versions: + + ```bash + $ sudo apt-get update + + $ apt-cache madison docker-engine + ``` + + The second column represents the version. + + 2. Install a specific version by adding the version after `docker-engine`, + separated by an equals sign (`=`): + + ```bash + $ sudo apt-get install docker-engine= + ``` + +5. Confirm the Docker daemon is running: ```bash $ sudo docker info ``` -8. Optionally, add non-sudo access to the Docker socket by adding your -user to the `docker` group. +6. Only users with `sudo` access will be able to run `docker` commands. + Optionally, add non-sudo access to the Docker socket by adding your user + to the `docker` group. ```bash $ sudo usermod -a -G docker $USER @@ -135,19 +209,15 @@ user to the `docker` group. Log out and log back in to have your new permissions take effect. +### Install on SUSE Linux Enterprise 12.3 -## Install on SUSE Linux Enterprise 12.3 - -1. Log into the system as a user with root or sudo permissions. - -2. Refresh your repository so that curl commands and CA certificates -are available: +1.
Refresh your repository: ```bash - $ sudo zypper ref + $ sudo zypper update ``` -3. Add the Docker repository and public key: +2. Add the Docker repository and public key: ```bash $ sudo zypper ar -t YUM https://packages.docker.com/1.13/yum/repo/main/opensuse/12.3 docker-1.13 $ sudo rpm --import 'https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' ``` This adds the repository of the latest version of CS Docker Engine. You can customize the URL to install an older version. -4. Install the Docker daemon package: +3. Install CS Docker Engine. - ```bash - $ sudo zypper install docker-engine - ``` + - **Latest version**: -5. Enable the Docker daemon as a service and then start it: + ```bash + $ sudo zypper refresh + + $ sudo zypper install docker-engine + ``` + + - **Specific version**: + + On production systems, you should install a specific version rather than + relying on the latest. + + 1. List the available versions: + + ```bash + $ sudo zypper refresh + + $ zypper search -s --match-exact -t package docker-engine + ``` + + The third column is the version string. + + 2. Install a specific version by adding the version after `docker-engine`, + separated by a hyphen (`-`): + + ```bash + $ sudo zypper install docker-engine- + ``` + +4. Configure the Docker daemon to start automatically when the system starts, + and start it now. ```bash $ sudo systemctl enable docker.service $ sudo systemctl start docker.service ``` -6. Confirm the Docker daemon is running: +5. Confirm the Docker daemon is running: ```bash $ sudo docker info ``` -7. Optionally, add non-sudo access to the Docker socket by adding your user -to the `docker` group. +6. Only users with `sudo` access will be able to run `docker` commands. + Optionally, add non-sudo access to the Docker socket by adding your user + to the `docker` group. ```bash $ sudo usermod -a -G docker $USER ``` -8. Log out and log back in to have your new permissions take effect. + Log out and log back in to have your new permissions take effect.
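Each repository flow above pins a specific version with a different separator. The sketch below only assembles and prints the resulting install commands; the version strings are illustrative placeholders, not guaranteed releases, and real ones should come from the `yum list`, `apt-cache madison`, or `zypper search` output described in the steps above.

```shell
#!/usr/bin/env bash
# Sketch: how a pinned install command is assembled per package manager.
# The version strings are examples only; list the real ones first.
RPM_VERSION="1.13.1.cs2-1.el7.centos"    # e.g. from: yum list docker-engine.x86_64 --showduplicates
DEB_VERSION="1.13.1~cs2-0~ubuntu-xenial" # e.g. from: apt-cache madison docker-engine
SLES_VERSION="1.13.1.cs2-1"              # e.g. from: zypper search -s docker-engine

echo "sudo yum install docker-engine-${RPM_VERSION}"     # RHEL/CentOS: hyphen separator
echo "sudo apt-get install docker-engine=${DEB_VERSION}" # Ubuntu: equals-sign separator
echo "sudo zypper install docker-engine-${SLES_VERSION}" # SLES: hyphen separator
```

Note that the hyphenated form reuses the package manager's normal version syntax, so the same command also works inside scripts that template the version.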
+ +## Install using packages + +If you need to install Docker on an air-gapped system with no access to the +internet, use the [package download link table](#package-download-links) to +download the Docker package for your operating system, then install it using the +[appropriate command](#general-commands). You are responsible for manually +upgrading Docker when a new version is available, and also for satisfying +Docker's dependencies. + +### General commands + +To install Docker from packages, use the following commands: + +| Operating system | Command | +|-----------------------|---------| +| RHEL / CentOS | `$ sudo yum install /path/to/package.rpm` | +| SLES | `$ sudo zypper install /path/to/package.rpm` | +| Ubuntu | `$ sudo dpkg -i /path/to/package.deb` | + +### Package download links + +{% assign rpm-prefix = "https://packages.docker.com/1.13/yum/repo/main" %} +{% assign deb-prefix = "https://packages.docker.com/1.13/apt/repo/pool/main/d/docker-engine" %} + +#### CS Docker Engine 1.13.1 + +{% comment %} Check on the S3 bucket for packages.docker.com for the versions.
{% endcomment %} +{% assign rpm-version = "1.13.1.cs2-1" %} +{% assign rpm-rhel-version = "1.13.1.cs2-1" %} +{% assign deb-version = "1.13.1~cs2-0" %} + +| Operating system | Package links | +|-----------------------|---------------| +| RHEL 7.x and CentOS 7 | [docker-engine]({{ rpm-prefix }}/centos/7/Packages/docker-engine-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/centos/7/Packages/docker-engine-debuginfo-{{ rpm-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/centos/7/Packages/docker-engine-selinux-{{ rpm-version }}.el7.centos.noarch.rpm) | +| RHEL 7.2 (only use if you have problems with `selinux` with the packages above) | [docker-engine]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-debuginfo]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-debuginfo-{{ rpm-rhel-version }}.el7.centos.x86_64.rpm), [docker-engine-selinux]({{ rpm-prefix }}/rhel/7.2/Packages/docker-engine-selinux-{{ rpm-rhel-version }}.el7.centos.noarch.rpm) | +| SLES 12 | [docker-engine]({{ rpm-prefix }}/opensuse/12.3/Packages/docker-engine-{{ rpm-version }}.x86_64.rpm) | +| Ubuntu Xenial | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-xenial_amd64.deb) | +| Ubuntu Trusty | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-trusty_amd64.deb) | +| Ubuntu Precise | [docker-engine]({{ deb-prefix }}/docker-engine_{{ deb-version }}~ubuntu-precise_amd64.deb) | diff --git a/cs-engine/1.13/release-notes.md b/cs-engine/1.13/release-notes.md index a331b8151f..8f6f620fd5 100644 ---
a/cs-engine/1.13/release-notes.md +++ b/cs-engine/1.13/release-notes.md @@ -14,6 +14,49 @@ back-ported fixes (security-related and priority defects) from the open source. It incorporates defect fixes that you can use in environments where new features cannot be adopted as quickly for consistency and compatibility reasons. + +## CS Engine 1.13.1-cs2 +(23 Feb 2017) + +### Client + +* Fix panic in `docker stats --format` [#30776](https://github.com/docker/docker/pull/30776) + +### Contrib + +* Update various `bash` and `zsh` completion scripts [#30823](https://github.com/docker/docker/pull/30823), [#30945](https://github.com/docker/docker/pull/30945) and more... +* Block obsolete socket families in default seccomp profile - mitigates unpatched kernels' CVE-2017-6074 [#29076](https://github.com/docker/docker/pull/29076) + +### Networking + +* Fix bug on overlay encryption keys rotation in cross-datacenter swarm [#30727](https://github.com/docker/docker/pull/30727) +* Fix side effect panic in overlay encryption and network control plane communication failure ("No installed keys could decrypt the message") on frequent swarm leader re-election [#25608](https://github.com/docker/docker/pull/25608) +* Several fixes around system responsiveness and datapath programming when using overlay network with external kv-store [docker/libnetwork#1639](https://github.com/docker/libnetwork/pull/1639), [docker/libnetwork#1632](https://github.com/docker/libnetwork/pull/1632) and more... 
+* Discard incoming plain vxlan packets for encrypted overlay network [#31170](https://github.com/docker/docker/pull/31170) +* Release the network attachment on allocation failure [#31073](https://github.com/docker/docker/pull/31073) +* Fix port allocation when multiple published ports map to the same target port [docker/swarmkit#1835](https://github.com/docker/swarmkit/pull/1835) + +### Runtime + +* Fix a deadlock in docker logs [#30223](https://github.com/docker/docker/pull/30223) +* Fix cpu spin waiting for log write events [#31070](https://github.com/docker/docker/pull/31070) +* Fix a possible crash when using journald [#31231](https://github.com/docker/docker/pull/31231) [#31263](https://github.com/docker/docker/pull/31263) +* Fix a panic on close of nil channel [#31274](https://github.com/docker/docker/pull/31274) +* Fix duplicate mount point for `--volumes-from` in `docker run` [#29563](https://github.com/docker/docker/pull/29563) +* Fix `--cache-from` does not cache last step [#31189](https://github.com/docker/docker/pull/31189) +* Fix issue with lock contention while performing container size calculation [#31159](https://github.com/docker/docker/pull/31159) + +### Swarm Mode + +* Shutdown leaks an error when the container was never started [#31279](https://github.com/docker/docker/pull/31279) +* Fix possibility of tasks getting stuck in the "NEW" state during a leader failover [docker/swarmkit#1938](https://github.com/docker/swarmkit/pull/1938) +* Fix extraneous task creations for global services that led to confusing replica counts in `docker service ls` [docker/swarmkit#1957](https://github.com/docker/swarmkit/pull/1957) +* Fix problem that made rolling updates slow when `task-history-limit` was set to 1 [docker/swarmkit#1948](https://github.com/docker/swarmkit/pull/1948) +* Restart tasks elsewhere, if appropriate, when they are shut down as a result of nodes no longer satisfying constraints
[docker/swarmkit#1958](https://github.com/docker/swarmkit/pull/1958) + ## CS Engine 1.13.1-cs1 (08 Feb 2017) diff --git a/datacenter/dtr/2.0/index.md b/datacenter/dtr/2.0/index.md index c1e75a58a8..e55dd04332 100644 --- a/datacenter/dtr/2.0/index.md +++ b/datacenter/dtr/2.0/index.md @@ -1,6 +1,10 @@ --- description: Learn how to install, configure, and use Docker Trusted Registry. keywords: docker, registry, repository, images +redirect_from: +- /docker-hub-enterprise/ +- /docker-trusted-registry/overview/ +- /docker-trusted-registry/ title: Docker Trusted Registry overview --- @@ -35,4 +39,4 @@ access to your Docker images. ## Where to go next * [DTR architecture](architecture.md) -* [Install DTR](install/index.md) +* [Install DTR](install/index.md) \ No newline at end of file diff --git a/datacenter/dtr/2.1/guides/install/index.md b/datacenter/dtr/2.1/guides/install/index.md index e5fc61b3a0..d1311ffd15 100644 --- a/datacenter/dtr/2.1/guides/install/index.md +++ b/datacenter/dtr/2.1/guides/install/index.md @@ -49,7 +49,16 @@ Where the `--ucp-node` is the hostname of the UCP node where you want to deploy DTR. `--ucp-insecure-tls` tells the installer to trust the certificates used by UCP. -The install command has other flags for customizing DTR at install time. +By default, the install command runs in interactive mode and prompts for +additional information like: + +* DTR external url: the url clients use to reach DTR. If you're using a load +balancer for DTR, this is the IP address or DNS name of the load balancer +* UCP url: the url clients use to reach UCP +* UCP username and password: administrator credentials for UCP + +You can also provide this information to the installer command so that it +runs without prompting. Check the [reference documentation to learn more](../../reference/cli/install.md).
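As a sketch, a non-interactive install could look like the following. The flag names beyond `--ucp-node` and `--ucp-insecure-tls` are assumptions to be checked against the reference documentation for your DTR version, and the angle-bracket values are placeholders you must fill in:

```bash
$ docker run -it --rm docker/dtr install \
  --ucp-node <ucp-node-hostname> \
  --ucp-insecure-tls \
  --dtr-external-url <dtr-load-balancer-or-ip> \
  --ucp-url <ucp-url> \
  --ucp-username <admin-user> \
  --ucp-password <admin-password>
```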
diff --git a/datacenter/dtr/2.2/guides/admin/backups-and-disaster-recovery.md b/datacenter/dtr/2.2/guides/admin/backups-and-disaster-recovery.md index c45be78842..84ef1ccdcc 100644 --- a/datacenter/dtr/2.2/guides/admin/backups-and-disaster-recovery.md +++ b/datacenter/dtr/2.2/guides/admin/backups-and-disaster-recovery.md @@ -4,10 +4,10 @@ keywords: docker, registry, high-availability, backup, recovery title: DTR backups and recovery --- -When you use Docker Trusted Registry on a production setting, you should first -configure it for high availability. - -The next step is creating a backup policy and disaster recovery plan. +DTR replicas rely on having a majority available at any given time for writes to +succeed. Therefore if a majority of replicas are permanently lost, the only way +to restore DTR to a working state is to recover from backups. This is why it's +very important to perform periodic backups. ## DTR data persistence @@ -17,6 +17,9 @@ Docker Trusted Registry persists: that is replicated through all DTR replicas. * **Repository metadata**: the information about the repositories and images deployed. This information is replicated through all DTR replicas. +* **Access control**: permissions for teams and repos. +* **Notary data**: notary tags and signatures. +* **Scan results**: security scanning results for images. * **Certificates and keys**: the certificates, public keys, and private keys that are used for mutual TLS communication. @@ -33,21 +36,38 @@ command creates a backup of DTR: * Configurations, * Repository metadata, +* Access control, +* Notary data, +* Scan results, * Certificates and keys used by DTR. -These files are added to a tar archive, and the result is streamed to stdout. +This data is added to a tar archive, and the result is streamed to stdout. This +is done while DTR is running without shutting down any containers. 
+ +Things DTR's backup command doesn't back up: + +* The vulnerability database (if using image scanning) +* Image contents +* Users, orgs, teams + +There is no way to back up the vulnerability database. You can re-download it +after restoring, or re-apply your offline tar update if your installation is offline. The backup command does not create a backup of Docker images. You should implement a separate backup policy for the Docker images, taking into consideration whether your DTR installation is configured to store images on the -filesystem or using a cloud provider. +filesystem or using a cloud provider. During restore, you need to separately +restore the image contents. The backup command also doesn't create a backup of the users and organizations. That data is managed by UCP, so when you create a UCP backup you're creating -a backup of the users and organizations metadata. +a backup of the users and organizations. For this reason, when restoring DTR, +you must do it on the same UCP cluster (or one created by restoring from +backups) or else all DTR resources such as repos will be owned by non-existent +users and will not be usable despite technically existing in the database. When creating a backup, the resulting .tar file contains sensitive information -like private keys. You should ensure the backups are stored securely. +such as private keys. You should ensure the backups are stored securely. You can check the [reference documentation](../../reference/cli/backup.md) for the @@ -71,6 +91,23 @@ Where: * `--existing-replica-id` is the id of the replica to back up, * `--ucp-username`, and `--ucp-password` are the credentials of a UCP administrator.
+To avoid having to pass the password as a command line parameter, you may +instead use the following approach in bash: + +```none +$ read -sp 'ucp password: ' PASS; UCP_PASSWORD=$PASS docker run -i --rm -e UCP_PASSWORD docker/dtr backup \ + --ucp-url \ + --ucp-insecure-tls \ + --existing-replica-id \ + --ucp-username > /tmp/backup.tar +``` + +This puts the password into a shell variable, which is then passed into the +docker client command with the `-e` flag, which in turn relays the password to the +DTR bootstrapper. + +## Testing backups + To validate that the backup was correctly performed, you can print the contents of the tar file created: @@ -78,27 +115,59 @@ of the tar file created: $ tar -tf /tmp/backup.tar ``` +The structure of the archive should look something like this: + +```none +dtr-backup-v2.2.1/ +dtr-backup-v2.2.1/rethink/ +dtr-backup-v2.2.1/rethink/properties/ +dtr-backup-v2.2.1/rethink/properties/0 +... +``` + +To really test that the backup works, you must make a copy of your UCP cluster +by backing it up and restoring it onto separate machines. Then you can restore +DTR there from your backup and verify that it has all the data you expect to +see. + ## Restore DTR data -You can restore a DTR node from a backup using the `restore` -command. -This command performs a fresh installation of DTR, and reconfigures it with -the configuration created during a backup. +You can restore a DTR node from a backup using the `restore` command. -The command starts by installing DTR, restores the configurations stored on -etcd, and then restores the repository metadata stored on RethinkDB. You -can use the `--config-only` option, to only restore the configurations stored -on etcd. +Note that backups are tied to specific DTR versions and are guaranteed to work +only with those DTR versions. You can back up and restore across patch versions +at your own risk, but not across minor versions as those require more complex +migrations.
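Because backups are tied to DTR versions, it can be worth checking the version recorded in the archive's top-level directory before attempting a restore. A minimal sketch, assuming the `dtr-backup-v2.2.1/` naming convention shown above holds for your version, and treating only a matching `major.minor` prefix as restorable:

```shell
#!/usr/bin/env bash
# First entry of the archive, as printed by: tar -tf /tmp/backup.tar | head -n 1
entry="dtr-backup-v2.2.1/"

# Pull the version out of the top-level directory name.
backup_version=$(echo "$entry" | sed -n 's#^dtr-backup-v\([0-9][0-9.]*\)/.*#\1#p')
target_version="2.2.3"   # hypothetical version of the DTR you plan to restore onto

# Restoring across patch releases is at-your-own-risk; across minor releases it
# is unsupported, so compare only the major.minor prefix.
if [ "${backup_version%.*}" = "${target_version%.*}" ]; then
  echo "same minor series ($backup_version vs $target_version)"
else
  echo "different minor series: do not restore this backup directly"
fi
```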
-This command does not restore Docker images. You should implement a separate
-restore procedure for the Docker images stored in your registry, taking in
-consideration whether your DTR installation is configured to store images on
-the filesystem or using a cloud provider.
+Before restoring DTR, make sure that you are restoring it on the same UCP
+cluster or that you've also restored UCP using its restore command. DTR does
+not manage users, orgs, or teams, so if you try to restore DTR on a cluster
+other than the one it was backed up on, DTR repositories will be associated
+with users that don't exist, and it will appear as if the restore operation
+didn't work.
+
+Note that to restore DTR, you must first remove any leftover containers from
+the previous installation. To do this, see the [uninstall
+documentation](../install/uninstall.md).
+
+The restore command performs a fresh installation of DTR, and reconfigures it
+with the configuration created during a backup. The command starts by
+installing DTR. Then it restores the configurations from the backup, and then
+restores the repository metadata. Finally, it applies all of the configs
+specified as flags to the restore command.
+
+After restoring DTR, you must make sure that it's configured to use the same
+storage backend where it can find the image data. If the image data was backed
+up separately, you must restore it now.
+
+Finally, if you are using security scanning, you must re-fetch the security
+scanning database through the online update or by uploading the offline tar. See
+the [security scanning configuration](../admin/configure/set-up-vulnerability-scans.md)
+for more details.

You can check the
-[reference documentation](../../reference/cli/backup.md), for the
-backup command to learn about all the available flags.
-
+[reference documentation](../../reference/cli/restore.md) for the
+restore command to learn about all the available flags.
As an example, to install DTR on the host and restore its state from an existing backup: @@ -121,6 +190,15 @@ Where: * `--ucp-username`, and `--ucp-password` are the credentials of a UCP administrator, * `--dtr-load-balancer` is the domain name or ip where DTR can be reached. +Note that if you want to avoid typing your password into the terminal you must pass +it in as an environment variable using the same approach as for the backup command: + +```none +$ read -sp 'ucp password: ' PASS; UCP_PASSWORD=$PASS docker run -i --rm -e UCP_PASSWORD docker/dtr restore ... +``` + +After you successfully restore DTR, you can join new replicas the same way you +would after a fresh installation. ## Where to go next diff --git a/datacenter/dtr/2.2/guides/admin/configure/external-storage/nfs.md b/datacenter/dtr/2.2/guides/admin/configure/external-storage/nfs.md index e48edae89f..db368e1519 100644 --- a/datacenter/dtr/2.2/guides/admin/configure/external-storage/nfs.md +++ b/datacenter/dtr/2.2/guides/admin/configure/external-storage/nfs.md @@ -4,8 +4,6 @@ description: Learn how to integrate Docker Trusted Registry with NFS keywords: registry, dtr, storage, nfs --- - - You can configure DTR to store Docker images in an NFS directory. Before installing or configuring DTR to use an NFS directory, make sure that: @@ -70,4 +68,4 @@ add it back again. 
## Where to go next

-* [Configure where images are stored](configure-storage.md)
+* [Configure where images are stored](index.md)
diff --git a/datacenter/dtr/2.2/guides/admin/configure/garbage-collection.md b/datacenter/dtr/2.2/guides/admin/configure/garbage-collection.md
new file mode 100644
index 0000000000..7d903fe279
--- /dev/null
+++ b/datacenter/dtr/2.2/guides/admin/configure/garbage-collection.md
@@ -0,0 +1,101 @@
+---
+description: Configure garbage collection in Docker Trusted Registry
+keywords: docker, registry, garbage collection, gc, space, disk space
+title: Docker Trusted Registry 2.2 Garbage Collection
+---
+
+#### TL;DR
+
+1. Garbage Collection (GC) reclaims disk space from your storage by deleting
+unused layers
+2. GC can be configured to run automatically on a cron schedule, and can also
+be run manually. Only admins can configure these settings
+3. When GC runs, DTR is placed in read-only mode. Pulls will work, but
+pushes will fail
+4. The UI shows when GC is running, and an admin can stop GC from within the UI
+
+**Important notes**
+
+The GC cron schedule runs in **UTC time**. Containers typically run in
+UTC time (unless the system time is mounted), so remember that the cron
+schedule is interpreted as UTC time when you configure it.
+
+GC puts DTR into read-only mode; pulls succeed while pushes fail. Pushing an
+image while GC runs may lead to undefined behavior and data loss, so pushes
+are disabled for safety. For this reason, it's generally best practice to
+schedule GC for off-peak hours, such as early morning on a Saturday or Sunday.
+
+## Setting up garbage collection
+
+If you're an admin, you can set up GC by clicking "Settings" in the UI and
+then choosing "Garbage Collection". By default, GC is disabled, showing this
+screen:
+
+![](../../images/garbage-collection-1.png){: .with-border}
+
+Here you can configure GC to run **until it's done** or **with a timeout**.
+The timeout ensures that your registry will be in read-only mode for a maximum
+amount of time.
+
+Select an option (either "Until done" or "For N minutes") and you'll have the
+option to configure GC to run via a cron job, with several default cron
+schedules provided:
+
+![](../../images/garbage-collection-2.png){: .with-border}
+
+You can also choose "Do not repeat" to disable the cron schedule entirely.
+
+Once the cron schedule has been configured (or disabled), you can either save
+the schedule ("Save") or save the schedule *and* start GC immediately ("Save
+& Start").
+
+## Stopping GC while it's running
+
+When GC runs, the garbage collection settings page looks as follows:
+
+![](../../images/garbage-collection-3.png){: .with-border}
+
+Note the global banner visible to all users, ensuring everyone knows that GC is
+running.
+
+An admin can stop the current GC process by clicking "Stop". This safely shuts
+down the running GC job and moves the registry into read-write mode, ensuring
+pushes work as expected.
+
+## How does garbage collection work?
+
+### Background: how images are stored
+
+Each image stored in DTR is made up of multiple files:
+
+- A list of "layers", which represent the image's filesystem
+- The "config" file, which dictates the OS, architecture and other image
+metadata
+- The "manifest", which is pulled first and lists all layers and the config file
+for the image.
+
+All of these files are stored in a content-addressable manner. We take the
+sha256 hash of the file's content and use the hash as the filename. This means
+that if the tags `example.com/user/blog:1.11.0` and `example.com/user/blog:latest`
+use the same layers, we only store them once.
+
+### How this impacts GC
+
+Let's continue from the above example, where `example.com/user/blog:latest` and
+`example.com/user/blog:1.11.0` point to the same image and use the same layers.
+If we delete `example.com/user/blog:latest` but *not*
+`example.com/user/blog:1.11.0`, we expect that `example.com/user/blog:1.11.0`
+can still be pulled.
+
+This means that we can't delete layers when tags or manifests are deleted.
+Instead, we need to pause writing and take reference counts to see how many
+times a file is used. Only if a file is never used is it safe to delete it.
+
+This is the basis of our "mark and sweep" collection:
+
+1. Iterate over all manifests in the registry and record all files that are
+referenced
+2. Iterate over all files stored and check if each file is referenced by any
+manifest
+3. If a file is *not* referenced, delete it
diff --git a/datacenter/dtr/2.2/guides/admin/configure/use-a-load-balancer.md b/datacenter/dtr/2.2/guides/admin/configure/use-a-load-balancer.md
index 28a368fcc8..dd02e338e4 100644
--- a/datacenter/dtr/2.2/guides/admin/configure/use-a-load-balancer.md
+++ b/datacenter/dtr/2.2/guides/admin/configure/use-a-load-balancer.md
@@ -4,8 +4,9 @@ description: Learn how to configure a load balancer to balance user requests acr
keywords: docker, dtr, load balancer
---

-Once you’ve joined multiple DTR replicas nodes for high-availability, you can
-configure your own load balancer to balance user requests across all replicas.
+Once you’ve joined multiple DTR replica nodes for
+[high-availability](set-up-high-availability.md), you can configure your own
+load balancer to balance user requests across all replicas.

![](../../images/use-a-load-balancer-1.svg)

@@ -14,18 +15,75 @@ This allows users to access DTR using a centralized domain name. If a replica
goes down, the load balancer can detect that and stop forwarding requests to
it, so that the failure goes unnoticed by users.

-## Load-balancing DTR
+## Load balancing DTR

DTR does not provide a load balancing service. You can use an on-premises
or cloud-based load balancer to balance requests across multiple DTR
replicas.
Make sure you configure your load balancer to:

-* Load-balance TCP traffic on ports 80 and 443
+* Load balance TCP traffic on ports 80 and 443
* Not terminate HTTPS connections
-* Use the `/health` endpoint on each DTR replica, to check if
-the replica is healthy and if it should remain on the load balancing pool or
-not
+* Use the unauthenticated `/health` endpoint (note the lack of `/api/v0/` in
+the path) on each DTR replica, to check if the replica is healthy and if it
+should remain in the load balancing pool or not
+
+## Health check endpoints
+
+The `/health` endpoint returns a JSON object for the replica being queried of
+the form:
+
+```json
+{
+  "Error": "error message",
+  "Healthy": true
+}
+```
+
+A response of `"Healthy": true` means the replica is suitable for taking
+requests. It is also sufficient to check whether the HTTP status code is 200.
+
+An unhealthy replica will return 503 as the status code and populate `"Error"`
+with more details on any one of these services:
+
+* Storage container (registry)
+* Authorization (garant)
+* Metadata persistence (rethinkdb)
+* Content trust (notary)
+
+Note that this endpoint is for checking the health of a *single* replica. To get
+the health of every replica in a cluster, querying each replica individually is
+the preferred way to do it in real time.
+
+The `/api/v0/meta/cluster_status`
+[endpoint](https://docs.docker.com/datacenter/dtr/2.2/reference/api/)
+returns a JSON object for the entire cluster *as observed* by the replica being
+queried, and it takes the form:
+
+```json
+{
+  "replica_health": {
+    "replica id": "OK",
+    "another replica id": "error message"
+  },
+  "replica_timestamp": {
+    "replica id": "2006-01-02T15:04:05Z07:00",
+    "another replica id": "2006-01-02T15:04:05Z07:00"
+  },
+  // other fields
+}
+```
+
+Health statuses for the replicas are available in the `"replica_health"` object.
+These statuses are taken from a cache which is last updated by each replica
+individually at the time specified in the `"replica_timestamp"` object.
+
+The response also contains information about the internal DTR storage state,
+which is around 45 KB of data. This, combined with the fact that the endpoint
+requires admin credentials, means it is not particularly appropriate for load
+balancer health checks. Use `/health` instead for those kinds of checks.
+
## Where to go next
diff --git a/datacenter/dtr/2.2/guides/admin/install/index.md b/datacenter/dtr/2.2/guides/admin/install/index.md
index 6f7d7ddcb1..e230c2ff23 100644
--- a/datacenter/dtr/2.2/guides/admin/install/index.md
+++ b/datacenter/dtr/2.2/guides/admin/install/index.md
@@ -49,7 +49,16 @@ Where the `--ucp-node` is the hostname of the UCP node where you want to
deploy DTR. `--ucp-insecure-tls` tells the installer to trust the TLS
certificates used by UCP.

-The install command has other flags for customizing DTR at install time.
+By default, the install command runs in interactive mode and prompts for
+additional information like:
+
+* DTR external URL: the URL clients use to reach DTR. If you're using a load
+balancer for DTR, this is the IP address or DNS name of the load balancer
+* UCP URL: the URL clients use to reach UCP
+* UCP username and password: administrator credentials for UCP
+
+You can also provide this information to the installer command so that it
+runs without prompting.
Check the [reference documentation to learn more](../../../reference/cli/install.md).

## Step 4.
Check that DTR is running diff --git a/datacenter/dtr/2.2/guides/admin/install/install-offline.md b/datacenter/dtr/2.2/guides/admin/install/install-offline.md index 55ad6e97c5..d84fe174fc 100644 --- a/datacenter/dtr/2.2/guides/admin/install/install-offline.md +++ b/datacenter/dtr/2.2/guides/admin/install/install-offline.md @@ -53,6 +53,17 @@ For each machine where you want to install DTR: Now that the offline hosts have all the images needed to install DTR, you can [install DTR on that host](index.md). +### Preventing outgoing connections + +DTR makes outgoing connections to: + +* report analytics, +* check for new versions, +* check online licenses, +* update the vulnerability scanning database + +All of these uses of online connections are optional. You can choose to +disable or not use any or all of these features on the admin settings page. ## Where to go next diff --git a/datacenter/dtr/2.2/guides/admin/install/uninstall.md b/datacenter/dtr/2.2/guides/admin/install/uninstall.md index 2bee482939..13547ab977 100644 --- a/datacenter/dtr/2.2/guides/admin/install/uninstall.md +++ b/datacenter/dtr/2.2/guides/admin/install/uninstall.md @@ -4,18 +4,8 @@ keywords: docker, dtr, install, uninstall title: Uninstall Docker Trusted Registry --- -Uninstalling DTR is a two-step process. You first scale your DTR deployment down -to a single replica. Then you uninstall the last DTR replica, which permanently -removes DTR and deletes all its data. - -Start by [scaling down your DTR deployment](../configure/set-up-high-availability.md) to a -single replica. - -When your DTR deployment is down to a single replica, you can use the -`docker/dtr destroy` command to permanently remove DTR and all its data: - -1. Use ssh to log into any node that is part of UCP. -2. Uninstall DTR: +Uninstalling DTR can be done by simply removing all data associated with each +replica. 
To do that, you just run the destroy command once per replica:

```none
docker run -it --rm \
@@ -23,6 +13,9 @@ docker run -it --rm \
 --ucp-insecure-tls
```

+You will be prompted for the UCP URL, UCP credentials, and which replica to
+destroy.
+
To see what options are available in the destroy command, check the
[destroy command reference documentation](../../../reference/cli/destroy.md).
diff --git a/datacenter/dtr/2.2/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md b/datacenter/dtr/2.2/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
index bf9a58fb8e..ec1567a9f4 100644
--- a/datacenter/dtr/2.2/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
+++ b/datacenter/dtr/2.2/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
@@ -4,61 +4,85 @@ keywords: docker, registry, monitor, troubleshoot
title: Troubleshoot Docker Trusted Registry
---

-
+This guide contains tips and tricks for troubleshooting DTR problems.
+
+## Troubleshoot overlay networks

High availability in DTR depends on having overlay networking working in UCP.

-To manually test that overlay networking is working in UCP run the following
-commands on two different UCP machines.
+One way to test that overlay networks are working correctly is to deploy
+containers on different nodes, attach them to the same overlay network, and
+check whether they can ping one another.
+
+Use SSH to log into a UCP node, and run:

```none
-docker run -it --rm --net dtr-ol --name overlay-test1 --entrypoint sh docker/dtr
-docker run -it --rm --net dtr-ol --name overlay-test2 --entrypoint ping docker/dtr -c 3 overlay-test1
+docker run -it --rm \
+  --net dtr-ol --name overlay-test1 \
+  --entrypoint sh docker/dtr
```

-You can create new overlay network for this test with `docker network create -d overaly network-name`.
-You can also use any images that contain `sh` and `ping` for this test.
+Then use SSH to log into another UCP node and run: -If the second command succeeds, overlay networking is working. - -## DTR doesn't come up after a Docker restart - -This is a known issue with Docker restart policies when DTR is running on the same -machine as a UCP controller. If this happens, you can simply restart the DTR replica -from the UCP UI under "Applications". The best workaround right now is to not run -DTR on the same node as a UCP controller. - -## Etcd refuses to start after a Docker restart - -If you see the following log message in etcd's logs after a DTR restart it means that -your DTR replicas are on machines that don't have their clocks synchronized. Etcd requires -synchronized clocks to function correctly. - -``` -2016-04-27 17:56:34.086748 W | rafthttp: the clock difference against peer aa4fdaf4c562342d is too high [8.484795885s > 1s] +```none +docker run -it --rm \ + --net dtr-ol --name overlay-test2 \ + --entrypoint ping docker/dtr -c 3 overlay-test1 ``` -## Accessing the RethinkDB Admin UI +If the second command succeeds, it means that overlay networking is working +correctly. - > Warning: This command will expose your database to the internet with no authentication. Use with caution. +You can run this test with any overlay network, and any Docker image that has +`sh` and `ping`. -Run this on the UCP node that has a DTR replica with the given replica id: -``` -docker run --rm -it --net dtr-br -p 9999:8080 svendowideit/ambassador dtr-rethinkdb-$REPLICA_ID 8080 +## Access RethinkDB directly + +DTR uses RethinkDB for persisting data and replicating it across replicas. +It might be helpful to connect directly to the RethinkDB instance running on a +DTR replica to check the DTR internal state. 
+ +Use SSH to log into a node that is running a DTR replica, and run the following +command, replacing `$REPLICA_ID` by the id of the DTR replica running on that +node: + +```none +docker run -it --rm \ + --net dtr-ol \ + -v dtr-ca-$REPLICA_ID:/ca dockerhubenterprise/rethinkcli:v2.2.0 \ + $REPLICA_ID ``` -Options to make this more secure: - -* Use `-p 127.0.0.1:9999:8080` to expose the admin UI only to localhost -* Use an SSH tunnel in combination with exposing the port only to localhost -* Use a firewall to limit which IPs are allowed to connect -* Use a second proxy with TLS and basic auth to provide secure access over the Internet - -## Accessing etcd directly - -You can execute etcd commands on a UCP node hosting a DTR replica using etcdctl -via the following docker command: +This starts an interactive prompt where you can run RethinkDB queries like: +```none +> r.db('dtr2').table('repositories') ``` -docker run --rm -v dtr-ca-$REPLICA_ID:/ca --net dtr-br -it --entrypoint /etcdctl docker/dtr-etcd:v2.2.4 --endpoint https://dtr-etcd-$REPLICA_ID.dtr-br:2379 --ca-file /ca/etcd/cert.pem --key-file /ca/etcd-client/key.pem --cert-file /ca/etcd-client/cert.pem + +[Learn more about RethinkDB queries](https://www.rethinkdb.com/docs/guide/javascript/). + +## Recover from an unhealthy replica + +When a DTR replica is unhealthy or down, the DTR web UI displays a warning: + +```none +Warning: The following replicas are unhealthy: 59e4e9b0a254; Reasons: Replica reported health too long ago: 2017-02-18T01:11:20Z; Replicas 000000000000, 563f02aba617 are still healthy. +``` + +To fix this, you should remove the unhealthy replica from the DTR cluster, +and join a new one. 
Start by running:
+
+```none
+docker run -it --rm \
+  {{ page.docker_image }} remove \
+  --ucp-insecure-tls
+```
+
+And then:
+
+```none
+docker run -it --rm \
+  {{ page.docker_image }} join \
+  --ucp-node \
+  --ucp-insecure-tls
```
diff --git a/datacenter/dtr/2.2/guides/admin/upgrade/index.md b/datacenter/dtr/2.2/guides/admin/upgrade/index.md
index 5cb8ee20fd..d9d4cd0799 100644
--- a/datacenter/dtr/2.2/guides/admin/upgrade/index.md
+++ b/datacenter/dtr/2.2/guides/admin/upgrade/index.md
@@ -4,23 +4,53 @@ keywords: docker, dtr, upgrade, install
title: Upgrade DTR
---

-The first step in upgrading to a new minor version or patch release of DTR 2.2,
-is ensuring you're running DTR 2.1. If that's not the case, start by upgrading
-your installation to version 2.1, and then upgrade to 2.2.
+DTR uses [semantic versioning](http://semver.org/) and we aim to achieve specific
+guarantees while upgrading between versions. We never support downgrading. We
+support upgrades according to the following rules:

-There is no downtime when upgrading a highly-available DTR cluster. If your
-DTR deployment has a single replica, schedule the upgrade to take place outside
-business peak hours to ensure the impact on your business is close to none.
+* When upgrading from one patch version to another, you can skip patch versions
+  because no data migration is done for patch versions.
+* When upgrading between minor versions, you can't skip versions, but you can
+  upgrade from any patch version of the previous minor version to any patch
+  version of the current minor version.
+* When upgrading between major versions, you also have to upgrade one major
+  version at a time, and you have to upgrade to the earliest available minor
+  version. We also strongly recommend upgrading to the latest minor/patch
+  version for your major version first.

-## Step 1.
Upgrade DTR to 2.1 +|From| To| Description| Supported| +|:----|:---|:------------|----------| +| 2.2.0 | 2.2.1 | patch upgrade | yes | +| 2.2.0 | 2.2.2 | skip patch version | yes | +| 2.2.2 | 2.2.1 | patch downgrade | no | +| 2.1.0 | 2.2.0 | minor upgrade | yes | +| 2.1.1 | 2.2.0 | minor upgrade | yes | +| 2.1.2 | 2.2.2 | minor upgrade | yes | +| 2.0.1 | 2.2.0 | skip minor version | no | +| 2.2.0 | 2.1.0 | minor downgrade | no | +| 1.4.3 | 2.0.0 | major upgrade | yes | +| 1.4.3 | 2.0.3 | major upgrade | yes | +| 1.4.3 | 3.0.0 | skip major version | no | +| 1.4.1 | 2.0.3 | major upgrade from an old version | no | +| 1.4.3 | 2.1.0 | major upgrade skipping minor version | no | +| 2.0.0 | 1.4.3 | major downgrade | no | + +There may be at most a few seconds of interruption during the upgrade of a +DTR cluster. Schedule the upgrade to take place outside business peak hours +to ensure the impact on your business is close to none. + +## Minor upgrade + +Before starting your upgrade planning, make sure that the version of UCP you are +using is supported by the version of DTR you are trying to upgrade to. + +### Step 1. Upgrade DTR to 2.1 if necessary Make sure you're running DTR 2.1. If that's not the case, [upgrade your installation to the 2.1 version](/datacenter/dtr/2.1/guides/install/upgrade/.md). -## Step 2. Upgrade DTR +### Step 2. Upgrade DTR - - -To upgrade DTR, **login with ssh** into a node that's part of the UCP cluster. Then pull the latest version of DTR: ```none @@ -28,10 +58,11 @@ $ docker pull {{ page.docker_image }} ``` If the node you're upgrading doesn't have access to the internet, you can -use a machine with internet connection to -[pull all the DTR images](../install/install-offline.md). +follow the [offline installation documentation](../install/install-offline.md) +to get the images. 
-Once you have the latest images on the node, run the upgrade command: +Once you have the latest image on your machine (and the images on the target +nodes if upgrading offline), run the upgrade command: ```none $ docker run -it --rm \ @@ -43,6 +74,16 @@ By default the upgrade command runs in interactive mode and prompts you for any necessary information. You can also check the [reference documentation](../../../reference/cli/index.md) for other existing flags. +The upgrade command will start replacing every container in your DTR cluster, +one replica at a time. It will also perform certain data migrations. If anything +fails or the upgrade is interrupted for any reason, you can re-run the upgrade +command and it will resume from where it left off. + +## Patch upgrade + +A patch upgrade changes only the DTR containers and it's always safer than a minor +upgrade. The command is the same as for a minor upgrade. + ## Where to go next * [Release notes](release-notes.md) diff --git a/datacenter/dtr/2.2/guides/admin/upgrade/release-notes.md b/datacenter/dtr/2.2/guides/admin/upgrade/release-notes.md index 3e3cf9dfdb..468e86df0c 100644 --- a/datacenter/dtr/2.2/guides/admin/upgrade/release-notes.md +++ b/datacenter/dtr/2.2/guides/admin/upgrade/release-notes.md @@ -10,6 +10,37 @@ known issues for each DTR version. You can then use [the upgrade instructions](index.md), to upgrade your installation to the latest release. +## DTR 2.2.2 + +(27 Feb 2017) + +**New features** + +* The web UI now displays a banner to administrators when a tag migration job +is running + + ![](../../images/release-notes-1.png) + +**General improvements** + +* Upgraded DTR security scanner +* Security scanner now generates less verbose logs +* Made `docker/dtr join` more resilient when using an NFS storage backend +* Made tag migrations more stable +* Made updates to the vulnerability database more stable + +**Bugs fixed** + +* Fixed a problem when trying to use Scality as storage backend. 
This problem +affected DTR 2.2.0 and 2.2.1 +* You can now use the web UI to create and manage teams that have slashes in +their name +* Fixed an issue causing RethinkDB to not start due to DNS errors when +the RethinkDB containers were not restarted at the same time +* The web UI now shows the security scanning button if the vulnerability database +or security scanner have been updated + + ## DTR 2.2.1 (9 Feb 2017) diff --git a/datacenter/dtr/2.2/guides/images/garbage-collection-1.png b/datacenter/dtr/2.2/guides/images/garbage-collection-1.png new file mode 100644 index 0000000000..fad454730e Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/garbage-collection-1.png differ diff --git a/datacenter/dtr/2.2/guides/images/garbage-collection-2.png b/datacenter/dtr/2.2/guides/images/garbage-collection-2.png new file mode 100644 index 0000000000..0351ececed Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/garbage-collection-2.png differ diff --git a/datacenter/dtr/2.2/guides/images/garbage-collection-3.png b/datacenter/dtr/2.2/guides/images/garbage-collection-3.png new file mode 100644 index 0000000000..dea2734bee Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/garbage-collection-3.png differ diff --git a/datacenter/dtr/2.2/guides/images/release-notes-1.png b/datacenter/dtr/2.2/guides/images/release-notes-1.png new file mode 100644 index 0000000000..a02df72972 Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/release-notes-1.png differ diff --git a/datacenter/dtr/2.2/guides/images/scanning-images-1.png b/datacenter/dtr/2.2/guides/images/scanning-images-1.png new file mode 100644 index 0000000000..e47437051c Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/scanning-images-1.png differ diff --git a/datacenter/dtr/2.2/guides/images/scanning-images-2.png b/datacenter/dtr/2.2/guides/images/scanning-images-2.png new file mode 100644 index 0000000000..827ec79c36 Binary files /dev/null and 
b/datacenter/dtr/2.2/guides/images/scanning-images-2.png differ diff --git a/datacenter/dtr/2.2/guides/images/scanning-images-3.png b/datacenter/dtr/2.2/guides/images/scanning-images-3.png new file mode 100644 index 0000000000..1020010c09 Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/scanning-images-3.png differ diff --git a/datacenter/dtr/2.2/guides/images/scanning-images-4.png b/datacenter/dtr/2.2/guides/images/scanning-images-4.png new file mode 100644 index 0000000000..97bc9ad1f5 Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/scanning-images-4.png differ diff --git a/datacenter/dtr/2.2/guides/images/scanning-images-5.png b/datacenter/dtr/2.2/guides/images/scanning-images-5.png new file mode 100644 index 0000000000..2092e07db7 Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/scanning-images-5.png differ diff --git a/datacenter/dtr/2.2/guides/images/scanning-images-6.png b/datacenter/dtr/2.2/guides/images/scanning-images-6.png new file mode 100644 index 0000000000..b020182b84 Binary files /dev/null and b/datacenter/dtr/2.2/guides/images/scanning-images-6.png differ diff --git a/datacenter/dtr/2.2/guides/index.md b/datacenter/dtr/2.2/guides/index.md index faba203454..f2c4d7479f 100644 --- a/datacenter/dtr/2.2/guides/index.md +++ b/datacenter/dtr/2.2/guides/index.md @@ -2,10 +2,6 @@ description: Learn how to install, configure, and use Docker Trusted Registry. keywords: docker, registry, repository, images title: Docker Trusted Registry 2.2 overview -redirect_from: -- /docker-hub-enterprise/ -- /docker-trusted-registry/overview/ -- /docker-trusted-registry/ --- Docker Trusted Registry (DTR) is the enterprise-grade image storage solution @@ -14,27 +10,44 @@ and manage the Docker images you use in your applications. ## Image management -Docker Trusted Registry can be installed on-premises, or on a virtual private +DTR can be installed on-premises, or on a virtual private cloud. 
And with it, you can store your Docker images securely, behind your
firewall.

-![](images/overview-1.png)
-
You can use DTR as part of your continuous integration, and continuous
-delivery processes to build, run, and ship your applications.
+delivery processes to build, ship, and run your applications.

+DTR has a web-based user interface that allows authorized users in your
+organization to browse Docker images. It provides information about
+who pushed what image at what time. It even allows you to see what Dockerfile
+lines were used to produce the image and, if security scanning is enabled, to
+see a list of all of the software installed in your images.

-## Built-in security and access control
+## Built-in access control

DTR uses the same authentication mechanism as Docker Universal Control Plane.
-It has a built-in authentication mechanism, and also integrates with LDAP
-and Active Directory. It also supports Role Based Access Control (RBAC).
+Users can be managed manually or synced from LDAP or Active Directory. DTR
+uses [Role Based Access Control](admin/manage-users/index.md) (RBAC) to allow you to implement fine-grained
+access control policies for who has access to your Docker images.

-This allows you to implement fine-grain access control policies on who has
-access to your Docker images.
+## Security scanning

-![](images/overview-2.png)
+DTR has a built-in security scanner that can be used to discover what versions
+of software are used in your images. It scans each layer and aggregates the
+results to give you a complete picture of what you are shipping as a part of
+your stack. Most importantly, it correlates this information with a
+vulnerability database that is kept up to date through [periodic
+updates](admin/configure/set-up-vulnerability-scans.md). This
+gives you [unprecedented insight into your exposure to known security
+threats](user/manage-images/scan-images-for-vulnerabilities.md).
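The scanning approach described above — hash each component and correlate the hash against a vulnerability database — can be sketched in a few lines. This is a toy illustration, not DTR's implementation; the component name, database contents, and CVE ID are invented, and the real scanner works against the US National Vulnerability Database:

```python
import hashlib

def check_component(data, vuln_db):
    """Look up a component's contents in a digest-keyed vulnerability DB."""
    digest = hashlib.sha256(data).hexdigest()
    return vuln_db.get(digest, [])

# Invented example entries, keyed by the sha256 digest of the component bytes.
component = b"libexample-1.0"
vuln_db = {hashlib.sha256(component).hexdigest(): ["CVE-0000-0001"]}
```

Because the lookup is keyed on content rather than on a filename, a renamed or statically linked copy of the same vulnerable component still matches.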
+## Image signing + +DTR ships with [Notary](../../../notary/getting_started/) +built in so that you can use +[Docker Content Trust](../../../engine/security/trust/content_trust/) to sign +and verify images. For more information about managing Notary data in DTR see +the [DTR-specific notary documentation](user/manage-images/manage-trusted-repositories.md). ## Where to go next diff --git a/datacenter/dtr/2.2/guides/user/manage-images/scan-images-for-vulnerabilities.md b/datacenter/dtr/2.2/guides/user/manage-images/scan-images-for-vulnerabilities.md index 11bb8eb14f..a878d37dd3 100644 --- a/datacenter/dtr/2.2/guides/user/manage-images/scan-images-for-vulnerabilities.md +++ b/datacenter/dtr/2.2/guides/user/manage-images/scan-images-for-vulnerabilities.md @@ -13,7 +13,8 @@ Scanning. The results of these scans are reported for each image tag. Docker Security Scanning is available as an add-on to Docker Trusted Registry, and an administrator configures it for your DTR instance. If you do not see security scan results available on your repositories, your organization may not -have purchased the Security Scanning feature or it may be disabled. +have purchased the Security Scanning feature or it may be disabled. See [Set up +Security Scanning in DTR](../../admin/configure/set-up-vulnerability-scans.md) for more details. > **Tip**: Only users with write access to a repository can manually start a scan. Users with read-only access can view the scan results, but cannot start @@ -21,19 +22,27 @@ a new scan. ## The Docker Security Scan process -Scans run either on demand when a user clicks the **Start Scan** links or -**Scan** button, or automatically on any `docker push` to the repository. +Scans run either on demand when a user clicks the **Start a Scan** links or +**Scan** button (see [Manual scanning](#manual-scanning) below), or automatically +on any `docker push` to the repository. 
First the scanner performs a binary scan on each layer of the image, identifies -the software components in each layer, and indexes the SHA of each component. A -binary scan evaluates the components on a bit-by-bit level, so vulnerable -components are discovered no matter what they're named or statically-linked. +the software components in each layer, and indexes the SHA of each component in a +bill-of-materials. A binary scan evaluates the components on a bit-by-bit level, +so vulnerable components are discovered even if they are statically linked or +under a different name. + +[//]: # (Placeholder for DSS workflow. @sarahpark is working on the diagram.) The scan then compares the SHA of each component against the US National -Vulnerability Database that is installed on your DTR instance. when +Vulnerability Database that is installed on your DTR instance. When this database is updated, DTR reviews the indexed components for newly discovered vulnerabilities. +If you have subscribed to a webhook (see [Manage webhooks](../create-and-manage-webhooks.md)) +for scan completed/scan failed, then you will receive the results of the scan +as JSON at the specified endpoint. + Most scans complete within an hour, however larger repositories may take longer to scan depending on your system resources. @@ -58,8 +67,15 @@ To start a security scan: 2. Click the **Images** tab. 3. Locate the image tag that you want to scan. 4. In the **Vulnerabilities** column, click **Start a scan**. +![](../../images/scanning-images-1.png){: .with-border} -DTR begins the scanning process. You may need to refresh the page to see the +You can also start a scan from the image details screen: + +1. Click **View Details** on the desired image tag. +2. Click **Scan** on the right-hand side, above the layers table. +![](../../images/scanning-images-2.png){: .with-border} + +DTR begins the scanning process. You will need to refresh the page to see the results once the scan is complete.
## Change the scanning mode @@ -77,6 +93,7 @@ To change the repository scanning mode: 1. Navigate to the repository, and click the **Settings** tab. 2. Scroll down to the **Image scanning** section. 3. Select the desired scanning mode. +![](../../images/security-scanning-setup-5.png){: .with-border} ## View security scan results @@ -85,6 +102,7 @@ Once DTR has run a security scan for an image, you can view the results. The **Images** tab for each repository includes a summary of the most recent scan results for each image. +![](../../images/scanning-images-4.png){: .with-border} - A green shield icon with a check mark indicates that the scan did not find any vulnerabilities. - A red or orange shield icon indicates that vulnerabilities were found, and @@ -113,6 +131,8 @@ by the Dockerfile. > **Tip**: The layers view can be long, so be sure to scroll down if you don't immediately see the reported vulnerabilities. + ![](../../images/scanning-images-5.png){: .with-border} + - The **Components** view lists the individual component libraries indexed by the scanning system, in order of severity and number of vulnerabilities found, most vulnerable first. @@ -123,6 +143,7 @@ most vulnerable first. the scan report provides details on each one. The component details also include the license type used by the component, and the filepath to the component in the image. +![](../../images/scanning-images-6.png){: .with-border} ### What do I do next? 
diff --git a/datacenter/images/dtr.png b/datacenter/images/dtr.png index 3c7c6a5d5f..806e7940de 100644 Binary files a/datacenter/images/dtr.png and b/datacenter/images/dtr.png differ diff --git a/datacenter/images/try-ddc-1.png b/datacenter/images/try-ddc-1.png index 29f8fd5695..d8ceec84af 100644 Binary files a/datacenter/images/try-ddc-1.png and b/datacenter/images/try-ddc-1.png differ diff --git a/datacenter/images/try-ddc-2.png b/datacenter/images/try-ddc-2.png index 1dcb9da3d0..fff4d96b6a 100644 Binary files a/datacenter/images/try-ddc-2.png and b/datacenter/images/try-ddc-2.png differ diff --git a/datacenter/images/try-ddc-3.png b/datacenter/images/try-ddc-3.png index 84f456e96c..b821407c43 100644 Binary files a/datacenter/images/try-ddc-3.png and b/datacenter/images/try-ddc-3.png differ diff --git a/datacenter/images/ucp.png b/datacenter/images/ucp.png index 0bcea263ba..1d0cf0182e 100644 Binary files a/datacenter/images/ucp.png and b/datacenter/images/ucp.png differ diff --git a/datacenter/install/aws.md b/datacenter/install/aws.md index d51156ef75..5e16049bee 100644 --- a/datacenter/install/aws.md +++ b/datacenter/install/aws.md @@ -7,11 +7,10 @@ keywords: docker, datacenter, install, orchestration, management {% assign launch_url = "https://console.aws.amazon.com/cloudformation/home?#/stacks/new?templateURL=" %} {% assign template_url = "https://s3.amazonaws.com/packages.docker.com/caas/docker/docker_for_aws_ddc_2.1.0.json" %} -Docker Datacenter on Docker for AWS is a one-click deploy of highly-scalable -Docker Datacenter (Universal Control Plane and Docker Trusted Registry) based -on Docker and AWS best-practices. It is based on -[Docker for AWS](https://beta.docker.com/docs/) and currently should be used -for evaluation purposes only. +Docker Datacenter on Docker for AWS is a one-click deployment of DDC on +AWS.
It deploys multiple nodes with Docker CS Engine, and then installs +highly available versions of Universal Control Plane and Docker Trusted +Registry. ![ddc_aws.svg](/images/ddc_aws.svg) @@ -92,7 +91,7 @@ Docker Datacenter Password Docker Datacenter License in JSON format or an S3 URL to download it. You can get a trial license [here](https://store.docker.com/bundles/docker-datacenter) -#### EnableSystemPrune +**EnableSystemPrune** Enable if you want Docker for AWS to automatically cleanup unused space on your swarm nodes. @@ -104,19 +103,16 @@ Pruning removes the following: - All dangling images - All unused networks -#### EnableCloudWatchLogs -Enable if you want Docker to send your container logs to CloudWatch. ("yes", "no") Defaults to yes. - -#### WorkerDiskSize +**WorkerDiskSize** Size of Workers's ephemeral storage volume in GiB (20 - 1024). -#### WorkerDiskType +**WorkerDiskType** Worker ephemeral storage volume type ("standard", "gp2"). -#### ManagerDiskSize +**ManagerDiskSize** Size of Manager's ephemeral storage volume in GiB (20 - 1024) -#### ManagerDiskType +**ManagerDiskType** Manager ephemeral storage volume type ("standard", "gp2") @@ -131,11 +127,11 @@ above configuration options. - Click on **Launch Stack** below. This link will take you to AWS cloudformation portal. 
- [![Docker Datacenter on Docker for AWS](https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png)]({{ launch_url }}{{ template_url }}) + [![Docker Datacenter on Docker for AWS](https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png)]({{ launch_url }}{{ template_url }}){: .with-border} - Confirm your AWS Region that you'd like to launch this stack in (top right corner) - Provide the required parameters and click **Next** (see below) -![console_installation.png](../images/console_installation.png) +![console_installation.png](../images/console_installation.png){: .with-border} - **Confirm** and **Launch** - Once the stack is successfully created (it does take between 10-15 mins), click on **Output** tab to see the URLs of UCP and DTR. @@ -217,15 +213,16 @@ which is a highly optimized AMI built specifically for running Docker on AWS Once the stack is successfully created, you can access UCP and DTR URLs in the output tab as follows: -![insecure.png](../images/output.png) +![insecure.png](../images/output.png){: .with-border} When accessing UCP and DTR, log in using the username and password that you provided when you launched the cloudformation stack. You should see the below landing pages: -![ucp.png](../images/ucp.png) -![dtr.png](../images/dtr.png) +![ucp.png](../images/ucp.png){: .with-border} + +![dtr.png](../images/dtr.png){: .with-border} > Note: During the installation process, a self-signed certificate is generated for both UCP and DTR. You can replace these certificates with your own @@ -305,11 +302,11 @@ provides multiple advantages to easily deploy and access your application. ``` b. Notice the updated ELB configuration: - ![elb_listeners_update.png](../images/elb_listeners_update.png) + ![elb_listeners_update.png](../images/elb_listeners_update.png){: .with-border} c. 
Access your application using **DefaultExternalTarget** DNS and published port: - ![app.png](../images/app.png) + ![app.png](../images/app.png){: .with-border} 2. **Swarm Mode Routing Mesh** @@ -342,7 +339,7 @@ provides multiple advantages to easily deploy and access your application. docker service create -p 8080 \ --network ucp-hrm \ --name demo-hrm-app \ - --label com.docker.ucp.mesh.http=8080=http://foo.example.com \ + --label com.docker.ucp.mesh.http.8080=external_route=http://foo.example.com,internal_port=8080 \ ehazlett/docker-demo:dcus ``` @@ -385,12 +382,12 @@ created when filling out the CloudFormation template for Docker for AWS. Once you find it, click the checkbox, next to the name. Then Click on the "Edit" button on the lower detail pane. -![console_installation.png](../images/autoscale_update.png) +![console_installation.png](../images/autoscale_update.png){: .with-border} Change the "Desired" field to the size of the worker pool that you would like, and hit "Save". -![console_installation.png](../images/autoscale_save.png) +![console_installation.png](../images/autoscale_save.png){: .with-border} This will take a few minutes and add the new workers to your swarm automatically. To lower the number of workers back down, you just need to @@ -402,7 +399,7 @@ Go to the CloudFormation management page, and click the checkbox next to the stack you want to update. Then Click on the action button at the top, and select "Update Stack". -![console_installation.png](../images/cloudformation_update.png) +![console_installation.png](../images/cloudformation_update.png){: .with-border} Pick "Use current template", and then click "Next". 
Fill out the same parameters you have specified before, but this time, change your worker count to the new diff --git a/datacenter/ucp/1.1/overview.md b/datacenter/ucp/1.1/overview.md index f7b1756832..e3af209692 100644 --- a/datacenter/ucp/1.1/overview.md +++ b/datacenter/ucp/1.1/overview.md @@ -2,6 +2,8 @@ description: Learn about Docker Universal Control Plane, the enterprise-grade cluster management solution from Docker. keywords: docker, ucp, overview, orchestration, clustering +redirect_from: +- /ucp/overview/ title: Universal Control Plane overview --- @@ -64,4 +66,4 @@ the images you deploy have not been altered in any way. ## Where to go next * [Get started with UCP](install-sandbox.md) - * [UCP architecture](architecture.md) + * [UCP architecture](architecture.md) \ No newline at end of file diff --git a/datacenter/ucp/2.0/guides/configuration/integrate-with-ldap.md b/datacenter/ucp/2.0/guides/configuration/integrate-with-ldap.md index 85ced42440..de8d51a34f 100644 --- a/datacenter/ucp/2.0/guides/configuration/integrate-with-ldap.md +++ b/datacenter/ucp/2.0/guides/configuration/integrate-with-ldap.md @@ -8,6 +8,11 @@ title: Integrate with LDAP Docker UCP integrates with LDAP services, so that you can manage users from a single place. +When you switch from built-in authentication to LDAP authentication, +all manually created users whose usernames do not match any LDAP search results +become inactive, with the exception of the recovery admin user, who can still +log in with the recovery admin password. + ## Configure the LDAP integration To configure UCP to authenticate users using an LDAP service, go to @@ -89,5 +94,3 @@ You can also manually synchronize users by clicking the **Sync Now** button. When a user is removed from LDAP, that user becomes inactive after the LDAP synchronization runs. -Also, when you switch from the built-in authentication to using LDAP -authentication, all manually created users become inactive.
diff --git a/datacenter/ucp/2.0/guides/images/get-support-1.png b/datacenter/ucp/2.0/guides/images/get-support-1.png new file mode 100644 index 0000000000..142d6fe679 Binary files /dev/null and b/datacenter/ucp/2.0/guides/images/get-support-1.png differ diff --git a/datacenter/ucp/2.0/guides/support.md b/datacenter/ucp/2.0/guides/support.md index 64d093de84..8854069bce 100644 --- a/datacenter/ucp/2.0/guides/support.md +++ b/datacenter/ucp/2.0/guides/support.md @@ -15,10 +15,26 @@ If you need help, you can file a ticket via: Be sure to use your company email when filing tickets. -## Download a support dump +Docker Support engineers may ask you to provide a UCP support dump, which is an +archive that contains UCP system logs and diagnostic information. To obtain a +support dump: -Docker Support engineers may ask you to provide a UCP support dump. For this: - -1. Log into UCP with an administrator account. +## From the UI +1. Log into the UCP UI with an administrator account. 2. On the top-right menu, **click your username**, and choose **Support Dump**. + +![](images/get-support-1.png){: .with-border} + +## From the CLI + +To get the support dump from the CLI, use SSH to log into a UCP manager node +and run: + +```none +docker run --rm \ + --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.docker_image }} \ + support > docker-support.tgz +``` diff --git a/datacenter/ucp/2.0/guides/user-management/permission-levels.md b/datacenter/ucp/2.0/guides/user-management/permission-levels.md index 2016059d33..b4fa3de270 100644 --- a/datacenter/ucp/2.0/guides/user-management/permission-levels.md +++ b/datacenter/ucp/2.0/guides/user-management/permission-levels.md @@ -43,8 +43,7 @@ users can see the containers they deploy in the cluster. ## Team permission levels -Teams allow you to define fine-grain permissions to services, containers, and -networks that have the label `com.docker.ucp.access.label` applied to them. 
+Teams and labels give the administrator fine-grained control over permissions. Each team can have multiple labels, and each label has the key `com.docker.ucp.access.label`. Labels are applied to containers, services, networks, secrets, and volumes; they are not currently available for nodes and images. DTR has its own permissions. There are four permission levels: diff --git a/datacenter/ucp/2.1/guides/admin/backups-and-disaster-recovery.md b/datacenter/ucp/2.1/guides/admin/backups-and-disaster-recovery.md index 3d44878fdb..00d75ff1f7 100644 --- a/datacenter/ucp/2.1/guides/admin/backups-and-disaster-recovery.md +++ b/datacenter/ucp/2.1/guides/admin/backups-and-disaster-recovery.md @@ -14,29 +14,56 @@ The next step is creating a backup policy and disaster recovery plan. ## Backup policy As part of your backup policy you should regularly create backups of UCP. -To create a backup of UCP, use the `docker/ucp backup` command. This creates -a tar archive with the contents of the [volumes used by UCP](../architecture.md) -to persist data, and streams it to stdout. -You need to run the backup command on a UCP manager node. Since UCP stores -the same data on all manager nodes, you only need to create a backup of a -single node. +To create a UCP backup, you can run the `{{ page.docker_image }} backup` command +on a single UCP manager. This command creates a tar archive with the +contents of all the [volumes used by UCP](../architecture.md) to persist data +and streams it to stdout. + +You only need to run the backup command on a single UCP manager node. Since UCP +stores the same data on all manager nodes, you only need to take periodic +backups of a single manager node. To create a consistent backup, the backup command temporarily stops the UCP containers running on the node where the backup is being performed. User -containers and services are not affected by this.
+resources, such as services, containers, and stacks, are not affected by this +operation and will continue operating as expected. Any long-lasting `exec`, +`logs`, `events`, or `attach` operations on the affected manager node will +be disconnected. -To have minimal impact on your business, you should: +Additionally, if UCP is not configured for high availability, you will be +temporarily unable to: + +* Log in to the UCP Web UI +* Perform CLI operations using existing client bundles + +To minimize the impact of the backup policy on your business, you should: -* Schedule the backup to take place outside business hours. * Configure UCP for high availability. This allows load-balancing user requests -across multiple UCP controller nodes. +across multiple UCP manager nodes. +* Schedule the backup to take place outside business hours. ## Backup command -The example below shows how to create a backup of a UCP controller node: +The example below shows how to create a backup of a UCP manager node and +verify its contents: -```bash +```none +# Create a backup and store it in /tmp/backup.tar +$ docker run --rm -i --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.docker_image }} backup --interactive > /tmp/backup.tar + +# Ensure the backup is a valid tar and list its contents +# In a valid backup file, over 100 files should appear in the list +# and the `./ucp-node-certs/key.pem` file should be present +$ tar --list -f /tmp/backup.tar +``` + +A backup file may optionally be encrypted using a passphrase, as in the +following example: + +```none # Create a backup, encrypt it, and store it on /tmp/backup.tar $ docker run --rm -i --name ucp \ -v /var/run/docker.sock:/var/run/docker.sock \ @@ -47,71 +74,83 @@ $ docker run --rm -i --name ucp \ $ gpg --decrypt /tmp/backup.tar | tar --list ``` -## Restore command - -The example below shows how to restore a UCP controller node from an existing -backup: - -```bash -$ docker run --rm -i --name ucp \ - -v 
/var/run/docker.sock:/var/run/docker.sock \ - {{ page.docker_image }} restore < backup.tar -``` - -The restore command may also be invoked in interactive mode: - -```bash -$ docker run --rm -i --name ucp \ - -v /var/run/docker.sock:/var/run/docker.sock \ - -v /path/to/backup.tar:/config/backup.tar \ - {{ page.docker_image }} restore -i -``` - ## Restore your cluster The restore command can be used to create a new UCP cluster from a backup file. -After the restore operation is complete, the following data will be copied from -the backup file: +After the restore operation is complete, the following data will be recovered +from the backup file: -* Users, Teams and Permissions. -* Cluster Configuration, such as the default Controller Port or the KV store -timeout. -* DDC Subscription License. -* Options on Scheduling, Content Trust, Authentication Methods and Reporting. +* Users, teams, and permissions. +* All UCP configuration options available under `Admin Settings`, such as the +DDC subscription license, scheduling options, Content Trust, and authentication +backends. -The restore operation may be performed against any Docker Engine, regardless of -swarm membership, as long as the target Engine is not already managed by a UCP -installation. If the Docker Engine is already part of a swarm, that swarm and -all deployed containers and services will be managed by UCP after the restore -operation completes. +There are two ways to restore a UCP cluster: -As an example, if you have a cluster with three controller nodes, A, B, and C, -and your most recent backup was of node A: +* On a manager node of an existing swarm, which is not part of a UCP +installation. In this case, a UCP cluster will be restored from the backup. +* On a Docker Engine that is not participating in a swarm. In this case, a new +swarm will be created and UCP will be restored on top. -1. Uninstall UCP from the swarm using the `uninstall-ucp` operation. -2. 
Restore one of the swarm managers, such as node B, using the most recent - backup from node A. -3. Wait for all nodes of the swarm to become healthy UCP nodes. +In order to restore an existing UCP installation from a backup, you will need to +first uninstall UCP from the cluster by using the `uninstall-ucp` command. -You should now have your UCP cluster up and running. +The example below shows how to restore a UCP cluster from an existing backup +file, presumed to be located at `/tmp/backup.tar`: -Additionally, in the event where half or more controller nodes are lost and -cannot be recovered to a healthy state, the system can only be restored through -the following disaster recovery procedure. It is important to note that this -proceedure is not guaranteed to succeed with no loss of either swarm services or -UCP configuration data: +```none +$ docker run --rm -i --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.docker_image }} restore < /tmp/backup.tar +``` + +If the backup file is encrypted with a passphrase, you will need to provide the +passphrase to the restore operation: + +```none +$ docker run --rm -i --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.docker_image }} restore --passphrase "secret" < /tmp/backup.tar +``` + +The restore command may also be invoked in interactive mode, in which case the +backup file should be mounted to the container rather than streamed through +stdin: + +```none +$ docker run --rm -i --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + -v /tmp/backup.tar:/config/backup.tar \ + {{ page.docker_image }} restore -i +``` + +## Disaster recovery + +In the event where half or more manager nodes are lost and cannot be recovered +to a healthy state, the system is considered to have lost quorum and can only be +restored through the following disaster recovery procedure. 
+ +It is important to note that this procedure is not guaranteed to succeed with +no loss of running services or configuration data. To properly protect against +manager failures, the system should be configured for [high availability](configure/set-up-high-availability.md). 1. On one of the remaining manager nodes, perform `docker swarm init - --force-new-cluster`. This will instantiate a new single-manager swarm by - recovering as much state as possible from the existing manager. This is a - disruptive operation and any existing tasks will be either terminated or - suspended. + --force-new-cluster`. You may also need to specify an + `--advertise-addr` parameter, which is equivalent to the `--host-address` + parameter of the `docker/ucp install` operation. This will instantiate a new + single-manager swarm by recovering as much state as possible from the + existing manager. This is a disruptive operation and existing tasks may be + either terminated or suspended. 2. Obtain a backup of one of the remaining manager nodes if one is not already available. -3. Perform a restore operation on the recovered swarm manager node. -4. For all other nodes of the cluster, perform a `docker swarm leave --force` - and then a `docker swarm join` operation with the cluster's new join-token. -5. Wait for all nodes of the swarm to become healthy UCP nodes. +3. If UCP is still installed on the cluster, uninstall UCP using the + `uninstall-ucp` command. +4. Perform a restore operation on the recovered swarm manager node. +5. Log in to UCP and browse to the nodes page, or use the CLI `docker node ls` + command. +6. If any nodes are listed as `down`, you'll have to manually [remove these + nodes](../configure/scale-your-cluster.md) from the cluster and then re-join + them using a `docker swarm join` operation with the cluster's new join-token. 
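Because a recovery hinges on the backup archive being intact, it is worth sanity-checking it before you need it. The snippet below is an illustrative sketch only: `check_ucp_backup` is a hypothetical helper name, not a UCP command, and its checks follow the guidance earlier on this page that a valid backup lists well over 100 files, including `./ucp-node-certs/key.pem`.

```shell
# Hypothetical helper: sanity-check a UCP backup archive before a restore.
check_ucp_backup() {
  archive="$1"
  # Count the entries in the archive; a valid UCP backup has well over 100.
  entries=$(tar --list -f "$archive" | wc -l)
  if tar --list -f "$archive" | grep -q 'ucp-node-certs/key.pem' \
      && [ "$entries" -gt 100 ]; then
    echo "backup looks valid ($entries entries)"
  else
    echo "backup may be incomplete" >&2
    return 1
  fi
}

# Example: check_ucp_backup /tmp/backup.tar
```

For an encrypted backup, decrypt it first (for example with `gpg --decrypt`, as shown earlier) and run the check on the decrypted archive.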
## Where to go next diff --git a/datacenter/ucp/2.1/guides/admin/configure/add-sans-to-cluster.md b/datacenter/ucp/2.1/guides/admin/configure/add-sans-to-cluster.md new file mode 100644 index 0000000000..da3e143e0e --- /dev/null +++ b/datacenter/ucp/2.1/guides/admin/configure/add-sans-to-cluster.md @@ -0,0 +1,56 @@ +--- +title: Add SANs to cluster certificates +description: Learn how to add new SANs to cluster nodes, allowing you to connect to UCP with a different hostname +keywords: Docker, cluster, nodes, labels, certificates, SANs +--- + +UCP always runs with HTTPS enabled. When you connect to UCP, you need to make +sure that the hostname that you use to connect is recognized by UCP's +certificates. If, for instance, you put UCP behind a load balancer that +forwards its traffic to your UCP instance, your requests will be for the load +balancer's hostname or IP address, not UCP's. UCP will reject these requests +unless you include the load balancer's address as a Subject Alternative Name +(or SAN) in UCP's certificates. + +If you [use your own TLS certificates](use-your-own-tls-certificates.md), you +need to make sure that they have the correct SAN values. You can learn more +at the above link. + +If you want to use the self-signed certificate that UCP has out of the box, you +can set up the SANs when you install UCP with the `--san` argument. You can +also add them after installation. + +## Add new SANs to UCP after installation + +Log in with administrator credentials in the **UCP web UI**, navigate to the +**Nodes** page, and choose a node. + +Click the **Add SAN** button, and add one or more SANs to the node. + +![](../../images/add-sans-to-cluster-1.png){: .with-border} + +Once you're done, click **Save Changes**. + +You will have to do this on every manager node in the cluster, but once you +have done so, the SANs will be automatically applied to any new manager nodes +that join the cluster. 
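A node's SANs are stored in the `com.docker.ucp.SANs` node label as a comma-separated list, so building an updated label value is plain string handling. In the sketch below, the starting list mirrors the example label value used in this guide, and `example.com` is a placeholder hostname:

```shell
# Build the new label value by appending an extra SAN to the current list.
# current_sans mirrors the example label value from this guide;
# example.com is a placeholder hostname.
current_sans="default-cs,127.0.0.1,172.17.0.1"
new_sans="${current_sans},example.com"
echo "$new_sans"
```

The resulting string is what you would pass as the label value with `docker node update --label-add`, once per manager node.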
+ +You can also do this from the CLI by first running: + +```bash +{% raw %} +$ docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id> +default-cs,127.0.0.1,172.17.0.1 +{% endraw %} +``` + +This will get the current set of SANs for the given manager node. Append your +desired SAN to this list (e.g. `default-cs,127.0.0.1,172.17.0.1,example.com`) +and then run: + +```bash +$ docker node update --label-add com.docker.ucp.SANs=<SANs-list> <node-id> +``` + +`<SANs-list>` is the list of SANs with your new SAN appended at the end. As in +the web UI, you must do this for every manager node. diff --git a/datacenter/ucp/2.1/guides/admin/configure/external-auth/index.md b/datacenter/ucp/2.1/guides/admin/configure/external-auth/index.md index c09465c97f..5326edb8a3 100644 --- a/datacenter/ucp/2.1/guides/admin/configure/external-auth/index.md +++ b/datacenter/ucp/2.1/guides/admin/configure/external-auth/index.md @@ -1,63 +1,109 @@ --- -description: Learn how to integrate UCP with an LDAP service, so that you can manage - users from a single place. -keywords: LDAP, authentication, user management -title: Integrate with LDAP +description: Learn how to integrate UCP with an LDAP service, so that you can + manage users from a single place. +keywords: LDAP, directory, authentication, user management +title: Integrate with an LDAP Directory --- -Docker UCP integrates with LDAP services, so that you can manage users from a -single place. +Docker UCP integrates with LDAP directory services, so that you can manage +users and groups from your organization's directory, and that information is +automatically propagated to UCP and DTR. + +When you switch from built-in authentication to LDAP authentication, +all manually created users whose usernames do not match any LDAP search results +become inactive, with the exception of the recovery admin user, who can still +log in with the recovery admin password. 
## Configure the LDAP integration -To configure UCP to authenticate users using an LDAP service, go to -the **UCP web UI**, navigate to the **Settings** page, and click the **Auth** -tab. +To configure UCP to create and authenticate users using an LDAP directory, +go to the **UCP web UI**, navigate to the **Settings** page, and click the +**Auth** tab. ![](../../../images/ldap-integration-1.png){: .with-border} -Then configure your LDAP integration. +Then configure your LDAP directory integration. **Authentication** -| Field | Description | -|:-------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Method | The method used to authenticate users. Managed authentication uses the UCP built-in authentication mechanism. LDAP uses an LDAP service to authenticate users. | -| Default permission for newly discovered accounts | The permission level assigned by default to a new user. Learn more about default permission levels. | +| Field | Description | +|:-------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Method | The method used to create and authenticate users. The *LDAP* method uses a remote directory server to automatically create users and all logins will be forwarded to the directory server. | +| Default permission for newly discovered accounts | The permission level assigned by default to a new user. [Learn more about default permission levels](../../manage-users/permission-levels.md). 
| **LDAP server configuration** -| Field | Description | -|:------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------| -| LDAP server URL | The URL where the LDAP server can be reached. | -| Recovery admin username | The username for a recovery user that can access UCP even when the integration with LDAP is misconfigured or the LDAP server is offline. | -| Recovery admin password | The password for the recovery user. | -| Reader DN | The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice this should be an LDAP read-only user. | -| Reader password | The password of the account used for searching entries in the LDAP server. | +| Field | Description | +|:------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| LDAP server URL | The URL where the LDAP server can be reached. | +| Recovery admin username | The username for a recovery user that can access UCP even when the integration with LDAP is misconfigured or the LDAP server is offline. | +| Recovery admin password | The password for the recovery user which is securely salted and hashed and stored in UCP. The recovery admin user can use this password to login if the LDAP server is misconfigured or offline. | +| Reader DN | The distinguished name of the LDAP account used for searching entries in the LDAP server. As a best practice this should be an LDAP read-only user. | +| Reader password | The password of the account used for searching entries in the LDAP server. 
| **LDAP security options** -| Field | Description | -|:----------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------| -| Skip verification of server certificate | Whether to verify or not the LDAP server certificate when using TLS. The connection is still encrypted, but vulnerable to man-in-the-middle attacks. | -| Use StartTLS | Whether to connect to the LDAP server using TLS or not. If you set the LDAP Server URL field with `ldaps://`, this field is ignored. | +| Field | Description | +|:----------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Skip verification of server certificate | Whether to verify the LDAP server certificate when using TLS. The connection is still encrypted, but vulnerable to man-in-the-middle attacks. | +| Use StartTLS | Whether to authenticate/encrypt the connection after connecting to the LDAP server over TCP. If you set the LDAP Server URL field with `ldaps://`, this field is ignored. | **User search configurations** -| Field | Description | -|:--------------------|:---------------------------------------------------------------------------------------------------------------------------------------| -| Base DN | The distinguished name on the LDAP tree where the search should start looking for users. | -| Username attribute | The LDAP attribute to use as username on UCP. | -| Full name attribute | The LDAP attribute to use as user name on UCP. | -| Filter | The LDAP search filter used to find LDAP users. If you leave this field empty, all LDAP entries on the Base DN, are imported as users. | -| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. 
| +| Field | Description | +|:------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. | +| Username attribute | The LDAP attribute to use as username on UCP. Only user entries with a valid username will be created. A valid username is no longer than 100 characters and does not contain any unprintable characters, whitespace characters, or any of the following characters: `/` `\` `[` `]` `:` `;` `|` `=` `,` `+` `*` `?` `<` `>` `'` `"`. | +| Full name attribute | The LDAP attribute to use as the user's full name for display purposes. If left empty, UCP will not create new users with a full name value. | +| Filter | The LDAP search filter used to find users. If you leave this field empty, all directory entries in the search scope with valid username attributes are created as users. | +| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. | +| Match group members | Whether to further filter users by selecting those who are also members of a specific group on the directory server. This feature is helpful if the LDAP server does not support `memberOf` search filters. 
| +| Iterate through group members | If `Match Group Members` is selected, this option searches for users by first iterating over the target group's membership and makes a separate LDAP query for each member, as opposed to first querying for all users which match the above search query and intersecting those with the set of group members. This option can be more efficient in situations where the number of members of the target group is significantly smaller than the number of users which would match the above search filter or if your directory server does not support simple pagination of search results. | +| Group DN | If `Match Group Members` is selected, this specifies the distinguished name of the group from which to select users. | +| Group member attribute | If `Match Group Members` is selected, the value of this group attribute corresponds to the distinguished names of the members of the group. | + +![](../../../images/ldap-integration-2.png){: .with-border} + +Clicking **+ Add another user search configuration** will expand additional +sections for configuring more user search queries. This is useful in cases +where users may be found in multiple distinct subtrees of your organization's +directory. Any user entry which matches at least one of the search +configurations will be synced as a user. **Advanced LDAP configuration** -| Field | Description | -|:---------------------------|:----------------------------------------------------| -| No simple pagination | If your LDAP server doesn't support pagination. | -| Enable sync of admin users | Whether to import LDAP users as UCP administrators. | +| Field | Description | +|:---------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| No simple pagination | If your LDAP server doesn't support pagination. 
| +| Enable sync of admin users | Whether to import LDAP users as UCP administrators. | +| LDAP Match Method | If admin user sync is enabled, this option specifies whether to match admin user entries using a search query or by selecting them as members from a group. For the expanded options, refer to the options described below. | + + +**Match LDAP Group Members** + +This option specifies that system admins should be synced directly with members +of a group in your organization's LDAP directory. The admins will be synced to +match the membership of the group. The configured recovery admin user will also +remain a system admin. + +| Field | Description | +|:-----------------------|:------------------------------------------------------------------------------------------------------| +| Group DN | This specifies the distinguished name of the group from which to select users. | +| Group member attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. | + +**Match LDAP Search Results** + +This option specifies that system admins should be synced using a search query +against your organization's LDAP directory. The admins will be synced to match +the users in the search results. The configured recovery admin user will also +remain a system admin. + +| Field | Description | +|:--------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------| +| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. | +| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. | +| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope will be added as members of the team. 
| + **Sync configuration** @@ -67,10 +113,10 @@ Then configure your LDAP integration. **Test LDAP connection** -| Field | Description | -|:-------------------|:---------------------------------------------------------------------| -| LDAP test username | An LDAP user to test that the configuration is correctly configured. | -| LDAP test password | The password of the LDAP user. | +| Field | Description | +|:---------|:-----------------------------------------------------------------------------------------------------------------------------------------------------| +| Username | The username with which the user will log in to this application. This value should correspond to the Username Attribute specified in the form above. | +| Password | The user's password used to authenticate (BIND) to the directory server. | Before you save the configuration changes, you should test that the integration is correctly configured. You can do this by providing the credentials of an @@ -79,9 +125,9 @@ LDAP user, and clicking the **Test** button. ## Synchronize users Once you've configure the LDAP integration, UCP synchronizes users based on the -interval you've defined. When the synchronization runs, UCP stores logs that -can help you troubleshoot when something goes wrong. - +interval you've defined starting at the top of the hour. When the +synchronization runs, UCP stores logs that can help you troubleshoot when +something goes wrong. You can also manually synchronize users by clicking the **Sync Now** button. @@ -89,5 +135,20 @@ You can also manually synchronize users by clicking the **Sync Now** button. When a user is removed from LDAP, that user becomes inactive after the LDAP synchronization runs. + Also, when you switch from the built-in authentication to using LDAP -authentication, all manually created users become inactive. 
+authentication, all manually created users whose usernames do not match any +LDAP search results become inactive, with the exception of the recovery admin +user, who can still log in with the recovery admin password. + +## Data synced from your organization's LDAP directory + +UCP saves a minimum amount of user data required to operate. This includes +the value of the username and full name attributes that you have specified in +the configuration as well as the distinguished name of each synced user. +UCP does not query or store any additional data from the directory server. + +## Syncing Teams + +For syncing teams in UCP with a search query or group in your organization's +LDAP directory, refer to [the documentation on creating and managing teams](../../manage-users/create-and-manage-teams.md). diff --git a/datacenter/ucp/2.1/guides/admin/configure/scale-your-cluster.md b/datacenter/ucp/2.1/guides/admin/configure/scale-your-cluster.md index e48dd1629c..34c162f8e5 100644 --- a/datacenter/ucp/2.1/guides/admin/configure/scale-your-cluster.md +++ b/datacenter/ucp/2.1/guides/admin/configure/scale-your-cluster.md @@ -9,7 +9,7 @@ Docker UCP is designed for scaling horizontally as your applications grow in size and usage. You can add or remove nodes from the UCP cluster to make it scale to your needs. -![](../../images/scale-your-cluster-1.svg) +![](../../images/scale-your-cluster-0.svg) Since UCP leverages the clustering functionality provided by Docker Engine, you use the [docker swarm join](/engine/swarm/swarm-tutorial/add-nodes.md) @@ -47,7 +47,7 @@ the **Resources** page, and go to the **Nodes** section. Click the **Add Node button** to add a new node. 
Also, set the 'Use a custom listen address' option to specify the IP of the @@ -56,12 +56,36 @@ host that you'll be joining to the cluster. Then you can copy the command displayed, use ssh to **log into the host** that you want to join to the cluster, and **run the command** on that host. -![](../../images/scale-your-cluster-3.png){: .with-border} +![](../../images/scale-your-cluster-2.png){: .with-border} After you run the join command in the node, the node starts being displayed in UCP. -## Pause, drain, and remove nodes +## Remove nodes from the cluster + +1. If the target node is a manager, you will need to first demote the node into + a worker before proceeding with the removal: + * From the UCP web UI, navigate to the **Resources** section and then go to + the **Nodes** page. Select the node you wish to remove and switch its role + to **Worker**, wait until the operation is completed and confirm that the + node is no longer a manager. + * From the CLI, perform `docker node ls` and identify the nodeID or hostname + of the target node. Then, run `docker node demote `. + +2. If the status of the worker node is `Ready`, you'll need to manually force + the node to leave the swarm. To do this, connect to the target node through + SSH and run `docker swarm leave --force` directly against the local docker + engine. Warning: do not perform this step if the node is still a manager, as + that may cause loss of quorum. + +3. Now that the status of the node is reported as `Down`, you may remove the + node: + * From the UCP web UI, browse to the **Nodes** page, select the node and + click on the **Remove Node** button. You will need to click on the button + again within 5 seconds to confirm the operation. + * From the CLI, perform `docker node rm ` + +## Pause and drain nodes Once a node is part of the cluster you can change its role making a manager node into a worker and vice versa. 
You can also configure the node availability @@ -72,12 +96,52 @@ so that it is: * Drained: the node won't receive new tasks. Existing tasks are stopped and replica tasks are launched in active nodes. -![](../../images/scale-your-cluster-4.png){: .with-border} +![](../../images/scale-your-cluster-3.png){: .with-border} If you're load-balancing user requests to UCP across multiple manager nodes, when demoting those nodes into workers, don't forget to remove them from your load-balancing pool. +## Scale your cluster from the CLI + +You can also use the command line to do all of the above operations. To get the +join token, run the following command on a manager node: + +```none +$ docker swarm join-token worker +``` + +If you want to add a new manager node instead of a worker node, use +`docker swarm join-token manager` instead. If you want to use a custom listen +address, add the `--listen-addr` arg: + +```none +docker swarm join \ + --token SWMTKN-1-2o5ra9t7022neymg4u15f3jjfh0qh3yof817nunoioxa9i7lsp-dkmt01ebwp2m0wce1u31h6lmj \ + --listen-addr 234.234.234.234 \ + 192.168.99.100:2377 +``` + +Once your node is added, you can see it by running `docker node ls` on a manager: + +```none +$ docker node ls +``` + +To change the node's availability, use: + +``` +$ docker node update --availability drain node2 +``` + +You can set the availability to `active`, `pause`, or `drain`. + +To remove the node, use: + +``` +$ docker node rm +``` + ## Where to go next * [Use your own TLS certificates](use-your-own-tls-certificates.md) diff --git a/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services.md b/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services.md index c8c5653935..1c6ed0ca58 100644 --- a/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services.md +++ b/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services.md @@ -183,7 +183,7 @@ domains should be redirected to it. 
For example, a website that has been renamed might use this functionality. The following labels accomplish this for `new.example.com` and `old.example.com` -* `com.docker.ucp.mesh.http.1=external_route=http://old.example.com.com,redirect=http://new.example.com` +* `com.docker.ucp.mesh.http.1=external_route=http://old.example.com,redirect=http://new.example.com` * `com.docker.ucp.mesh.http.2=external_route=http://new.example.com` ## Troubleshoot @@ -192,4 +192,4 @@ If a service is not configured properly for use of the HTTP routing mesh, this information is available in the UI when inspecting the service. More logging from the HTTP routing mesh is available in the logs of the -`ucp-controller` containers on your UCP controller nodes. +`ucp-controller` containers on your UCP manager nodes. diff --git a/datacenter/ucp/2.1/guides/admin/install/index.md b/datacenter/ucp/2.1/guides/admin/install/index.md index c39982ce0f..90f7f043fd 100644 --- a/datacenter/ucp/2.1/guides/admin/install/index.md +++ b/datacenter/ucp/2.1/guides/admin/install/index.md @@ -82,7 +82,7 @@ Now that UCP is installed, you need to license it. In your browser, navigate to the UCP web UI, login with your administrator credentials and upload your license. -![](../../images/install-production-1.png){: .with-border} +![](../../../../../images/try-ddc-1.png){: .with-border} If you're registered in the beta program and don't have a license yet, you can get it from your [Docker Store subscriptions](https://store.docker.com/?overlay=subscriptions). @@ -101,11 +101,11 @@ for worker nodes to execute. To join manager nodes to the swarm, go to the **UCP web UI**, navigate to the **Resources** page, and go to the **Nodes** section. -![](../../images/install-production-2.png){: .with-border} +![](../../images/step-6-one-node.png){: .with-border} Click the **Add Node button** to add a new node. 
-![](../../images/install-production-3.png){: .with-border} +![](../../../../../images/try-ddc-3.png){: .with-border} Check the 'Add node as a manager' to turn this node into a manager and replicate UCP for high-availability. @@ -119,7 +119,7 @@ can reach it. For each manager node that you want to join to UCP, login into that node using ssh, and run the join command that is displayed on UCP. -![](../../images/install-production-4.png){: .with-border} +![](../../images/step-6-two-nodes.png){: .with-border} After you run the join command in the node, the node starts being displayed in UCP. diff --git a/datacenter/ucp/2.1/guides/admin/install/plan-installation.md b/datacenter/ucp/2.1/guides/admin/install/plan-installation.md index 5e1b860eef..82a3b25971 100644 --- a/datacenter/ucp/2.1/guides/admin/install/plan-installation.md +++ b/datacenter/ucp/2.1/guides/admin/install/plan-installation.md @@ -53,7 +53,7 @@ cause poor performance or even failures. ## Load balancing strategy Docker UCP does not include a load balancer. You can configure your own -load balancer to balance user requests across all controller nodes. +load balancer to balance user requests across all manager nodes. If you plan on using a load balancer, you need to decide whether you are going to add the nodes to the load balancer using their IP address, or their FQDN. @@ -83,10 +83,10 @@ need to have a certificate bundle that has: * A ca.pem file with the root CA public certificate, * A cert.pem file with the server certificate and any intermediate CA public certificates. This certificate should also have SANs for all addresses used to -reach the UCP controller, +reach the UCP manager, * A key.pem file with server private key. -You can have a certificate for each controller, with a common SAN. As an +You can have a certificate for each manager, with a common SAN. 
As an example, on a three node cluster you can have: * node1.company.example.org with SAN ucp.company.org @@ -94,9 +94,9 @@ example, on a three node cluster you can have: * node3.company.example.org with SAN ucp.company.org Alternatively, you can also install UCP with a single externally-signed -certificate for all controllers rather than one for each controller node. +certificate for all managers rather than one for each manager node. In that case, the certificate files will automatically be copied to any new -controller nodes joining the cluster or being promoted into controllers. +manager nodes joining the cluster or being promoted into managers. ## Where to go next diff --git a/datacenter/ucp/2.1/guides/admin/install/system-requirements.md b/datacenter/ucp/2.1/guides/admin/install/system-requirements.md index d14d7e3c1a..ca6dc50b20 100644 --- a/datacenter/ucp/2.1/guides/admin/install/system-requirements.md +++ b/datacenter/ucp/2.1/guides/admin/install/system-requirements.md @@ -54,6 +54,14 @@ Docker Datacenter is a software subscription that includes 3 products: [Learn more about the maintenance lifecycle for these products](http://success.docker.com/Get_Help/Compatibility_Matrix_and_Maintenance_Lifecycle). +## Version compatibility + +UCP 2.1 requires minimum versions of the following Docker components: + +- Docker Engine 1.13.0 +- Docker Remote API 1.25 +- Compose 1.9 + ## Where to go next * [UCP architecture](../../architecture.md) diff --git a/datacenter/ucp/2.1/guides/admin/install/uninstall.md b/datacenter/ucp/2.1/guides/admin/install/uninstall.md index bdf3495329..caa0536c7b 100644 --- a/datacenter/ucp/2.1/guides/admin/install/uninstall.md +++ b/datacenter/ucp/2.1/guides/admin/install/uninstall.md @@ -5,21 +5,25 @@ title: Uninstall UCP --- Docker UCP is designed to scale as your applications grow in size and usage. -You can [add and remove nodes](../configure/scale-your-cluster.md) from the cluster, to make -it scale to your needs. 
+You can [add and remove nodes](../configure/scale-your-cluster.md) from the +cluster, to make it scale to your needs. You can also uninstall Docker Universal Control plane from your cluster. In this case the UCP services are stopped and removed, but your Docker Engines will continue running in swarm mode. You applications will continue running normally. +If you wish to remove a single node from the UCP cluster, you should instead +[Remove that node from the cluster](../configure/scale-your-cluster.md). + After you uninstall UCP from the cluster, you'll no longer be able to enforce role-based access control to the cluster, or have a centralized way to monitor and manage the cluster. -After uninstalling UCP from the cluster, you will no longer be able to -join new nodes using `docker swarm join` unless you reinstall UCP. +After uninstalling UCP from the cluster, you will no longer be able to join new +nodes using `docker swarm join` unless you reinstall UCP. -To uninstall UCP, log in into a manager node using ssh, and run: +To uninstall UCP, log in to a manager node using ssh, and run the following +command: ```bash $ docker run --rm -it \ @@ -29,9 +33,20 @@ $ docker run --rm -it \ ``` This runs the uninstall command in interactive mode, so that you are prompted -for any necessary configuration values. -[Check the reference documentation](../../../reference/cli/index.md) to learn the options -available in the `uninstall-ucp` command. +for any necessary configuration values. Running this command on a single manager +node will uninstall UCP from the entire cluster. [Check the reference +documentation](../../../reference/cli/index.md) to learn the options available +in the `uninstall-ucp` command. 
+ +## Swarm mode CA + +After uninstalling UCP, the nodes in your cluster will still be in swarm mode, but you cannot +join new nodes until you reinstall UCP, because swarm mode was relying on UCP to provide the +CA certificates that allow nodes in the cluster to identify each other. Additionally, since +swarm mode is no longer controlling its own certificates, if the certificates expire after +you uninstall UCP the nodes in the cluster will not be able to communicate at all. To fix this, +either reinstall UCP before the certificates expire or disable swarm mode by running +`docker swarm leave --force` on every node. ## Where to go next diff --git a/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-teams.md b/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-teams.md index 8029aea77d..fbce8e8b7c 100644 --- a/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-teams.md +++ b/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-teams.md @@ -1,14 +1,15 @@ --- -description: Learn how to create and manage user permissions, using teams in your - Docker Universal Control Plane cluster. -keywords: authorize, authentication, users, teams, UCP, Docker +description: Learn how to create and manage user permissions, using teams in + your Docker Universal Control Plane cluster. +keywords: authorize, authentication, users, teams, groups, sync, UCP, Docker title: Create and manage teams --- You can extend the user's default permissions by granting them fine-grain permissions over resources. You do this by adding the user to a team. A team defines the permissions users have for resources that have the label -`com.docker.ucp.access.label` applied to them. +`com.docker.ucp.access.label` applied to them. Keep in mind that a label can +be applied to multiple teams with different permission levels. To create a new team, go to the **UCP web UI**, and navigate to the **Users & Teams** page. 
@@ -27,6 +28,47 @@ Then choose the list of users that you want to add to the team. ![](../../images/create-and-manage-teams-3.png){: .with-border} +## Sync team members with your organization's LDAP directory + +If UCP is configured to sync users with your organization's LDAP directory +server, you will have the option to enable syncing the new team's members when +creating a new team or when modifying settings of an existing team. +[Learn how to configure integration with an LDAP directory](../configure/external-auth/index.md). +Enabling this option will expand the form with additional fields for configuring +the sync of team members. + +![](../../images/create-and-manage-teams-5.png){: .with-border} + +There are two methods for matching group members from an LDAP directory: + +**Match LDAP Group Members** + +This option specifies that team members should be synced directly with members +of a group in your organization's LDAP directory. The team's membership will be +synced to match the membership of the group. + +| Field | Description | +|:-----------------------|:------------------------------------------------------------------------------------------------------| +| Group DN | This specifies the distinguished name of the group from which to select users. | +| Group member attribute | The value of this group attribute corresponds to the distinguished names of the members of the group. | + +**Match LDAP Search Results** + +This option specifies that team members should be synced using a search query +against your organization's LDAP directory. The team's membership will be +synced to match the users in the search results. + +| Field | Description | +|:--------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------| +| Base DN | The distinguished name of the node in the directory tree where the search should start looking for users. 
| +| Search scope | Whether to perform the LDAP search on a single level of the LDAP tree, or search through the full LDAP tree starting at the Base DN. | +| Search Filter | The LDAP search filter used to find users. If you leave this field empty, all existing users in the search scope will be added as members of the team. | + +**Immediately Sync Team Members** + +Select this option to immediately run an LDAP sync operation after saving the +configuration for the team. It may take a moment before the members of the team +are fully synced. ## Manage team permissions diff --git a/datacenter/ucp/2.1/guides/admin/manage-users/permission-levels.md b/datacenter/ucp/2.1/guides/admin/manage-users/permission-levels.md index 6a86b5d6a4..47a375d246 100644 --- a/datacenter/ucp/2.1/guides/admin/manage-users/permission-levels.md +++ b/datacenter/ucp/2.1/guides/admin/manage-users/permission-levels.md @@ -1,6 +1,6 @@ --- -description: Learn about the permission levels available in Docker Universal Control - Plane. +description: Learn about the permission levels available in Docker Universal + Control Plane. keywords: authorization, authentication, users, teams, UCP title: Permission levels --- @@ -34,13 +34,11 @@ access to full control over the resources. | `Restricted Control` | The user can view and edit volumes, networks, and images. They can create containers, but can't see other users containers, run `docker exec`, or run containers that require privileged access to the host. | | `Full Control` | The user can view and edit volumes, networks, and images, They can create containers without any restriction, but can't see other users containers. | -When a user only has a default permission assigned, only them and admin -users can see the containers they deploy in the cluster. +If a user has Restricted Control or Full Control default permissions, they can create resources without labels, and only the user and Admins can see and access the resources. 
Default permissions also affect a user's ability to access things that can't have labels: images and nodes. ## Team permission levels -Teams allow you to define fine-grain permissions to services, containers, and -networks that have the label `com.docker.ucp.access.label` applied to them. +Teams and labels give the administrator fine-grained control over permissions. Each team can have multiple labels. Each label has a key of `com.docker.ucp.access.label`. The label is then applied to the containers, services, networks, secrets and volumes. Labels are not currently available for nodes and images. DTR has its own permissions. There are four permission levels: @@ -55,3 +53,4 @@ There are four permission levels: * [Create and manage users](create-and-manage-users.md) * [Create and manage teams](create-and-manage-teams.md) +* [Docker Reference Architecture: Securing Docker Datacenter and Security Best Practices](https://success.docker.com/KBase/Docker_Reference_Architecture%3A_Securing_Docker_Datacenter_and_Security_Best_Practices) diff --git a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/index.md b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/index.md index cefc0d5dee..e4095a4c32 100644 --- a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/index.md +++ b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/index.md @@ -1,70 +1,65 @@ --- +title: Monitor the cluster status description: Monitor your Docker Universal Control Plane installation, and learn how to troubleshoot it. keywords: Docker, UCP, troubleshoot -title: Monitor your cluster --- -This article gives you an overview of how to monitor your Docker UCP. +You can monitor the status of UCP by using the web UI or the CLI. +You can also use the `_ping` endpoint to build monitoring automation. -## Check the cluster status from the UI -To monitor your UCP cluster, the first thing to check is the **Nodes** -screen on the UCP web app. 
+The first place to check the status of UCP is the **UCP web UI**, since it +shows warnings for situations that require your immediate attention. +Administrators might see more warnings than regular users. + +![UCP dashboard](../../images/monitor-ucp-0.png){: .with-border} + +You can also navigate to the **Nodes** page, to see if all the nodes +managed by UCP are healthy or not. ![UCP dashboard](../../images/monitor-ucp-1.png){: .with-border} -In the nodes screen you can see if all the nodes in the cluster are healthy, or -if there is any problem. +Each node has a status message explaining any problems with the node. +[Learn more about node status](troubleshoot-node-messages.md). -If you're an administrator you can also check the state and logs of the -UCP internal services. -To check the state of the `ucp-agent` service, navigate to the **Services** page -and toggle the **Show system services** option. - -![](../../images/monitor-ucp-2.png){: .with-border} - -The `ucp-agent` service monitors the node where it is running, deploys other -UCP internal components, and ensures they keep running. The UCP components that -are deployed on a node, depend on whether the node is a manager or worker. -[Learn more about the UCP architecture](../../architecture.md) - -To check the state and logs of other UCP internal components, go to the -**Containers** page, and appply the **System containers** filter. -This can help validate that all UCP internal components are up and running. - -![](../../images/monitor-ucp-3.png){: .with-border} - -It's normal for the `ucp-reconcile` to be stopped. This container only runs when -the `ucp-agent` detects that a UCP internal component should be running but for -some reason it's not. In this case the `ucp-agent` starts the `ucp-reconcile` -service to start all UCP services that need to be running. Once that is done, -the `ucp-agent` stops. 
- -## Check the cluster status from the CLI +## Check status from the CLI You can also monitor the status of a UCP cluster using the Docker CLI client. -There are two ways to do this, using a -[client certificate bundle](../../user/access-ucp/cli-based-access.md), or logging into -one of the manager nodes using ssh. +[Download a UCP client certificate bundle](../../user/access-ucp/cli-based-access.md) +and then run: -Then you can use regular Docker CLI commands to check the status and logs -of the [UCP internal services and containers](../../architecture.md). +```none +$ docker node ls +``` -## Automated status checking +As a rule of thumb, if the status message starts with `[Pending]`, then the +current state is transient and the node is expected to correct itself back +into a healthy state. [Learn more about node status](troubleshoot-node-messages.md). -You can use the `https://<ucp-manager-url>/_ping` endpoint to perform automated -monitoring tasks. When you access this endpoint, UCP validates that all its -internal components are working, and returns the following HTTP error codes: + +## Monitoring automation + +You can use the `https://<ucp-manager-url>/_ping` endpoint to check the health +of a single UCP manager node. When you access this endpoint, the UCP manager +validates that all its internal components are working, and returns one of the +following HTTP status codes: * 200, if all components are healthy * 500, if one or more components are not healthy +If an administrator client certificate is used as a TLS client certificate for +the `_ping` endpoint, a detailed error message is returned if any component is +unhealthy. 
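A minimal automation sketch of the status-code contract above. Only the 200/500 semantics come from the documentation; the `ucp_healthy` helper, its name, and the URL handling are illustrative, and a real deployment would also need to handle TLS verification for self-signed UCP certificates:

```python
import urllib.error
import urllib.request

def interpret_ping(status_code):
    # The _ping endpoint returns 200 when every internal component is
    # healthy, and 500 when one or more components are not.
    return status_code == 200

def ucp_healthy(manager_url, timeout=5):
    # Query a single manager directly (not a load balancer), since any
    # manager behind a load balancer might serve the request.
    try:
        with urllib.request.urlopen(manager_url + "/_ping",
                                    timeout=timeout) as resp:
            return interpret_ping(resp.status)
    except urllib.error.HTTPError as err:
        # urllib raises HTTPError for a 500 response; treat it as unhealthy.
        return interpret_ping(err.code)
```

Such a probe can be scheduled against every manager node individually to pinpoint which one is unhealthy.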
+
+If you're accessing the `_ping` endpoint through a load balancer, you'll have no
+way of knowing which UCP manager node is not healthy, since any manager node
+might be serving your request. Make sure you're connecting directly to the
+URL of a manager node, and not a load balancer.
## Where to go next
* [Troubleshoot with logs](troubleshoot-with-logs.md)
+* [Troubleshoot node states](./troubleshoot-node-messages.md)
diff --git a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations.md b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations.md
index 01a29bd7d4..5528e070b8 100644
--- a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations.md
+++ b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-configurations.md
@@ -1,22 +1,26 @@
---
+title: Troubleshoot cluster configurations
description: Learn how to troubleshoot your Docker Universal Control Plane cluster.
keywords: ectd, rethinkdb, key, value, store, database, ucp
-title: Troubleshoot cluster configurations
---
-Docker UCP persists configuration data on an [etcd](https://coreos.com/etcd/)
+UCP automatically tries to heal itself by monitoring its internal
+components and trying to bring them to a healthy state.
+
+In most cases, if a single UCP component is persistently in a
+failed state, you should be able to restore the cluster to a healthy state by
+removing the unhealthy node from the cluster and joining it again.
+[Learn how to remove and join nodes](../configure/scale-your-cluster.md).
+
+## Troubleshoot the etcd key-value store
+
+UCP persists configuration data on an [etcd](https://coreos.com/etcd/)
key-value store and [RethinkDB](https://rethinkdb.com/) database that are
replicated on all manager nodes of the UCP cluster. These data stores are for
internal use only, and should not be used by other applications.
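For troubleshooting you can still query the key-value store directly, as the
next section shows. The `KV_URL` used there can be derived from a client
certificate bundle's `DOCKER_HOST`. A sketch, assuming UCP's default key-value
store port of `12379` on manager nodes — the hostname in the example is a
placeholder:

```shell
# Derive the key-value store URL from a client bundle's DOCKER_HOST
# (e.g. tcp://manager-1.example.com:443), assuming etcd listens on 12379.
kv_url_from_docker_host() {
  host=$(echo "$1" | cut -f3 -d/ | cut -f1 -d:)
  echo "https://${host}:12379"
}

KV_URL=$(kv_url_from_docker_host "tcp://manager-1.example.com:443")
echo "$KV_URL"

# With a live cluster, query it using the bundle's TLS material:
# curl -s --cert "$DOCKER_CERT_PATH/cert.pem" --key "$DOCKER_CERT_PATH/key.pem" \
#      --cacert "$DOCKER_CERT_PATH/ca.pem" "$KV_URL/v2/keys" | jq "."
```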
-This article shows how you can access the key-value store and database, for
-troubleshooting configuration problems in your cluster.
-
-## etcd Key-Value Store
-
-### Using the REST API
-
-In this example we'll be using `curl` for making requests to the key-value
+### With the HTTP API
+In this example we'll use `curl` to make requests to the key-value
store REST API, and `jq` to process the responses.
You can install these tools on a Ubuntu distribution by running:
@@ -41,19 +45,16 @@ $ curl -s \
${KV_URL}/v2/keys | jq "."
```
-To learn more about the key-value store rest API check the
+To learn more about the key-value store REST API, check the
[etcd official documentation](https://coreos.com/etcd/docs/latest/).
-
-### Using a CLI client
+### With the CLI client
The containers running the key-value store, include `etcdctl`, a command
line client for etcd. You can run it using the `docker exec` command.
The examples below assume you are logged in with ssh into a UCP manager node.
-#### Check the health of the etcd cluster
-
```bash
$ docker exec -it ucp-kv etcdctl \
--endpoint https://127.0.0.1:2379 \
diff --git a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-node-messages.md b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-node-messages.md
new file mode 100644
index 0000000000..9a331f7d85
--- /dev/null
+++ b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-node-messages.md
@@ -0,0 +1,28 @@
+---
+title: Troubleshoot UCP Node States
+description: Learn how to troubleshoot individual UCP nodes.
+keywords: Docker, UCP, troubleshoot, health
+---
+
+There are several cases in the lifecycle of UCP when a node is actively
+transitioning from one state to another, such as when a new node is joining the
+cluster or during node promotion and demotion. In these cases, the current step
+of the transition will be reported by UCP as a node message.
You can view the
+state of each individual node by following the same steps required to [monitor
+cluster status](index.md).
+
+## UCP node states
+
+The following table lists all possible node states that may be reported for a
+UCP node, their explanation, and the expected duration of a given step.
+
+| Message | Description | Typical step duration |
+|:--------|:------------|:----------------------|
+| Completing node registration | Waiting for the node to appear in KV node inventory. This is expected to occur when a node first joins the UCP cluster. | 5 - 30 seconds |
+| The ucp-agent task is | The `ucp-agent` task on the target node is not in a running state yet. This is an expected message when the configuration has been updated, or when a new node first joined the UCP cluster. This step may take longer than expected if the UCP images need to be pulled from Docker Hub on the affected node. | 1 - 10 seconds |
+| Unable to determine node state | The `ucp-reconcile` container on the target node just started running and we are not able to determine its state. | 1 - 10 seconds |
+| Node is being reconfigured | The `ucp-reconcile` container is currently converging the current state of the node to the desired state. This process may involve issuing certificates, pulling missing images, and starting containers, depending on the current node state. | 1 - 60 seconds |
+| Reconfiguration pending | The target node is expected to be a manager but the `ucp-reconcile` container has not been started yet. | 1 - 10 seconds |
+| Unhealthy UCP Controller: node is unreachable | Other manager nodes of the cluster have not received a heartbeat message from the affected node within a predetermined timeout. This usually indicates that there's either a temporary or permanent interruption in the network link to that manager node. Ensure the underlying networking infrastructure is operational, and contact support if the symptom persists. | Until resolved |
+| Unhealthy UCP Controller: unable to reach controller | The controller that we are currently communicating with is not reachable within a predetermined timeout. Refresh the node listing to see if the symptom persists. If the symptom appears intermittently, this could indicate latency spikes between manager nodes, which can lead to temporary loss in the availability of UCP itself. Ensure the underlying networking infrastructure is operational, and contact support if the symptom persists. | Until resolved |
diff --git a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
index d12f3507ff..1c4053eade 100644
--- a/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
+++ b/datacenter/ucp/2.1/guides/admin/monitor-and-troubleshoot/troubleshoot-with-logs.md
@@ -16,7 +16,7 @@ page of UCP.
By default the UCP system containers are hidden. Click the **Show all
containers** option for the UCP system containers to be listed as well.
-![](../../images/troubleshoot-ucp-1.png){: .with-border}
+![](../../images/troubleshoot-with-logs-1.png){: .with-border}
You can click on a container to see more details like its configurations and
logs.
@@ -53,6 +53,43 @@ specially useful if the UCP web application is not working.
{"level":"info","license_key":"PUagrRqOXhMH02UgxWYiKtg0kErLY8oLZf1GO4Pw8M6B","msg":"/v1.22/containers/ucp/ucp-controller/logs","remote_addr":"192.168.10.1:59546","tags":["api","v1.22","get"],"time":"2016-04-25T23:49:27Z","type":"api","username":"dave.lauper"}
```
+## Get a support dump
+
+Before making any changes to UCP, download a [support dump](../../get-support.md).
+This helps you understand which problems were already happening before you
+changed the UCP configuration.
+
+Then you can increase the UCP log levels to debug, making it easier to understand
+the status of the UCP cluster. Changing the UCP log level restarts all UCP
+system components and introduces a small downtime window to UCP. Your
+applications won't be affected by this.
+
+To increase the UCP log levels, navigate to the **UCP web UI**, go to the
+**Admin Settings** tab, and choose **Logs**.
+
+![](../../images/troubleshoot-with-logs-2.png){: .with-border}
+
+Once you change the log level to **Debug**, the UCP containers are restarted.
+Now that the UCP components are creating more descriptive logs, you can download
+a new support dump and use it to troubleshoot the component causing the
+problem.
+
+Depending on the problem you are experiencing, it's more likely that you'll
+find related messages in the logs of specific components on manager nodes:
+
+* If the problem occurs after a node was added or removed, check the logs
+of the `ucp-reconcile` container.
+* If the problem occurs in the normal state of the system, check the logs
+of the `ucp-controller` container.
+* If you are able to visit the UCP web UI but unable to log in, check the
+logs of the `ucp-auth-api` and `ucp-auth-store` containers.
+
+It's normal for the `ucp-reconcile` container to be in a stopped state.
This
+container is only started when the `ucp-agent` detects that a node needs to
+transition to a different state, and it is responsible for creating and removing
+containers, issuing certificates, and pulling missing images.
+
## Where to go next
* [Troubleshoot configurations](troubleshoot-configurations.md)
diff --git a/datacenter/ucp/2.1/guides/admin/upgrade/index.md b/datacenter/ucp/2.1/guides/admin/upgrade/index.md
index e473012849..cb27ccee08 100644
--- a/datacenter/ucp/2.1/guides/admin/upgrade/index.md
+++ b/datacenter/ucp/2.1/guides/admin/upgrade/index.md
@@ -50,6 +50,33 @@ Starting with the manager nodes, and then worker nodes:
You can upgrade UCP from the web UI or the CLI.
+### Using the UI to perform an upgrade
+
+When an upgrade is available for a UCP installation, a banner is shown.
+
+![](../../images/upgrade-ucp-1.png){: .with-border}
+
+Clicking this banner takes an admin user directly to the upgrade page, which
+can also be found under the **Cluster Configuration** tab of the **Admin
+Settings** section.
+
+![](../../images/upgrade-ucp-2.png){: .with-border}
+
+Select a version to upgrade to using the **Available UCP Versions** dropdown,
+then click to upgrade.
+
+Before the upgrade starts, a confirmation dialog displays important
+information about cluster and UI availability.
+
+![](../../images/upgrade-ucp-3.png){: .with-border}
+
+During the upgrade the UI is unavailable, so wait until the upgrade completes
+before interacting with it. When the upgrade completes, you'll see a
+notification that a newer version of the UI is available, and a browser
+refresh is required to see the latest UI.
+
+### Using the CLI to perform an upgrade
+
To upgrade from the CLI, log into a UCP manager node using ssh, and run:
```
@@ -69,6 +96,8 @@ for any necessary configuration values.
Once the upgrade finishes, navigate to the **UCP web UI** and make sure that
all the nodes managed by UCP are healthy.
+![](../../images/upgrade-ucp-4.png) + ## Where to go next * [UCP release notes](release-notes.md) diff --git a/datacenter/ucp/2.1/guides/architecture.md b/datacenter/ucp/2.1/guides/architecture.md index 0a8be715b8..ab77c12381 100644 --- a/datacenter/ucp/2.1/guides/architecture.md +++ b/datacenter/ucp/2.1/guides/architecture.md @@ -61,7 +61,7 @@ persist the state of UCP. These are the UCP services running on manager nodes: | UCP component | Description | |:--------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ucp-agent | Monitors the node and ensures the right UCP services are running | -| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP services, it starts the ucp-reconcile service to start or stop the necessary services to converge the node to its desired state | +| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. 
| | ucp-auth-api | The centralized service for identity and authentication used by UCP and DTR | | ucp-auth-store | Stores authentication configurations, and data for users, organizations and teams | | ucp-auth-worker | Performs scheduled LDAP synchronizations and cleans authentication and authorization data | @@ -84,7 +84,7 @@ services running on worker nodes: | UCP component | Description | |:--------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | ucp-agent | Monitors the node and ensures the right UCP services are running | -| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP services, it starts the ucp-reconcile service to start or stop the necessary services to converge the node to its desired state | +| ucp-reconcile | When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy. | | ucp-proxy | A TLS proxy. 
It allows secure access to the local Docker Engine to UCP components | ## Volumes used by UCP @@ -95,14 +95,14 @@ Docker UCP uses these named volumes to persist data in all nodes where it runs: |:----------------------------|:-----------------------------------------------------------------------------------------| | ucp-auth-api-certs | Certificate and keys for the authentication and authorization service | | ucp-auth-store-certs | Certificate and keys for the authentication and authorization store | -| ucp-auth-store-data | Data of the authentication and authorization store | +| ucp-auth-store-data | Data of the authentication and authorization store, replicated across managers | | ucp-auth-worker-certs | Certificate and keys for authentication worker | | ucp-auth-worker-data | Data of the authentication worker | | ucp-client-root-ca | Root key material for the UCP root CA that issues client certificates | | ucp-cluster-root-ca | Root key material for the UCP root CA that issues certificates for swarm members | | ucp-controller-client-certs | Certificate and keys used by the UCP web server to communicate with other UCP components | | ucp-controller-server-certs | Certificate and keys for the UCP web server running in the node | -| ucp-kv | UCP configuration data | +| ucp-kv | UCP configuration data, replicated across managers. | | ucp-kv-certs | Certificates and keys for the key-value store | | ucp-metrics-data | Monitoring data gathered by UCP | | ucp-metrics-inventory | Configuration file used by the ucp-metrics service | diff --git a/datacenter/ucp/2.1/guides/get-support.md b/datacenter/ucp/2.1/guides/get-support.md index 64d093de84..8854069bce 100644 --- a/datacenter/ucp/2.1/guides/get-support.md +++ b/datacenter/ucp/2.1/guides/get-support.md @@ -15,10 +15,26 @@ If you need help, you can file a ticket via: Be sure to use your company email when filing tickets. 
-## Download a support dump +Docker Support engineers may ask you to provide a UCP support dump, which is an +archive that contains UCP system logs and diagnostic information. To obtain a +support dump: -Docker Support engineers may ask you to provide a UCP support dump. For this: - -1. Log into UCP with an administrator account. +## From the UI +1. Log into the UCP UI with an administrator account. 2. On the top-right menu, **click your username**, and choose **Support Dump**. + +![](images/get-support-1.png){: .with-border} + +## From the CLI + +To get the support dump from the CLI, use SSH to log into a UCP manager node +and run: + +```none +docker run --rm \ + --name ucp \ + -v /var/run/docker.sock:/var/run/docker.sock \ + {{ page.docker_image }} \ + support > docker-support.tgz +``` diff --git a/datacenter/ucp/2.1/guides/images/add-sans-to-cluster-1.png b/datacenter/ucp/2.1/guides/images/add-sans-to-cluster-1.png new file mode 100644 index 0000000000..d1087dfee1 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/add-sans-to-cluster-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/create-and-manage-teams-5.png b/datacenter/ucp/2.1/guides/images/create-and-manage-teams-5.png new file mode 100644 index 0000000000..0b41dad48d Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/create-and-manage-teams-5.png differ diff --git a/datacenter/ucp/2.1/guides/images/deploy-app-ui-1.png b/datacenter/ucp/2.1/guides/images/deploy-app-ui-1.png index c2face85ed..2f6839fb57 100644 Binary files a/datacenter/ucp/2.1/guides/images/deploy-app-ui-1.png and b/datacenter/ucp/2.1/guides/images/deploy-app-ui-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/deployed_visualizer.png b/datacenter/ucp/2.1/guides/images/deployed_visualizer.png index fc2f442588..8ef5026a2a 100644 Binary files a/datacenter/ucp/2.1/guides/images/deployed_visualizer.png and b/datacenter/ucp/2.1/guides/images/deployed_visualizer.png differ diff --git 
a/datacenter/ucp/2.1/guides/images/get-support-1.png b/datacenter/ucp/2.1/guides/images/get-support-1.png new file mode 100644 index 0000000000..142d6fe679 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/get-support-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/install-production-1.png b/datacenter/ucp/2.1/guides/images/install-production-1.png deleted file mode 100644 index f2e44097c4..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/install-production-1.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/install-production-2.png b/datacenter/ucp/2.1/guides/images/install-production-2.png deleted file mode 100644 index 6b6a345688..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/install-production-2.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/install-production-3.png b/datacenter/ucp/2.1/guides/images/install-production-3.png deleted file mode 100644 index a2aeb13e1c..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/install-production-3.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/install-production-4.png b/datacenter/ucp/2.1/guides/images/install-production-4.png deleted file mode 100644 index 73931dbfa1..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/install-production-4.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/ldap-integration-2.png b/datacenter/ucp/2.1/guides/images/ldap-integration-2.png new file mode 100644 index 0000000000..8fd9f9fe7b Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/ldap-integration-2.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-1.png b/datacenter/ucp/2.1/guides/images/manage-secrets-1.png index 83d0cadd73..0b122c802e 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-1.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-2.png 
b/datacenter/ucp/2.1/guides/images/manage-secrets-2.png index 02e83ea8f2..982d25d525 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-2.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-2.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-3.png b/datacenter/ucp/2.1/guides/images/manage-secrets-3.png index 4f6265fc71..59b873de18 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-3.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-3.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-4.png b/datacenter/ucp/2.1/guides/images/manage-secrets-4.png index caee742356..c1052855f3 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-4.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-4.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-5.png b/datacenter/ucp/2.1/guides/images/manage-secrets-5.png index c13f1c225f..5b301c4891 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-5.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-5.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-7.png b/datacenter/ucp/2.1/guides/images/manage-secrets-7.png index a46800a073..0b6195186b 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-7.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-7.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-8.png b/datacenter/ucp/2.1/guides/images/manage-secrets-8.png index af736dd168..f7477559fb 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-8.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-8.png differ diff --git a/datacenter/ucp/2.1/guides/images/manage-secrets-9.png b/datacenter/ucp/2.1/guides/images/manage-secrets-9.png index be3411634d..9284180df1 100644 Binary files a/datacenter/ucp/2.1/guides/images/manage-secrets-9.png and b/datacenter/ucp/2.1/guides/images/manage-secrets-9.png differ 
diff --git a/datacenter/ucp/2.1/guides/images/monitor-ucp-0.png b/datacenter/ucp/2.1/guides/images/monitor-ucp-0.png new file mode 100644 index 0000000000..d464aaf086 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/monitor-ucp-0.png differ diff --git a/datacenter/ucp/2.1/guides/images/monitor-ucp-1.png b/datacenter/ucp/2.1/guides/images/monitor-ucp-1.png index 2af858a661..fa1494a348 100644 Binary files a/datacenter/ucp/2.1/guides/images/monitor-ucp-1.png and b/datacenter/ucp/2.1/guides/images/monitor-ucp-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/monitor-ucp-2.png b/datacenter/ucp/2.1/guides/images/monitor-ucp-2.png index 21d63ee0b6..9a4a0c57f3 100644 Binary files a/datacenter/ucp/2.1/guides/images/monitor-ucp-2.png and b/datacenter/ucp/2.1/guides/images/monitor-ucp-2.png differ diff --git a/datacenter/ucp/2.1/guides/images/monitor-ucp-3.png b/datacenter/ucp/2.1/guides/images/monitor-ucp-3.png index ff8cfc9d81..c40d40c6f8 100644 Binary files a/datacenter/ucp/2.1/guides/images/monitor-ucp-3.png and b/datacenter/ucp/2.1/guides/images/monitor-ucp-3.png differ diff --git a/datacenter/ucp/2.1/guides/images/overview-1.png b/datacenter/ucp/2.1/guides/images/overview-1.png deleted file mode 100644 index 6b9a5f5969..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/overview-1.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/overview-2.png b/datacenter/ucp/2.1/guides/images/overview-2.png deleted file mode 100644 index e32675bb09..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/overview-2.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/scale-your-cluster-1.svg b/datacenter/ucp/2.1/guides/images/scale-your-cluster-0.svg similarity index 100% rename from datacenter/ucp/2.1/guides/images/scale-your-cluster-1.svg rename to datacenter/ucp/2.1/guides/images/scale-your-cluster-0.svg diff --git a/datacenter/ucp/2.1/guides/images/scale-your-cluster-1.png 
b/datacenter/ucp/2.1/guides/images/scale-your-cluster-1.png index 864aa80f62..5aff7a7319 100644 Binary files a/datacenter/ucp/2.1/guides/images/scale-your-cluster-1.png and b/datacenter/ucp/2.1/guides/images/scale-your-cluster-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/scale-your-cluster-2.png b/datacenter/ucp/2.1/guides/images/scale-your-cluster-2.png index fa2ebba2f6..fa1494a348 100644 Binary files a/datacenter/ucp/2.1/guides/images/scale-your-cluster-2.png and b/datacenter/ucp/2.1/guides/images/scale-your-cluster-2.png differ diff --git a/datacenter/ucp/2.1/guides/images/scale-your-cluster-3.png b/datacenter/ucp/2.1/guides/images/scale-your-cluster-3.png index e32675bb09..5114c2d923 100644 Binary files a/datacenter/ucp/2.1/guides/images/scale-your-cluster-3.png and b/datacenter/ucp/2.1/guides/images/scale-your-cluster-3.png differ diff --git a/datacenter/ucp/2.1/guides/images/scale-your-cluster-4.png b/datacenter/ucp/2.1/guides/images/scale-your-cluster-4.png deleted file mode 100644 index 542dd82a60..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/scale-your-cluster-4.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/images/step-6-one-node.png b/datacenter/ucp/2.1/guides/images/step-6-one-node.png new file mode 100644 index 0000000000..4dfd6d823d Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/step-6-one-node.png differ diff --git a/datacenter/ucp/2.1/guides/images/step-6-two-nodes.png b/datacenter/ucp/2.1/guides/images/step-6-two-nodes.png new file mode 100644 index 0000000000..04b6c0e5e9 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/step-6-two-nodes.png differ diff --git a/datacenter/ucp/2.1/guides/images/troubleshoot-ucp-1.png b/datacenter/ucp/2.1/guides/images/troubleshoot-ucp-1.png deleted file mode 100644 index ff8cfc9d81..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/troubleshoot-ucp-1.png and /dev/null differ diff --git 
a/datacenter/ucp/2.1/guides/images/troubleshoot-with-logs-1.png b/datacenter/ucp/2.1/guides/images/troubleshoot-with-logs-1.png new file mode 100644 index 0000000000..30e16f750c Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/troubleshoot-with-logs-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/troubleshoot-with-logs-2.png b/datacenter/ucp/2.1/guides/images/troubleshoot-with-logs-2.png new file mode 100644 index 0000000000..d43372e145 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/troubleshoot-with-logs-2.png differ diff --git a/datacenter/ucp/2.1/guides/images/upgrade-ucp-1.png b/datacenter/ucp/2.1/guides/images/upgrade-ucp-1.png new file mode 100644 index 0000000000..8c4c93b2b3 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/upgrade-ucp-1.png differ diff --git a/datacenter/ucp/2.1/guides/images/upgrade-ucp-2.png b/datacenter/ucp/2.1/guides/images/upgrade-ucp-2.png new file mode 100644 index 0000000000..a77d037b8d Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/upgrade-ucp-2.png differ diff --git a/datacenter/ucp/2.1/guides/images/upgrade-ucp-3.png b/datacenter/ucp/2.1/guides/images/upgrade-ucp-3.png new file mode 100644 index 0000000000..18bf9fcd76 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/upgrade-ucp-3.png differ diff --git a/datacenter/ucp/2.1/guides/images/upgrade-ucp-4.png b/datacenter/ucp/2.1/guides/images/upgrade-ucp-4.png new file mode 100644 index 0000000000..5c8d4befe5 Binary files /dev/null and b/datacenter/ucp/2.1/guides/images/upgrade-ucp-4.png differ diff --git a/datacenter/ucp/2.1/guides/images/web-based-access-1.png b/datacenter/ucp/2.1/guides/images/web-based-access-1.png deleted file mode 100644 index 6b9a5f5969..0000000000 Binary files a/datacenter/ucp/2.1/guides/images/web-based-access-1.png and /dev/null differ diff --git a/datacenter/ucp/2.1/guides/index.md b/datacenter/ucp/2.1/guides/index.md index 493f58c820..8b3dc9a1d9 100644 --- 
a/datacenter/ucp/2.1/guides/index.md +++ b/datacenter/ucp/2.1/guides/index.md @@ -3,8 +3,6 @@ description: Learn about Docker Universal Control Plane, the enterprise-grade cl management solution from Docker. keywords: docker, ucp, overview, orchestration, clustering title: Universal Control Plane overview -redirect_from: -- /ucp/overview/ --- Docker Universal Control Plane (UCP) is the enterprise-grade cluster management @@ -12,7 +10,7 @@ solution from Docker. You install it on-premises or in your virtual private cloud, and it helps you manage your Docker cluster and applications from a single place. -![](images/overview-1.png){: .with-border} +![](../../../images/ucp.png){: .with-border} ## Centralized cluster management @@ -23,7 +21,7 @@ by Docker to make it easier to manage your cluster from a centralized place. You can manage and monitor your container cluster using a graphical UI. -![](images/overview-2.png){: .with-border} +![](../../../images/try-ddc-2.png){: .with-border} Since UCP exposes the standard Docker API, you can continue using the tools you already know, including the Docker CLI client, to deploy and manage your diff --git a/datacenter/ucp/2.1/guides/user/access-ucp/cli-based-access.md b/datacenter/ucp/2.1/guides/user/access-ucp/cli-based-access.md index 70ce67742e..16b94ea81a 100644 --- a/datacenter/ucp/2.1/guides/user/access-ucp/cli-based-access.md +++ b/datacenter/ucp/2.1/guides/user/access-ucp/cli-based-access.md @@ -22,7 +22,7 @@ There are two different types of client certificates: * Admin user certificate bundles: allow running docker commands on the Docker Engine of any node, * User certificate bundles: only allow running docker commands through a UCP -controller node. +manager node. 
## Download client certificates diff --git a/datacenter/ucp/2.1/guides/user/access-ucp/index.md b/datacenter/ucp/2.1/guides/user/access-ucp/index.md index 50f77fc93c..afd3493691 100644 --- a/datacenter/ucp/2.1/guides/user/access-ucp/index.md +++ b/datacenter/ucp/2.1/guides/user/access-ucp/index.md @@ -7,7 +7,7 @@ title: Web-based access Docker Universal Control Plane allows you to manage your cluster in a visual way, from your browser. -![](../../images/web-based-access-1.png){: .with-border} +![](../../../../../images/ucp.png){: .with-border} Docker UCP secures your cluster with role-based access control. From the diff --git a/datacenter/ucp/2.1/guides/user/secrets/grant-revoke-access.md b/datacenter/ucp/2.1/guides/user/secrets/grant-revoke-access.md index 734ac9d638..1c6723793d 100644 --- a/datacenter/ucp/2.1/guides/user/secrets/grant-revoke-access.md +++ b/datacenter/ucp/2.1/guides/user/secrets/grant-revoke-access.md @@ -18,10 +18,9 @@ the secret. Users that are part of a team with access to that label will be able to see and use the secret. -In this example, if Jenny is part of -a team that has 'Restricted Control' over the `com.docker.ucp.access.label=blog` -label, she will be able to use the secret in her services, as long as the -service also has the same label. +In this example, if Jenny is part of a team that has 'Restricted Control' over +the `com.docker.ucp.access.label=blog` label, she will be able to use the +secret in her services, as long as the service also has the same label. This ensures that users can use a secret in their services without having permissions to attach to the container running the service and inspect the diff --git a/datacenter/ucp/2.1/guides/user/secrets/index.md b/datacenter/ucp/2.1/guides/user/secrets/index.md index 65df1811f7..7019617ade 100644 --- a/datacenter/ucp/2.1/guides/user/secrets/index.md +++ b/datacenter/ucp/2.1/guides/user/secrets/index.md @@ -48,7 +48,9 @@ you won't be able to edit it or see the secret data again. 
![](../../images/manage-secrets-2.png){: .with-border} Assign a unique name to the service and set its value. You can optionally define -a permission label so that other users have permission to use this secret. +a permission label so that other users have permission to use this secret. Also +note that a service and secret must have the same permission label (or both +must have no permission label at all) in order to be used together. In this example our secret is named `wordpress-password-v1`, to make it easier to track which version of the password our services are using. @@ -67,13 +69,18 @@ default configurations. Start by creating the MySQL service. Navigate to the **Services** page, click **Create Service**, and choose **Use Wizard**. Use the following configurations: -| Field | Value | -|:---------------------|:------------------------------------------------------------| -| Service name | wordpress-db | -| Image name | mysql:5.7 | -| Attached network | wordpress-network | -| Secret | wordpress-password-v1 | -| Environment variable | MYSQL_ROOT_PASSWORD_FILE=/run/secrets/wordpress-password-v1 | +| Field | Value | +|:---------------------------|:-----------------------------------| +| Service name | wordpress-db | +| Image name | mysql:5.7 | +| Attached network | wordpress-network | +| Secret | wordpress-password-v1 | +| Environment variable name | MYSQL_ROOT_PASSWORD_FILE | +| Environment variable value | /run/secrets/wordpress-password-v1 | + +Remember, if you specified a permission label on the secret, you must also set +the same permission label on this service. If the secret does not have a +permission label, then this service must also not have a permission label. 
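The `MYSQL_ROOT_PASSWORD_FILE` value in the table above points at the file
Docker mounts for the secret, rather than holding the password itself. A
sketch of how an entrypoint resolves this `*_FILE` convention — the helper
name and file paths here are illustrative:

```shell
# Resolve the *_FILE convention: the environment variable names a file
# whose contents are the actual secret value.
read_secret_file() {
  # $1 is the path the variable points at, e.g.
  # /run/secrets/wordpress-password-v1 inside the service task.
  tr -d '\n' < "$1"
}

# Illustrative usage with a temporary file standing in for /run/secrets:
secret_file=$(mktemp)
printf 's3cret\n' > "$secret_file"
MYSQL_ROOT_PASSWORD=$(read_secret_file "$secret_file")
echo "$MYSQL_ROOT_PASSWORD"
rm -f "$secret_file"
```

Reading the secret from a file keeps it out of `docker inspect` output and
process environments, which is why the official `mysql` image supports the
`*_FILE` variants of its configuration variables.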
This creates a MySQL service that's attached to the `wordpress-network` network, and that uses the `wordpress-password-v1`, which by default will create a file @@ -98,6 +105,7 @@ configurations: | Image name | wordpress:latest | | Published ports | target: 80, ingress:8000 | | Attached network | wordpress-network | +| Secret | wordpress-password-v1 | | Environment variable | WORDPRESS_DB_HOST=wordpress-db:3306 | | Environment variable | WORDPRESS_DB_PASSWORD_FILE=/run/secrets/wordpress-password-v1 | diff --git a/datacenter/ucp/2.1/guides/user/services/index.md b/datacenter/ucp/2.1/guides/user/services/index.md index 9f78c09955..ea5ec935ee 100644 --- a/datacenter/ucp/2.1/guides/user/services/index.md +++ b/datacenter/ucp/2.1/guides/user/services/index.md @@ -7,12 +7,12 @@ title: Deploy an app from the UI With Docker Universal Control Plane you can deploy applications from the UI using `docker-compose.yml` files. In this example, we're going to deploy an -application that allows users to vote on whether they prefer cats or dogs. +application that allows users to vote on whether they prefer cats or dogs. 😺 🐶 ## Deploy voting application -In your browser, **log in** to UCP, and navigate to the **Applications** page. -There, click the **Deploy compose.yml** button, to deploy a new application. +In your browser, **log in** to UCP, and navigate to the **Stacks & Applications** page. +There, click the **Deploy** button, to deploy a new application. ![](../../images/deploy-app-ui-1.png){: .with-border} @@ -25,7 +25,7 @@ The application we're going to deploy is composed of several services: * `db`: A PostgreSQL service which provides permanent storage on a host volume * `worker`: A background service that transfers votes from the queue to permanent storage -Click **Services** and paste the following YAML into the **DOCKER-COMPOSE.YML** +Click **Deploy** and paste the following YAML into the **APPLICATION DEFINITION** field. 
```none @@ -125,7 +125,7 @@ documentation](http://docker-docs-vnext-compose.netlify.com/compose/compose-file Give the application a name (such as "VotingApp," used here), and click **Create**. -Once UCP deploys the application, click on **Services** on the left navigation, +Once UCP deploys the application, click on **VotingApp** or go to **Services** on the left navigation, to see the details of the services you have deployed across your nodes. Try clicking on the `visualizer` service, and scroll to the bottom of the detail page. You'll see a link to your UCP instance's URL that includes the published port diff --git a/docker-for-aws/faqs.md b/docker-for-aws/faqs.md index 779373b998..6e0770438f 100644 --- a/docker-for-aws/faqs.md +++ b/docker-for-aws/faqs.md @@ -46,9 +46,10 @@ This AWS documentation page will describe how you can tell if you have EC2-Class ### Possible fixes to the EC2-Classic region issue: There are a few work arounds that you can try to get Docker for AWS up and running for you. -1. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`. So try another region, `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions will more then likely be setup with **EC2-VPC** and you will not longer have this issue. -2. Create an new AWS account, all new accounts will be setup using **EC2-VPC** and will not have this problem. -3. You can try and contact AWS support to convert your **EC2-Classic** account to a **EC2-VPC** account. For more information checkout the following answer for **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** on https://aws.amazon.com/vpc/faqs/#Default_VPCs +1. Create your own VPC, then [install Docker for AWS with a pre-existing VPC](index.md#install-with-an-existing-vpc). +2. Use a region that doesn't have **EC2-Classic**. The most common region with this issue is `us-east-1`. 
So try another region, `us-west-1`, `us-west-2`, or the new `us-east-2`. These regions will more than likely be set up with **EC2-VPC**, and you will no longer have this issue. +3. Create a new AWS account. All new accounts are set up using **EC2-VPC** and will not have this problem. +4. Contact AWS support to convert your **EC2-Classic** account to an **EC2-VPC** account. For more information, check out the answer to **"Q. I really want a default VPC for my existing EC2 account. Is that possible?"** on https://aws.amazon.com/vpc/faqs/#Default_VPCs ### Helpful links: - http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html @@ -60,7 +61,46 @@ There are a few work arounds that you can try to get Docker for AWS up and runni ## Can I use my existing VPC? -Not at this time, but it is on our roadmap for future releases. +Yes, see [install Docker for AWS with a pre-existing VPC](index.md#install-with-an-existing-vpc) for more information. + +## Recommended VPC and subnet setup + +#### VPC + +* **CIDR:** 172.31.0.0/16 +* **DNS hostnames:** yes +* **DNS resolution:** yes +* **DHCP option set:** DHCP Options (Below) + +#### Internet gateway +* **VPC:** VPC (above) + +#### DHCP option set + +* **domain-name:** ec2.internal +* **domain-name-servers:** AmazonProvidedDNS + +#### Subnet1 +* **CIDR:** 172.31.16.0/20 +* **Auto-assign public IP:** yes +* **Availability-Zone:** A + +#### Subnet2 +* **CIDR:** 172.31.32.0/20 +* **Auto-assign public IP:** yes +* **Availability-Zone:** B + +#### Subnet3 +* **CIDR:** 172.31.0.0/20 +* **Auto-assign public IP:** yes +* **Availability-Zone:** C + +#### Route table +* **Destination CIDR block:** 0.0.0.0/0 +* **Subnets:** Subnet1, Subnet2, Subnet3 + +##### Subnet note: +If you are using the `10.0.0.0/16` CIDR in your VPC, then when you create a Docker network, make sure you pick a subnet (using the `docker network create --subnet` option) that doesn't conflict with the `10.0.0.0` network. 
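To double-check that a candidate subnet for `docker network create` stays clear of the VPC range, you can compare network addresses numerically. A small pure-shell helper (illustrative only; not part of Docker for AWS, and hard-coded to a `/16` VPC mask):

```shell
# Convert a dotted-quad IPv4 address to a single integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

mask=$(( 0xFFFF0000 ))        # /16 netmask
vpc=$(ip_to_int 10.0.0.0)     # VPC network address

# A candidate inside 10.0.0.0/16 conflicts; one outside it is safe.
for candidate in 10.0.5.0 172.20.0.0; do
  if [ $(( $(ip_to_int "$candidate") & mask )) -eq $(( vpc & mask )) ]; then
    echo "$candidate conflicts with the VPC range"
  else
    echo "$candidate is safe to use"
  fi
done
```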
## Which AWS regions will this work with? @@ -68,7 +108,7 @@ Docker for AWS should work with all regions except for AWS China, which is a lit ## How many Availability Zones does Docker for AWS use? -All of Amazons regions have at least 2 AZ's, and some have more. To make sure Docker for AWS works in all regions, only 2 AZ's are used even if more are available. +Docker for AWS determines the correct number of Availability Zones to use based on the region. In regions that support it, we use 3 Availability Zones, and 2 in the remaining regions. We recommend running production workloads only in regions that have at least 3 Availability Zones. ## What do I do if I get `KeyPair error` on AWS? As part of the prerequisites, you need to have an SSH key uploaded to the AWS region you are trying to deploy to. diff --git a/docker-for-aws/iam-permissions.md b/docker-for-aws/iam-permissions.md index febaaa31c1..65ee557592 100644 --- a/docker-for-aws/iam-permissions.md +++ b/docker-for-aws/iam-permissions.md @@ -6,7 +6,7 @@ title: Docker for AWS IAM permissions The following IAM permissions are required to use Docker for AWS. -Before you deploy Docker for AWS, your account needs these permissions for the stack to deploy correctly. +Before you deploy Docker for AWS, your account needs these permissions for the stack to deploy correctly. If you create and use an IAM role with these permissions to create the stack, CloudFormation uses the role's permissions instead of your own. This feature is called the [AWS CloudFormation Service Role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html?icmpid=docs_cfn_console) @@ -114,6 +114,7 @@ follow the link for more information. 
"ec2:DisassociateRouteTable", "ec2:GetConsoleOutput", "ec2:GetConsoleScreenshot", + "ec2:ModifyNetworkInterfaceAttribute", "ec2:ModifyVpcAttribute", "ec2:RebootInstances", "ec2:ReleaseAddress", @@ -309,6 +310,16 @@ follow the link for more information. "Resource": [ "*" ] + }, + { + "Sid": "Stmt1487169681000", + "Effect": "Allow", + "Action": [ + "elasticfilesystem:*" + ], + "Resource": [ + "*" + ] } ] } diff --git a/docker-for-aws/index.md b/docker-for-aws/index.md index a7c641eba6..838e19b7be 100644 --- a/docker-for-aws/index.md +++ b/docker-for-aws/index.md @@ -12,7 +12,7 @@ redirect_from: ## Quickstart If your account [has the proper -permissions](https://docs.docker.com/docker-for-aws/iam-permissions/), you can +permissions](/docker-for-aws/iam-permissions.md), you can use the blue button from the stable or beta channel to bootstrap Docker for AWS using CloudFormation. For more about stable and beta channels, see the [FAQs](/docker-for-aws/faqs.md#stable-and-beta-channels). @@ -33,11 +33,43 @@ using CloudFormation. For more about stable and beta channels, see the {{aws_blue_latest}} - {{aws_blue_beta}} + {{aws_blue_edge}} + + + + + + + {{aws_blue_vpc_edge}} +## Deployment options + +There are two ways to deploy Docker for AWS: + +- With a pre-existing VPC +- With a new VPC created by Docker + +We recommend allowing Docker for AWS to create the VPC since it allows Docker to optimize the environment. Installing in an existing VPC requires more work. + +### Create a new VPC +This approach creates a new VPC, subnets, gateways and everything else needed in order to run Docker for AWS. It is the easiest way to get started, and requires the least amount of work. + +All you need to do is run the CloudFormation template, answer some questions, and you are good to go. + +### Install with an Existing VPC +If you need to install Docker for AWS with an existing VPC, you need to do a few preliminary steps. 
See [recommended VPC and Subnet setup](faqs.md#recommended-vpc-and-subnet-setup) for more details. + +1. Pick a VPC in a region you want to use. + +2. Make sure the selected VPC is set up with an Internet Gateway, Subnets, and Route Tables. + +3. You need to have three different subnets, ideally each in its own Availability Zone. If you are running in a region with only two Availability Zones, you will need to add more than one subnet into one of the Availability Zones. For production deployments, we recommend only deploying to regions that have three or more Availability Zones. + +4. When you launch the Docker for AWS CloudFormation stack, make sure you use the one for existing VPCs. This template will prompt you for the VPC and subnets that you want to use for Docker for AWS. + ## Prerequisites - Access to an AWS account with permissions to use CloudFormation and creating the following objects. [Full set of required permissions](iam-permissions.md). @@ -133,7 +165,7 @@ Elastic Load Balancers (ELBs) are set up to help with routing traffic to your sw Docker for AWS automatically configures logging to Cloudwatch for containers you run on Docker for AWS. A Log Group is created for each Docker for AWS install, and a log stream for each container. -`docker logs` and `docker service logs` are not supported on Docker for AWS. Instead, you should check container in CloudWatch. +The `docker logs` and `docker service logs` commands are not supported on Docker for AWS when using Cloudwatch for logs. Instead, check container logs in CloudWatch. ## System containers diff --git a/docker-for-aws/persistent-data-volumes.md b/docker-for-aws/persistent-data-volumes.md new file mode 100644 index 0000000000..fef782b23e --- /dev/null +++ b/docker-for-aws/persistent-data-volumes.md @@ -0,0 +1,65 @@ +--- +description: Persistent data volumes +keywords: aws persistent data volumes +title: Docker for AWS persistent data volumes +--- + +## What is Cloudstor? 
+ +Cloudstor is a volume plugin managed by Docker. It comes pre-installed and pre-configured in swarms deployed on Docker for AWS. Swarm tasks use a volume created through Cloudstor to mount a persistent data volume that stays attached to them no matter which swarm node they get scheduled or migrated to. Cloudstor relies on shared storage infrastructure provided by AWS to allow swarm tasks to create and mount their persistent volumes on any node in the swarm. In a future release we will introduce support for direct attached storage to satisfy very low latency/high IOPS requirements. + +## Use Cloudstor + +After creating a swarm on Docker for AWS and connecting to any manager using SSH, verify that Cloudstor is already installed and configured for the stack/resource group: + +```bash +$ docker plugin ls +ID NAME DESCRIPTION ENABLED +f416c95c0dcc docker4x/cloudstor:aws-v1.13.1-beta18 cloud storage plugin for Docker true +``` + +**Note**: Make note of the plugin tag name, because it changes between versions, and yours may be different from the one listed here. + +The following examples show how to create swarm services that require data persistence using the `--mount` flag and specifying Cloudstor as the driver. + +### Share the same volume between tasks: + +```bash +docker service create --replicas 5 --name ping1 \ + --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source=sharedvol1,destination=/shareddata \ + alpine ping docker.com +``` + +Here all replicas/tasks of the service `ping1` share the same persistent volume `sharedvol1`, mounted at the `/shareddata` path within the container. Docker Swarm takes care of interacting with the Cloudstor plugin to make sure the common backing store is mounted on all nodes in the swarm where tasks of the service are scheduled. Because the volume is shared, each task must take care not to write to the same file at the same time, which could cause corruption. 
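One simple way to stay safe on a shared volume is to have each task write only to files it owns, for example keyed by container hostname (which is unique per task). Sketched locally below with a temporary directory standing in for the `/shareddata` mount, and hard-coded task names standing in for hostnames:

```shell
shareddata=$(mktemp -d)   # stands in for the shared /shareddata mount

# Each writer appends only to its own file, so no two tasks ever touch
# the same file concurrently.
for task in task-1 task-2 task-3; do
  echo "ping from $task" >> "$shareddata/$task.log"
done

ls "$shareddata"
```

In a real service the loop body would be the container command, for example `sh -c 'echo hello >> /shareddata/$(hostname).log'`.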
+ +With the above example, you can verify that the volume is indeed shared by logging into one of the containers in one swarm node, writing to a file under `/shareddata/`, and reading the file under `/shareddata/` from another container (on the same node or a different one). + +### Use a unique volume per task: + +```bash +{% raw %} +docker service create --replicas 5 --name ping2 \ + --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \ + alpine ping docker.com +{% endraw %} +``` + +Here the templatized notation tells Docker Swarm that a unique volume should be created and mounted for each replica/task of the service `ping2`. After the volumes are first created and attached to the tasks (on the nodes where those tasks are scheduled), if a task is rescheduled on a different node, Docker Swarm interacts with the Cloudstor plugin to create and mount the volume corresponding to the task on the new node. It's highly recommended that you use the `.Task.Slot` template to make sure that task N always gets access to volume N, no matter which node it is scheduled on. + +In the above example, each task has its own volume mounted at `/mydata/`, and the files there are unique to the task mounting the volume. + +### List or remove volumes created by Cloudstor + +You can use `docker volume ls` to enumerate all volumes created on a node, including those backed by Cloudstor. Note that if a swarm service task starts on one node with an associated Cloudstor volume and later gets rescheduled to a different node, `docker volume ls` on the initial node continues to list the Cloudstor volume that was created for the task, even though the task no longer executes there and the volume is mounted elsewhere. 
Do NOT prune or remove volumes that are enumerated on a node without any associated tasks; doing so results in data loss if the same volume is mounted on another node (that is, if the volume shows up in the `docker volume ls` output on another node in the swarm). We may add detection and handling for this after the Beta. + +### Configure IO performance + +If you want a higher level of IO performance, such as the `maxIO` mode for EFS, a `perfmode` parameter can be specified as a `volume-opt`: + +```bash +{% raw %} +docker service create --replicas 5 --name ping3 \ + --mount type=volume,volume-driver=docker4x/cloudstor:aws-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol5,destination=/mydata,volume-opt=perfmode=maxio \ + alpine ping docker.com +{% endraw %} +``` diff --git a/docker-for-aws/release-notes.md b/docker-for-aws/release-notes.md index ab97de66db..039a824c23 100644 --- a/docker-for-aws/release-notes.md +++ b/docker-for-aws/release-notes.md @@ -27,6 +27,21 @@ Release date: 01/18/2017 ## Beta Channel +### 1.13.1-beta18 +Release date: 02/16/2017 + +**New** + +- Docker Engine upgraded to [Docker 1.13.1](https://github.com/docker/docker/blob/master/CHANGELOG.md) +- Added a second CloudFormation template that allows you to [install Docker for AWS into a pre-existing VPC](index.md#install-with-an-existing-vpc). 
+- Added Swarm-wide support for [persistent storage volumes](persistent-data-volumes.md) +- Added the following engine labels + - **os** (linux) + - **region** (us-east-1, etc) + - **availability_zone** (us-east-1a, etc) + - **instance_type** (t2.micro, etc) + - **node_type** (worker, manager) + ### 1.13.1-rc2-beta17 Release date: 02/07/2017 diff --git a/docker-for-azure/faqs.md b/docker-for-azure/faqs.md index 3f3c0d6826..341aa36ca2 100644 --- a/docker-for-azure/faqs.md +++ b/docker-for-azure/faqs.md @@ -5,7 +5,23 @@ title: Docker for Azure Frequently asked questions (FAQ) toc_max: 2 --- -## FAQ +## Stable and beta channels + +Two different download channels are available for Docker for Azure: + +* The **stable channel** provides a general availability release-ready deployment + for a fully baked and tested, more reliable cluster. The stable version of Docker + for Azure comes with the latest released version of Docker Engine. The release + schedule is synched with Docker Engine releases and hotfixes. On the stable + channel, you can select whether to send usage statistics and other data. + +* The **beta channel** provides a deployment with new features we are working on, + but is not necessarily fully tested. It comes with the experimental version of + Docker Engine. Bugs, crashes, and issues are more likely to occur with the beta + cluster, but you get a chance to preview new functionality, experiment, and provide + feedback as the deployment evolves. Releases are typically more frequent than for + stable, often one or more per month. Usage statistics and crash reports are sent + by default. You do not have the option to disable this on the beta channel. ## Can I use my own VHD? No, at this time we only support the default Docker for Azure VHD. 
diff --git a/docker-for-azure/index.md b/docker-for-azure/index.md index a8b6c7ed18..9d8b0d801f 100644 --- a/docker-for-azure/index.md +++ b/docker-for-azure/index.md @@ -11,20 +11,26 @@ redirect_from: ## Quickstart If your account has the [proper permissions](#prerequisites), you can generate the [Service Principal](#service-principal) and -then use the below button from the stable channel to bootstrap Docker for Azure using Azure Resource Manager. +then choose from the stable or beta channel to bootstrap Docker for Azure using Azure Resource Manager. +For more about stable and beta channels, see the [FAQs](/docker-for-azure/faqs.md#stable-and-beta-channels).

<table style="width:100%">
  <tr>
    <th width="50%">Stable channel</th>
    <th width="50%">Beta channel</th>
  </tr>
  <tr valign="top">
    <td>The stable deployment is fully baked and tested, and comes with the latest GA version of Docker Engine.<br><br>This is the best channel to use if you want a reliable platform to work with.<br><br>These releases follow a version schedule with a longer lead time than the betas, synched with Docker Engine releases and hotfixes.</td>
    <td>The beta deployment offers cutting edge features and comes with the experimental version of Docker Engine, described in the Docker Experimental Features README on GitHub.<br><br>This is the best channel to use if you want to experiment with features under development, and can weather some instability and bugs. This channel is a continuation of the beta program, where you can provide feedback as the apps evolve. Releases are typically more frequent than for stable, often one or more per month.<br><br>We collect usage data on betas across the board.</td>
  </tr>
  <tr valign="top">
    <td>{{azure_blue_latest}}</td>
    <td>{{azure_blue_edge}}</td>
  </tr>
</table>
diff --git a/docker-for-azure/persistent-data-volumes.md b/docker-for-azure/persistent-data-volumes.md new file mode 100644 index 0000000000..e0e079325a --- /dev/null +++ b/docker-for-azure/persistent-data-volumes.md @@ -0,0 +1,53 @@ +--- +description: Persistent data volumes +keywords: azure persistent data volumes +title: Docker for Azure persistent data volumes +--- + +## What is Cloudstor? + +Cloudstor is a volume plugin managed by Docker. It comes pre-installed and pre-configured in swarms deployed on Docker for Azure. Swarm tasks use a volume created through Cloudstor to mount a persistent data volume that stays attached to them no matter which swarm node they get scheduled or migrated to. Cloudstor relies on shared storage infrastructure provided by Azure to allow swarm tasks to create and mount their persistent volumes on any node in the swarm. In a future release we will introduce support for direct attached storage to satisfy very low latency/high IOPS requirements. + +## Use Cloudstor + +After creating a swarm on Docker for Azure and connecting to any manager using SSH, verify that Cloudstor is already installed and configured for the stack/resource group: + +```bash +$ docker plugin ls +ID NAME DESCRIPTION ENABLED +f416c95c0dcc docker4x/cloudstor:azure-v1.13.1-beta18 cloud storage plugin for Docker true +``` + +**Note**: Make note of the plugin tag name, because it changes between versions, and yours may be different from the one listed here. + +The following examples show how to create swarm services that require data persistence using the `--mount` flag and specifying Cloudstor as the driver. 
+ +### Share the same volume between tasks: + +```bash +docker service create --replicas 5 --name ping1 \ + --mount type=volume,volume-driver=docker4x/cloudstor:azure-v1.13.1-beta18,source=sharedvol1,destination=/shareddata \ + alpine ping docker.com +``` + +Here all replicas/tasks of the service `ping1` share the same persistent volume `sharedvol1`, mounted at the `/shareddata` path within the container. Docker Swarm takes care of interacting with the Cloudstor plugin to make sure the common backing store is mounted on all nodes in the swarm where tasks of the service are scheduled. Because the volume is shared, each task must take care not to write to the same file at the same time, which could cause corruption. + +With the above example, you can verify that the volume is indeed shared by logging into one of the containers in one swarm node, writing to a file under `/shareddata/`, and reading the file under `/shareddata/` from another container (on the same node or a different one). + +### Use a unique volume per task: + +```bash +{% raw %} +docker service create --replicas 5 --name ping2 \ + --mount type=volume,volume-driver=docker4x/cloudstor:azure-v1.13.1-beta18,source={{.Service.Name}}-{{.Task.Slot}}-vol,destination=/mydata \ + alpine ping docker.com +{% endraw %} +``` + +Here the templatized notation tells Docker Swarm that a unique volume should be created and mounted for each replica/task of the service `ping2`. After the volumes are first created and attached to the tasks (on the nodes where those tasks are scheduled), if a task is rescheduled on a different node, Docker Swarm interacts with the Cloudstor plugin to create and mount the volume corresponding to the task on the new node. It's highly recommended that you use the `.Task.Slot` template to make sure that task N always gets access to volume N, no matter which node it is scheduled on. 
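To see why the template gives each task its own volume, it helps to expand the names by hand. The real expansion is done by swarm from the service name and task slot; this local simulation just prints the volume names a 5-replica `ping2` service would end up with:

```shell
service=ping2              # stands in for .Service.Name
for slot in 1 2 3 4 5; do  # .Task.Slot runs from 1 up to the replica count
  echo "${service}-${slot}-vol"
done
```

Because slot numbers are stable across rescheduling, the volume `ping2-3-vol` always follows task 3 wherever it runs.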
+ +In the above example, each task has its own volume mounted at `/mydata/`, and the files there are unique to the task mounting the volume. + +#### List or remove volumes created by Cloudstor + +You can use `docker volume ls` to enumerate all volumes created on a node, including those backed by Cloudstor. Note that if a swarm service task starts on one node with an associated Cloudstor volume and later gets rescheduled to a different node, `docker volume ls` on the initial node continues to list the Cloudstor volume that was created for the task, even though the task no longer executes there and the volume is mounted elsewhere. Do NOT prune or remove volumes that are enumerated on a node without any associated tasks; doing so results in data loss if the same volume is mounted on another node (that is, if the volume shows up in the `docker volume ls` output on another node in the swarm). We may add detection and handling for this after the Beta. diff --git a/docker-for-azure/release-notes.md b/docker-for-azure/release-notes.md index b1b11a8cd1..061607ee5f 100644 --- a/docker-for-azure/release-notes.md +++ b/docker-for-azure/release-notes.md @@ -6,12 +6,21 @@ title: Docker for Azure Release Notes {% include d4a_buttons.md %} -## 1.13.0-1 -Release date: 1/18/2017 +## Stable Channel + +### 1.13.1-2 (stable) +Release date: 02/08/2017 {{azure_button_latest}} -### New +**New** + +- Docker Engine upgraded to [Docker 1.13.1](https://github.com/docker/docker/blob/master/CHANGELOG.md) + +### 1.13.0-1 +Release date: 1/18/2017 + +**New** - Docker Engine upgraded to [Docker 1.13.0](https://github.com/docker/docker/blob/master/CHANGELOG.md) - Writing to home directory no longer requires `sudo` @@ -19,22 +28,32 @@ Release date: 1/18/2017 - Added support to scale the number of nodes in manager and worker vm scale sets through Azure UI/CLI for managing the number of nodes in a scale set - Improved logging and remote diagnostics mechanisms for system containers -## 
1.13.0-beta12 +## Beta Channel + +### 1.13.1-beta18 +Release date: 02/16/2017 + +**New** + +- Docker Engine upgraded to [Docker 1.13.1](https://github.com/docker/docker/blob/master/CHANGELOG.md) +- Added Swarm wide support for [persistent storage volumes](persistent-data-volumes.md) + +### 1.13.0-beta12 Release date: 12/09/2016 -### New +**New** - Docker Engine upgraded to [Docker 1.13.0-rc2](https://github.com/docker/docker/blob/master/CHANGELOG.md) - SSH access has been added to the worker nodes - The Docker daemon no longer listens on port 2375 - Added a `swarm-exec` to execute a docker command across all of the swarm nodes. See [Executing Docker commands in all swarm nodes](deploy.md#execute-docker-commands-in-all-swarm-nodes) for more details. -## 1.12.3-beta10 +### 1.12.3-beta10 Release date: 11/08/2016 -### New +**New** - Docker Engine upgraded to Docker 1.12.3 - Fixed the shell container that runs on the managers, to remove a ssh host key that was accidentally added to the image. @@ -44,21 +63,21 @@ This could have led to a potential man in the middle (MITM) attack. The ssh host - All container logs can be found in the `xxxxlog` storage account - You can connect to each manager using SSH by following our deploy [guide](deploy.md) -## 1.12.2-beta9 +### 1.12.2-beta9 Release date: 10/17/2016 -### New +**New** - Docker Engine upgraded to Docker 1.12.2 - Manager behind its own LB - Added sudo support to the shell container on manager nodes -## 1.12.1-beta5 +### 1.12.1-beta5 Release date: 8/19/2016 -### New +**New** * Docker Engine upgraded to 1.12.1 @@ -66,11 +85,11 @@ Release date: 8/19/2016 * To assist with debugging, the Docker Engine API is available internally in the Azure VPC on TCP port 2375. These ports cannot be accessed from outside the cluster, but could be used from within the cluster to obtain privileged access on other cluster nodes. In future releases, direct remote access to the Docker API will not be available. 
-## 1.12.0-beta4 +### 1.12.0-beta4 Release date: 8/9/2016 -### New +**New** * First release diff --git a/docker-for-mac/images/mac-install-success-docker-cloud.png b/docker-for-mac/images/mac-install-success-docker-cloud.png new file mode 100644 index 0000000000..1e5a911bb2 Binary files /dev/null and b/docker-for-mac/images/mac-install-success-docker-cloud.png differ diff --git a/docker-for-mac/index.md b/docker-for-mac/index.md index 78ed6587fa..540326bd6b 100644 --- a/docker-for-mac/index.md +++ b/docker-for-mac/index.md @@ -17,8 +17,12 @@ Welcome to Docker for Mac! Docker is a full development platform for creating containerized apps, and Docker for Mac is the best way to get started with Docker on a Mac. -> **Got Docker for Mac?** If you have not yet installed Docker for Mac, please see [Install Docker for Mac](install.md) for an explanation of stable and beta channels, download and install information. - +> **Got Docker for Mac?** If you have not yet installed Docker for Mac, please see [Install Docker for Mac](install.md) for an explanation of stable and beta +channels, system requirements, and download/install information. +> +**Looking for system requirements?** Check out +[What to know before you install](install.md#what-to-know-before-you-install), which has moved to the new install topic. +{: id="what-to-know-before-you-install" } ## Check versions of Docker Engine, Compose, and Machine @@ -118,12 +122,7 @@ can set the following runtime options. diagnostics, crash reports, and usage data. This information can help Docker improve the application and get more context for troubleshooting problems. Uncheck this to opt out and prevent auto-send of data. Docker may prompt for - more information in some cases, even with auto-send enabled. Also, you can - enable or disable these auto-reporting settings with one click on the - information popup when you first start Docker. 
- - ![Startup information](/docker-for-mac/images/mac-install-success-docker-wait.png) - + more information in some cases, even with auto-send enabled. ### File sharing @@ -332,14 +331,24 @@ forum](https://forums.docker.com/c/docker-for-mac). To report bugs or problems, log on to [Docker for Mac issues on GitHub](https://github.com/docker/for-mac/issues), where you can review -community reported issues, and file new ones. See -[Diagnose problems, send feedback, and create GitHub issues](troubleshoot.md#diagnose-problems-send-feedback-and-create-github-issues). +community reported issues, and file new ones. See [Diagnose problems, send +feedback, and create GitHub +issues](troubleshoot.md#diagnose-problems-send-feedback-and-create-github-issues). As a part of reporting issues on GitHub, we can help you troubleshoot the log data. To give us feedback on the documentation or update it yourself, use the Feedback options at the bottom of each docs page. +## Docker Store + +Choose **Docker Store** from the Docker for Mac menu +to get to the Docker app downloads site. +[Docker store](https://store.docker.com/) is a +component of the next-generation Docker Hub, and the best place +to find compliant, trusted commercial and free software +distributed as Docker Images. + ## Where to go next * Try out the tutorials and sample app walkthroughs at [Learn Docker](/learn.md), including: diff --git a/docker-for-mac/install.md b/docker-for-mac/install.md index a4ce17f4ad..55e1a74626 100644 --- a/docker-for-mac/install.md +++ b/docker-for-mac/install.md @@ -61,15 +61,15 @@ channels, see the [FAQs](/docker-for-mac/faqs.md#stable-and-beta-channels). >**Important Notes**: > > - Docker for Mac requires OS X El Capitan 10.11 or newer macOS release running on a 2010 or -> newer Mac, with Intel's hardware support for MMU virtualization. The app will run on 10.10.3 Yosemite, but with limited support. 
Please see -> [What to know before you install](index.md#what-to-know-before-you-install) -> for a full explanation and list of prerequisites. + newer Mac, with Intel's hardware support for MMU virtualization. The app will run on 10.10.3 Yosemite, but with limited support. Please see + [What to know before you install](#what-to-know-before-you-install) + for a full explanation and list of prerequisites. > > - You can switch between beta and stable versions, but you must have only one -> app installed at a time. Also, you will need to save images and export -> containers you want to keep before uninstalling the current version before -> installing another. For more about this, see the -> [FAQs about beta and stable channels](faqs.md#stable-and-beta-channels). + app installed at a time. Also, you will need to save images and export + containers you want to keep before uninstalling the current version before + installing another. For more about this, see the + [FAQs about beta and stable channels](faqs.md#stable-and-beta-channels). ## What to know before you install @@ -140,10 +140,10 @@ channels, see the [FAQs](/docker-for-mac/faqs.md#stable-and-beta-channels). ![Whale in menu bar](/docker-for-mac/images/whale-in-menu-bar.png) If you just installed the app, you also get a success message with suggested - next steps and a link to this documentation. Click the whale ![whale](/docker-for-mac/images/whale-x.png)) + next steps and a link to this documentation. Click the whale (![whale](/docker-for-mac/images/whale-x.png)) in the status bar to dismiss this popup. - ![Docker success](images/mac-install-success-docker-ps.png) + ![Startup information](/docker-for-mac/images/mac-install-success-docker-cloud.png) 3. Click the whale (![whale-x](images/whale-x.png)) to get Preferences and other options. 
diff --git a/docker-for-mac/release-notes.md b/docker-for-mac/release-notes.md index 42cf8d9591..8439607f3a 100644 --- a/docker-for-mac/release-notes.md +++ b/docker-for-mac/release-notes.md @@ -281,6 +281,26 @@ events or unexpected unmounts. ## Beta Release Notes +### Docker Community Edition 17.03.0 Release Notes (2017-02-22 17.03.0-ce-rc1-mac1) + +**New** + +- Introduce Docker Community Edition +- Integration with Docker Cloud to control remote Swarms from the local CLI and view your repositories. This feature will be rolled out to all users progressively +- Docker will now use keychain access to secure your IDs + +**Upgrades** + +- Docker 17.03.0-ce-rc1 +- Linux Kernel 4.9.11 + +**Bug fixes and minor changes** + +- VPNKit: fixed unmarshalling of DNS packets containing pointers to pointers to labels +- osxfs: catch EPERM when reading extended attributes of non-files +- Added `page_poison=1` to boot args +- Added a new disk flushing option + ### Beta 41 Release Notes (2017-02-07-2017-1.13.1-rc2-beta41) **Upgrades** diff --git a/docker-for-windows/images/config-menu-solo.png b/docker-for-windows/images/config-menu-solo.png new file mode 100644 index 0000000000..15f83c640b Binary files /dev/null and b/docker-for-windows/images/config-menu-solo.png differ diff --git a/docker-for-windows/images/config-popup-menu-win-switch-containers.png b/docker-for-windows/images/config-popup-menu-win-switch-containers.png deleted file mode 100644 index 0f0f4e6072..0000000000 Binary files a/docker-for-windows/images/config-popup-menu-win-switch-containers.png and /dev/null differ diff --git a/docker-for-windows/images/config-popup-menu-win.png b/docker-for-windows/images/config-popup-menu-win.png index cc6e0e364c..0f872445c6 100644 Binary files a/docker-for-windows/images/config-popup-menu-win.png and b/docker-for-windows/images/config-popup-menu-win.png differ diff --git a/docker-for-windows/images/config-popup-win-linux-switch.png 
b/docker-for-windows/images/config-popup-win-linux-switch.png new file mode 100644 index 0000000000..76c7f6407e Binary files /dev/null and b/docker-for-windows/images/config-popup-win-linux-switch.png differ diff --git a/docker-for-windows/images/d4win-download-error.png b/docker-for-windows/images/d4win-download-error.png new file mode 100644 index 0000000000..a40087932d Binary files /dev/null and b/docker-for-windows/images/d4win-download-error.png differ diff --git a/docker-for-windows/images/docker-store.png b/docker-for-windows/images/docker-store.png new file mode 100644 index 0000000000..07ed3af5d4 Binary files /dev/null and b/docker-for-windows/images/docker-store.png differ diff --git a/docker-for-windows/images/win-install-success-popup-cloud.png b/docker-for-windows/images/win-install-success-popup-cloud.png new file mode 100644 index 0000000000..48d12b8124 Binary files /dev/null and b/docker-for-windows/images/win-install-success-popup-cloud.png differ diff --git a/docker-for-windows/index.md b/docker-for-windows/index.md index 524e3bf666..b6b7ebd1a1 100644 --- a/docker-for-windows/index.md +++ b/docker-for-windows/index.md @@ -17,7 +17,12 @@ Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows systems. -> **Got Docker for Windows?** If you have not yet installed Docker for Windows, please see [Install Docker for Windows](install.md) for an explanation of stable and beta channels, download and install information. +> **Got Docker for Windows?** If you have not yet installed Docker for Windows, please see [Install Docker for Windows](install.md) for an explanation of stable +and beta channels, system requirements, and download/install information. +> +**Looking for system requirements?** Check out +[What to know before you install](install.md#what-to-know-before-you-install), which has moved to the new install topic. 
+{: id="what-to-know-before-you-install" } ## Check versions of Docker Engine, Compose, and Machine @@ -262,7 +267,7 @@ PowerShell Module as follows. if (-Not (Test-Path $PROFILE)) { New-Item $PROFILE –Type File –Force } - + Add-Content $PROFILE "`nImport-Module posh-docker" ``` @@ -326,12 +331,9 @@ perform a factory reset. diagnostics, crash reports, and usage data. This information can help Docker improve the application and get more context for troubleshooting problems. - Uncheck any of the options to opt out and prevent auto-send of data. Docker - may prompt for more information in some cases, even with auto-send enabled. - Also, you can enable or disable these auto-reporting settings with one click - on the information popup when you first start Docker. - - ![Startup information](/docker-for-windows/images/win-install-success-popup.png) + Uncheck any of the options to opt out and prevent auto-send of + data. Docker may prompt for more information in some cases, + even with auto-send enabled. ### Shared Drives @@ -530,10 +532,12 @@ Linux VM. ### Switch between Windows and Linux containers -Starting with Beta 26 and Stable 1.13.0, you can select which daemon (Linux or Windows) the Docker +You can select which daemon (Linux or Windows) the Docker CLI talks to. Select **Switch to Windows containers** to toggle to Windows containers. Select **Switch to Linux containers**. +![Windows-Linux container types switch](images/config-popup-win-linux-switch.png) + Microsoft Developer Network has preliminary/draft information on Windows containers [here](https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/about_overview). @@ -605,12 +609,21 @@ forum](https://forums.docker.com/c/docker-for-windows). To report bugs or problems, log on to [Docker for Windows issues on GitHub](https://github.com/docker/for-win/issues), where you can review community reported issues, and file new ones. 
As a part of reporting issues on -GitHub, we can help you troubleshoot the log data. See the [Diagnose and -Feedback](#diagnose-and-feedback) topic below. +GitHub, we can help you troubleshoot the log data. See the +[Diagnose and Feedback](#diagnose-and-feedback) topic below. To give feedback on the documentation or update it yourself, use the Feedback options at the bottom of each docs page. +### Docker Store + +Choose **Docker Store** from the Docker for Windows menu +to get to the Docker app downloads site. +[Docker store](https://store.docker.com/) is a +component of the next-generation Docker Hub, and the best place +to find compliant, trusted commercial and free software +distributed as Docker Images. + ### Diagnose and Feedback If you encounter problems for which you do not find solutions in this documentation, searching [Docker for Windows issues on GitHub](https://github.com/docker/for-win/issues) already filed by other users, or on the [Docker for Windows forum](https://forums.docker.com/c/docker-for-windows), we can help you troubleshoot the log data. diff --git a/docker-for-windows/install.md b/docker-for-windows/install.md index 6a7d9e217b..11939f013f 100644 --- a/docker-for-windows/install.md +++ b/docker-for-windows/install.md @@ -65,7 +65,7 @@ beta channels, see the > - Docker for Windows requires 64bit Windows 10 Pro, Enterprise and Education > (1511 November update, Build 10586 or later) and Microsoft Hyper-V. Please > see -> [What to know before you install](/docker-for-windows/index.md#what-to-know-before-you-install) +> [What to know before you install](/docker-for-windows/#what-to-know-before-you-install) > for a full list of prerequisites. > > - You can switch between beta and stable versions, but you must have only one @@ -109,7 +109,7 @@ guarantees (i.e., not officially supported). For more information, see Looking for information on using Windows containers? 
-* [Switch between Windows and Linux containers](#switch-between-windows-and-linux-containers) describes the Linux / Windows containers toggle in Docker for Windows and points you to the tutorial mentioned above. +* [Switch between Windows and Linux containers](/docker-for-windows/index.md#switch-between-windows-and-linux-containers) describes the Linux / Windows containers toggle in Docker for Windows and points you to the tutorial mentioned above.

* [Getting Started with Windows Containers (Lab)](https://github.com/docker/labs/blob/master/windows/windows-containers/README.md) provides a tutorial on how to set up and run Windows containers on Windows 10 or @@ -142,7 +142,7 @@ The whale in the status bar indicates that Docker is running, and accessible fro If you just installed the app, you also get a popup success message with suggested next steps, and a link to this documentation. -![Install success](/docker-for-windows/images/win-install-success-popup.png) +![Startup information](/docker-for-windows/images/win-install-success-popup-cloud.png) When initialization is complete, select **About Docker** from the notification area icon to verify that you have the latest version. diff --git a/docker-for-windows/release-notes.md b/docker-for-windows/release-notes.md index 0d8f423744..63628e06d4 100644 --- a/docker-for-windows/release-notes.md +++ b/docker-for-windows/release-notes.md @@ -9,7 +9,7 @@ title: Docker for Windows Release notes Here are the main improvements and issues per release, starting with the current release. The documentation is always updated for each release. -For system requirements, please see +For system requirements, please see [What to know before you install](install.md#what-to-know-before-you-install). Release notes for _stable_ and _beta_ releases are listed below. You can learn @@ -269,6 +269,27 @@ We did not distribute a 1.12.4 stable release ## Beta Release Notes +### Docker Community Edition 17.03.0 Release Notes (2017-02-22 17.03.0-ce-rc1) + +**New** + +- Introduce Docker Community Edition +- Integration with Docker Cloud: control remote Swarms from the local CLI and view your repositories. This feature will be rolled out to all users progressively. 
+
+**Upgrades**
+
+- Docker 17.03.0-ce-rc1
+- Linux Kernel 4.9.11
+
+**Bug fixes and minor changes**
+
+- VPNKit: Fixed unmarshalling of DNS packets containing pointers to pointers to labels
+- Match Hyper-V Integration Services by ID, not name
+- Don't consume 100% CPU when the service is stopped
+- Log the diagnostic ID when uploading
+- Improved Firewall handling: stop listing the rules since it can take a lot of time
+- Don't rollback to the previous engine when the desired engine fails to start
+
 ### Beta 41 Release Notes (2017-02-07 1.13.1-rc2-beta41)

 **Upgrades**
diff --git a/engine/admin/logging/etwlogs.md b/engine/admin/logging/etwlogs.md
index 74868b2361..af79e0e924 100644
--- a/engine/admin/logging/etwlogs.md
+++ b/engine/admin/logging/etwlogs.md
@@ -1,6 +1,8 @@
 ---
 description: Describes how to use the etwlogs logging driver.
 keywords: ETW, docker, logging, driver
+title: ETW logging driver
+---

 The ETW logging driver forwards container logs as ETW events.
 ETW stands for Event Tracing in Windows, and is the common framework
@@ -54,6 +56,6 @@ Here is an example event message:

 A client can parse this message string to get both the log message, as well as
 its context information. Note that the time stamp is also available within the
 ETW event.

-**Note** This ETW provider emits only a message string, and not a specially
-structured ETW event. Therefore, it is not required to register a manifest file
-with the system to read and interpret its ETW events.
+> **Note:** This ETW provider emits only a message string, and not a specially
+> structured ETW event. Therefore, it is not required to register a manifest file
+> with the system to read and interpret its ETW events.
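The etwlogs section above notes that a client can parse the emitted message string to recover both the log text and its context fields. A minimal sketch of such a parser, assuming a comma-separated `key: value` layout that ends with a free-form `log:` field (the field names below are illustrative; check the driver's actual message format before relying on them):

```python
def parse_etw_message(message: str) -> dict:
    """Parse 'k1: v1, k2: v2, ..., log: free text' into a dict.

    Everything after the first 'log:' key is kept verbatim, since the
    log text itself may contain commas.
    """
    head, sep, log_text = message.partition(", log: ")
    if not sep:
        raise ValueError("no 'log:' field found in message")
    fields = {}
    for pair in head.split(", "):
        key, _, value = pair.partition(": ")
        fields[key] = value
    fields["log"] = log_text
    return fields

# Hypothetical message in the assumed shape (names are illustrative only).
example = ("container_name: backstabbing_spence, image_name: windowsservercore, "
           "container_id: f14bb55aa862, image_id: sha256:2f9e19bd, "
           "source: stdout, log: Hello world!")
parsed = parse_etw_message(example)
print(parsed["source"], "->", parsed["log"])  # prints: stdout -> Hello world!
```

Since the provider emits a plain string rather than a structured ETW event, this kind of string parsing is all a consumer needs; no manifest registration is involved.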
diff --git a/engine/admin/logging/overview.md b/engine/admin/logging/overview.md index d9d63840cd..93cf100d79 100644 --- a/engine/admin/logging/overview.md +++ b/engine/admin/logging/overview.md @@ -37,7 +37,7 @@ Logging Driver: json-file When you start a container, you can configure it to use a different logging driver than the Docker daemon's default. If the logging driver has configurable options, you can set them using one or more instances of the -`--log-opt -` flag. Even if the container uses the default logging +`--log-opt =` flag. Even if the container uses the default logging driver, it can use different configurable options. To find the current logging driver for a running container, if the daemon @@ -281,7 +281,7 @@ $ docker run -dit \ ``` For detailed information on working with the `fluentd` logging driver, see -[the fluentd logging driver](fluentd.md) +[the fluentd logging driver](fluentd.md). ## `awslogs` @@ -417,7 +417,7 @@ $ docker run --log-driver=gcplogs \ ``` For detailed information about working with the Google Cloud logging driver, see -the [Google Cloud Logging driver](gcplogs.md). reference documentation. +the [Google Cloud Logging driver](gcplogs.md) reference documentation. ## NATS logging options diff --git a/engine/admin/logging/view_container_logs.md b/engine/admin/logging/view_container_logs.md index de54183d7a..ec6d3ee234 100644 --- a/engine/admin/logging/view_container_logs.md +++ b/engine/admin/logging/view_container_logs.md @@ -16,7 +16,7 @@ the keyboard or input from another command. `STDOUT` is usually a command's normal output, and `STDERR` is typically used to output error messages. By default, `docker logs` shows the command's `STDOUT` and `STDERR`. To read more about I/O and Linux, see the -[Linux Documentation Project article on I/O redirection](http://www.tldp.org/LDP/abs/html/io-redirection.html) +[Linux Documentation Project article on I/O redirection](http://www.tldp.org/LDP/abs/html/io-redirection.html). 
In some cases, `docker logs` may not show useful information unless you take additional steps. diff --git a/engine/admin/resource_constraints.md b/engine/admin/resource_constraints.md index c70f84b60e..ad9825b6bf 100644 --- a/engine/admin/resource_constraints.md +++ b/engine/admin/resource_constraints.md @@ -97,7 +97,7 @@ and higher, you can also configure the The CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags allow you to configure the amount of access to CPU resources your -container has. When you use these settings, Docker modifies the settings for the +container has. When you use these settings, Docker modifies the settings for the container's cgroup on the host machine. | Option | Description | diff --git a/engine/api/index.md b/engine/api/index.md index c33d08e459..e4112b85a6 100644 --- a/engine/api/index.md +++ b/engine/api/index.md @@ -7,7 +7,7 @@ redirect_from: - /reference/api/docker_remote_api/ --- -The Engine API is the API served by Docker Engine. It allows you to control every aspect of Docker from within your own applications. You to build tools to manage and monitor applications running on Docker, and even use it to build apps on Docker itself. +The Engine API is the API served by Docker Engine. It allows you to control every aspect of Docker from within your own applications, build tools to manage and monitor applications running on Docker, and even use it to build apps on Docker itself. It is the API the Docker client uses to communicate with the Engine, so everything the Docker client can do can be done with the API. For example: diff --git a/engine/faq.md b/engine/faq.md index 1aeb42cddb..537c99ecc5 100644 --- a/engine/faq.md +++ b/engine/faq.md @@ -156,6 +156,7 @@ Linux: [SuSE](installation/linux/suse.md), and many others. 
Microsoft Windows: + - Windows Server 2016 - Windows 10 diff --git a/engine/getstarted-voting-app/cleanup.md b/engine/getstarted-voting-app/cleanup.md new file mode 100644 index 0000000000..b282ff9c2e --- /dev/null +++ b/engine/getstarted-voting-app/cleanup.md @@ -0,0 +1,120 @@ +--- +description: Graceful shutdown, reboot, clean-up +keywords: voting app, docker-machine +title: Graceful shutdown, reboot, and clean-up +--- + + +The voting app will continue to run on the swarm while the `manager` and `worker` machines are running, unless you explicitly stop it. + +## Stopping the voting app + +To shut down the voting app, simply stop the machines on which it is running. If you are using local hosts, follow the steps below. If you are using cloud hosts, stop them per your cloud setup. + +1. Open a terminal window and run `docker-machine ls` to list the current machines. + + ``` + $ docker-machine ls + NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS + manager - virtualbox Running tcp://192.168.99.100:2376 v1.13.1 + worker - virtualbox Running tcp://192.168.99.101:2376 v1.13.1 + ``` +2. Use `docker-machine stop` to shut down each machine, beginning with the worker. + + ``` + $ docker-machine stop worker + Stopping "worker"... + Machine "worker" was stopped. + + $ docker-machine stop manager + Stopping "manager"... + Machine "manager" was stopped. + ``` + +## Restarting the voting app + +If you want to come back to your `manager` and `worker` machines later, you can +keep them around. One advantage of this is that you can simply restart the +machines to launch the sample voting app again. + +To restart local machines, follow the steps below. To restart cloud instances, +start them per your cloud setup. + +1. Open a terminal window and list the machines. + + ``` + $ docker-machine ls + NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS + manager - virtualbox Stopped Unknown + worker - virtualbox Stopped Unknown + ``` + +3. 
Run `docker-machine start` to start each machine, beginning with the manager. + + ``` + $ docker-machine start manager + Starting "manager"... + (manager) Check network to re-create if needed... + (manager) Waiting for an IP... + Machine "manager" was started. + Waiting for SSH to be available... + Detecting the provisioner... + Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command. + + $ docker-machine start worker + Starting "worker"... + (worker) Check network to re-create if needed... + (worker) Waiting for an IP... + Machine "worker" was started. + Waiting for SSH to be available... + Detecting the provisioner... + Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command. + ``` + +3. Run the following commands to log into the manager and see if the swarm is up. + + ``` + docker-machine ssh manager + + docker@manager:~$ docker stack services vote + ID NAME MODE REPLICAS IMAGE + 74csdxb99tg9 vote_visualizer replicated 1/1 dockersamples/visualizer:stable + jm0g1vahcid9 vote_redis replicated 2/2 redis:alpine + mkk6lee494t4 vote_db replicated 1/1 postgres:9.4 + o3sl1wr35yd6 vote_worker replicated 1/1 dockersamples/examplevotingapp_worker:latest + qcc8dw2zafc1 vote_vote replicated 2/2 dockersamples/examplevotingapp_vote:after + x5wcvknlnnh7 vote_result replicated 1/1 dockersamples/examplevotingapp_result:after + ``` + +At this point, the app is back up. The web pages you looked at in the [test drive](test-drive.md) should be available, and you could experiment, modify the app, and [redeploy](customize-app.md). + +## Removing the machines + +If you prefer to remove your local machines altogether, use `docker-machine rm` +to do so. (Or, `docker-machine rm -f` will force-remove running machines.) + +``` +$ docker-machine rm worker +About to remove worker +WARNING: This action will delete both local reference and remote instance. +Are you sure? 
(y/n): y +Successfully removed worker + +$ docker-machine rm manager +About to remove manager +WARNING: This action will delete both local reference and remote instance. +Are you sure? (y/n): y +Successfully removed manager +``` + +The Docker images you pulled were all running on the virtual machines you +created (either local or cloud), so no other cleanup of images or processes is +needed once you stop and/or remove the virtual machines. + +## What's next? + +See the [Docker Machine topics](/machine/overview/) for more on working +with `docker-machine`. + +Check out the [list of resources](customize-app.md#resources) for more on Docker +labs, sample apps, and swarm mode. diff --git a/engine/getstarted-voting-app/customize-app.md b/engine/getstarted-voting-app/customize-app.md index 2a0099e017..9626bc694b 100644 --- a/engine/getstarted-voting-app/customize-app.md +++ b/engine/getstarted-voting-app/customize-app.md @@ -4,7 +4,6 @@ keywords: multi-container, services, swarm mode, cluster, voting app, docker-sta title: Customize the app and redeploy --- - In this step, we'll make a simple change to the application and redeploy it. We'll change the focus of the poll from Cats or Dogs to .NET or Java. @@ -72,7 +71,6 @@ did not change. * [Introducing Docker 1.13.0](https://blog.docker.com/2017/01/whats-new-in-docker-1-13/) blog post from [Docker Core Engineering](https://blog.docker.com/author/core_eng/) - * A deeper dive voting app walkthrough is available on [Docker Labs](https://github.com/docker/labs/) as [Deploying an app to a Swarm](https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.md). The lab walkthrough provides more technical detail on deployment configuration, @@ -88,10 +86,17 @@ networking. Docker and topics on images in [Learn by example](/engine/tutorials/index.md). 
* For more about the `docker-stack.yml` and the `docker stack deploy` commands, -see [deploy](/compose/compose-file.md#deploy) in the [Compose file -reference](/compose/compose-file.md) and [`docker stack -deploy`](/engine/reference/commandline/stack_deploy.md) in the Docker Engine -command line reference. +see [deploy](/compose/compose-file.md#deploy) in the Compose file reference and +[`docker stack deploy`](/engine/reference/commandline/stack_deploy.md) +in the Docker Engine command line reference. -* To learn more about swarm mode, start with the +* To learn about all new features in Compose, see +[Compose file version 3 reference](/compose/compose-file/index.md) and +[Compose file versions and upgrading](/compose/compose-file/compose-versioning.md). + +* For more about swarm mode, start with the [Swarm mode overview](/engine/swarm/index.md). + +## What's next? + +* To learn about shutting down the sample app and cleaning up, see [Graceful shutdown, reboot, and clean-up](cleanup.md). diff --git a/engine/getstarted-voting-app/deploy-app.md b/engine/getstarted-voting-app/deploy-app.md index 8806ecacd5..297a05d1de 100644 --- a/engine/getstarted-voting-app/deploy-app.md +++ b/engine/getstarted-voting-app/deploy-app.md @@ -11,7 +11,7 @@ deploy the voting application to the swarm you just created. The `docker-stack.yml` file must be located on a manager for the swarm where you want to deploy the application stack. -1. Get `docker-stack.yml` either from the [source code in the lab](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml) or by copying it from the example given [here](https://docs.docker.com/engine/getstarted-voting-app/#/docker-stackyml-deployment-configuration-file). +1. Get `docker-stack.yml` either from the [source code in the lab](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml) or by copying it from the example given [here](index.md#docker-stackyml-deployment-configuration-file). 
If you prefer to download the file directly from our GitHub repository rather than copy it from the documentation, you can use a tool like `curl`. This command downloads the raw file to the current directory on your local host. You can copy-paste it into your shell if you have `curl`: @@ -79,12 +79,12 @@ We'll deploy the application from the manager. ``` docker@manager:~$ docker stack services vote ID NAME MODE REPLICAS IMAGE - 0y3q6lgc0drn vote_result replicated 2/2 dockersamples/examplevotingapp_result:before - fvsaqvuec4yw vote_redis replicated 2/2 redis:alpine - igev2xk5s3zo vote_worker replicated 1/1 dockersamples/examplevotingapp_worker:latest - vpfjr9b0qc01 vote_visualizer replicated 1/1 dockersamples/visualizer:stable - wctxjnwl22k4 vote_vote replicated 2/2 dockersamples/examplevotingapp_vote:before - zp0zyvgaguox vote_db replicated 1/1 postgres:9.4 + 1zkatkq7sf8n vote_result replicated 1/1 dockersamples/examplevotingapp_result:after + hphnxyt93h42 vote_redis replicated 2/2 redis:alpine + jd0wafumrcil vote_vote replicated 2/2 dockersamples/examplevotingapp_vote:after + msief4cqme29 vote_visualizer replicated 1/1 dockersamples/visualizer:stable + qa6y8sfmtjoz vote_db replicated 1/1 postgres:9.4 + w04bh1vumnep vote_worker replicated 1/1 dockersamples/examplevotingapp_worker:latest ``` ## What's next? 
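The `docker stack services` output shown above reports each service's state in the REPLICAS column as `running/desired` counts, so a script can tell whether a stack has converged. A rough sketch, assuming the whitespace-separated column layout of the sample output (real column widths vary, and service names are from the example above):

```python
def unconverged(services_output: str) -> list:
    """Return names of services whose REPLICAS column is not n/n."""
    lagging = []
    lines = services_output.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        cols = line.split()
        name, replicas = cols[1], cols[3]  # NAME and REPLICAS columns
        running, desired = (int(n) for n in replicas.split("/"))
        if running != desired:
            lagging.append(name)
    return lagging

sample = """\
ID            NAME         MODE        REPLICAS  IMAGE
1zkatkq7sf8n  vote_result  replicated  1/1       dockersamples/examplevotingapp_result:after
hphnxyt93h42  vote_redis   replicated  1/2       redis:alpine
"""
print(unconverged(sample))  # prints ['vote_redis']
```

An empty result means every service has as many running replicas as the stack file requested.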
diff --git a/engine/getstarted-voting-app/images/customize-after.png b/engine/getstarted-voting-app/images/customize-after.png index 97d0c88fb0..7500d1c74a 100644 Binary files a/engine/getstarted-voting-app/images/customize-after.png and b/engine/getstarted-voting-app/images/customize-after.png differ diff --git a/engine/getstarted-voting-app/images/customize-before.png b/engine/getstarted-voting-app/images/customize-before.png index e0d5144cfc..d3d824f1fd 100644 Binary files a/engine/getstarted-voting-app/images/customize-before.png and b/engine/getstarted-voting-app/images/customize-before.png differ diff --git a/engine/getstarted-voting-app/images/replicas-constraint.png b/engine/getstarted-voting-app/images/replicas-constraint.png index a0f6f6c943..ad5628792b 100644 Binary files a/engine/getstarted-voting-app/images/replicas-constraint.png and b/engine/getstarted-voting-app/images/replicas-constraint.png differ diff --git a/engine/getstarted-voting-app/images/visualizer-2.png b/engine/getstarted-voting-app/images/visualizer-2.png index 8935edf927..29ea9ca59e 100644 Binary files a/engine/getstarted-voting-app/images/visualizer-2.png and b/engine/getstarted-voting-app/images/visualizer-2.png differ diff --git a/engine/getstarted-voting-app/images/visualizer-manager-constraint.png b/engine/getstarted-voting-app/images/visualizer-manager-constraint.png index 610e94ddaa..435857e679 100644 Binary files a/engine/getstarted-voting-app/images/visualizer-manager-constraint.png and b/engine/getstarted-voting-app/images/visualizer-manager-constraint.png differ diff --git a/engine/getstarted-voting-app/images/visualizer.png b/engine/getstarted-voting-app/images/visualizer.png index e235428007..86bed62323 100644 Binary files a/engine/getstarted-voting-app/images/visualizer.png and b/engine/getstarted-voting-app/images/visualizer.png differ diff --git a/engine/getstarted-voting-app/index.md b/engine/getstarted-voting-app/index.md index 842cf0395b..cfdddafe13 100644 --- 
a/engine/getstarted-voting-app/index.md +++ b/engine/getstarted-voting-app/index.md @@ -17,18 +17,17 @@ Docker Engine. ## Got Docker? -If you haven't yet downloaded Docker or installed it, go to -[Get Docker](/engine/getstarted/step_one.md#step-1-get-docker) -and grab Docker for your platform. You can follow along and -run this example using Docker for Mac, Docker for Windows or -Docker for Linux. +If you haven't yet downloaded Docker or installed it, go to [Get +Docker](/engine/getstarted/step_one.md#step-1-get-docker) and grab Docker for +your platform. You can follow along and run this example using Docker for Mac, +Docker for Windows or Docker for Linux. -Once you have Docker installed, you can run `docker run hello-world` -or other commands described in the Get Started with Docker -tutorial to [verify your installation](/engine/getstarted/step_one.md#step-3-verify-your-installation). -If you are totally new to Docker, you might continue through -the full [Get Started with Docker tutorial](/engine/getstarted/index.md) -first, then come back. +Once you have Docker installed, you can run `docker run hello-world` or other +commands described in the Get Started with Docker tutorial to [verify your +installation](/engine/getstarted/step_one.md#step-3-verify-your-installation). +If you are totally new to Docker, you might continue through the full [Get +Started with Docker tutorial](/engine/getstarted/index.md) first, then come +back. ## What you'll learn and do @@ -53,7 +52,7 @@ A [service](/engine/reference/glossary.md#service) is a bit of executable code d a specific task. A service can run in one or more containers. Defining a service configuration for your app (above and beyond `docker run` commands in a Dockerfile) enables you to -deploy the app to a swarm and manage it as a distributed, +deploy the app to a swarm and manage it as a distributed, multi-container application. 
The voting app you are about to deploy is composed @@ -98,9 +97,7 @@ The `depends_on` key allows you to specify that a service is only deployed after another service. In our example, `vote` only deploys after `redis`. -The `deploy` key specifies aspects of a swarm deployment, as described below in -[Compose Version 3 features and -compatibility](#compose-v3-features-and-compatibility). +The `deploy` key specifies aspects of a swarm deployment. For example, in this configuration we create _replicas_ of the `vote` and `result` services (2 containers of each will be deployed to the swarm), and we constrain some services (`db` and `visualizer`) to run only on a `manager` node. ## docker-stack.yml deployment configuration file @@ -118,15 +115,15 @@ For this tutorial, the images are pre-built, and we will use `docker-stack.yml` (a Version 3 Compose file) instead of a Dockerfile to run the images. When we deploy, each image will run as a service in a container (or in multiple containers, for those that have replicas defined to -scale the app). +scale the app). This example relies on Compose version 3, which is designed to be compatible with Docker Engine swarm mode. To follow along with the example, you need only have Docker running and the copy of `docker-stack.yml` we provide [here](https://github.com/docker/example-voting-app/blob/master/docker-stack.yml). -This file defines all the services shown in the [table -above](#services-and-images-overview), their base images, configuration details -such as ports, networks, volumes, application dependencies, and the swarm -configuration. +This file defines all the services shown in the +[table above](#services-and-images-overview), their base images, +configuration details such as ports, networks, volumes, +application dependencies, and the swarm configuration. 
``` version: "3" @@ -177,7 +174,7 @@ services: depends_on: - db deploy: - replicas: 2 + replicas: 1 update_config: parallelism: 2 delay: 10s @@ -198,6 +195,8 @@ services: delay: 10s max_attempts: 3 window: 120s + placement: + constraints: [node.role == manager] visualizer: image: dockersamples/visualizer:stable @@ -218,70 +217,16 @@ volumes: db-data: ``` -## Compose Version 3 features and compatibility +## Docker stacks and services -To deploy the voting application, we will run the `docker stack deploy` command -with this `docker-stack.yml` file to pull the referenced images and launch the -services in a swarm as configured in the `.yml`. +To deploy the voting app, we will run the `docker stack deploy` command with +this `docker-stack.yml` file to pull the referenced images and launch the +services in a swarm. This allows us to run the application across multiple +servers, and use swarm mode for load balancing and performance. Rather than +thinking about running individual containers, we can start to model deployments +as application stacks and services. -Note that at the top of the `docker-stack.yml` file, the version is indicated as -`version: "3" `. The voting app example relies on Compose version 3, which is -designed to be cross-compatible with Compose and Docker Engine swarm mode. - -Before we get started, let's take a look at some aspects of Compose files and -deployment options that are new in Compose Version 3, and that we want to highlight in -this walkthrough. - -- [docker-stack.yml](#docker-stackyml) - - [deploy key and swarm mode](#deploy-key-and-swarm-mode) -- [docker stack deploy command](#docker-stack-deploy-command) -- [Docker stacks and services](#docker-stacks-and-services) - -### docker-stack.yml - -`docker-stack.yml` is a new type of [Compose file](/compose/compose-file.md) -only compatible with Compose Version 3. 
- -#### deploy key and swarm mode - -The [deploy](/compose/compose-file.md#deploy) key allows you to specify various properties of a swarm deployment. - -For example, the voting app configuration uses this to create replicas of the -`vote` and `result` services (2 containers of each will be deployed to the -swarm). - -The voting app also uses the `deploy` key to constrain some services to run only -on a manager node. - -### docker stack deploy command - -[docker stack deploy](/engine/reference/commandline/stack_deploy.md) -is the command we will use to deploy with `docker-stack.yml`. - -* This command supports only `version: "3" ` Compose files. - -* It does not support the `build` key supported in standard Compose files, which -builds based on a Dockerfile. You need to use pre-built images -with `docker stack deploy`. - -* It can take the place of running `docker compose up` to run -Version 3 compatible applications. - -### Docker stacks and services - -Taken together, these new options can help when you want to configure an app to -run its component functions across multiple servers, and use swarm mode for load -balancing and performance. Rather than thinking about running individual -containers, we can start to model deployments as application stacks and -services. - -### Compose file reference - -For more on what's new in Compose Version 3: - -* [Introducing Docker 1.13.0](https://blog.docker.com/2017/01/whats-new-in-docker-1-13/) blog post from [Docker Core Engineering](https://blog.docker.com/author/core_eng/) - -* [Versioning](/compose/compose-file.md#versioning), [Version 3](/compose/compose-file.md#version-3), and [Upgrading](/compose/compose-file.md#upgrading) in [Compose file reference](/compose/compose-file.md) +If you are interested in learning more about new Compose version 3.x features, Docker Engine 1.13.x, and swarm mode integration, check out the [list of resources](customize-app.md#resources) at the end of this tutorial. ## What's next? 
diff --git a/engine/getstarted-voting-app/node-setup.md b/engine/getstarted-voting-app/node-setup.md index 48aeb4c7d5..5d311e77ec 100644 --- a/engine/getstarted-voting-app/node-setup.md +++ b/engine/getstarted-voting-app/node-setup.md @@ -9,25 +9,66 @@ for the swarm nodes. You could create these Docker hosts on different physical machines, virtual machines, or cloud providers. For this example, we use [Docker Machine](/machine/get-started.md) to create two -virtual machines on a single system. (See [Docker Machine -Overview](/machine/overview.md) to learn more.) We'll also verify the setup, and +virtual machines on a single system. We'll also verify the setup, and run some basic commands to interact with the machines. +## Prerequisites + +* **Docker Machine** - These steps rely on use of +[Docker Machine](/machine/get-started.md) (`docker-machine`), which +comes auto-installed with both Docker for Mac and Docker for Windows. + +

+ +* **VirtualBox driver on Docker for Mac** - On Docker for Mac, you'll +use `docker-machine` with the `virtualbox` driver to create machines. If you had +a legacy installation of Docker Toolbox, you already have Oracle VirtualBox +installed as part of that. If you started fresh with Docker for Mac, then you +need to install VirtualBox independently. We recommend doing this rather than +using the Toolbox installer because it can +[conflict](/docker-for-mac/docker-toolbox.md) with Docker for Mac. You can +[download VirtualBox for `OS X hosts` +here](https://www.virtualbox.org/wiki/Downloads), and follow the install +instructions. You do not need to start VirtualBox. The `docker-machine create` +command will call it via the driver. +

+* **Hyper-V driver on Docker for Windows** - On Docker for Windows, you +will use `docker-machine` with the [`Hyper-V`](/machine/drivers/hyper-v/) driver +to create machines. You will need to follow the instructions in the [Hyper-V +example](/machine/drivers/hyper-v#example) reference topic to set up a new +external network switch (a one-time task), reboot, and then +[create the machines (nodes)](/machine/drivers/hyper-v.md#create-the-nodes-with-docker-machine-and-the-microsoft-hyper-v-driver) +in an elevated PowerShell per those instructions. + +### Commands to create machines + +The Docker Machine commands to create local virtual machines on Mac and Windows +are as follows. + +#### Mac + +``` +docker-machine create --driver virtualbox MACHINE-NAME +``` + +#### Windows + +``` +docker-machine create -d hyperv --hyperv-virtual-switch "NETWORK-SWITCH" +MACHINE-NAME +``` + +This must be done in an elevated PowerShell, using a custom-created external network switch. See [Hyper-V example](/machine/drivers/hyper-v#example). + ## Create manager and worker machines -The Docker Machine command to create a local virtual machine is: - -``` -docker-machine create --driver virtualbox -``` - Create two machines and name them to anticipate what their roles will be in the swarm: * manager * worker -Here is an example of creating the `manager`. Create this one, then do the same for `worker`. +Here is an example of creating the `manager` on Docker for Mac. Create this one, then do the same for `worker`. ``` $ docker-machine create --driver virtualbox manager @@ -80,11 +121,20 @@ You will need the IP address of the manager for a later step. ## Interacting with the machines -There are several ways to interact with these machines directly on the command line or programatically. We'll cover two methods for managing the machines directly from the command line: +There are a few ways to interact with these machines directly on the command +line or programmatically. 
We'll cover two methods for managing the machines +directly from the command line: + +* [Manage the machines from a pre-configured shell](#manage-the-machines-from-a-pre-configured-shell) + +* [`docker-machine ssh` into a machine](#ssh-into-a-machine) #### Manage the machines from a pre-configured shell -You can use `docker-machine` to set up environment variables in a shell that connect to the Docker client on a virtual machine. With this setup, the Docker commands you type in your local shell will run on the given machine. We'll set up a shell to talk to our manager machine. +You can use `docker-machine` to set up environment variables in a shell that +connect the Docker client to the Docker daemon on a virtual machine. With this setup, the Docker +commands you type in your local shell will run on the given machine. As an +example, we'll set up a shell to talk to our manager machine. 1. Run `docker-machine env manager` to get environment variables for the manager. @@ -100,15 +150,22 @@ You can use `docker-machine` to set up environment variables in a shell that con ``` 2. Connect your shell to the manager. + + On Mac: + ``` $ eval $(docker-machine env manager) ``` - This sets environment variables for the current shell that the Docker client will read which specify the TLS settings. + On Windows PowerShell: - **Note**: If you are using `fish`, or a Windows shell such as - Powershell/`cmd.exe` the above method will not work as described. - Instead, see the [env command reference documentation](/machine/reference/env.md) to learn how to set the environment variables for your shell. + ``` + & docker-machine.exe env manager | Invoke-Expression + ``` + + This sets [environment variables](/machine/reference/env.md) for the current +shell. The rest of the `docker-machine` commands we cover are the same on both +Mac and Windows. 3. Run `docker-machine ls` again. 
@@ -119,11 +176,19 @@ You can use `docker-machine` to set up environment variables in a shell that con worker - virtualbox Running tcp://192.168.99.101:2376 v1.13.0-rc6 ``` - The asterisk next `manager` indicates that the current shell is connected to that machine. Docker commands run in this shell will execute on the `manager.` (Note that you could change this by re-running the above commands to connect to the `worker`, or open multiple terminals to talk to multiple machines.) + The asterisk next to `manager` indicates that the current shell is connected to +that machine. Docker commands run in this shell will execute on the `manager`. +(Note that you could change this by re-running the above commands to connect to +the `worker`, or open multiple terminals to talk to multiple machines.) + +If you use this method, you'll need to re-configure the environment setup each +time you want to switch between the manager and the worker, or keep two shells +open. #### ssh into a machine -You can use the command `docker-machine ssh ` to log into a machine: +Alternatively, you can use the command `docker-machine ssh <MACHINE-NAME>` to +log into a machine. ``` $ docker-machine ssh manager @@ -147,8 +212,15 @@ Boot2Docker version 1.13.0-rc6, build HEAD : 5ab2289 - Wed Jan 11 23:37:52 UTC 2 Docker version 1.13.0-rc6, build 2f2d055 ``` -We'll use this method later in the example. +You _do not_ have to set up `docker-machine` environment variables, as in the +previous section, for `docker-machine ssh` to work. You can run this command in that +same shell you configured to talk to the manager, or in a new one, and it will +work either way. + +This tutorial will employ the `docker-machine ssh` method to run commands on the +machines, but which approach you use is really a matter of personal preference. ## What's next? -In the next step, we'll [create a swarm](create-swarm.md) across these two Docker machines. +In the next step, we'll [create a swarm](create-swarm.md) across these two +Docker machines. 
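As a quick recap of the two approaches above, a typical session might look like the following. This is a sketch, not part of the official walkthrough, and it assumes the `manager` and `worker` machines created earlier plus a local `docker-machine` install:

```
# Method 1: point the current shell's Docker client at a machine
$ eval $(docker-machine env manager)
$ docker-machine ls    # the asterisk marks the machine this shell talks to

# Re-run env to switch the same shell over to the worker
$ eval $(docker-machine env worker)

# Method 2: run a command on a machine over SSH, no shell setup needed
$ docker-machine ssh manager "docker info"
```

Either way, the Docker commands execute on the virtual machine rather than on your local host.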
diff --git a/engine/getstarted-voting-app/test-drive.md b/engine/getstarted-voting-app/test-drive.md index fd50211963..4617161c04 100644 --- a/engine/getstarted-voting-app/test-drive.md +++ b/engine/getstarted-voting-app/test-drive.md @@ -20,7 +20,7 @@ Click on either cats or dogs to vote. ## View the results tally -Now, go to `5001` in a web browser to view the voting results tally, as one might do in the role of poll coordinator. The tally is shown by percentage in the current configuration of the app. +Now, go to `MANAGER-IP:5001` in a web browser to view the voting results tally, as one might do in the role of poll coordinator. The tally is shown by percentage in the current configuration of the app. ![Results web page](images/vote-results.png) @@ -48,9 +48,10 @@ action here. For example: * Two of the services are replicated: * `vote` (represented in the visualizer by `vote_vote`) - * `result` (represented in the visulizer by `vote_result`) + * `redis` (represented in the visualizer by `vote_redis`) - Both of these services are configured as `replicas: 2` under the `deploy` key. In the current state of this app, one of each is running on a manager and on a worker. However, since neither are explicitly constrained to either node in `docker-stack.yml`, all or some of these services could be running on either node, depending on workload and re-balancing choices we've left to the swarm orchestration. + Both of these services are configured as `replicas: 2` under + the `deploy` key. In the current state of this app (shown in the visualizer), one of each of these containers is running on a manager and on a worker. However, since neither is explicitly constrained to either node in `docker-stack.yml`, all or some of these services could be running on either node, depending on workload and re-balancing choices we've left to the swarm orchestration. 
![replicas in yml](images/replicas-constraint.png) diff --git a/engine/installation/linux/oracle.md b/engine/installation/linux/oracle.md index 7d0aa984ae..cd87db46b5 100644 --- a/engine/installation/linux/oracle.md +++ b/engine/installation/linux/oracle.md @@ -14,29 +14,37 @@ To get started with Docker on Oracle Linux, make sure you ### OS requirements -To install Docker, you need the 64-bit version of Oracle Linux 6 or 7. +To install Docker, you need the 64-bit version of Oracle Linux 6 or 7 running the +Unbreakable Enterprise Kernel Release 4 (4.1.12) or higher. -To use `btrfs`, you need to install the Unbreakable Enterprise Kernel (UEK) -version 4.1.12 or higher. For Oracle Linux 6, you need to enable extra repositories -to install UEK4. See +For Oracle Linux 6, you need to enable extra repositories to install UEK4. See [Obtaining and installing the UEK packages](https://docs.oracle.com/cd/E37670_01/E37355/html/ol_obtain_uek.html){: target="_blank" class="_" }. +The [OverlayFS2 storage driver](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/) is only supported +when running the UEK4. + ### Remove unofficial Docker packages -Oracle's repositories contain an older version of Docker, with the package name -`docker` instead of `docker-engine`. If you installed this version of Docker, +Oracle's repositories used to contain an older version of Docker with the package name +`docker` instead of `docker-engine`. If you still have this version of Docker installed, remove it using the following command: ```bash $ sudo yum -y remove docker ``` +Oracle's repositories also contain the Oracle-supported version of Docker which uses +the same `docker-engine` name as the official Docker package. If you have installed this +version of Docker, it will automatically be upgraded by the official version. + You may also have to remove the package `docker-engine-selinux` which conflicts with the official `docker-engine` package. 
Remove it with the following command: ```bash $ sudo yum -y remove docker-engine-selinux ``` +This package has already been deprecated by Oracle, so it will only exist on systems that have +an older version of the Oracle-supported package installed. The contents of `/var/lib/docker` are not removed, so any images, containers, or volumes you created using the older version of Docker are preserved. @@ -45,7 +53,7 @@ or volumes you created using the older version of Docker are preserved. You can install Docker in different ways, depending on your needs: -- Most users +- Users who do not require Docker support from Oracle can [set up the official Docker repositories](#install-using-the-repository) and install from them, for ease of installation and upgrade tasks. This is the recommended approach. - Some users download the RPM package and install it manually and manage upgrades completely manually. -- Some users cannot use third-party repositories, and must rely on - the version of Docker in the Oracle repositories. This version of Docker may - be out of date. Those users should consult the Oracle documentation and not - follow these procedures. +- Users who cannot use third-party repositories, or who wish to receive Docker support + from Oracle, must rely on the version of Docker in the Oracle repositories. + This version of Docker may be out of date. Those users should consult the Oracle + Linux Docker User's Guide and not follow these procedures. ### Install using the repository @@ -123,11 +131,6 @@ Docker from the repository. $ sudo yum makecache fast ``` - If this is the first time you have refreshed the package index since adding - the Docker repositories, you will be prompted to accept the GPG key, and - the key's fingerprint will be shown. Verify that the fingerprint matches - `58118E89F3A912897C070ADBF76221572C52609D` and if so, accept the key. - 2. 
Install the latest version of Docker, or go to the next step to install a specific version. @@ -135,6 +138,23 @@ Docker from the repository. $ sudo yum -y install docker-engine ``` + If this is the first time you are installing a package from + the Docker repositories, the GPG key and fingerprint will be shown and + automatically accepted. + + If you do not want to automatically accept the GPG key, remove the `-y` + parameter so that yum prompts you to accept the GPG key manually. + + Ensure the GPG details match the following: + + ```none + Retrieving key from https://yum.dockerproject.org/gpg + Importing GPG key 0x2C52609D: + Userid : "Docker Release Tool (releasedocker) <docker@docker.com>" + Fingerprint: 5811 8e89 f3a9 1289 7c07 0adb f762 2157 2c52 609d + From : https://yum.dockerproject.org/gpg + ``` + > **Warning**: If you have both stable and unstable repositories enabled, > installing or updating Docker without specifying a version in the > `yum install` or `yum upgrade` command will always install the highest @@ -160,10 +180,15 @@ Docker from the repository. The contents of the list depend upon which repositories you have enabled, and will be specific to your version of Oracle Linux (indicated by the `.el7` suffix on the version, in this example). Choose a specific version to - install. The second column is the version string. The third column is the - repository name, which indicates which repository the package is from and by - extension extension its stability level. To install a specific version, - append the version string to the package name and separate them by a hyphen + install. The second column is the version string. + + The third column is the repository name, which indicates which repository the + package is from and by extension its stability level. Oracle ships + `docker-engine` in its addon repository, so you may also see that repository + in this column if you have it enabled. 
+ + To install a specific version, append the version string to the package name + and separate them by a hyphen (`-`): ```bash @@ -203,9 +228,10 @@ users to run Docker commands and for other optional configuration steps. #### Upgrade Docker -To upgrade Docker, first run `sudo yum makecache fast`, then follow the -[installation instructions](#install-docker), choosing the new version you want -to install. +To upgrade Docker, either run `sudo yum update docker-engine` to upgrade to the latest +available version, or, if you would prefer to upgrade to a specific version, first run +`sudo yum makecache fast`, then follow the [installation instructions](#install-docker), +choosing the new version you want to install. ### Install from a package @@ -282,10 +308,12 @@ instead of `yum -y install`, and pointing to the new file. $ sudo rm -rf /var/lib/docker ``` - > **Note**: This won't work when the `btrfs` graph driver has been used, - > because the `rm -rf` command cannot remove the subvolumes that Docker - > creates. See the output of `man btrfs-subvolume` for information on - > removing `btrfs` subvolumes. + > **Note**: If you are using the `btrfs` graph driver, you will need to manually + > remove any subvolumes that were created by the Docker Engine before removing the + > rest of the data. + > Review the [Oracle Linux 7 Administrator Guide](http://docs.oracle.com/cd/E52668_01/E54669/html/ol7-use-case3-btrfs.html) + > for more information on how to remove btrfs subvolumes, or see the output of + > `man btrfs-subvolume` for information on removing `btrfs` subvolumes. You must delete any edited configuration files manually. 
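The manual btrfs cleanup described in that note might look like the following. This is a hedged sketch, not an official procedure; the subvolume path shown is hypothetical, so verify the `list` output on your own system before deleting anything:

```bash
# List the subvolumes on the filesystem backing /var/lib/docker
$ sudo btrfs subvolume list /var/lib/docker

# Delete each Docker-created subvolume reported by the previous command
# (the path below is a placeholder)
$ sudo btrfs subvolume delete /var/lib/docker/btrfs/subvolumes/<subvolume>

# Once the subvolumes are gone, remove the remaining data
$ sudo rm -rf /var/lib/docker
```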
diff --git a/engine/reference/commandline/README b/engine/reference/commandline/README new file mode 100644 index 0000000000..32d97bbd64 --- /dev/null +++ b/engine/reference/commandline/README @@ -0,0 +1,29 @@ +# About these files + +The files in this directory are stub files which include the file +`/_includes/cli.md`, which parses YAML files generated from the +[`docker/docker`](https://github.com/docker/docker) repository. The YAML files +are parsed into output files like +[https://docs.docker.com/engine/reference/commandline/build/](https://docs.docker.com/engine/reference/commandline/build/). + +## How the output is generated + +The output files are composed from two sources: + +- The **Description** and **Usage** sections come directly from + the CLI source code in that repository. + +- The **Extended Description** and **Examples** sections are pulled into the + YAML from the files in [https://github.com/docker/docker/tree/master/docs/reference/commandline](https://github.com/docker/docker/tree/master/docs/reference/commandline). + Specifically, the Markdown inside the `## Description` and `## Examples` + headings is parsed. Please submit corrections to the text in that repository. + +## Updating the YAML files + +The process for generating the YAML files is still in flux. Check with +@thajestah or @frenchben. Be sure to generate the YAML files with the correct +branch of `docker/docker` checked out (probably not `master`). + +After generating the YAML files, replace the YAML files in +[https://github.com/docker/docker.github.io/tree/master/_data/engine-cli](https://github.com/docker/docker.github.io/tree/master/_data/engine-cli) +with the newly-generated files. Submit a pull request. 
diff --git a/engine/reference/commandline/container_attach.md b/engine/reference/commandline/container_attach.md index bf65a52673..64fb43a980 100644 --- a/engine/reference/commandline/container_attach.md +++ b/engine/reference/commandline/container_attach.md @@ -13,34 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Attaching to a container - -In this example the top command is run inside a container, from an image called -fedora, in detached mode. The ID from the container is passed into the **docker -attach** command: - -```bash -$ ID=$(sudo docker run -d fedora /usr/bin/top -b) - -$ sudo docker attach $ID -top - 02:05:52 up 3:05, 0 users, load average: 0.01, 0.02, 0.05 -Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie -Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st -Mem: 373572k total, 355560k used, 18012k free, 27872k buffers -Swap: 786428k total, 0k used, 786428k free, 221740k cached - -PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND -1 root 20 0 17200 1116 912 R 0 0.3 0:00.03 top - -top - 02:05:55 up 3:05, 0 users, load average: 0.01, 0.02, 0.05 -Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie -Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st -Mem: 373572k total, 355244k used, 18328k free, 27872k buffers -Swap: 786428k total, 0k used, 786428k free, 221776k cached - -PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND -1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top -``` diff --git a/engine/reference/commandline/container_commit.md b/engine/reference/commandline/container_commit.md index 5cea5e22c3..afe4930216 100644 --- a/engine/reference/commandline/container_commit.md +++ b/engine/reference/commandline/container_commit.md @@ -13,28 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Creating a new image from an existing container -An existing Fedora based container has had Apache installed while 
running -in interactive mode with the bash shell. Apache is also running. To -create a new image run `docker ps` to find the container's ID and then run: - -```bash -$ docker commit -m="Added Apache to Fedora base image" \ - -a="A D Ministrator" 98bd7fc99854 fedora/fedora_httpd:20 -``` - -Note that only `a-z0-9-_.` are allowed when naming images from an -existing container. - -### Apply specified Dockerfile instructions while committing the image -If an existing container was created without the DEBUG environment -variable set to "true", you can create a new image based on that -container by first getting the container's ID with `docker ps` and -then running: - -```bash -$ docker container commit -c="ENV DEBUG true" 98bd7fc99854 debug-image -``` diff --git a/engine/reference/commandline/container_cp.md b/engine/reference/commandline/container_cp.md index adb1db7b02..6e911bf411 100644 --- a/engine/reference/commandline/container_cp.md +++ b/engine/reference/commandline/container_cp.md @@ -13,70 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -Suppose a container has finished producing some output as a file it saves -to somewhere in its filesystem. This could be the output of a build job or -some other computation. You can copy these outputs from the container to a -location on your local host. - -If you want to copy the `/tmp/foo` directory from a container to the -existing `/tmp` directory on your host. If you run `docker container cp` in your `~` -(home) directory on the local host: - - $ docker container cp compassionate_darwin:tmp/foo /tmp - -Docker creates a `/tmp/foo` directory on your host. Alternatively, you can omit -the leading slash in the command. If you execute this command from your home -directory: - - $ docker container cp compassionate_darwin:tmp/foo tmp - -If `~/tmp` does not exist, Docker will create it and copy the contents of -`/tmp/foo` from the container into this new directory. 
If `~/tmp` already -exists as a directory, then Docker will copy the contents of `/tmp/foo` from -the container into a directory at `~/tmp/foo`. - -When copying a single file to an existing `LOCALPATH`, the `docker container cp` command -will either overwrite the contents of `LOCALPATH` if it is a file or place it -into `LOCALPATH` if it is a directory, overwriting an existing file of the same -name if one exists. For example, this command: - - $ docker container cp sharp_ptolemy:/tmp/foo/myfile.txt /test - -If `/test` does not exist on the local machine, it will be created as a file -with the contents of `/tmp/foo/myfile.txt` from the container. If `/test` -exists as a file, it will be overwritten. Lastly, if `/test` exists as a -directory, the file will be copied to `/test/myfile.txt`. - -Next, suppose you want to copy a file or folder into a container. For example, -this could be a configuration file or some other input to a long running -computation that you would like to place into a created container before it -starts. This is useful because it does not require the configuration file or -other input to exist in the container image. - -If you have a file, `config.yml`, in the current directory on your local host -and wish to copy it to an existing directory at `/etc/my-app.d` in a container, -this command can be used: - - $ docker container cp config.yml myappcontainer:/etc/my-app.d - -If you have several files in a local directory `/config` which you need to copy -to a directory `/etc/my-app.d` in a container: - - $ docker container cp /config/. myappcontainer:/etc/my-app.d - -The above command will copy the contents of the local `/config` directory into -the directory `/etc/my-app.d` in the container. - -Finally, if you want to copy a symbolic link into a container, you typically -want to copy the linked target and not the link itself. 
To copy the target, use -the `-L` option, for example: - - $ ln -s /tmp/somefile /tmp/somefile.ln - $ docker container cp -L /tmp/somefile.ln myappcontainer:/tmp/ - -This command copies content of the local `/tmp/somefile` into the file -`/tmp/somefile.ln` in the container. Without `-L` option, the `/tmp/somefile.ln` -preserves its symbolic link but not its content. diff --git a/engine/reference/commandline/container_create.md b/engine/reference/commandline/container_create.md index 448c58677c..237599bcc8 100644 --- a/engine/reference/commandline/container_create.md +++ b/engine/reference/commandline/container_create.md @@ -13,18 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Specify isolation technology for container (--isolation) - -This option is useful in situations where you are running Docker containers on -Windows. The `--isolation=` option sets a container's isolation -technology. On Linux, the only supported is the `default` option which uses -Linux namespaces. On Microsoft Windows, you can specify these values: - -* `default`: Use the value specified by the Docker daemon's `--exec-opt` . If the `daemon` does not specify an isolation technology, Microsoft Windows uses `process` as its default value. -* `process`: Namespace isolation only. -* `hyperv`: Hyper-V hypervisor partition-based isolation. - -Specifying the `--isolation` flag without a value is the same as setting `--isolation="default"`. 
diff --git a/engine/reference/commandline/container_diff.md b/engine/reference/commandline/container_diff.md index 7cf5e730f9..afa1956e3b 100644 --- a/engine/reference/commandline/container_diff.md +++ b/engine/reference/commandline/container_diff.md @@ -13,31 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -Inspect the changes to an `nginx` container: - -```bash -$ docker diff 1fdfd1f54c1b - -C /dev -C /dev/console -C /dev/core -C /dev/stdout -C /dev/fd -C /dev/ptmx -C /dev/stderr -C /dev/stdin -C /run -A /run/nginx.pid -C /var/lib/nginx/tmp -A /var/lib/nginx/tmp/client_body -A /var/lib/nginx/tmp/fastcgi -A /var/lib/nginx/tmp/proxy -A /var/lib/nginx/tmp/scgi -A /var/lib/nginx/tmp/uwsgi -C /var/log/nginx -A /var/log/nginx/access.log -A /var/log/nginx/error.log -``` diff --git a/engine/reference/commandline/container_exec.md b/engine/reference/commandline/container_exec.md index e7cefba1bb..01d30649bf 100644 --- a/engine/reference/commandline/container_exec.md +++ b/engine/reference/commandline/container_exec.md @@ -13,18 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - - $ docker run --name ubuntu_bash --rm -i -t ubuntu bash - -This will create a container named `ubuntu_bash` and start a Bash session. - - $ docker exec -d ubuntu_bash touch /tmp/execWorks - -This will create a new file `/tmp/execWorks` inside the running container -`ubuntu_bash`, in the background. - - $ docker exec -it ubuntu_bash bash - -This will create a new Bash session in the container `ubuntu_bash`. 
diff --git a/engine/reference/commandline/container_export.md b/engine/reference/commandline/container_export.md index 220bf2f544..67d383d886 100644 --- a/engine/reference/commandline/container_export.md +++ b/engine/reference/commandline/container_export.md @@ -13,22 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -Export the contents of the container called angry_bell to a tar file -called angry_bell.tar: - -```bash -$ docker export angry_bell > angry_bell.tar - -$ docker export --output=angry_bell-latest.tar angry_bell - -$ ls -sh angry_bell.tar - -321M angry_bell.tar - -$ ls -sh angry_bell-latest.tar - -321M angry_bell-latest.tar -``` diff --git a/engine/reference/commandline/container_logs.md b/engine/reference/commandline/container_logs.md index 40f1bcbd60..55e92f3949 100644 --- a/engine/reference/commandline/container_logs.md +++ b/engine/reference/commandline/container_logs.md @@ -13,99 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Display all containers, including non-running - -```bash -$ docker container ls -a - -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -a87ecb4f327c fedora:20 /bin/sh -c #(nop) MA 20 minutes ago Exit 0 desperate_brattain -01946d9d34d8 vpavlin/rhel7:latest /bin/sh -c #(nop) MA 33 minutes ago Exit 0 thirsty_bell -c1d3b0166030 acffc0358b9e /bin/sh -c yum -y up 2 weeks ago Exit 1 determined_torvalds -41d50ecd2f57 fedora:20 /bin/sh -c #(nop) MA 2 weeks ago Exit 0 drunk_pike -``` - -### Display only IDs of all containers, including non-running - -```bash -$ docker container ls -a -q - -a87ecb4f327c -01946d9d34d8 -c1d3b0166030 -41d50ecd2f57 -``` - -# Display only IDs of all containers that have the name `determined_torvalds` - -```bash -$ docker container ls -a -q --filter=name=determined_torvalds - -c1d3b0166030 -``` - -### Display containers with their commands - -```bash -{% raw %} -$ docker container ls --format "{{.ID}}: {{.Command}}" - 
-a87ecb4f327c: /bin/sh -c #(nop) MA -01946d9d34d8: /bin/sh -c #(nop) MA -c1d3b0166030: /bin/sh -c yum -y up -41d50ecd2f57: /bin/sh -c #(nop) MA -{% endraw %} -``` - -### Display containers with their labels in a table - -```bash -{% raw %} -$ docker container ls --format "table {{.ID}}\t{{.Labels}}" - -CONTAINER ID LABELS -a87ecb4f327c com.docker.swarm.node=ubuntu,com.docker.swarm.storage=ssd -01946d9d34d8 -c1d3b0166030 com.docker.swarm.node=debian,com.docker.swarm.cpu=6 -41d50ecd2f57 com.docker.swarm.node=fedora,com.docker.swarm.cpu=3,com.docker.swarm.storage=ssd -{% endraw %} -``` - -### Display containers with their node label in a table - -```bash -{% raw %} -$ docker container ls --format 'table {{.ID}}\t{{(.Label "com.docker.swarm.node")}}' - -CONTAINER ID NODE -a87ecb4f327c ubuntu -01946d9d34d8 -c1d3b0166030 debian -41d50ecd2f57 fedora -{% endraw %} -``` - -### Display containers with `remote-volume` mounted - -```bash -{% raw %} -$ docker container ls --filter volume=remote-volume --format "table {{.ID}}\t{{.Mounts}}" - -CONTAINER ID MOUNTS -9c3527ed70ce remote-volume -{% endraw %} -``` - -### Display containers with a volume mounted in `/data` - -```bash -{% raw %} -$ docker container ls --filter volume=/data --format "table {{.ID}}\t{{.Mounts}}" - -CONTAINER ID MOUNTS -9c3527ed70ce remote-volume -{% endraw %} -``` diff --git a/engine/reference/commandline/container_port.md b/engine/reference/commandline/container_port.md index 36e349467f..5a00bed24a 100644 --- a/engine/reference/commandline/container_port.md +++ b/engine/reference/commandline/container_port.md @@ -13,43 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash -$ docker ps - -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -b650456536c7 busybox:latest top 54 minutes ago Up 54 minutes 0.0.0.0:1234->9876/tcp, 0.0.0.0:4321->7890/tcp test -``` - -### Find out all the ports mapped - -```bash -$ docker container port test - -7890/tcp -> 0.0.0.0:4321 
-9876/tcp -> 0.0.0.0:1234
-```
-
-### Find out a specific mapping
-
-```bash
-$ docker container port test 7890/tcp
-
-0.0.0.0:4321
-```
-
-```bash
-$ docker container port test 7890
-
-0.0.0.0:4321
-```
-
-### An example showing an error for a non-existent mapping
-
-```bash
-$ docker container port test 7890/udp
-
-2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
-```
diff --git a/engine/reference/commandline/container_prune.md b/engine/reference/commandline/container_prune.md
index fbb2c9031a..8005e0a5c0 100644
--- a/engine/reference/commandline/container_prune.md
+++ b/engine/reference/commandline/container_prune.md
@@ -13,78 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-```bash
-$ docker container prune
-WARNING! This will remove all stopped containers.
-Are you sure you want to continue? [y/N] y
-Deleted Containers:
-4a7f7eebae0f63178aff7eb0aa39cd3f0627a203ab2df258c1a00b456cf20063
-f98f9c2aa1eaf727e4ec9c0283bc7d4aa4762fbdba7f26191f26c97f64090360
-
-Total reclaimed space: 212 B
-```
-
-### Filtering
-
-The filtering flag (`-f` or `--filter`) format is "key=value". If there is more
-than one filter, then pass multiple flags (e.g., `--filter "foo=bar" --filter "bif=baz"`).
-
-The currently supported filters are:
-
-* until (`<timestamp>`) - only remove containers created before the given timestamp
-
-The `until` filter can be Unix timestamps, date formatted
-timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed
-relative to the daemon machine’s time. Supported formats for date
-formatted time stamps include RFC3339Nano, RFC3339, `2006-01-02T15:04:05`,
-`2006-01-02T15:04:05.999999999`, `2006-01-02Z07:00`, and `2006-01-02`. The local
-timezone on the daemon will be used if you do not provide either a `Z` or a
-`+-00:00` timezone offset at the end of the timestamp.
When providing Unix -timestamps enter seconds[.nanoseconds], where seconds is the number of seconds -that have elapsed since January 1, 1970 (midnight UTC/GMT), not counting leap -seconds (aka Unix epoch or Unix time), and the optional .nanoseconds field is a -fraction of a second no more than nine digits long. - -The following removes containers created more than 5 minutes ago: -```bash -{% raw %} -$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}' -CONTAINER ID IMAGE COMMAND CREATED AT STATUS -61b9efa71024 busybox "sh" 2017-01-04 13:23:33 -0800 PST Exited (0) 41 seconds ago -53a9bc23a516 busybox "sh" 2017-01-04 13:11:59 -0800 PST Exited (0) 12 minutes ago - -$ docker container prune --force --filter "until=5m" -Deleted Containers: -53a9bc23a5168b6caa2bfbefddf1b30f93c7ad57f3dec271fd32707497cb9369 - -Total reclaimed space: 25 B - -$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}' -CONTAINER ID IMAGE COMMAND CREATED AT STATUS -61b9efa71024 busybox "sh" 2017-01-04 13:23:33 -0800 PST Exited (0) 44 seconds ago -{% endraw %} -``` - -The following removes containers created before `2017-01-04T13:10:00`: - -```bash -{% raw %} -$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}' -CONTAINER ID IMAGE COMMAND CREATED AT STATUS -53a9bc23a516 busybox "sh" 2017-01-04 13:11:59 -0800 PST Exited (0) 7 minutes ago -4a75091a6d61 busybox "sh" 2017-01-04 13:09:53 -0800 PST Exited (0) 9 minutes ago - -$ docker container prune --force --filter "until=2017-01-04T13:10:00" -Deleted Containers: -4a75091a6d618526fcd8b33ccd6e5928ca2a64415466f768a6180004b0c72c6c - -Total reclaimed space: 27 B - -$ docker ps -a --format 'table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.CreatedAt}}\t{{.Status}}' -CONTAINER ID IMAGE COMMAND CREATED AT STATUS -53a9bc23a516 busybox "sh" 2017-01-04 13:11:59 -0800 PST Exited (0) 9 minutes ago -{% endraw %} -``` diff --git 
a/engine/reference/commandline/container_rm.md b/engine/reference/commandline/container_rm.md
index 9686dd8b32..b17168450a 100644
--- a/engine/reference/commandline/container_rm.md
+++ b/engine/reference/commandline/container_rm.md
@@ -13,38 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-### Removing a container using its ID
-
-To remove a container using its ID, find it either from a **docker ps -a**
-command, or use the ID returned from the **docker run** command, or retrieve
-it from a file used to store it with **docker run --cidfile**:
-
-    $ docker container rm abebf7571666
-
-### Removing a container using the container name
-
-The name of the container can be found using the **docker ps -a**
-command. Then use that name as follows:
-
-    $ docker container rm hopeful_morse
-
-### Removing a container and all associated volumes
-
-    $ docker container rm -v redis
-    redis
-
-This command will remove the container and any volumes associated with it.
-Note that if a volume was specified with a name, it will not be removed.
-
-    $ docker create -v awesome:/foo -v /bar --name hello redis
-
-    hello
-
-    $ docker container rm -v hello
-
-In this example, the volume for `/foo` will remain intact, but the volume for
-`/bar` will be removed. The same behavior holds for volumes inherited with
-`--volumes-from`.
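The named-versus-anonymous volume behavior described above can be sketched end-to-end. This is an illustrative script, not from the original page; the container and volume names are hypothetical, and it is guarded so it is a no-op on machines without a Docker CLI:

```bash
#!/bin/sh
# Sketch: docker container rm -v removes anonymous volumes but keeps named ones.
# Guarded so the script is a harmless no-op where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  # One named volume (awesome) mounted at /foo, one anonymous volume at /bar
  docker create -v awesome:/foo -v /bar --name hello redis
  # Remove the container and its anonymous volumes; the named volume survives
  docker container rm -v hello
  # Verify that the named volume is still present
  docker volume ls --filter name=awesome
fi
```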
diff --git a/engine/reference/commandline/container_stats.md b/engine/reference/commandline/container_stats.md
index 9498f7bb17..26bf990bdc 100644
--- a/engine/reference/commandline/container_stats.md
+++ b/engine/reference/commandline/container_stats.md
@@ -13,20 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-Running `docker container stats` on all running containers:
-
-    $ docker container stats
-    CONTAINER           CPU %               MEM USAGE / LIMIT    MEM %               NET I/O             BLOCK I/O
-    1285939c1fd3        0.07%               796 KiB / 64 MiB     1.21%               788 B / 648 B       3.568 MB / 512 KB
-    9c76f7834ae2        0.07%               2.746 MiB / 64 MiB   4.29%               1.266 KB / 648 B    12.4 MB / 0 B
-    d1ea048f04e4        0.03%               4.583 MiB / 64 MiB   6.30%               2.854 KB / 648 B    27.7 MB / 0 B
-
-Running `docker container stats` on multiple containers by name and ID:
-
-    $ docker container stats fervent_panini 5acfcb1b4fd1
-    CONTAINER           CPU %               MEM USAGE/LIMIT       MEM %               NET I/O
-    5acfcb1b4fd1        0.00%               115.2 MiB/1.045 GiB   11.03%              1.422 kB/648 B
-    fervent_panini      0.02%               11.08 MiB/1.045 GiB   1.06%               648 B/648 B
diff --git a/engine/reference/commandline/container_top.md b/engine/reference/commandline/container_top.md
index 918115a9aa..42fc4ceb16 100644
--- a/engine/reference/commandline/container_top.md
+++ b/engine/reference/commandline/container_top.md
@@ -13,11 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-Run **docker container top** with the `ps` option `-x`:
-
-    $ docker container top 8601afda2b -x
-    PID      TTY       STAT       TIME         COMMAND
-    16623    ?
Ss         0:00         sleep 99999
diff --git a/engine/reference/commandline/container_update.md b/engine/reference/commandline/container_update.md
index fcaf7485c8..eb1ee8ea8b 100644
--- a/engine/reference/commandline/container_update.md
+++ b/engine/reference/commandline/container_update.md
@@ -13,72 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-### Update a container's cpu-shares
-
-To limit a container's cpu-shares to 512, first identify the container
-name or ID. You can use **docker ps** to find these values. You can also
-use the ID returned from the **docker run** command. Then, do the following:
-
-```bash
-$ docker container update --cpu-shares 512 abebf7571666
-```
-
-### Update a container with cpu-shares and memory
-
-To update multiple resource configurations for multiple containers:
-
-```bash
-$ docker container update --cpu-shares 512 -m 300M abebf7571666 hopeful_morse
-```
-
-### Update a container's kernel memory constraints
-
-You can update a container's kernel memory limit using the **--kernel-memory**
-option. On kernel versions older than 4.6, this option can be updated on a
-running container only if the container was started with **--kernel-memory**.
-If the container was started *without* **--kernel-memory**, you need to stop
-the container before updating kernel memory.
-
-For example, if you started a container with this command:
-
-```bash
-$ docker run -dit --name test --kernel-memory 50M ubuntu bash
-```
-
-You can update kernel memory while the container is running:
-
-```bash
-$ docker container update --kernel-memory 80M test
-```
-
-If you started a container *without* kernel memory initialized:
-
-```bash
-$ docker run -dit --name test2 --memory 300M ubuntu bash
-```
-
-Updating the kernel memory of running container `test2` will fail. You need to stop
-the container before updating the **--kernel-memory** setting. The next time you
-start it, the container uses the new value.
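The stop-then-update sequence described above can be collected into a short script. This is an illustrative sketch, not from the original page; it assumes a container named `test2` that was started without `--kernel-memory`, and it is guarded so it is a no-op on machines without a Docker CLI:

```bash
#!/bin/sh
# Sketch: updating --kernel-memory on a container that was started without it
# (pre-4.6 kernels). Guarded so the script is a no-op where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker container stop test2                         # must be stopped first
  docker container update --kernel-memory 80M test2   # now the update succeeds
  docker container start test2                        # restarts with the new limit
fi
```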
-
-Kernel versions 4.6 and newer do not have this limitation; you
-can use `--kernel-memory` the same way as other options.
-
-### Update a container's restart policy
-
-You can change a container's restart policy on a running container. The new
-restart policy takes effect instantly after you run `docker container update` on a
-container.
-
-To update the restart policy for one or more containers:
-
-```bash
-$ docker container update --restart=on-failure:3 abebf7571666 hopeful_morse
-```
-
-Note that if the container is started with the `--rm` flag, you cannot update the restart
-policy for it. `AutoRemove` and `RestartPolicy` are mutually exclusive for the
-container.
diff --git a/engine/reference/commandline/container_wait.md b/engine/reference/commandline/container_wait.md
index b1f309f2d4..4ec5462592 100644
--- a/engine/reference/commandline/container_wait.md
+++ b/engine/reference/commandline/container_wait.md
@@ -13,17 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-```bash
-$ docker run -d fedora sleep 99
-
-079b83f558a2bc52ecad6b2a5de13622d584e6bb1aea058c11b36511e85e7622
-
-$ docker container wait 079b83f558a2bc
-
-0
-```
diff --git a/engine/reference/commandline/image_build.md b/engine/reference/commandline/image_build.md
index 71a7fc4e14..0b2eb54c84 100644
--- a/engine/reference/commandline/image_build.md
+++ b/engine/reference/commandline/image_build.md
@@ -13,267 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-### Build with PATH
-
-```bash
-$ docker build .
-
-Uploading context 10240 bytes
-Step 1/3 : FROM busybox
-Pulling repository busybox
- ---> e9aa60c60128MB/2.284 MB (100%) endpoint: https://cdn-registry-1.docker.io/v1/
-Step 2/3 : RUN ls -lh /
- ---> Running in 9c9e81692ae9
-total 24
-drwxr-xr-x    2 root     root        4.0K Mar 12  2013 bin
-drwxr-xr-x    5 root     root        4.0K Oct 19 00:19 dev
-drwxr-xr-x    2 root     root        4.0K Oct 19 00:19 etc
-drwxr-xr-x    2 root     root        4.0K Nov 15 23:34 lib
-lrwxrwxrwx    1 root     root           3 Mar 12  2013 lib64 -> lib
-dr-xr-xr-x  116 root     root           0 Nov 15 23:34 proc
-lrwxrwxrwx    1 root     root           3 Mar 12  2013 sbin -> bin
-dr-xr-xr-x   13 root     root           0 Nov 15 23:34 sys
-drwxr-xr-x    2 root     root        4.0K Mar 12  2013 tmp
-drwxr-xr-x    2 root     root        4.0K Nov 15 23:34 usr
- ---> b35f4035db3f
-Step 3/3 : CMD echo Hello world
- ---> Running in 02071fceb21b
- ---> f52f38b7823e
-Successfully built f52f38b7823e
-Removing intermediate container 9c9e81692ae9
-Removing intermediate container 02071fceb21b
-```
-
-This example specifies that the `PATH` is `.`, and so all the files in the
-local directory get tarred and sent to the Docker daemon. The `PATH` specifies
-where to find the files for the "context" of the build on the Docker daemon.
-Remember that the daemon could be running on a remote machine and that no
-parsing of the Dockerfile happens at the client side (where you're running
-`docker build`). That means that *all* the files at `PATH` get sent, not just
-the ones listed in [*ADD*](../builder.md#add) in the Dockerfile.
-
-The transfer of context from the local machine to the Docker daemon is what the
-`docker` client means when you see the "Sending build context" message.
-
-If you wish to keep the intermediate containers after the build is complete,
-you must use `--rm=false`. This does not affect the build cache.
-
-### Build with URL
-
-```bash
-$ docker build github.com/creack/docker-firefox
-```
-
-This will clone the GitHub repository and use the cloned repository as the context.
-The Dockerfile at the root of the repository is used as the Dockerfile. You can
-specify an arbitrary Git repository by using the `git://` or `git@` scheme.
-
-```bash
-$ docker build -f ctx/Dockerfile http://server/ctx.tar.gz
-
-Downloading context: http://server/ctx.tar.gz [===================>]    240 B/240 B
-Step 1/3 : FROM busybox
- ---> 8c2e06607696
-Step 2/3 : ADD ctx/container.cfg /
- ---> e7829950cee3
-Removing intermediate container b35224abf821
-Step 3/3 : CMD /bin/ls
- ---> Running in fbc63d321d73
- ---> 3286931702ad
-Removing intermediate container fbc63d321d73
-Successfully built 377c409b35e4
-```
-
-This sends the URL `http://server/ctx.tar.gz` to the Docker daemon, which
-downloads and extracts the referenced tarball. The `-f ctx/Dockerfile`
-parameter specifies a path inside `ctx.tar.gz` to the `Dockerfile` that is used
-to build the image. Any `ADD` commands in that `Dockerfile` that refer to local
-paths must be relative to the root of the contents inside `ctx.tar.gz`. In the
-example above, the tarball contains a directory `ctx/`, so the `ADD
-ctx/container.cfg /` operation works as expected.
-
-### Build with -
-
-```bash
-$ docker build - < Dockerfile
-```
-
-This will read a Dockerfile from `STDIN` without context. Due to the lack of a
-context, no contents of any local directory will be sent to the Docker daemon.
-Since there is no context, a Dockerfile `ADD` only works if it refers to a
-remote URL.
-
-```bash
-$ docker build - < context.tar.gz
-```
-
-This will build an image for a compressed context read from `STDIN`. Supported
-formats are bzip2, gzip, and xz.
-
-### Usage of .dockerignore
-
-```bash
-$ docker build .
-
-Uploading context 18.829 MB
-Uploading context
-Step 1/2 : FROM busybox
- ---> 769b9341d937
-Step 2/2 : CMD echo Hello world
- ---> Using cache
- ---> 99cc1ad10469
-Successfully built 99cc1ad10469
-$ echo ".git" > .dockerignore
-$ docker build .
-Uploading context 6.76 MB
-Uploading context
-Step 1/2 : FROM busybox
- ---> 769b9341d937
-Step 2/2 : CMD echo Hello world
- ---> Using cache
- ---> 99cc1ad10469
-Successfully built 99cc1ad10469
-```
-
-This example shows the use of the `.dockerignore` file to exclude the `.git`
-directory from the context. Its effect can be seen in the changed size of the
-uploaded context. The builder reference contains detailed information on
-[creating a .dockerignore file](../builder.md#dockerignore-file).
-
-### Tag image (-t)
-
-```bash
-$ docker build -t vieux/apache:2.0 .
-```
-
-This will build like the previous example, but it will then tag the resulting
-image. The repository name will be `vieux/apache` and the tag will be `2.0`.
-[Read more about valid tags](tag.md).
-
-You can apply multiple tags to an image. For example, you can apply the `latest`
-tag to a newly built image and add another tag that references a specific
-version.
-For example, to tag an image both as `whenry/fedora-jboss:latest` and
-`whenry/fedora-jboss:v2.1`, use the following:
-
-```bash
-$ docker build -t whenry/fedora-jboss:latest -t whenry/fedora-jboss:v2.1 .
-```
-
-### Specify Dockerfile (-f)
-
-```bash
-$ docker build -f Dockerfile.debug .
-```
-
-This will use a file called `Dockerfile.debug` for the build instructions
-instead of `Dockerfile`.
-
-```bash
-$ docker build -f dockerfiles/Dockerfile.debug -t myapp_debug .
-$ docker build -f dockerfiles/Dockerfile.prod -t myapp_prod .
-```
-
-The above commands will build the current build context (as specified by the
-`.`) twice, once using a debug version of a `Dockerfile` and once using a
-production version.
-
-```bash
-$ cd /home/me/myapp/some/dir/really/deep
-$ docker build -f /home/me/myapp/dockerfiles/debug /home/me/myapp
-$ docker build -f ../../../../dockerfiles/debug /home/me/myapp
-```
-
-These two `docker build` commands do the exact same thing.
They both use the
-contents of the `debug` file instead of looking for a `Dockerfile` and will use
-`/home/me/myapp` as the root of the build context. Note that `debug` is in the
-directory structure of the build context, regardless of how you refer to it on
-the command line.
-
-> **Note:**
-> `docker build` will return a `no such file or directory` error if the
-> file or directory does not exist in the uploaded context. This may
-> happen if there is no context, or if you specify a file that is
-> elsewhere on the host system. The context is limited to the current
-> directory (and its children) for security reasons, and to ensure
-> repeatable builds on remote Docker hosts. This is also the reason why
-> `ADD ../file` will not work.
-
-### Optional parent cgroup (--cgroup-parent)
-
-When `docker build` is run with the `--cgroup-parent` option, the containers
-used in the build will be run with the [corresponding `docker run`
-flag](../run.md#specifying-custom-cgroups).
-
-### Set ulimits in container (--ulimit)
-
-Using the `--ulimit` option with `docker build` will cause each build step's
-container to be started using those [`--ulimit`
-flag values](./run.md#set-ulimits-in-container-ulimit).
-
-### Set build-time variables (--build-arg)
-
-You can use `ENV` instructions in a Dockerfile to define variable
-values. These values persist in the built image. However, often
-persistence is not what you want. Users want to specify variables differently
-depending on which host they build an image on.
-
-A good example is `http_proxy` or source versions for pulling intermediate
-files. The `ARG` instruction lets Dockerfile authors define values that users
-can set at build-time using the `--build-arg` flag:
-
-```bash
-$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
-```
-
-This flag allows you to pass the build-time variables that are
-accessed like regular environment variables in the `RUN` instruction of the
-Dockerfile.
Also, these values don't persist in the intermediate or final images
-like `ENV` values do.
-
-Using this flag will not alter the output you see when the `ARG` lines from the
-Dockerfile are echoed during the build process.
-
-For detailed information on using `ARG` and `ENV` instructions, see the
-[Dockerfile reference](../builder.md).
-
-### Optional security options (--security-opt)
-
-This flag is only supported on a daemon running on Windows, and only supports
-the `credentialspec` option. The `credentialspec` must be in the format
-`file://spec.txt` or `registry://keyname`.
-
-### Specify isolation technology for container (--isolation)
-
-This option is useful in situations where you are running Docker containers on
-Windows. The `--isolation=<value>` option sets a container's isolation
-technology. On Linux, the only supported value is `default`, which uses
-Linux namespaces. On Microsoft Windows, you can specify these values:
-
-| Value     | Description |
-|-----------|-------------|
-| `default` | Use the value specified by the Docker daemon's `--exec-opt`. If the daemon does not specify an isolation technology, Microsoft Windows uses `process` as its default value. |
-| `process` | Namespace isolation only. |
-| `hyperv`  | Hyper-V hypervisor partition-based isolation. |
-
-Specifying the `--isolation` flag without a value is the same as setting `--isolation="default"`.
-
-### Squash an image's layers (--squash) **Experimental Only**
-
-Once the image is built, squash the new layers into a new image with a single
-new layer. Squashing does not destroy any existing image; rather, it creates a new
-image with the content of the squashed layers. This effectively makes it look
-like all `Dockerfile` commands were created with a single layer. The build
-cache is preserved with this method.
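A minimal squashed build might look like the following. This is an illustrative sketch, not from the original page; it assumes a daemon running with experimental features enabled, the image name is hypothetical, and the script is guarded so it is a no-op on machines without a Docker CLI:

```bash
#!/bin/sh
# Sketch: build with --squash, then inspect the layer history.
# Guarded so the script is a no-op where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker build --squash -t myapp:squashed .
  docker history myapp:squashed   # Dockerfile steps appear collapsed into a single layer
fi
```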
-
-**Note**: using this option means the new image will not be able to take
-advantage of layer sharing with other images and may use significantly more
-space.
-
-**Note**: using this option you may see significantly more space used due to
-storing two copies of the image, one for the build cache with all the cache
-layers intact, and one for the squashed version.
diff --git a/engine/reference/commandline/image_history.md b/engine/reference/commandline/image_history.md
index 566a928e62..8a4b7063dc 100644
--- a/engine/reference/commandline/image_history.md
+++ b/engine/reference/commandline/image_history.md
@@ -13,28 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-```bash
-$ docker history fedora
-
-IMAGE          CREATED         CREATED BY                                      SIZE        COMMENT
-105182bb5e8b   5 days ago      /bin/sh -c #(nop) ADD file:71356d2ad59aa3119d   372.7 MB
-73bd853d2ea5   13 days ago     /bin/sh -c #(nop) MAINTAINER Lokesh Mandvekar   0 B
-511136ea3c5a   10 months ago                                                   0 B         Imported from -
-```
-
-### Display comments in the image history
-
-The `docker commit` command has a **-m** flag for adding comments to the image.
-These comments will be displayed in the image history.
-
-```bash
-$ sudo docker history docker:scm
-
-IMAGE          CREATED         CREATED BY                                      SIZE        COMMENT
-2ac9d1098bf1   3 months ago    /bin/bash                                       241.4 MB    Added Apache to Fedora base image
-88b42ffd1f7c   5 months ago    /bin/sh -c #(nop) ADD file:1fd8d7f9f6557cafc7   373.7 MB
-c69cab00d6ef   5 months ago    /bin/sh -c #(nop) MAINTAINER Lokesh Mandvekar   0 B
-511136ea3c5a   19 months ago                                                   0 B         Imported from -
-```
diff --git a/engine/reference/commandline/image_import.md b/engine/reference/commandline/image_import.md
index ab4957a1b8..bbd0abe245 100644
--- a/engine/reference/commandline/image_import.md
+++ b/engine/reference/commandline/image_import.md
@@ -13,39 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-### Import from a remote location
-
-    # docker image import http://example.com/exampleimage.tgz example/imagerepo
-
-### Import from a local file
-
-Import to Docker via a pipe and STDIN:
-
-    # cat exampleimage.tgz | docker image import - example/imagelocal
-
-Import with a commit message:
-
-    # cat exampleimage.tgz | docker image import --message "New image imported from tarball" - exampleimagelocal:new
-
-Import a Docker image from a local file:
-
-    # docker image import /path/to/exampleimage.tgz
-
-### Import from a local file and tag
-
-Import to Docker via a pipe and STDIN:
-
-    # cat exampleimageV2.tgz | docker image import - example/imagelocal:V-2.0
-
-### Import from a local directory
-
-    # tar -c . | docker image import - exampleimagedir
-
-### Apply specified Dockerfile instructions while importing the image
-
-This example sets the image's `DEBUG` environment variable to `true` by default:
-
-    # tar -c .
| docker image import -c="ENV DEBUG true" - exampleimagedir
diff --git a/engine/reference/commandline/image_load.md b/engine/reference/commandline/image_load.md
index f0490bd15b..d48334568c 100644
--- a/engine/reference/commandline/image_load.md
+++ b/engine/reference/commandline/image_load.md
@@ -13,33 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-```bash
-$ docker images
-
-REPOSITORY   TAG         IMAGE ID       CREATED       SIZE
-busybox      latest      769b9341d937   7 weeks ago   2.489 MB
-
-$ docker load --input fedora.tar
-
-# […]
-
-Loaded image: fedora:rawhide
-
-# […]
-
-Loaded image: fedora:20
-
-# […]
-
-$ docker images
-
-REPOSITORY   TAG         IMAGE ID       CREATED       SIZE
-busybox      latest      769b9341d937   7 weeks ago   2.489 MB
-fedora       rawhide     0d20aec6529d   7 weeks ago   387 MB
-fedora       20          58394af37342   7 weeks ago   385.5 MB
-fedora       heisenbug   58394af37342   7 weeks ago   385.5 MB
-fedora       latest      58394af37342   7 weeks ago   385.5 MB
-```
diff --git a/engine/reference/commandline/image_ls.md b/engine/reference/commandline/image_ls.md
index 75b6a9be07..14bc5947e8 100644
--- a/engine/reference/commandline/image_ls.md
+++ b/engine/reference/commandline/image_ls.md
@@ -13,95 +13,3 @@ https://www.github.com/docker/docker
 -->
 
 {% include cli.md %}
-
-## Examples
-
-### Listing the images
-
-To list the images in a local repository (not the registry) run:
-
-    $ docker image ls
-
-The list will contain the image repository name, a tag for the image, an
-image ID, when it was created, and its virtual size. Columns: REPOSITORY, TAG,
-IMAGE ID, CREATED, and SIZE.
-
-The `docker image ls` command takes an optional `[REPOSITORY[:TAG]]` argument
-that restricts the list to images that match the argument. If you specify
-`REPOSITORY` but no `TAG`, the `docker image ls` command lists all images in the
-given repository.
-
-    $ docker image ls java
-
-The `[REPOSITORY[:TAG]]` value must be an "exact match". This means that, for example,
-`docker image ls jav` does not match the image `java`.
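The exact-match rule can be illustrated as follows. This is an illustrative sketch, not from the original page; the repository names are hypothetical, and the script is guarded so it is a no-op on machines without a Docker CLI:

```bash
#!/bin/sh
# Sketch: the [REPOSITORY[:TAG]] argument is an exact match, not a prefix match.
# Guarded so the script is a no-op where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker image ls java   # lists every local tag of the "java" repository
  docker image ls jav    # "jav" is only a prefix, so this lists nothing
fi
```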
-
-If both `REPOSITORY` and `TAG` are provided, only images matching that
-repository and tag are listed. To find all local images in the "java"
-repository with tag "8" you can use:
-
-    $ docker image ls java:8
-
-To get a verbose list of images which contains all the intermediate images
-used in builds, use **-a**:
-
-    $ docker image ls -a
-
-Previously, the `docker image ls` command supported the `--tree` and `--dot` arguments,
-which displayed different visualizations of the image data. Docker core removed
-this functionality in version 1.7. If you liked this functionality, you can
-still find it in the third-party dockviz tool: https://github.com/justone/dockviz.
-
-### Listing images in a desired format
-
-When using the `--format` option, the image command will either output the data
-exactly as the template declares or, when using the `table` directive, will
-include column headers as well. You can use special characters like `\t` for
-inserting tab spacing between columns.
-
-The following example uses a template without headers and outputs the ID and
-Repository entries separated by a colon for all images:
-
-```bash
-{% raw %}
-$ docker images --format "{{.ID}}: {{.Repository}}"
-
-77af4d6b9913:
-b6fa739cedf5: committ
-78a85c484bad: ipbabble
-30557a29d5ab: docker
-5ed6274db6ce:
-746b819f315e: postgres
-746b819f315e: postgres
-746b819f315e: postgres
-746b819f315e: postgres
-{% endraw %}
-```
-
-To list all images with their repository and tag in a table format you can use:
-
-```bash
-{% raw %}
-$ docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"
-
-IMAGE ID            REPOSITORY          TAG
-77af4d6b9913
-b6fa739cedf5        committ             latest
-78a85c484bad        ipbabble
-30557a29d5ab        docker              latest
-5ed6274db6ce
-746b819f315e        postgres            9
-746b819f315e        postgres            9.3
-746b819f315e        postgres            9.3.5
-746b819f315e        postgres            latest
-{% endraw %}
-```
-
-Valid template placeholders are listed above.
-
-### Listing only the shortened image IDs
-
-Listing just the shortened image IDs.
This can be useful for some automated -tools. - - $ docker image ls -q diff --git a/engine/reference/commandline/image_pull.md b/engine/reference/commandline/image_pull.md index 2d4b73ac3a..3a2cad8f4b 100644 --- a/engine/reference/commandline/image_pull.md +++ b/engine/reference/commandline/image_pull.md @@ -13,185 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Pull an image from Docker Hub - -To download a particular image, or set of images (i.e., a repository), use -`docker image pull`. If no tag is provided, Docker Engine uses the `:latest` tag as a -default. This command pulls the `debian:latest` image: - - $ docker image pull debian - - Using default tag: latest - latest: Pulling from library/debian - fdd5d7827f33: Pull complete - a3ed95caeb02: Pull complete - Digest: sha256:e7d38b3517548a1c71e41bffe9c8ae6d6d29546ce46bf62159837aad072c90aa - Status: Downloaded newer image for debian:latest - -Docker images can consist of multiple layers. In the example above, the image -consists of two layers; `fdd5d7827f33` and `a3ed95caeb02`. - -Layers can be reused by images. For example, the `debian:jessie` image shares -both layers with `debian:latest`. Pulling the `debian:jessie` image therefore -only pulls its metadata, but not its layers, because all layers are already -present locally: - - $ docker image pull debian:jessie - - jessie: Pulling from library/debian - fdd5d7827f33: Already exists - a3ed95caeb02: Already exists - Digest: sha256:a9c958be96d7d40df920e7041608f2f017af81800ca5ad23e327bc402626b58e - Status: Downloaded newer image for debian:jessie - -To see which images are present locally, use the **docker-images(1)** -command: - - $ docker images - - REPOSITORY TAG IMAGE ID CREATED SIZE - debian jessie f50f9524513f 5 days ago 125.1 MB - debian latest f50f9524513f 5 days ago 125.1 MB - -Docker uses a content-addressable image store, and the image ID is a SHA256 -digest covering the image's configuration and layers. 
In the example above,
-`debian:jessie` and `debian:latest` have the same image ID because they are
-actually the *same* image tagged with different names. Because they are the
-same image, their layers are stored only once and do not consume extra disk
-space.
-
-For more information about images, layers, and the content-addressable store,
-refer to [understand images, containers, and storage drivers](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/)
-in the online documentation.
-
-
-### Pull an image by digest (immutable identifier)
-
-So far, you've pulled images by their name (and "tag"). Using names and tags is
-a convenient way to work with images. When using tags, you can `docker image pull` an
-image again to make sure you have the most up-to-date version of that image.
-For example, `docker image pull ubuntu:14.04` pulls the latest version of the Ubuntu
-14.04 image.
-
-In some cases you don't want images to be updated to newer versions, but prefer
-to use a fixed version of an image. Docker enables you to pull an image by its
-*digest*. When pulling an image by digest, you specify *exactly* which version
-of an image to pull. Doing so allows you to "pin" an image to that version,
-and guarantees that the image you're using is always the same.
-
-To know the digest of an image, pull the image first. Let's pull the latest
-`ubuntu:14.04` image from Docker Hub:
-
-    $ docker image pull ubuntu:14.04
-
-    14.04: Pulling from library/ubuntu
-    5a132a7e7af1: Pull complete
-    fd2731e4c50c: Pull complete
-    28a2f68d1120: Pull complete
-    a3ed95caeb02: Pull complete
-    Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
-    Status: Downloaded newer image for ubuntu:14.04
-
-Docker prints the digest of the image after the pull has finished.
In the example
-above, the digest of the image is:
-
-    sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
-
-Docker also prints the digest of an image when *pushing* to a registry. This
-may be useful if you want to pin to a version of the image you just pushed.
-
-A digest takes the place of the tag when pulling an image. For example, to
-pull the above image by digest, run the following command:
-
-    $ docker image pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
-
-    sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2: Pulling from library/ubuntu
-    5a132a7e7af1: Already exists
-    fd2731e4c50c: Already exists
-    28a2f68d1120: Already exists
-    a3ed95caeb02: Already exists
-    Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
-    Status: Downloaded newer image for ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
-
-A digest can also be used in the `FROM` instruction of a Dockerfile, for example:
-
-    FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
-    MAINTAINER some maintainer
-
-> **Note**: Using this feature "pins" an image to a specific version in time.
-> Docker will therefore not pull updated versions of an image, which may include
-> security updates. If you want to pull an updated image, you need to change the
-> digest accordingly.
-
-### Pulling from a different registry
-
-By default, `docker image pull` pulls images from Docker Hub. It is also possible to
-manually specify the path of a registry to pull from. For example, if you have
-set up a local registry, you can specify its path to pull from it. A registry
-path is similar to a URL, but does not contain a protocol specifier (`https://`).
- -The following command pulls the `testing/test-image` image from a local registry -listening on port 5000 (`myregistry.local:5000`): - - $ docker image pull myregistry.local:5000/testing/test-image - -Registry credentials are managed by **docker-login(1)**. - -Docker uses the `https://` protocol to communicate with a registry, unless the -registry is allowed to be accessed over an insecure connection. Refer to the -[insecure registries](/engine/reference/commandline/dockerd.md#insecure-registries) -section in the online documentation for more information. - - -### Pull a repository with multiple images - -By default, `docker image pull` pulls a *single* image from the registry. A repository -can contain multiple images. To pull all images from a repository, provide the -`-a` (or `--all-tags`) option when using `docker image pull`. - -This command pulls all images from the `fedora` repository: - - $ docker image pull --all-tags fedora - - Pulling repository fedora - ad57ef8d78d7: Download complete - 105182bb5e8b: Download complete - 511136ea3c5a: Download complete - 73bd853d2ea5: Download complete - .... - - Status: Downloaded newer image for fedora - -After the pull has completed use the `docker images` command to see the -images that were pulled. The example below shows all the `fedora` images -that are present locally: - - $ docker images fedora - - REPOSITORY TAG IMAGE ID CREATED SIZE - fedora rawhide ad57ef8d78d7 5 days ago 359.3 MB - fedora 20 105182bb5e8b 5 days ago 372.7 MB - fedora heisenbug 105182bb5e8b 5 days ago 372.7 MB - fedora latest 105182bb5e8b 5 days ago 372.7 MB - - -### Canceling a pull - -Killing the `docker image pull` process, for example by pressing `CTRL-c` while it is -running in a terminal, will terminate the pull operation. 
- -    $ docker image pull fedora - -    Using default tag: latest -    latest: Pulling from library/fedora -    a3ed95caeb02: Pulling fs layer -    236608c7b546: Pulling fs layer -    ^C - -> **Note**: Technically, the Engine terminates a pull operation when the -> connection between the Docker Engine daemon and the Docker Engine client -> initiating the pull is lost. If the connection with the Engine daemon is -> lost for reasons other than a manual interruption, the pull is also aborted. diff --git a/engine/reference/commandline/image_push.md b/engine/reference/commandline/image_push.md index b72e77cafd..b948a186ac 100644 --- a/engine/reference/commandline/image_push.md +++ b/engine/reference/commandline/image_push.md @@ -13,28 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Pushing a new image to a registry - -First save the new image by finding the container ID (using **docker container ls**) -and then committing it to a new image name. Note that only `a-z0-9-_.` are -allowed when naming images: - -    $ docker container commit c16378f943fe rhel-httpd - -Now, push the image to the registry. In this example the -registry is on a host named `registry-host` and listening on port `5000`. To do -this, tag the image with the host name or IP address, and the port of the -registry: - -    $ docker image tag rhel-httpd registry-host:5000/myadmin/rhel-httpd -    $ docker image push registry-host:5000/myadmin/rhel-httpd - -Check that this worked by running: - -    $ docker image ls - -You should see both `rhel-httpd` and `registry-host:5000/myadmin/rhel-httpd` -listed. 
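The tag-then-push sequence above can be wrapped in a small helper; a minimal sketch, assuming a reachable registry (the function name and the `registry-host:5000` address are illustrative):

```shell
# Sketch: tag an image for a private registry, then push it.
# The registry address and image names below are illustrative.
push_to_registry() {
  registry="$1"   # e.g. registry-host:5000
  image="$2"      # e.g. rhel-httpd
  docker image tag "$image" "$registry/$image" &&
  docker image push "$registry/$image"
}
# Usage (requires a running daemon and a reachable registry):
#   push_to_registry registry-host:5000 rhel-httpd
```

Keeping the tag and push in one step avoids pushing an image under a name that was never tagged for that registry.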
diff --git a/engine/reference/commandline/image_rm.md b/engine/reference/commandline/image_rm.md index 8879cfb99b..642a31a279 100644 --- a/engine/reference/commandline/image_rm.md +++ b/engine/reference/commandline/image_rm.md @@ -13,11 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Removing an image - -Here is an example of removing an image: - - $ docker image rm fedora/httpd diff --git a/engine/reference/commandline/image_save.md b/engine/reference/commandline/image_save.md index d8dfae1295..c1b4d33081 100644 --- a/engine/reference/commandline/image_save.md +++ b/engine/reference/commandline/image_save.md @@ -13,20 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -Save all fedora repository images to a fedora-all.tar and save the latest -fedora image to a fedora-latest.tar: - - $ docker image save fedora > fedora-all.tar - - $ docker image save --output=fedora-latest.tar fedora:latest - - $ ls -sh fedora-all.tar - - 721M fedora-all.tar - - $ ls -sh fedora-latest.tar - - 367M fedora-latest.tar diff --git a/engine/reference/commandline/image_tag.md b/engine/reference/commandline/image_tag.md index e24fbf0e86..2b58ab6601 100644 --- a/engine/reference/commandline/image_tag.md +++ b/engine/reference/commandline/image_tag.md @@ -13,36 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Tagging an image referenced by ID - -To tag a local image with ID "0e5574283393" into the "fedora" repository with -"version1.0": - - docker image tag 0e5574283393 fedora/httpd:version1.0 - -### Tagging an image referenced by Name - -To tag a local image with name "httpd" into the "fedora" repository with -"version1.0": - - docker image tag httpd fedora/httpd:version1.0 - -Note that since the tag name is not specified, the alias is created for an -existing local version `httpd:latest`. 
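Because a tag is only an alias, the original name and the new name resolve to the same image ID; a minimal sketch to confirm this (the helper name is illustrative, and a running daemon is assumed):

```shell
# Sketch: check whether two image references point at the same image.
# Requires a running docker daemon; the names passed in are illustrative.
same_image() {
  a=$(docker image inspect --format '{{.Id}}' "$1") || return 1
  b=$(docker image inspect --format '{{.Id}}' "$2") || return 1
  [ "$a" = "$b" ] && echo "same image" || echo "different images"
}
# Usage: same_image httpd fedora/httpd:version1.0
```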
- -### Tagging an image referenced by Name and Tag - -To tag a local image with name "httpd" and tag "test" into the "fedora" -repository with "version1.0.test": - - docker image tag httpd:test fedora/httpd:version1.0.test - -### Tagging an image for a private repository - -To push an image to a private registry and not the central Docker -registry you must tag it with the registry hostname and port (if needed). - - docker image tag 0e5574283393 myregistryhost:5000/fedora/httpd:version1.0 diff --git a/engine/reference/commandline/inspect.md b/engine/reference/commandline/inspect.md index dcd311e62b..bf668cedfb 100644 --- a/engine/reference/commandline/inspect.md +++ b/engine/reference/commandline/inspect.md @@ -11,302 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -Get information about an image when image name conflicts with the container name, -e.g. both image and container are named rhel7: - - $ docker inspect --type=image rhel7 - [ - { - "Id": "fe01a428b9d9de35d29531e9994157978e8c48fa693e1bf1d221dffbbb67b170", - "Parent": "10acc31def5d6f249b548e01e8ffbaccfd61af0240c17315a7ad393d022c5ca2", - .... 
- } - ] - -### Getting information on a container - -To get information on a container use its ID or instance name: - - $ docker inspect d2cc496561d6 - [{ - "Id": "d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47", - "Created": "2015-06-08T16:18:02.505155285Z", - "Path": "bash", - "Args": [], - "State": { - "Running": false, - "Paused": false, - "Restarting": false, - "OOMKilled": false, - "Dead": false, - "Pid": 0, - "ExitCode": 0, - "Error": "", - "StartedAt": "2015-06-08T16:18:03.643865954Z", - "FinishedAt": "2015-06-08T16:57:06.448552862Z" - }, - "Image": "ded7cd95e059788f2586a51c275a4f151653779d6a7f4dad77c2bd34601d94e4", - "NetworkSettings": { - "Bridge": "", - "SandboxID": "6b4851d1903e16dd6a567bd526553a86664361f31036eaaa2f8454d6f4611f6f", - "HairpinMode": false, - "LinkLocalIPv6Address": "", - "LinkLocalIPv6PrefixLen": 0, - "Ports": {}, - "SandboxKey": "/var/run/docker/netns/6b4851d1903e", - "SecondaryIPAddresses": null, - "SecondaryIPv6Addresses": null, - "EndpointID": "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d", - "Gateway": "172.17.0.1", - "GlobalIPv6Address": "", - "GlobalIPv6PrefixLen": 0, - "IPAddress": "172.17.0.2", - "IPPrefixLen": 16, - "IPv6Gateway": "", - "MacAddress": "02:42:ac:12:00:02", - "Networks": { - "bridge": { - "NetworkID": "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812", - "EndpointID": "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d", - "Gateway": "172.17.0.1", - "IPAddress": "172.17.0.2", - "IPPrefixLen": 16, - "IPv6Gateway": "", - "GlobalIPv6Address": "", - "GlobalIPv6PrefixLen": 0, - "MacAddress": "02:42:ac:12:00:02" - } - } - - }, - "ResolvConfPath": "/var/lib/docker/containers/d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47/resolv.conf", - "HostnamePath": "/var/lib/docker/containers/d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47/hostname", - "HostsPath": 
"/var/lib/docker/containers/d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47/hosts", - "LogPath": "/var/lib/docker/containers/d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47/d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47-json.log", - "Name": "/adoring_wozniak", - "RestartCount": 0, - "Driver": "devicemapper", - "MountLabel": "", - "ProcessLabel": "", - "Mounts": [ - { - "Source": "/data", - "Destination": "/data", - "Mode": "ro,Z", - "RW": false - "Propagation": "" - } - ], - "AppArmorProfile": "", - "ExecIDs": null, - "HostConfig": { - "Binds": null, - "ContainerIDFile": "", - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 0, - "CpuPeriod": 0, - "CpusetCpus": "", - "CpusetMems": "", - "CpuQuota": 0, - "BlkioWeight": 0, - "OomKillDisable": false, - "Privileged": false, - "PortBindings": {}, - "Links": null, - "PublishAllPorts": false, - "Dns": null, - "DnsSearch": null, - "DnsOptions": null, - "ExtraHosts": null, - "VolumesFrom": null, - "Devices": [], - "NetworkMode": "bridge", - "IpcMode": "", - "PidMode": "", - "UTSMode": "", - "CapAdd": null, - "CapDrop": null, - "RestartPolicy": { - "Name": "no", - "MaximumRetryCount": 0 - }, - "SecurityOpt": null, - "ReadonlyRootfs": false, - "Ulimits": null, - "LogConfig": { - "Type": "json-file", - "Config": {} - }, - "CgroupParent": "" - }, - "GraphDriver": { - "Name": "devicemapper", - "Data": { - "DeviceId": "5", - "DeviceName": "docker-253:1-2763198-d2cc496561d6d520cbc0236b4ba88c362c446a7619992123f11c809cded25b47", - "DeviceSize": "171798691840" - } - }, - "Config": { - "Hostname": "d2cc496561d6", - "Domainname": "", - "User": "", - "AttachStdin": true, - "AttachStdout": true, - "AttachStderr": true, - "ExposedPorts": null, - "Tty": true, - "OpenStdin": true, - "StdinOnce": true, - "Env": null, - "Cmd": [ - "bash" - ], - "Image": "fedora", - "Volumes": null, - "VolumeDriver": "", - "WorkingDir": "", - "Entrypoint": null, - "NetworkDisabled": false, - "MacAddress": "", - 
"OnBuild": null, - "Labels": {}, - "Memory": 0, - "MemorySwap": 0, - "CpuShares": 0, - "Cpuset": "", - "StopSignal": "SIGTERM" - } - } - ] -### Getting the IP address of a container instance - -To get the IP address of a container use: - -```bash -{% raw %} -$ docker inspect \ - --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \ - d2cc496561d6 - -172.17.0.2 -{% endraw %} -``` - -### Listing all port bindings - -One can loop over arrays and maps in the results to produce simple text -output: - -```bash -{% raw %} -$ docker inspect \ - --format='{{range $p, $conf := .NetworkSettings.Ports}} \ - {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' \ - d2cc496561d6 - - 80/tcp -> 80 -{% endraw %} -``` - -You can get more information about how to write a Go template from: -https://golang.org/pkg/text/template/. - -### Getting size information on a container - -```bash -$ docker inspect -s d2cc496561d6 - -[ -{ -.... -"SizeRw": 0, -"SizeRootFs": 972, -.... -} -] -``` -### Getting information on an image - -Use an image's ID or name (e.g., repository/name[:tag]) to get information -about the image: - -```bash -$ docker inspect ded7cd95e059 - -[{ -"Id": "ded7cd95e059788f2586a51c275a4f151653779d6a7f4dad77c2bd34601d94e4", -"Parent": "48ecf305d2cf7046c1f5f8fcbcd4994403173441d4a7f125b1bb0ceead9de731", -"Comment": "", -"Created": "2015-05-27T16:58:22.937503085Z", -"Container": "76cf7f67d83a7a047454b33007d03e32a8f474ad332c3a03c94537edd22b312b", -"ContainerConfig": { - "Hostname": "76cf7f67d83a", - "Domainname": "", - "User": "", - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "ExposedPorts": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": [ - "/bin/sh", - "-c", - "#(nop) ADD file:4be46382bcf2b095fcb9fe8334206b584eff60bb3fad8178cbd97697fcb2ea83 in /" - ], - "Image": "48ecf305d2cf7046c1f5f8fcbcd4994403173441d4a7f125b1bb0ceead9de731", - "Volumes": null, - "VolumeDriver": "", - "WorkingDir": "", - 
"Entrypoint": null, - "NetworkDisabled": false, - "MacAddress": "", - "OnBuild": null, - "Labels": {} -}, -"DockerVersion": "1.6.0", -"Author": "Lokesh Mandvekar \u003clsm5@fedoraproject.org\u003e", -"Config": { - "Hostname": "76cf7f67d83a", - "Domainname": "", - "User": "", - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "ExposedPorts": null, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": null, - "Image": "48ecf305d2cf7046c1f5f8fcbcd4994403173441d4a7f125b1bb0ceead9de731", - "Volumes": null, - "VolumeDriver": "", - "WorkingDir": "", - "Entrypoint": null, - "NetworkDisabled": false, - "MacAddress": "", - "OnBuild": null, - "Labels": {} -}, -"Architecture": "amd64", -"Os": "linux", -"Size": 186507296, -"VirtualSize": 186507296, -"GraphDriver": { - "Name": "devicemapper", - "Data": { - "DeviceId": "3", - "DeviceName": "docker-253:1-2763198-ded7cd95e059788f2586a51c275a4f151653779d6a7f4dad77c2bd34601d94e4", - "DeviceSize": "171798691840" - } -} -}] -``` diff --git a/engine/reference/commandline/login.md b/engine/reference/commandline/login.md index 22f9aa4ac0..628aa48644 100644 --- a/engine/reference/commandline/login.md +++ b/engine/reference/commandline/login.md @@ -11,11 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Login to a registry on your localhost - -```bash -$ docker login localhost:8080 -``` diff --git a/engine/reference/commandline/logout.md b/engine/reference/commandline/logout.md index d0c6828671..6c85bb4688 100644 --- a/engine/reference/commandline/logout.md +++ b/engine/reference/commandline/logout.md @@ -11,11 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Log out from a registry on your localhost - -```bash -$ docker logout localhost:8080 -``` diff --git 
a/engine/reference/commandline/network_connect.md b/engine/reference/commandline/network_connect.md index 2be307c293..ac389c151e 100644 --- a/engine/reference/commandline/network_connect.md +++ b/engine/reference/commandline/network_connect.md @@ -11,42 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - - -```bash -$ docker network connect multi-host-network container1 -``` - -You can also use the `docker run --network=<network-name>` option to start a container and immediately connect it to a network. - -```bash -$ docker run -itd --network=multi-host-network --ip 172.20.88.22 --ip6 2001:db8::8822 busybox -``` -You can pause, restart, and stop containers that are connected to a network. -A container connects to its configured networks when it runs. - -If specified, the container's IP address(es) are reapplied when a stopped -container is restarted. If the IP address is no longer available, the container -fails to start. One way to guarantee that the IP address is available is -to specify an `--ip-range` when creating the network, and choose the static IP -address(es) from outside that range. This ensures that the IP address is not -given to another container while this container is not on the network. - -```bash -$ docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 multi-host-network -``` - -```bash -$ docker network connect --ip 172.20.128.2 multi-host-network container2 -``` - -To verify the container is connected, use the `docker network inspect` command. Use `docker network disconnect` to remove a container from the network. - -Once connected to a network, containers can communicate using only another -container's IP address or name. For `overlay` networks or custom plugins that -support multi-host connectivity, containers connected to the same multi-host -network but launched from different Engines can also communicate in this way. 
- -You can connect a container to one or more networks. The networks need not be the same type. For example, you can connect a single container to both bridge and overlay networks. diff --git a/engine/reference/commandline/network_create.md b/engine/reference/commandline/network_create.md index 8399993f79..794fe5d731 100644 --- a/engine/reference/commandline/network_create.md +++ b/engine/reference/commandline/network_create.md @@ -11,84 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash -$ docker network create -d overlay my-multihost-network -``` - -Network names must be unique. The Docker daemon attempts to identify naming -conflicts but this is not guaranteed. It is the user's responsibility to avoid -name conflicts. - -### Connect containers - -When you start a container, use the `--network` flag to connect it to a network. -This adds the `busybox` container to the `mynet` network. - -```bash -$ docker run -itd --network=mynet busybox -``` - -If you want to add a container to a network after the container is already -running, use the `docker network connect` subcommand. - -You can connect multiple containers to the same network. Once connected, the -containers can communicate using only another container's IP address or name. -For `overlay` networks or custom plugins that support multi-host connectivity, -containers connected to the same multi-host network but launched from different -Engines can also communicate in this way. - -You can disconnect a container from a network using the `docker network -disconnect` command. - -### Specifying advanced options - -When you create a network, Engine creates a non-overlapping subnetwork for the -network by default. This subnetwork is not a subdivision of an existing network. -It is purely for IP-addressing purposes. You can override this default and -specify subnetwork values directly using the `--subnet` option. 
On a -`bridge` network you can only create a single subnet: - -```bash -$ docker network create -d bridge --subnet=192.168.0.0/16 br0 -``` - -Additionally, you can also specify the `--gateway`, `--ip-range`, and `--aux-address` -options. - -```bash -$ docker network create \ -  --driver=bridge \ -  --subnet=172.28.0.0/16 \ -  --ip-range=172.28.5.0/24 \ -  --gateway=172.28.5.254 \ -  br0 -``` - -If you omit the `--gateway` flag, the Engine selects one for you from inside a -preferred pool. For `overlay` networks and for network driver plugins that -support it, you can create multiple subnetworks. - -```bash -$ docker network create -d overlay \ -  --subnet=192.168.0.0/16 \ -  --subnet=192.170.0.0/16 \ -  --gateway=192.168.0.100 \ -  --gateway=192.170.0.100 \ -  --ip-range=192.168.1.0/24 \ -  --aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \ -  --aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \ -  my-multihost-network -``` - -Be sure that your subnetworks do not overlap. If they do, network creation -fails and the Engine returns an error. - -#### Network internal mode - -By default, when you connect a container to an `overlay` network, Docker also -connects a bridge network to it to provide external connectivity. If you want -to create an externally isolated `overlay` network, you can specify the -`--internal` option. 
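Putting the options above together, an isolated network with an explicit subnet might be created like this; a sketch only, with illustrative names:

```shell
# Sketch: create an externally isolated overlay network with a fixed subnet.
# --internal omits the bridge that normally provides external connectivity.
create_isolated_network() {
  name="$1"; subnet="$2"
  docker network create -d overlay --internal --subnet="$subnet" "$name"
}
# Usage (requires a Swarm-enabled daemon; names are illustrative):
#   create_isolated_network my-isolated-net 10.11.0.0/16
```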
diff --git a/engine/reference/commandline/network_disconnect.md b/engine/reference/commandline/network_disconnect.md index 2de6f02ccc..39c7b4e898 100644 --- a/engine/reference/commandline/network_disconnect.md +++ b/engine/reference/commandline/network_disconnect.md @@ -11,9 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash -$ docker network disconnect multi-host-network container1 -``` diff --git a/engine/reference/commandline/network_inspect.md b/engine/reference/commandline/network_inspect.md index 5f40a4f65b..0a3e2aaf6b 100644 --- a/engine/reference/commandline/network_inspect.md +++ b/engine/reference/commandline/network_inspect.md @@ -11,92 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash -$ sudo docker run -itd --name=container1 busybox -f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27 - -$ sudo docker run -itd --name=container2 busybox -bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727 -``` - -The `network inspect` command shows the containers, by id, in its -results. You can specify an alternate format to execute a given -template for each result. Go's -[text/template](http://golang.org/pkg/text/template/) package -describes all the details of the format. 
- -```bash -$ sudo docker network inspect bridge -[ - { - "Name": "bridge", - "Id": "b2b1a2cba717161d984383fd68218cf70bbbd17d328496885f7c921333228b0f", - "Scope": "local", - "Driver": "bridge", - "IPAM": { - "Driver": "default", - "Config": [ - { - "Subnet": "172.17.42.1/16", - "Gateway": "172.17.42.1" - } - ] - }, - "Internal": false, - "Containers": { - "bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": { - "Name": "container2", - "EndpointID": "0aebb8fcd2b282abe1365979536f21ee4ceaf3ed56177c628eae9f706e00e019", - "MacAddress": "02:42:ac:11:00:02", - "IPv4Address": "172.17.0.2/16", - "IPv6Address": "" - }, - "f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": { - "Name": "container1", - "EndpointID": "a00676d9c91a96bbe5bcfb34f705387a33d7cc365bac1a29e4e9728df92d10ad", - "MacAddress": "02:42:ac:11:00:01", - "IPv4Address": "172.17.0.1/16", - "IPv6Address": "" - } - }, - "Options": { - "com.docker.network.bridge.default_bridge": "true", - "com.docker.network.bridge.enable_icc": "true", - "com.docker.network.bridge.enable_ip_masquerade": "true", - "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", - "com.docker.network.bridge.name": "docker0", - "com.docker.network.driver.mtu": "1500" - } - } -] -``` - -Returns the information about the user-defined network: - -```bash -$ docker network create simple-network -69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a -$ docker network inspect simple-network -[ - { - "Name": "simple-network", - "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a", - "Scope": "local", - "Driver": "bridge", - "IPAM": { - "Driver": "default", - "Config": [ - { - "Subnet": "172.22.0.0/16", - "Gateway": "172.22.0.1" - } - ] - }, - "Containers": {}, - "Options": {} - } -] -``` diff --git a/engine/reference/commandline/network_ls.md b/engine/reference/commandline/network_ls.md index 6142c10fba..08ff5222f7 100644 --- a/engine/reference/commandline/network_ls.md +++ 
b/engine/reference/commandline/network_ls.md @@ -11,160 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash - $ docker network ls - NETWORK ID NAME DRIVER SCOPE - 7fca4eb8c647 bridge bridge local - 9f904ee27bf5 none null local - cf03ee007fb4 host host local - 78b03ee04fc4 multi-host overlay swarm -``` - -Use the `--no-trunc` option to display the full network ID: - -```bash -$ docker network ls --no-trunc -NETWORK ID NAME DRIVER -18a2866682b85619a026c81b98a5e375bd33e1b0936a26cc497c283d27bae9b3 none null -c288470c46f6c8949c5f7e5099b5b7947b07eabe8d9a27d79a9cbf111adcbf47 host host -7b369448dccbf865d397c8d2be0cda7cf7edc6b0945f77d2529912ae917a0185 bridge bridge -95e74588f40db048e86320c6526440c504650a1ff3e9f7d60a497c4d2163e5bd foo bridge -63d1ff1f77b07ca51070a8c227e962238358bd310bde1529cf62e6c307ade161 dev bridge -``` - -### Filtering - -The filtering flag (`-f` or `--filter`) format is a `key=value` pair. If there -is more than one filter, then pass multiple flags (e.g. `--filter "foo=bar" --filter "bif=baz"`). -Multiple filter flags are combined as an `OR` filter. For example, -`-f type=custom -f type=builtin` returns both `custom` and `builtin` networks. - -The currently supported filters are: - -* driver -* id (network's id) -* label (`label=<key>` or `label=<key>=<value>`) -* name (network's name) -* type (custom|builtin) - -#### Driver - -The `driver` filter matches networks based on their driver. - -The following example matches networks with the `bridge` driver: - -```bash -$ docker network ls --filter driver=bridge -NETWORK ID NAME DRIVER -db9db329f835 test1 bridge -f6e212da9dfd test2 bridge -``` - -#### ID - -The `id` filter matches on all or part of a network's ID. - -The following filter matches all networks with an ID containing the -`63d1ff1f77b0...` string. 
- -```bash -$ docker network ls --filter id=63d1ff1f77b07ca51070a8c227e962238358bd310bde1529cf62e6c307ade161 -NETWORK ID NAME DRIVER -63d1ff1f77b0 dev bridge -``` - -You can also filter for a substring in an ID as this shows: - -```bash -$ docker network ls --filter id=95e74588f40d -NETWORK ID NAME DRIVER -95e74588f40d foo bridge - -$ docker network ls --filter id=95e -NETWORK ID NAME DRIVER -95e74588f40d foo bridge -``` - -#### Label - -The `label` filter matches networks based on the presence of a `label` alone or a `label` and a -value. - -The following filter matches networks with the `usage` label regardless of its value. - -```bash -$ docker network ls -f "label=usage" -NETWORK ID NAME DRIVER -db9db329f835 test1 bridge -f6e212da9dfd test2 bridge -``` - -The following filter matches networks with the `usage` label with the `prod` value. - -```bash -$ docker network ls -f "label=usage=prod" -NETWORK ID NAME DRIVER -f6e212da9dfd test2 bridge -``` - -#### Name - -The `name` filter matches on all or part of a network's name. - -The following filter matches all networks with a name containing the `foobar` string. - -```bash -$ docker network ls --filter name=foobar -NETWORK ID NAME DRIVER -06e7eef0a170 foobar bridge -``` - -You can also filter for a substring in a name as this shows: - -```bash -$ docker network ls --filter name=foo -NETWORK ID NAME DRIVER -95e74588f40d foo bridge -06e7eef0a170 foobar bridge -``` - -#### Type - -The `type` filter supports two values; `builtin` displays predefined networks -(`bridge`, `none`, `host`), whereas `custom` displays user defined networks. - -The following filter matches all user defined networks: - -```bash -$ docker network ls --filter type=custom -NETWORK ID NAME DRIVER -95e74588f40d foo bridge -63d1ff1f77b0 dev bridge -``` - -By having this flag it allows for batch cleanup. 
For example, use this filter -to delete all user defined networks: - -```bash -$ docker network rm `docker network ls --filter type=custom -q` -``` - -A warning will be issued when trying to remove a network that has containers -attached. - -### Format - -Format uses a Go template to print the output. The following variables are -supported: - -* .ID - Network ID -* .Name - Network name -* .Driver - Network driver -* .Scope - Network scope (local, global) -* .IPv6 - Whether IPv6 is enabled on the network or not -* .Internal - Whether the network is internal or not -* .Labels - All labels assigned to the network -* .Label - Value of a specific label for this network. For example `{% raw %}{{.Label "project.version"}}{% endraw %}` diff --git a/engine/reference/commandline/network_rm.md b/engine/reference/commandline/network_rm.md index ce789e9a56..0245a9495a 100644 --- a/engine/reference/commandline/network_rm.md +++ b/engine/reference/commandline/network_rm.md @@ -11,22 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash - $ docker network rm my-network -``` - -To delete multiple networks in a single `docker network rm` command, provide -multiple network names or ids. The following example deletes a network with id -`3695c422697f` and a network named `my-network`: - -```bash - $ docker network rm 3695c422697f my-network -``` - -When you specify multiple networks, the command attempts to delete each in turn. -If the deletion of one network fails, the command continues to the next on the -list and tries to delete that. The command reports success or failure for each -deletion. 
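The `type=custom` filter and the per-network error reporting described above combine naturally into a cleanup helper; a minimal sketch (the function name is illustrative):

```shell
# Sketch: remove every user-defined network, continuing past failures
# (for example, networks that still have containers attached).
remove_custom_networks() {
  docker network ls --filter type=custom -q | while read -r net; do
    docker network rm "$net" || echo "skipped $net (still in use?)"
  done
}
# Usage (requires a running daemon): remove_custom_networks
```

Looping network by network, rather than passing the whole list at once, makes it easy to log which removals were skipped.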
diff --git a/engine/reference/commandline/run.md b/engine/reference/commandline/run.md index 603892781e..71c486399c 100644 --- a/engine/reference/commandline/run.md +++ b/engine/reference/commandline/run.md @@ -11,587 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Assign name and allocate pseudo-TTY (--name, -it) - - $ docker run --name test -it debian - root@d6c0fe130dba:/# exit 13 - $ echo $? - 13 - $ docker ps -a | grep test - d6c0fe130dba debian:7 "/bin/bash" 26 seconds ago Exited (13) 17 seconds ago test - -This example runs a container named `test` using the `debian:latest` -image. The `-it` instructs Docker to allocate a pseudo-TTY connected to -the container's stdin; creating an interactive `bash` shell in the container. -In the example, the `bash` shell is quit by entering -`exit 13`. This exit code is passed on to the caller of -`docker run`, and is recorded in the `test` container's metadata. - -### Capture container ID (--cidfile) - - $ docker run --cidfile /tmp/docker_test.cid ubuntu echo "test" - -This will create a container and print `test` to the console. The `cidfile` -flag makes Docker attempt to create a new file and write the container ID to it. -If the file exists already, Docker will return an error. Docker will close this -file when `docker run` exits. - -### Full container capabilities (--privileged) - - $ docker run -t -i --rm ubuntu bash - root@bc338942ef20:/# mount -t tmpfs none /mnt - mount: permission denied - -This will *not* work, because by default, most potentially dangerous kernel -capabilities are dropped; including `cap_sys_admin` (which is required to mount -filesystems). 
However, the `--privileged` flag will allow it to run: - -    $ docker run -t -i --privileged ubuntu bash -    root@50e3f57e16e6:/# mount -t tmpfs none /mnt -    root@50e3f57e16e6:/# df -h -    Filesystem Size Used Avail Use% Mounted on -    none 1.9G 0 1.9G 0% /mnt - -The `--privileged` flag gives *all* capabilities to the container, and it also -lifts all the limitations enforced by the `device` cgroup controller. In other -words, the container can then do almost everything that the host can do. This -flag exists to allow special use-cases, like running Docker within Docker. - -### Set working directory (-w) - -    $ docker run -w /path/to/dir/ -i -t ubuntu pwd - -The `-w` option runs the command inside the given directory, here -`/path/to/dir/`. If the path does not exist, it is created inside the container. - -### Set storage driver options per container - -    $ docker run -it --storage-opt size=120G fedora /bin/bash - -The `size` option sets the container rootfs size to 120G at creation time. -This option is only available for the `devicemapper`, `btrfs`, `overlay2`, -`windowsfilter` and `zfs` graph drivers. -For the `devicemapper`, `btrfs`, `windowsfilter` and `zfs` graph drivers, -you cannot pass a size smaller than the Default BaseFS Size. -For the `overlay2` storage driver, the size option is only available if the -backing fs is `xfs` and mounted with the `pquota` mount option. -Under these conditions, you can pass any size smaller than the backing fs size. - -### Mount tmpfs (--tmpfs) - -    $ docker run -d --tmpfs /run:rw,noexec,nosuid,size=65536k my_image - -The `--tmpfs` flag mounts an empty tmpfs into the container with the `rw`, -`noexec`, `nosuid`, `size=65536k` options. - -### Mount volume (-v, --read-only) - -    $ docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd - -The `-v` flag mounts the current working directory into the container. 
The `-w` option runs the command inside the current working directory, by changing into the directory given by the value returned by `pwd`. This combination executes the command in the container, but inside the host's current working directory. - - $ docker run -v /doesnt/exist:/foo -w /foo -i -t ubuntu bash - -When the host directory of a bind-mounted volume doesn't exist, Docker will automatically create this directory on the host for you. In the example above, Docker will create the `/doesnt/exist` folder before starting your container. - - $ docker run --read-only -v /icanwrite busybox touch /icanwrite/here - -Volumes can be used in combination with `--read-only` to control where a container writes files. The `--read-only` flag mounts the container's root filesystem as read-only, prohibiting writes to locations other than the specified volumes for the container. - - $ docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v /path/to/static-docker-binary:/usr/bin/docker busybox sh - -By bind-mounting the docker unix socket and statically linked docker binary (refer to [get the linux binary](https://docs.docker.com/engine/installation/binaries/#/get-the-linux-binary)), you give the container full access to create and manipulate the host's Docker daemon. - -On Windows, the paths must be specified using Windows-style semantics. - - PS C:\> docker run -v c:\foo:c:\dest microsoft/nanoserver cmd /s /c type c:\dest\somefile.txt - Contents of file - - PS C:\> docker run -v c:\foo:d: microsoft/nanoserver cmd /s /c type d:\somefile.txt - Contents of file - -The following examples will fail when using Windows-based containers, as the destination of a volume or bind-mount inside the container must be one of: a non-existing or empty directory; or a drive other than C:. Further, the source of a bind mount must be a local directory, not a file. - - net use z: \\remotemachine\share - docker run -v z:\foo:c:\dest ...
- docker run -v \\uncpath\to\directory:c:\dest ... - docker run -v c:\foo\somefile.txt:c:\dest ... - docker run -v c:\foo:c: ... - docker run -v c:\foo:c:\existing-directory-with-contents ... - -For in-depth information about volumes, refer to [manage data in containers](https://docs.docker.com/engine/tutorials/dockervolumes/). - -### Publish or expose port (-p, --expose) - - $ docker run -p 127.0.0.1:80:8080 ubuntu bash - -This binds port `8080` of the container to port `80` on `127.0.0.1` of the host machine. The [Docker User Guide](https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/) explains in detail how to manipulate ports in Docker. - - $ docker run --expose 80 ubuntu bash - -This exposes port `80` of the container without publishing the port to the host system's interfaces. - -### Set environment variables (-e, --env, --env-file) - - $ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash - -This sets simple (non-array) environment variables in the container. For illustration, all three flags are shown here. `-e` and `--env` take an environment variable and value; if no `=` is provided, the variable's current value, set via `export`, is passed through (i.e. `$MYVAR1` from the host is set to `$MYVAR1` in the container). When no `=` is provided and that variable is not defined in the client's environment, the variable is removed from the container's list of environment variables. All three flags, `-e`, `--env` and `--env-file`, can be repeated. - -Regardless of the order of these three flags, the `--env-file` flags are processed first, and then the `-e` and `--env` flags. This way, `-e` or `--env` can override variables as needed.
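The precedence rules for `--env-file` and `-e`/`--env` described above can be sketched as follows. This is a simplified illustrative model, not Docker's actual implementation; `resolve_env` and its arguments are hypothetical names:

```python
def resolve_env(env_file_lines, env_flags, client_env):
    """Sketch of docker run environment precedence (simplified model).

    env_file_lines: lines read from --env-file files (processed first)
    env_flags:      values given to -e / --env (processed last, so they win)
    client_env:     the caller's environment, used for bare VAR pass-through
    """
    env = {}
    for line in env_file_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are skipped
        if "=" in line:
            key, _, val = line.partition("=")
            env[key] = val
        else:
            # bare VAR in the file: pass through the caller's value
            # (empty if unset, as in the TEST_PASSTHROUGH example)
            env[line] = client_env.get(line, "")
    for flag in env_flags:
        if "=" in flag:
            key, _, val = flag.partition("=")
            env[key] = val  # -e/--env overrides file-provided values
        elif flag in client_env:
            env[flag] = client_env[flag]
        else:
            env.pop(flag, None)  # bare VAR not set in client env: removed
    return env
```

For example, `resolve_env(["TEST_FOO=BAR"], ["TEST_FOO=This is a test"], {})` yields `{"TEST_FOO": "This is a test"}`, mirroring the `env.list` override shown in this section.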
- - $ cat ./env.list - TEST_FOO=BAR - $ docker run --env TEST_FOO="This is a test" --env-file ./env.list busybox env | grep TEST_FOO - TEST_FOO=This is a test - -The `--env-file` flag takes a filename as an argument and expects each line to be in the `VAR=VAL` format, mimicking the argument passed to `--env`. Comment lines need only be prefixed with `#`. - -An example of a file passed with `--env-file`: - -```none -$ cat ./env.list -TEST_FOO=BAR - -# this is a comment -TEST_APP_DEST_HOST=10.10.0.127 -TEST_APP_DEST_PORT=8888 -_TEST_BAR=FOO -TEST_APP_42=magic -helloWorld=true -123qwe=bar -org.spring.config=something - -# pass through this variable from the caller -TEST_PASSTHROUGH -$ TEST_PASSTHROUGH=howdy docker run --env-file ./env.list busybox env -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -HOSTNAME=5198e0745561 -TEST_FOO=BAR -TEST_APP_DEST_HOST=10.10.0.127 -TEST_APP_DEST_PORT=8888 -_TEST_BAR=FOO -TEST_APP_42=magic -helloWorld=true -TEST_PASSTHROUGH=howdy -HOME=/root -123qwe=bar -org.spring.config=something - -$ docker run --env-file ./env.list busybox env -PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -HOSTNAME=5198e0745561 -TEST_FOO=BAR -TEST_APP_DEST_HOST=10.10.0.127 -TEST_APP_DEST_PORT=8888 -_TEST_BAR=FOO -TEST_APP_42=magic -helloWorld=true -TEST_PASSTHROUGH= -HOME=/root -123qwe=bar -org.spring.config=something -``` - -### Set metadata on container (-l, --label, --label-file) - -A label is a `key=value` pair that applies metadata to a container. To label a container with two labels: - - $ docker run -l my-label --label com.example.foo=bar ubuntu bash - -The `my-label` key doesn't specify a value so the label defaults to an empty string (`""`). To add multiple labels, repeat the label flag (`-l` or `--label`). - -The `key=value` must be unique to avoid overwriting the label value. If you specify labels with identical keys but different values, each subsequent value overwrites the previous.
Docker uses the last `key=value` you supply. - -Use the `--label-file` flag to load multiple labels from a file. Delimit each label in the file with an EOL mark. The example below loads labels from a labels file in the current directory: - - $ docker run --label-file ./labels ubuntu bash - -The label-file format is similar to the format for loading environment variables. (Unlike environment variables, labels are not visible to processes running inside a container.) The following example illustrates a label-file format: - - com.example.label1="a label" - - # this is a comment - com.example.label2=another\ label - com.example.label3 - -You can load multiple label-files by supplying multiple `--label-file` flags. - -For additional information on working with labels, see [*Labels - custom metadata in Docker*](https://docs.docker.com/engine/userguide/labels-custom-metadata/) in the Docker User Guide. - -### Connect a container to a network (--network) - -When you start a container, use the `--network` flag to connect it to a network. This adds the `busybox` container to the `my-net` network. - -```bash -$ docker run -itd --network=my-net busybox -``` - -You can also choose the IP addresses for the container with the `--ip` and `--ip6` flags when you start the container on a user-defined network. - -```bash -$ docker run -itd --network=my-net --ip=10.10.9.75 busybox -``` - -If you want to add a running container to a network, use the `docker network connect` subcommand. - -You can connect multiple containers to the same network. Once connected, the containers can communicate using only another container's IP address or name. For `overlay` networks or custom plugins that support multi-host connectivity, containers connected to the same multi-host network but launched from different Engines can also communicate in this way. - -**Note**: Service discovery is unavailable on the default bridge network.
-Containers can communicate via their IP addresses by default. To communicate -by name, they must be linked. - -You can disconnect a container from a network using the `docker network -disconnect` command. - -### Mount volumes from container (--volumes-from) - - $ docker run --volumes-from 777f7dc92da7 --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd - -The `--volumes-from` flag mounts all the defined volumes from the referenced -containers. Containers can be specified by repetitions of the `--volumes-from` -argument. The container ID may be optionally suffixed with `:ro` or `:rw` to -mount the volumes in read-only or read-write mode, respectively. By default, -the volumes are mounted in the same mode (read write or read only) as -the reference container. - -Labeling systems like SELinux require that proper labels are placed on volume -content mounted into a container. Without a label, the security system might -prevent the processes running inside the container from using the content. By -default, Docker does not change the labels set by the OS. - -To change the label in the container context, you can add either of two suffixes -`:z` or `:Z` to the volume mount. These suffixes tell Docker to relabel file -objects on the shared volumes. The `z` option tells Docker that two containers -share the volume content. As a result, Docker labels the content with a shared -content label. Shared volume labels allow all containers to read/write content. -The `Z` option tells Docker to label the content with a private unshared label. -Only the current container can use a private volume. - -### Attach to STDIN/STDOUT/STDERR (-a) - -The `-a` flag tells `docker run` to bind to the container's `STDIN`, `STDOUT` -or `STDERR`. This makes it possible to manipulate the output and input as -needed. - - $ echo "test" | docker run -i -a stdin ubuntu cat - - -This pipes data into a container and prints the container's ID by attaching -only to the container's `STDIN`. 
- - $ docker run -a stderr ubuntu echo test - -This isn't going to print anything unless there's an error because we've -only attached to the `STDERR` of the container. The container's logs -still store what's been written to `STDERR` and `STDOUT`. - - $ cat somefile | docker run -i -a stdin mybuilder dobuild - -This is how piping a file into a container could be done for a build. -The container's ID will be printed after the build is done and the build -logs could be retrieved using `docker logs`. This is -useful if you need to pipe a file or something else into a container and -retrieve the container's ID once the container has finished running. - -### Add host device to container (--device) - - $ docker run --device=/dev/sdc:/dev/xvdc --device=/dev/sdd --device=/dev/zero:/dev/nulo -i -t ubuntu ls -l /dev/{xvdc,sdd,nulo} - brw-rw---- 1 root disk 8, 2 Feb 9 16:05 /dev/xvdc - brw-rw---- 1 root disk 8, 3 Feb 9 16:05 /dev/sdd - crw-rw-rw- 1 root root 1, 5 Feb 9 16:05 /dev/nulo - -It is often necessary to directly expose devices to a container. The `--device` -option enables that. For example, a specific block storage device or loop -device or audio device can be added to an otherwise unprivileged container -(without the `--privileged` flag) and have the application directly access it. - -By default, the container will be able to `read`, `write` and `mknod` these devices. -This can be overridden using a third `:rwm` set of options to each `--device` -flag: - - - $ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc - - Command (m for help): q - $ docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc - You will not be able to write the partition table. 
- - Command (m for help): q - - $ docker run --device=/dev/sda:/dev/xvdc:rw --rm -it ubuntu fdisk /dev/xvdc - - Command (m for help): q - - $ docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc - fdisk: unable to open /dev/xvdc: Operation not permitted - -> **Note:** -> `--device` cannot be safely used with ephemeral devices. Block devices -> that may be removed should not be added to untrusted containers with -> `--device`. - -### Restart policies (--restart) - -Use Docker's `--restart` to specify a container's *restart policy*. A restart -policy controls whether the Docker daemon restarts a container after exit. -Docker supports the following restart policies: - - - - - - - - - - - - - - - - - - - - - - - - - - -
PolicyResult
no - Do not automatically restart the container when it exits. This is the - default. -
- - on-failure[:max-retries] - - - Restart only if the container exits with a non-zero exit status. - Optionally, limit the number of restart retries the Docker - daemon attempts. -
always - Always restart the container regardless of the exit status. - When you specify always, the Docker daemon will try to restart - the container indefinitely. The container will also always start - on daemon startup, regardless of the current state of the container. -
unless-stopped - Always restart the container regardless of the exit status, but - do not start it on daemon startup if the container has been put - to a stopped state before. -
- - $ docker run --restart=always redis - -This will run the `redis` container with a restart policy of **always** -so that if the container exits, Docker will restart it. - -More detailed information on restart policies can be found in the -[Restart Policies (--restart)](../run.md#restart-policies-restart) -section of the Docker run reference page. - -### Add entries to container hosts file (--add-host) - -You can add other hosts into a container's `/etc/hosts` file by using one or -more `--add-host` flags. This example adds a static address for a host named -`docker`: - - $ docker run --add-host=docker:10.180.0.1 --rm -it debian - root@f38c87f2a42d:/# ping docker - PING docker (10.180.0.1): 48 data bytes - 56 bytes from 10.180.0.1: icmp_seq=0 ttl=254 time=7.600 ms - 56 bytes from 10.180.0.1: icmp_seq=1 ttl=254 time=30.705 ms - ^C--- docker ping statistics --- - 2 packets transmitted, 2 packets received, 0% packet loss - round-trip min/avg/max/stddev = 7.600/19.152/30.705/11.553 ms - -Sometimes you need to connect to the Docker host from within your -container. To enable this, pass the Docker host's IP address to -the container using the `--add-host` flag. To find the host's address, -use the `ip addr show` command. - -The flags you pass to `ip addr show` depend on whether you are -using IPv4 or IPv6 networking in your containers. Use the following -flags for IPv4 address retrieval for a network device named `eth0`: - - $ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1` - $ docker run --add-host=docker:${HOSTIP} --rm -it debian - -For IPv6 use the `-6` flag instead of the `-4` flag. For other network -devices, replace `eth0` with the correct device name (for example `docker0` -for the bridge device). - -### Set ulimits in container (--ulimit) - -Since setting `ulimit` settings in a container requires extra privileges not -available in the default container, you can set these using the `--ulimit` flag. 
-`--ulimit` is specified with a soft and hard limit as such: -`<type>=<soft limit>[:<hard limit>]`, for example: - - $ docker run --ulimit nofile=1024:1024 --rm debian sh -c "ulimit -n" - 1024 - -> **Note:** -> If you do not provide a `hard limit`, the `soft limit` will be used -> for both values. If no `ulimits` are set, they will be inherited from -> the default `ulimits` set on the daemon. The `as` option is currently disabled. -> In other words, the following command is not supported: -> `$ docker run -it --ulimit as=1024 fedora /bin/bash` - -The values are sent to the appropriate `syscall` as they are set. Docker doesn't perform any byte conversion. Take this into account when setting the values. - -#### For `nproc` usage - -Be careful setting `nproc` with the `ulimit` flag, as `nproc` is designed by Linux to set the maximum number of processes available to a user, not to a container. For example, start four containers as the `daemon` user: - - docker run -d -u daemon --ulimit nproc=3 busybox top - docker run -d -u daemon --ulimit nproc=3 busybox top - docker run -d -u daemon --ulimit nproc=3 busybox top - docker run -d -u daemon --ulimit nproc=3 busybox top - -The 4th container fails and reports a "[8] System error: resource temporarily unavailable" error. It fails because the caller set `nproc=3`, and the first three containers used up the three-process quota set for the `daemon` user. - -### Stop container with signal (--stop-signal) - -The `--stop-signal` flag sets the system call signal that will be sent to the container to exit. This signal can be a valid unsigned number that matches a position in the kernel's syscall table, for instance 9, or a signal name in the format SIGNAME, for instance SIGKILL. - -### Optional security options (--security-opt) - -On Windows, this flag can be used to specify the `credentialspec` option. The `credentialspec` must be in the format `file://spec.txt` or `registry://keyname`.
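The soft/hard split in the `--ulimit` value can be sketched as follows. This is an illustrative parser, not Docker's actual code; `parse_ulimit` is a hypothetical name:

```python
def parse_ulimit(spec):
    """Parse a --ulimit value of the form <type>=<soft>[:<hard>].

    If no hard limit is given, the soft limit is used for both values,
    as the note above describes.
    """
    name, _, limits = spec.partition("=")
    soft, sep, hard = limits.partition(":")
    soft = int(soft)
    return name, soft, int(hard) if sep else soft
```

For example, `parse_ulimit("nofile=1024:1024")` yields `("nofile", 1024, 1024)`, and `parse_ulimit("nproc=3")` yields `("nproc", 3, 3)`.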
- -### Stop container with timeout (--stop-timeout) - -The `--stop-timeout` flag sets the number of seconds to wait for the container to exit after sending it the pre-defined (see `--stop-signal`) system call signal. After the timeout elapses, the container is killed with SIGKILL. - -### Specify isolation technology for container (--isolation) - -This option is useful in situations where you are running Docker containers on Windows. The `--isolation <value>` option sets a container's isolation technology. On Linux, the only supported value is `default`, which uses Linux namespaces. These two commands are equivalent on Linux: - -```bash -$ docker run -d busybox top -$ docker run -d --isolation default busybox top -``` - -On Windows, `--isolation` can take one of these values: - - -| Value | Description | |-----------|--------------------------------------------------------------------------------------------| | `default` | Use the value specified by the Docker daemon's `--exec-opt` or system default (see below). | | `process` | Shared-kernel namespace isolation (not supported on Windows client operating systems). | | `hyperv` | Hyper-V hypervisor partition-based isolation. | - -The default isolation on Windows server operating systems is `process`. The default (and only supported) isolation on Windows client operating systems is `hyperv`. An attempt to start a container on a client operating system with `--isolation process` will fail.
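The platform rules above can be summarized as a small decision sketch. This is a simplified model for illustration, not Docker's actual logic; `effective_isolation` and its parameters are hypothetical names:

```python
def effective_isolation(requested, windows, server, daemon_default="process"):
    """Resolve the --isolation value per the rules described above.

    daemon_default models the daemon's --exec-opt isolation=... setting
    on Windows server (process unless configured otherwise).
    """
    if not windows:
        if requested != "default":
            raise ValueError("only 'default' isolation is supported on Linux")
        return "default"  # Linux namespaces
    if not server:
        # Windows client operating systems support only hyperv
        if requested == "process":
            raise ValueError("process isolation is not supported on Windows client")
        return "hyperv"
    return daemon_default if requested == "default" else requested
```

With `daemon_default="hyperv"`, the sketch mirrors a daemon started with `--exec-opt isolation=hyperv`, where `--isolation default` resolves to `hyperv`.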
- -On Windows server, assuming the default configuration, these commands are equivalent -and result in `process` isolation: - -```PowerShell -PS C:\> docker run -d microsoft/nanoserver powershell echo process -PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo process -PS C:\> docker run -d --isolation process microsoft/nanoserver powershell echo process -``` - -If you have set the `--exec-opt isolation=hyperv` option on the Docker `daemon`, or -are running against a Windows client-based daemon, these commands are equivalent and -result in `hyperv` isolation: - -```PowerShell -PS C:\> docker run -d microsoft/nanoserver powershell echo hyperv -PS C:\> docker run -d --isolation default microsoft/nanoserver powershell echo hyperv -PS C:\> docker run -d --isolation hyperv microsoft/nanoserver powershell echo hyperv -``` - -### Configure namespaced kernel parameters (sysctls) at runtime - -The `--sysctl` sets namespaced kernel parameters (sysctls) in the -container. For example, to turn on IP forwarding in the containers -network namespace, run this command: - - $ docker run --sysctl net.ipv4.ip_forward=1 someimage - - -> **Note**: Not all sysctls are namespaced. Docker does not support changing sysctls -> inside of a container that also modify the host system. As the kernel -> evolves we expect to see more sysctls become namespaced. - -#### Currently supported sysctls - - `IPC Namespace`: - - kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall, kernel.shmmax, kernel.shmmni, kernel.shm_rmid_forced - Sysctls beginning with fs.mqueue.* - - If you use the `--ipc=host` option these sysctls will not be allowed. - - `Network Namespace`: - Sysctls beginning with net.* - - If you use the `--network=host` option using these sysctls will not be allowed. 
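The sysctl allowlist above can be sketched as a simple check. This mirrors the documented allowlist for illustration only, not Docker's actual validation code; `sysctl_allowed` is a hypothetical name:

```python
# IPC-namespaced sysctls listed in the section above
IPC_SYSCTLS = {
    "kernel.msgmax", "kernel.msgmnb", "kernel.msgmni", "kernel.sem",
    "kernel.shmall", "kernel.shmmax", "kernel.shmmni", "kernel.shm_rmid_forced",
}

def sysctl_allowed(key, ipc_host=False, network_host=False):
    """Return True if --sysctl key is permitted under the documented rules."""
    if key in IPC_SYSCTLS or key.startswith("fs.mqueue."):
        return not ipc_host       # disallowed together with --ipc=host
    if key.startswith("net."):
        return not network_host   # disallowed together with --network=host
    return False                  # not a namespaced sysctl
```

For example, `sysctl_allowed("net.ipv4.ip_forward")` is permitted, but the same key with `network_host=True` is not.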
diff --git a/engine/reference/commandline/search.md b/engine/reference/commandline/search.md index cabb489ec1..6f3b6aeda0 100644 --- a/engine/reference/commandline/search.md +++ b/engine/reference/commandline/search.md @@ -11,27 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Search Docker Hub for ranked images - -Search a registry for the term 'fedora' and only display those images -ranked 3 or higher: - - $ docker search --filter=stars=3 fedora - NAME DESCRIPTION STARS OFFICIAL AUTOMATED - mattdm/fedora A basic Fedora image corresponding roughly... 50 - fedora (Semi) Official Fedora base image. 38 - mattdm/fedora-small A small Fedora image on which to build. Co... 8 - goldmann/wildfly A WildFly application server running on a ... 3 [OK] - -### Search Docker Hub for automated images - -Search Docker Hub for the term 'fedora' and only display automated images -ranked 1 or higher: - - $ docker search --filter=is-automated=true --filter=stars=1 fedora - NAME DESCRIPTION STARS OFFICIAL AUTOMATED - goldmann/wildfly A WildFly application server running on a ... 3 [OK] - tutum/fedora-20 Fedora 20 image with SSH access. For the r... 
1 [OK] diff --git a/engine/reference/commandline/secret_create.md b/engine/reference/commandline/secret_create.md index a39cc58122..d50a69fc74 100644 --- a/engine/reference/commandline/secret_create.md +++ b/engine/reference/commandline/secret_create.md @@ -11,57 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Create a secret - -```bash -$ echo | docker secret create my_secret - -mhv17xfe3gh6xc4rij5orpfds - -$ docker secret ls -ID NAME CREATED UPDATED SIZE -mhv17xfe3gh6xc4rij5orpfds my_secret 2016-10-27 23:25:43.909181089 +0000 UTC 2016-10-27 23:25:43.909181089 +0000 UTC 1679 -``` - -### Create a secret with a file - -```bash -$ docker secret create my_secret ./secret.json -mhv17xfe3gh6xc4rij5orpfds - -$ docker secret ls -ID NAME CREATED UPDATED SIZE -mhv17xfe3gh6xc4rij5orpfds my_secret 2016-10-27 23:25:43.909181089 +0000 UTC 2016-10-27 23:25:43.909181089 +0000 UTC 1679 -``` - -### Create a secret with labels - -```bash -$ docker secret create --label env=dev --label rev=20161102 my_secret ./secret.json -jtn7g6aukl5ky7nr9gvwafoxh - -$ docker secret inspect my_secret -[ - { - "ID": "jtn7g6aukl5ky7nr9gvwafoxh", - "Version": { - "Index": 541 - }, - "CreatedAt": "2016-11-03T20:54:12.924766548Z", - "UpdatedAt": "2016-11-03T20:54:12.924766548Z", - "Spec": { - "Name": "my_secret", - "Labels": { - "env": "dev", - "rev": "20161102" - }, - "Data": null - }, - "Digest": "sha256:4212a44b14e94154359569333d3fc6a80f6b9959dfdaff26412f4b2796b1f387", - "SecretSize": 1679 - } -] - -``` diff --git a/engine/reference/commandline/secret_inspect.md b/engine/reference/commandline/secret_inspect.md index 88f1e8aeea..8ec9c9f7e7 100644 --- a/engine/reference/commandline/secret_inspect.md +++ b/engine/reference/commandline/secret_inspect.md @@ -11,47 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## 
Examples - -### Inspecting a secret by name or ID - -You can inspect a secret, either by its *name*, or *ID* - -For example, given the following secret: - -```bash -$ docker secret ls -ID NAME CREATED UPDATED -mhv17xfe3gh6xc4rij5orpfds secret.json 2016-10-27 23:25:43.909181089 +0000 UTC 2016-10-27 23:25:43.909181089 +0000 UTC -``` - -```bash -$ docker secret inspect secret.json -[ - { - "ID": "mhv17xfe3gh6xc4rij5orpfds", - "Version": { - "Index": 1198 - }, - "CreatedAt": "2016-10-27T23:25:43.909181089Z", - "UpdatedAt": "2016-10-27T23:25:43.909181089Z", - "Spec": { - "Name": "secret.json" - } - } -] -``` - -### Formatting secret output - -You can use the --format option to obtain specific information about a -secret. The following example command outputs the creation time of the -secret. - -```bash -{% raw %} -$ docker secret inspect --format='{{.CreatedAt}}' mhv17xfe3gh6xc4rij5orpfds -2016-10-27 23:25:43.909181089 +0000 UTC -{% endraw %} -``` diff --git a/engine/reference/commandline/secret_ls.md b/engine/reference/commandline/secret_ls.md index 872e7a47c3..fd7d2575a9 100644 --- a/engine/reference/commandline/secret_ls.md +++ b/engine/reference/commandline/secret_ls.md @@ -11,11 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -```bash -$ docker secret ls -ID NAME CREATED UPDATED -mhv17xfe3gh6xc4rij5orpfds secret.json 2016-10-27 23:25:43.909181089 +0000 UTC 2016-10-27 23:25:43.909181089 +0000 UTC -``` diff --git a/engine/reference/commandline/service_create.md b/engine/reference/commandline/service_create.md index 60e4c5f2aa..0d54336099 100644 --- a/engine/reference/commandline/service_create.md +++ b/engine/reference/commandline/service_create.md @@ -11,509 +11,3 @@ here, you'll need to find the string by searching this repo: https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Create a service - -```bash -$ docker service create 
--name redis redis:3.0.6 - -dmu1ept4cxcfe8k8lhtux3ro3 - -$ docker service create --mode global --name redis2 redis:3.0.6 - -a8q9dasaafudfs8q8w32udass - -$ docker service ls - -ID NAME MODE REPLICAS IMAGE -dmu1ept4cxcf redis replicated 1/1 redis:3.0.6 -a8q9dasaafud redis2 global 1/1 redis:3.0.6 -``` - -### Create a service with 5 replica tasks (--replicas) - -Use the `--replicas` flag to set the number of replica tasks for a replicated -service. The following command creates a `redis` service with `5` replica tasks: - -```bash -$ docker service create --name redis --replicas=5 redis:3.0.6 - -4cdgfyky7ozwh3htjfw0d12qv -``` - -The above command sets the *desired* number of tasks for the service. Even -though the command returns immediately, actual scaling of the service may take -some time. The `REPLICAS` column shows both the *actual* and *desired* number -of replica tasks for the service. - -In the following example the desired state is `5` replicas, but the current -number of `RUNNING` tasks is `3`: - -```bash -$ docker service ls - -ID NAME MODE REPLICAS IMAGE -4cdgfyky7ozw redis replicated 3/5 redis:3.0.7 -``` - -Once all the tasks are created and `RUNNING`, the actual number of tasks is -equal to the desired number: - -```bash -$ docker service ls - -ID NAME MODE REPLICAS IMAGE -4cdgfyky7ozw redis replicated 5/5 redis:3.0.7 -``` - -### Create a service with secrets -Use the `--secret` flag to give a container access to a -[secret](secret_create.md). - -Create a service specifying a secret: - -```bash -$ docker service create --name redis --secret secret.json redis:3.0.6 - -4cdgfyky7ozwh3htjfw0d12qv -``` - -Create a service specifying the secret, target, user/group ID and mode: - -```bash -$ docker service create --name redis \ - --secret source=ssh-key,target=ssh \ - --secret src=app-key,target=app,uid=1000,gid=1001,mode=0400 \ - redis:3.0.6 - -4cdgfyky7ozwh3htjfw0d12qv -``` - -Secrets are located in `/run/secrets` in the container. 
If no target is specified, the name of the secret will be used as the in-memory file in the container. If a target is specified, that will be the filename. In the example above, two files will be created: `/run/secrets/ssh` and `/run/secrets/app` for each of the secret targets specified. - -### Create a service with a rolling update policy - -```bash -$ docker service create \ - --replicas 10 \ - --name redis \ - --update-delay 10s \ - --update-parallelism 2 \ - redis:3.0.6 -``` - -When you run a [service update](service_update.md), the scheduler updates a maximum of 2 tasks at a time, with `10s` between updates. For more information, refer to the [rolling updates tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/). - -### Set environment variables (-e, --env) - -This sets environment variables for all tasks in a service. For example: - -```bash -$ docker service create \ - --name redis_2 \ - --replicas 5 \ - --env MYVAR=foo \ - redis:3.0.6 -``` - -### Create a Docker service with a specific hostname (--hostname) - -This option sets the hostname of the service's containers to a specific string. For example: - -```bash -$ docker service create \ - --name redis \ - --hostname myredis \ - redis:3.0.6 -``` - -### Set metadata on a service (-l, --label) - -A label is a `key=value` pair that applies metadata to a service. To label a service with two labels: - -```bash -$ docker service create \ - --name redis_2 \ - --label com.example.foo="bar" \ - --label bar=baz \ - redis:3.0.6 -``` - -For more information about labels, refer to [apply custom metadata](https://docs.docker.com/engine/userguide/labels-custom-metadata/). - -### Add bind-mounts or volumes - -Docker supports two different kinds of mounts, which allow containers to read from or write to files or directories on other containers or the host operating system. These types are _data volumes_ (often referred to simply as volumes) and _bind-mounts_.
- -Additionally, Docker also supports tmpfs mounts. - -A **bind-mount** makes a file or directory on the host available to the -container it is mounted within. A bind-mount may be either read-only or -read-write. For example, a container might share its host's DNS information by -means of a bind-mount of the host's `/etc/resolv.conf` or a container might -write logs to its host's `/var/log/myContainerLogs` directory. If you use -bind-mounts and your host and containers have different notions of permissions, -access controls, or other such details, you will run into portability issues. - -A **named volume** is a mechanism for decoupling persistent data needed by your -container from the image used to create the container and from the host machine. -Named volumes are created and managed by Docker, and a named volume persists -even when no container is currently using it. Data in named volumes can be -shared between a container and the host machine, as well as between multiple -containers. Docker uses a _volume driver_ to create, manage, and mount volumes. -You can back up or restore volumes using Docker commands. - -A **tmpfs** mounts a tmpfs inside a container for volatile data. - -Consider a situation where your image starts a lightweight web server. You could -use that image as a base image, copy in your website's HTML files, and package -that into another image. Each time your website changed, you'd need to update -the new image and redeploy all of the containers serving your website. A better -solution is to store the website in a named volume which is attached to each of -your web server containers when they start. To update the website, you just -update the named volume. - -For more information about named volumes, see -[Data Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/). 
- -The following table describes options which apply to both bind-mounts and named volumes in a service: - -| Option | Required | Description -|:-----------------------------------|:--------------------------|:----------------------------------------------------------------------------------------- -| `type` | | The type of mount, can be one of `volume`, `bind`, or `tmpfs`. Defaults to `volume` if no type is specified. `volume` mounts a [managed volume](volume_create.md) into the container. `bind` bind-mounts a directory or file from the host into the container. `tmpfs` mounts a tmpfs in the container. -| `src` or `source` | for `type=bind` only | `type=volume`: `src` is an optional way to specify the name of the volume (for example, `src=my-volume`). If the named volume does not exist, it is automatically created. If no `src` is specified, the volume is assigned a random name which is guaranteed to be unique on the host, but may not be unique cluster-wide. A randomly-named volume has the same lifecycle as its container and is destroyed when the *container* is destroyed (which is upon `service update`, or when scaling or re-balancing the service). `type=bind`: `src` is required, and specifies an absolute path to the file or directory to bind-mount (for example, `src=/path/on/host/`). An error is produced if the file or directory does not exist. `type=tmpfs`: `src` is not supported. -| `dst` or `destination` or `target` | yes | Mount path inside the container, for example `/some/path/in/container/`. If the path does not exist in the container's filesystem, the Engine creates a directory at the specified location before mounting the volume or bind-mount. -| `readonly` or `ro` | | The Engine mounts binds and volumes `read-write` unless the `readonly` option is given when mounting the bind or volume. When `true` or `1` or no value is given, the bind or volume is mounted read-only. When `false` or `0`, the bind or volume is mounted read-write.
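The comma-separated mount syntax described in the table above can be sketched with a small parser. This is a simplified illustration that ignores quoting and escaping, not Docker's actual parser; `parse_mount` is a hypothetical name:

```python
def parse_mount(spec):
    """Parse a mount spec such as
    type=bind,source=/path/on/host,target=/some/path,readonly
    into a dict, applying the defaults and aliases from the table above."""
    opts = {"type": "volume"}  # type defaults to volume when unspecified
    aliases = {"source": "src", "destination": "dst", "target": "dst", "ro": "readonly"}
    for part in spec.split(","):
        key, _, val = part.partition("=")
        key = aliases.get(key, key)
        if key == "readonly":
            # true/1/no value -> read-only; false/0 -> read-write
            opts[key] = val not in ("false", "0")
        else:
            opts[key] = val
    return opts
```

For example, `parse_mount("dst=/data")` yields a `volume` mount, since `type` defaults to `volume`.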
-
-#### Bind Propagation
-
-Bind propagation refers to whether or not mounts created within a given
-bind-mount or named volume can be propagated to replicas of that mount. Consider
-a mount point `/mnt`, which is also mounted on `/tmp`. The propagation settings
-control whether a mount on `/tmp/a` would also be available on `/mnt/a`. Each
-propagation setting has a recursive counterpart. In the case of recursion,
-consider that `/tmp/a` is also mounted as `/foo`. The propagation settings
-control whether `/mnt/a` and/or `/tmp/a` would exist.
-
-The `bind-propagation` option defaults to `rprivate` for both bind-mounts and
-volume mounts, and is only configurable for bind-mounts. In other words, named
-volumes do not support bind propagation.
-
-- **`shared`**: Sub-mounts of the original mount are exposed to replica mounts,
-                and sub-mounts of replica mounts are also propagated to the
-                original mount.
-- **`slave`**: Similar to a `shared` mount, but only in one direction. If the
-               original mount exposes a sub-mount, the replica mount can see it.
-               However, if the replica mount exposes a sub-mount, the original
-               mount cannot see it.
-- **`private`**: The mount is private. Sub-mounts within it are not exposed to
-                 replica mounts, and sub-mounts of replica mounts are not
-                 exposed to the original mount.
-- **`rshared`**: The same as `shared`, but the propagation also extends to and from
-                 mount points nested within any of the original or replica mount
-                 points.
-- **`rslave`**: The same as `slave`, but the propagation also extends to and from
-                mount points nested within any of the original or replica mount
-                points.
-- **`rprivate`**: The default. The same as `private`, meaning that no mount points
-                  anywhere within the original or replica mount points propagate
-                  in either direction.
-
-For more information about bind propagation, see the
-[Linux kernel documentation for shared subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).
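As a sketch of the syntax, propagation is requested via the `bind-propagation` key inside a `--mount` value; the host path and service name below are illustrative, and the command is only echoed rather than run, since it requires a swarm:

```shell
# Build a --mount argument that requests rslave propagation for a bind-mount
# (host path and service name are illustrative).
mount_spec='type=bind,src=/mnt,dst=/mnt,bind-propagation=rslave'
echo "docker service create --name my-service --mount $mount_spec nginx:alpine"
```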
-
-#### Options for Named Volumes
-The following options can only be used for named volumes (`type=volume`):
-
-| Option                | Description
-|:----------------------|:--------------------------------------------------------------------------------------------------------------------
-| **volume-driver**     | Name of the volume-driver plugin to use for the volume. Defaults to ``"local"``, to use the local volume driver to create the volume if the volume does not exist.
-| **volume-label**      | One or more custom metadata entries ("labels") to apply to the volume upon creation. For example, `volume-label=mylabel=hello-world,my-other-label=hello-mars`. For more information about labels, refer to [apply custom metadata](https://docs.docker.com/engine/userguide/labels-custom-metadata/).
-| **volume-nocopy**     | By default, if you attach an empty volume to a container, and files or directories already existed at the mount-path in the container (`dst`), the Engine copies those files and directories into the volume, allowing the host to access them. Set `volume-nocopy` to disable copying files from the container's filesystem to the volume, and mount the empty volume instead. A value is optional: `true` or `1` is the default if you do not provide a value, and disables copying; `false` or `0` enables copying.
-| **volume-opt**        | Options specific to a given volume driver, which will be passed to the driver when creating the volume. Options are provided as a comma-separated list of key/value pairs, for example, `volume-opt=some-option=some-value,some-other-option=some-other-value`. For available options for a given driver, refer to that driver's documentation.
-
-#### Options for tmpfs
-The following options can only be used for tmpfs mounts (`type=tmpfs`):
-
-| Option                | Description
-|:----------------------|:--------------------------------------------------------------------------------------------------------------------
-| **tmpfs-size**        | Size of the tmpfs mount in bytes. Unlimited by default in Linux.
-| **tmpfs-mode**        | File mode of the tmpfs in octal (for example, `"700"` or `"0700"`). Defaults to ``"1777"`` in Linux.
-
-#### Differences between "--mount" and "--volume"
-
-The `--mount` flag supports most options that are supported by the `-v`
-or `--volume` flag for `docker run`, with some important exceptions:
-
-- The `--mount` flag allows you to specify a volume driver and volume driver
-  options *per volume*, without creating the volumes in advance. In contrast,
-  `docker run` allows you to specify a single volume driver which is shared
-  by all volumes, using the `--volume-driver` flag.
-
-- The `--mount` flag allows you to specify custom metadata ("labels") for a volume,
-  before the volume is created.
-
-- When you use `--mount` with `type=bind`, the host-path must refer to an *existing*
-  path on the host. The path will not be created for you and the service will fail
-  with an error if the path does not exist.
-
-- The `--mount` flag does not allow you to relabel a volume with `Z` or `z` flags,
-  which are used for `selinux` labeling.
-
-#### Create a service using a named volume
-
-The following example creates a service that uses a named volume:
-
-```bash
-$ docker service create \
-  --name my-service \
-  --replicas 3 \
-  --mount type=volume,source=my-volume,destination=/path/in/container,volume-label="color=red",volume-label="shape=round" \
-  nginx:alpine
-```
-
-For each replica of the service, the engine requests a volume named "my-volume"
-from the default ("local") volume driver where the task is deployed. If the
-volume does not exist, the engine creates a new volume and applies the "color"
-and "shape" labels.
-
-When the task is started, the volume is mounted on `/path/in/container/` inside
-the container.
-
-Be aware that the default ("local") volume driver is locally scoped.
-This means that depending on where a task is deployed, either that task gets a -*new* volume named "my-volume", or shares the same "my-volume" with other tasks -of the same service. Multiple containers writing to a single shared volume can -cause data corruption if the software running inside the container is not -designed to handle concurrent processes writing to the same location. Also take -into account that containers can be re-scheduled by the Swarm orchestrator and -be deployed on a different node. - -#### Create a service that uses an anonymous volume - -The following command creates a service with three replicas with an anonymous -volume on `/path/in/container`: - -```bash -$ docker service create \ - --name my-service \ - --replicas 3 \ - --mount type=volume,destination=/path/in/container \ - nginx:alpine -``` - -In this example, no name (`source`) is specified for the volume, so a new volume -is created for each task. This guarantees that each task gets its own volume, -and volumes are not shared between tasks. Anonymous volumes are removed after -the task using them is complete. - -#### Create a service that uses a bind-mounted host directory - -The following example bind-mounts a host directory at `/path/in/container` in -the containers backing the service: - -```bash -$ docker service create \ - --name my-service \ - --mount type=bind,source=/path/on/host,destination=/path/in/container \ - nginx:alpine -``` - -### Set service mode (--mode) - -The service mode determines whether this is a _replicated_ service or a _global_ -service. A replicated service runs as many tasks as specified, while a global -service runs on each active node in the swarm. - -The following command creates a global service: - -```bash -$ docker service create \ - --name redis_2 \ - --mode global \ - redis:3.0.6 -``` - -### Specify service constraints (--constraint) - -You can limit the set of nodes where a task can be scheduled by defining -constraint expressions. 
Multiple constraints find nodes that satisfy every
-expression (AND match). Constraints can match node or Docker Engine labels as
-follows:
-
-| node attribute  | matches                   | example                                         |
-|:----------------|:--------------------------|:------------------------------------------------|
-| node.id         | node ID                   | `node.id == 2ivku8v2gvtg4`                      |
-| node.hostname   | node hostname             | `node.hostname != node-2`                       |
-| node.role       | node role: manager        | `node.role == manager`                          |
-| node.labels     | user defined node labels  | `node.labels.security == high`                  |
-| engine.labels   | Docker Engine's labels    | `engine.labels.operatingsystem == ubuntu 14.04` |
-
-`engine.labels` apply to Docker Engine labels like operating system,
-drivers, etc. Swarm administrators add `node.labels` for operational purposes by
-using the [`docker node update`](node_update.md) command.
-
-For example, the following limits tasks for the `redis` service to nodes where the
-node label `type` equals `queue`:
-
-```bash
-$ docker service create \
-  --name redis_2 \
-  --constraint 'node.labels.type == queue' \
-  redis:3.0.6
-```
-
-### Attach a service to an existing network (--network)
-
-You can use overlay networks to connect one or more services within the swarm.
-
-First, create an overlay network on a manager node using the `docker network create`
-command:
-
-```bash
-$ docker network create \
-  --driver overlay \
-  my-network
-
-etjpu59cykrptrgw0z0hk5snf
-```
-
-After you create an overlay network in swarm mode, all manager nodes have
-access to the network.
-
-When you create a service, pass the `--network` flag to attach the service to
-the overlay network:
-
-```bash
-$ docker service create \
-  --replicas 3 \
-  --network my-network \
-  --name my-web \
-  nginx
-
-716thylsndqma81j6kkkb5aus
-```
-
-The swarm extends `my-network` to each node running the service.
-
-Containers on the same network can access each other using
-[service discovery](https://docs.docker.com/engine/swarm/networking/#use-swarm-mode-service-discovery).
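The AND matching described for `--constraint` above can be sketched with a toy filter; the node names and label values below are invented, and a real scheduler evaluates the expressions against node metadata rather than a text table:

```shell
# Toy AND-match: keep nodes whose "type" label is queue AND whose
# "security" label is high.
# Columns: node-name  type-label  security-label
nodes='node-1 queue high
node-2 queue low
node-3 web high'
printf '%s\n' "$nodes" | awk '$2 == "queue" && $3 == "high" { print $1 }'
```

Only nodes satisfying every expression survive, which is why adding constraints can only shrink the set of candidate nodes.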
-
-### Publish service ports externally to the swarm (-p, --publish)
-
-You can publish service ports to make them available externally to the swarm
-using the `--publish` flag:
-
-```bash
-$ docker service create \
-  --publish <TARGET-PORT>:<SERVICE-PORT> \
-  nginx
-```
-
-For example:
-
-```bash
-$ docker service create \
-  --name my_web \
-  --replicas 3 \
-  --publish 8080:80 \
-  nginx
-```
-
-When you publish a service port, the swarm routing mesh makes the service
-accessible at the target port on every node, regardless of whether there is a
-task for the service running on the node. For more information, refer to
-[Use swarm mode routing mesh](https://docs.docker.com/engine/swarm/ingress/).
-
-### Publish a port for TCP only or UDP only
-
-By default, when you publish a port, it is a TCP port. You can
-specifically publish a UDP port instead of or in addition to a TCP port. When
-you publish both TCP and UDP ports, Docker 1.12.2 and earlier require you to
-add the suffix `/tcp` for TCP ports. Otherwise it is optional.
-
-#### TCP only
-
-The following two commands are equivalent.
-
-```bash
-$ docker service create --name dns-cache -p 53:53 dns-cache
-
-$ docker service create --name dns-cache -p 53:53/tcp dns-cache
-```
-
-#### TCP and UDP
-
-```bash
-$ docker service create --name dns-cache -p 53:53/tcp -p 53:53/udp dns-cache
-```
-
-#### UDP only
-
-```bash
-$ docker service create --name dns-cache -p 53:53/udp dns-cache
-```
-
-### Create services using templates
-
-You can use templates for some flags of `service create`, using the syntax
-provided by Go's [text/template](https://golang.org/pkg/text/template/) package.
-
-The supported flags are the following:
-
-- `--hostname`
-- `--mount`
-- `--env`
-
-Valid placeholders for the Go template are listed below:
-
-Placeholder       | Description
------------------ | --------------------------------------------
-`.Service.ID`     | Service ID
-`.Service.Name`   | Service name
-`.Service.Labels` | Service labels
-`.Node.ID`        | Node ID
-`.Task.ID`        | Task ID
-`.Task.Name`      | Task name
-`.Task.Slot`      | Task slot
-
-#### Template example
-
-In this example, we set the hostname of the created containers based on the
-service's name and the ID of the node where the container runs.
-
-```bash
-{% raw %}
-$ docker service create \
-    --name hosttempl \
-    --hostname="{{.Node.ID}}-{{.Service.Name}}" \
-    busybox top
-
-va8ew30grofhjoychbr6iot8c
-
-$ docker service ps va8ew30grofhjoychbr6iot8c
-
-ID            NAME         IMAGE                                                                                   NODE          DESIRED STATE  CURRENT STATE               ERROR  PORTS
-wo41w8hg8qan  hosttempl.1  busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912  2e7a8a9c4da2  Running        Running about a minute ago
-
-$ docker inspect \
-    --format="{{.Config.Hostname}}" \
-    hosttempl.1.wo41w8hg8qanxwjwsg4kxpprj
-
-x3ti0erg11rjpg64m75kej2mz-hosttempl
-{% endraw %}
-```
diff --git a/engine/reference/commandline/service_inspect.md b/engine/reference/commandline/service_inspect.md
index ce40a9e629..aef31b7a9c 100644
--- a/engine/reference/commandline/service_inspect.md
+++ b/engine/reference/commandline/service_inspect.md
@@ -11,119 +11,3 @@ here, you'll need to find the string by searching this repo:
 https://www.github.com/docker/docker
 -->
 {% include cli.md %}
-
-## Examples
-
-### Inspecting a service by name or ID
-
-You can inspect a service either by its *name* or its *ID*.
-
-For example, given the following service:
-
-```bash
-$ docker service ls
-ID            NAME   MODE        REPLICAS  IMAGE
-dmu1ept4cxcf  redis  replicated  3/3       redis:3.0.6
-```
-
-Both `docker service inspect redis` and `docker service inspect dmu1ept4cxcf`
-produce the same result:
-
-```bash
-$ docker service
inspect redis -[ - { - "ID": "dmu1ept4cxcfe8k8lhtux3ro3", - "Version": { - "Index": 12 - }, - "CreatedAt": "2016-06-17T18:44:02.558012087Z", - "UpdatedAt": "2016-06-17T18:44:02.558012087Z", - "Spec": { - "Name": "redis", - "TaskTemplate": { - "ContainerSpec": { - "Image": "redis:3.0.6" - }, - "Resources": { - "Limits": {}, - "Reservations": {} - }, - "RestartPolicy": { - "Condition": "any", - "MaxAttempts": 0 - }, - "Placement": {} - }, - "Mode": { - "Replicated": { - "Replicas": 1 - } - }, - "UpdateConfig": {}, - "EndpointSpec": { - "Mode": "vip" - } - }, - "Endpoint": { - "Spec": {} - } - } -] -``` - -```bash -$ docker service inspect dmu1ept4cxcf -[ - { - "ID": "dmu1ept4cxcfe8k8lhtux3ro3", - "Version": { - "Index": 12 - }, - ... - } -] -``` - -### Inspect a service using pretty-print - -You can print the inspect output in a human-readable format instead of the default -JSON output, by using the `--pretty` option: - -```bash -$ docker service inspect --pretty frontend -ID: c8wgl7q4ndfd52ni6qftkvnnp -Name: frontend -Labels: - - org.example.projectname=demo-app -Service Mode: REPLICATED - Replicas: 5 -Placement: -UpdateConfig: - Parallelism: 0 -ContainerSpec: - Image: nginx:alpine -Resources: -Endpoint Mode: vip -Ports: - Name = - Protocol = tcp - TargetPort = 443 - PublishedPort = 4443 -``` - -You can also use `--format pretty` for the same effect. - - -### Finding the number of tasks running as part of a service - -The `--format` option can be used to obtain specific information about a -service. For example, the following command outputs the number of replicas -of the "redis" service. 
- -```bash -{% raw %} -$ docker service inspect --format='{{.Spec.Mode.Replicated.Replicas}}' redis -10 -{% endraw %} -``` diff --git a/engine/reference/commandline/service_ps.md b/engine/reference/commandline/service_ps.md index 39cb85920a..818e653024 100644 --- a/engine/reference/commandline/service_ps.md +++ b/engine/reference/commandline/service_ps.md @@ -13,120 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Listing the tasks that are part of a service - -The following command shows all the tasks that are part of the `redis` service: - -```bash -$ docker service ps redis - -ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS -0qihejybwf1x redis.1 redis:3.0.5 manager1 Running Running 8 seconds -bk658fpbex0d redis.2 redis:3.0.5 worker2 Running Running 9 seconds -5ls5s5fldaqg redis.3 redis:3.0.5 worker1 Running Running 9 seconds -8ryt076polmc redis.4 redis:3.0.5 worker1 Running Running 9 seconds -1x0v8yomsncd redis.5 redis:3.0.5 manager1 Running Running 8 seconds -71v7je3el7rr redis.6 redis:3.0.5 worker2 Running Running 9 seconds -4l3zm9b7tfr7 redis.7 redis:3.0.5 worker2 Running Running 9 seconds -9tfpyixiy2i7 redis.8 redis:3.0.5 worker1 Running Running 9 seconds -3w1wu13yupln redis.9 redis:3.0.5 manager1 Running Running 8 seconds -8eaxrb2fqpbn redis.10 redis:3.0.5 manager1 Running Running 8 seconds -``` - -In addition to _running_ tasks, the output also shows the task history. 
For
-example, after updating the service to use the `redis:3.0.6` image, the output
-may look like this:
-
-```bash
-$ docker service ps redis
-
-ID            NAME         IMAGE        NODE      DESIRED STATE  CURRENT STATE                   ERROR                             PORTS
-50qe8lfnxaxk  redis.1      redis:3.0.6  manager1  Running        Running 6 seconds ago
-ky2re9oz86r9   \_ redis.1  redis:3.0.5  manager1  Shutdown       Shutdown 8 seconds ago
-3s46te2nzl4i  redis.2      redis:3.0.6  worker2   Running        Running less than a second ago
-nvjljf7rmor4   \_ redis.2  redis:3.0.6  worker2   Shutdown       Rejected 23 seconds ago         "No such image: redis@sha256:6…"
-vtiuz2fpc0yb   \_ redis.2  redis:3.0.5  worker2   Shutdown       Shutdown 1 second ago
-jnarweeha8x4  redis.3      redis:3.0.6  worker1   Running        Running 3 seconds ago
-vs448yca2nz4   \_ redis.3  redis:3.0.5  worker1   Shutdown       Shutdown 4 seconds ago
-jf1i992619ir  redis.4      redis:3.0.6  worker1   Running        Running 10 seconds ago
-blkttv7zs8ee   \_ redis.4  redis:3.0.5  worker1   Shutdown       Shutdown 11 seconds ago
-```
-
-The number of items in the task history is determined by the
-`--task-history-limit` option that was set when initializing the swarm. You can
-change the task history retention limit using the
-[`docker swarm update`](swarm_update.md) command.
-
-When deploying a service, Docker resolves the digest for the service's
-image and pins the service to that digest. The digest is not shown by
-default, but is printed if `--no-trunc` is used.
The `--no-trunc` option
-also shows the non-truncated task IDs and error messages, as seen below:
-
-```bash
-$ docker service ps --no-trunc redis
-
-ID                         NAME         IMAGE                                                                                NODE      DESIRED STATE  CURRENT STATE           ERROR                                                                                           PORTS
-50qe8lfnxaxksi9w2a704wkp7  redis.1      redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842  manager1  Running        Running 5 minutes ago
-ky2re9oz86r9556i2szb8a8af   \_ redis.1  redis:3.0.5@sha256:f8829e00d95672c48c60f468329d6693c4bdd28d1f057e755f8ba8b40008682e  worker2   Shutdown       Shutdown 5 minutes ago
-bk658fpbex0d57cqcwoe3jthu  redis.2      redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842  worker2   Running        Running 5 seconds
-nvjljf7rmor4htv7l8rwcx7i7   \_ redis.2  redis:3.0.6@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842  worker2   Shutdown       Rejected 5 minutes ago  "No such image: redis@sha256:6a692a76c2081888b589e26e6ec835743119fe453d67ecf03df7de5b73d69842"
-```
-
-## Filtering
-
-The filtering flag (`-f` or `--filter`) format is a `key=value` pair. If there
-is more than one filter, then pass multiple flags (e.g. `--filter "foo=bar" --filter "bif=baz"`).
-Multiple filter flags are combined as an `OR` filter. For example,
-`-f name=redis.1 -f name=redis.7` returns both `redis.1` and `redis.7` tasks.
-
-The currently supported filters are:
-
-* [id](#id)
-* [name](#name)
-* [node](#node)
-* [desired-state](#desired-state)
-
-
-#### ID
-
-The `id` filter matches on all or a prefix of a task's ID.
-
-```bash
-$ docker service ps -f "id=8" redis
-
-ID            NAME      IMAGE        NODE      DESIRED STATE  CURRENT STATE      ERROR  PORTS
-8ryt076polmc  redis.4   redis:3.0.6  worker1   Running        Running 9 seconds
-8eaxrb2fqpbn  redis.10  redis:3.0.6  manager1  Running        Running 8 seconds
-```
-
-#### Name
-
-The `name` filter matches on task names.
-
-```bash
-$ docker service ps -f "name=redis.1" redis
-ID            NAME     IMAGE        NODE      DESIRED STATE  CURRENT STATE      ERROR  PORTS
-qihejybwf1x5  redis.1  redis:3.0.6  manager1  Running        Running 8 seconds
-```
-
-
-#### Node
-
-The `node` filter matches on a node name or a node ID.
-
-```bash
-$ docker service ps -f "node=manager1" redis
-ID            NAME      IMAGE        NODE      DESIRED STATE  CURRENT STATE      ERROR  PORTS
-0qihejybwf1x  redis.1   redis:3.0.6  manager1  Running        Running 8 seconds
-1x0v8yomsncd  redis.5   redis:3.0.6  manager1  Running        Running 8 seconds
-3w1wu13yupln  redis.9   redis:3.0.6  manager1  Running        Running 8 seconds
-8eaxrb2fqpbn  redis.10  redis:3.0.6  manager1  Running        Running 8 seconds
-```
-
-
-#### desired-state
-
-The `desired-state` filter can take the values `running`, `shutdown`, and `accepted`.
diff --git a/engine/reference/commandline/service_scale.md b/engine/reference/commandline/service_scale.md
index 133e6f595c..c57e7236f9 100644
--- a/engine/reference/commandline/service_scale.md
+++ b/engine/reference/commandline/service_scale.md
@@ -13,64 +13,3 @@
 https://www.github.com/docker/docker
 -->
 {% include cli.md %}
-
-## Examples
-
-### Scale a service
-
-The scale command enables you to scale one or more replicated services either up
-or down to the desired number of replicas. This command cannot be applied to
-services in global mode. The command returns immediately, but the actual
-scaling of the service may take some time. To stop all replicas of a service
-while keeping the service active in the swarm, you can set the scale to 0.
-
-For example, the following command scales the "frontend" service to 50 tasks.
-
-```bash
-$ docker service scale frontend=50
-frontend scaled to 50
-```
-
-The following command tries to scale a global service to 10 tasks and returns an error.
-
-```bash
-$ docker service create --mode global --name backend backend:latest
-b4g08uwuairexjub6ome6usqh
-$ docker service scale backend=10
-backend: scale can only be used with replicated mode
-```
-
-Directly afterwards, run `docker service ls` to see the actual number of
-replicas.
-
-```bash
-$ docker service ls --filter name=frontend
-
-ID            NAME      MODE        REPLICAS  IMAGE
-3pr5mlvu3fh9  frontend  replicated  15/50     nginx:alpine
-```
-
-You can also scale a service using the [`docker service update`](service_update.md)
-command. The following commands are equivalent:
-
-```bash
-$ docker service scale frontend=50
-$ docker service update --replicas=50 frontend
-```
-
-### Scale multiple services
-
-The `docker service scale` command allows you to set the desired number of
-tasks for multiple services at once. The following example scales both the
-backend and frontend services:
-
-```bash
-$ docker service scale backend=3 frontend=5
-backend scaled to 3
-frontend scaled to 5
-
-$ docker service ls
-ID            NAME      MODE        REPLICAS  IMAGE
-3pr5mlvu3fh9  frontend  replicated  5/5       nginx:alpine
-74nzcxxjv6fq  backend   replicated  3/3       redis:3.0.6
-```
diff --git a/engine/reference/commandline/service_update.md b/engine/reference/commandline/service_update.md
index 1e0b545eeb..39a0dede99 100644
--- a/engine/reference/commandline/service_update.md
+++ b/engine/reference/commandline/service_update.md
@@ -13,83 +13,3 @@
 https://www.github.com/docker/docker
 -->
 {% include cli.md %}
-
-## Examples
-
-### Update a service
-
-```bash
-$ docker service update --limit-cpu 2 redis
-```
-
-### Perform a rolling restart with no parameter changes
-
-```bash
-$ docker service update --force --update-parallelism 1 --update-delay 30s redis
-```
-
-In this example, the `--force` flag causes the service's tasks to be shut down
-and replaced with new ones even though none of the other parameters would
-normally cause that to happen.
The `--update-parallelism 1` setting ensures
-that only one task is replaced at a time (this is the default behavior). The
-`--update-delay 30s` setting introduces a 30 second delay between tasks, so
-that the rolling restart happens gradually.
-
-### Adding and removing mounts
-
-Use the `--mount-add` or `--mount-rm` options to add or remove a service's
-bind-mounts or volumes.
-
-The following example creates a service which mounts the `test-data` volume to
-`/somewhere`. The next step updates the service to also mount the `other-volume`
-volume to `/somewhere-else`. The last step unmounts the `/somewhere` mount
-point, effectively removing the `test-data` volume. Each command returns the
-service name.
-
-- The `--mount-add` flag takes the same parameters as the `--mount` flag on
-  `service create`. Refer to the [volumes and
-  bind-mounts](service_create.md#volumes-and-bind-mounts-mount) section in the
-  `service create` reference for details.
-
-- The `--mount-rm` flag takes the `target` path of the mount.
-
-```bash
-$ docker service create \
-    --name=myservice \
-    --mount \
-      type=volume,source=test-data,target=/somewhere \
-    nginx:alpine
-
-myservice
-
-$ docker service update \
-    --mount-add \
-      type=volume,source=other-volume,target=/somewhere-else \
-    myservice
-
-myservice
-
-$ docker service update --mount-rm /somewhere myservice
-
-myservice
-```
-
-### Adding and removing secrets
-
-Use the `--secret-add` or `--secret-rm` options to add or remove a service's
-secrets.
-
-The following example adds a secret named `ssh-2` and removes `ssh-1`:
-
-```bash
-$ docker service update \
-    --secret-add source=ssh-2,target=ssh-2 \
-    --secret-rm ssh-1 \
-    myservice
-```
-
-### Update services using templates
-
-Some flags of `service update` support the use of templating.
-See [`service create`](./service_create.md#templating) for the reference.
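The Go-template syntax referenced above can be illustrated outside Docker; the following sketch mimics how a `--hostname` template such as `{{.Node.ID}}-{{.Service.Name}}` would expand, with invented node and service values:

```shell
# Mimic Go-template expansion of a --hostname value with sed
# (the node ID and service name below are invented).
NODE_ID='x3ti0erg11rj'
SERVICE_NAME='hosttempl'
printf '%s\n' '{{.Node.ID}}-{{.Service.Name}}' \
  | sed -e "s/{{.Node.ID}}/$NODE_ID/" -e "s/{{.Service.Name}}/$SERVICE_NAME/"
```

Each task substitutes its own values for the placeholders, which is why templated hostnames come out unique per task.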
diff --git a/engine/reference/commandline/system_events.md b/engine/reference/commandline/system_events.md
index 6ba3b245dd..1d43a8c2e9 100644
--- a/engine/reference/commandline/system_events.md
+++ b/engine/reference/commandline/system_events.md
@@ -13,109 +13,3 @@
 https://www.github.com/docker/docker
 -->
 {% include cli.md %}
-
-## Examples
-
-### Listening for Docker events
-
-After running `docker events`, a container is started and stopped (the
-container IDs have been shortened in the output below):
-
-    $ docker events
-
-    2015-01-28T20:21:31.000000000-08:00 59211849bc10: (from whenry/testimage:latest) start
-    2015-01-28T20:21:31.000000000-08:00 59211849bc10: (from whenry/testimage:latest) die
-    2015-01-28T20:21:32.000000000-08:00 59211849bc10: (from whenry/testimage:latest) stop
-
-### Listening for events since a given date
-
-Again, the output container IDs have been shortened for the purposes of this document:
-
-    $ docker events --since '2015-01-28'
-
-    2015-01-28T20:25:38.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) create
-    2015-01-28T20:25:38.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) start
-    2015-01-28T20:25:39.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) create
-    2015-01-28T20:25:39.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) start
-    2015-01-28T20:25:40.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) die
-    2015-01-28T20:25:42.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) stop
-    2015-01-28T20:25:45.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) start
-    2015-01-28T20:25:45.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) die
-    2015-01-28T20:25:46.000000000-08:00 c21f6c22ba27: (from whenry/testimage:latest) stop
-
-The following example outputs all events that were generated in the last 3 minutes,
-relative to the current time on the client machine:
-
-    # docker events --since '3m'
-    2015-05-12T11:51:30.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) die
-    2015-05-12T15:52:12.999999999Z07:00 4386fb97867d: (from ubuntu-1:14.04) stop
-    2015-05-12T15:53:45.999999999Z07:00 7805c1d35632: (from redis:2.8) die
-    2015-05-12T15:54:03.999999999Z07:00 7805c1d35632: (from redis:2.8) stop
-
-If you do not provide the `--since` option, the command returns only new and/or
-live events.
-
-### Format
-
-If a format (`--format`) is specified, the given template will be executed
-instead of the default format. Go's **text/template** package describes all the
-details of the format.
-
-    {% raw %}
-    $ docker events --filter 'type=container' --format 'Type={{.Type}} Status={{.Status}} ID={{.ID}}'
-    Type=container Status=create ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
-    Type=container Status=attach ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
-    Type=container Status=start ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
-    Type=container Status=resize ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
-    Type=container Status=die ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
-    Type=container Status=destroy ID=2ee349dac409e97974ce8d01b70d250b85e0ba8189299c126a87812311951e26
-    {% endraw %}
-
-If a format is set to `{% raw %}{{json .}}{% endraw %}`, the events are streamed as valid JSON
-Lines. For information about JSON Lines, refer to http://jsonlines.org/.
-
-    {% raw %}
-    $ docker events --format '{{json .}}'
-    {"status":"create","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
-    {"status":"attach","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
-    {"Type":"network","Action":"connect","Actor":{"ID":"1b50a5bf755f6021dfa78e..
-    {"status":"start","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f42..
-    {"status":"resize","id":"196016a57679bf42424484918746a9474cd905dd993c4d0f4..
- {% endraw %} - -### Filters - - $ docker events --filter 'event=stop' - 2014-05-10T17:42:14.999999999Z07:00 container stop 4386fb97867d (image=ubuntu-1:14.04) - 2014-09-03T17:42:14.999999999Z07:00 container stop 7805c1d35632 (image=redis:2.8) - - $ docker events --filter 'image=ubuntu-1:14.04' - 2014-05-10T17:42:14.999999999Z07:00 container start 4386fb97867d (image=ubuntu-1:14.04) - 2014-05-10T17:42:14.999999999Z07:00 container die 4386fb97867d (image=ubuntu-1:14.04) - 2014-05-10T17:42:14.999999999Z07:00 container stop 4386fb97867d (image=ubuntu-1:14.04) - - $ docker events --filter 'container=7805c1d35632' - 2014-05-10T17:42:14.999999999Z07:00 container die 7805c1d35632 (image=redis:2.8) - 2014-09-03T15:49:29.999999999Z07:00 container stop 7805c1d35632 (image= redis:2.8) - - $ docker events --filter 'container=7805c1d35632' --filter 'container=4386fb97867d' - 2014-09-03T15:49:29.999999999Z07:00 container die 4386fb97867d (image=ubuntu-1:14.04) - 2014-05-10T17:42:14.999999999Z07:00 container stop 4386fb97867d (image=ubuntu-1:14.04) - 2014-05-10T17:42:14.999999999Z07:00 container die 7805c1d35632 (image=redis:2.8) - 2014-09-03T15:49:29.999999999Z07:00 container stop 7805c1d35632 (image=redis:2.8) - - $ docker events --filter 'container=7805c1d35632' --filter 'event=stop' - 2014-09-03T15:49:29.999999999Z07:00 container stop 7805c1d35632 (image=redis:2.8) - - $ docker events --filter 'type=volume' - 2015-12-23T21:05:28.136212689Z volume create test-event-volume-local (driver=local) - 2015-12-23T21:05:28.383462717Z volume mount test-event-volume-local (read/write=true, container=562fe10671e9273da25eed36cdce26159085ac7ee6707105fd534866340a5025, destination=/foo, driver=local, propagation=rprivate) - 2015-12-23T21:05:28.650314265Z volume unmount test-event-volume-local (container=562fe10671e9273da25eed36cdce26159085ac7ee6707105fd534866340a5025, driver=local) - 2015-12-23T21:05:28.716218405Z volume destroy test-event-volume-local (driver=local) - - $ docker events 
--filter 'type=network' - 2015-12-23T21:38:24.705709133Z network create 8b111217944ba0ba844a65b13efcd57dc494932ee2527577758f939315ba2c5b (name=test-event-network-local, type=bridge) - 2015-12-23T21:38:25.119625123Z network connect 8b111217944ba0ba844a65b13efcd57dc494932ee2527577758f939315ba2c5b (name=test-event-network-local, container=b4be644031a3d90b400f88ab3d4bdf4dc23adb250e696b6328b85441abe2c54e, type=bridge) - - $ docker events --filter 'type=plugin' (experimental) - 2016-07-25T17:30:14.825557616Z plugin pull ec7b87f2ce84330fe076e666f17dfc049d2d7ae0b8190763de94e1f2d105993f (name=tiborvass/sample-volume-plugin:latest) - 2016-07-25T17:30:14.888127370Z plugin enable ec7b87f2ce84330fe076e666f17dfc049d2d7ae0b8190763de94e1f2d105993f (name=tiborvass/sample-volume-plugin:latest) diff --git a/engine/reference/commandline/system_info.md b/engine/reference/commandline/system_info.md index ff7fbbc14e..b23d898b45 100644 --- a/engine/reference/commandline/system_info.md +++ b/engine/reference/commandline/system_info.md @@ -13,152 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Display Docker system information - -Here is a sample output for a daemon running on Ubuntu, using the overlay2 -storage driver: - - $ docker -D info - Containers: 14 - Running: 3 - Paused: 1 - Stopped: 10 - Images: 52 - Server Version: 1.13.0 - Storage Driver: overlay2 - Backing Filesystem: extfs - Supports d_type: true - Native Overlay Diff: false - Logging Driver: json-file - Cgroup Driver: cgroupfs - Plugins: - Volume: local - Network: bridge host macvlan null overlay - Swarm: active - NodeID: rdjq45w1op418waxlairloqbm - Is Manager: true - ClusterID: te8kdyw33n36fqiz74bfjeixd - Managers: 1 - Nodes: 2 - Orchestration: - Task History Retention Limit: 5 - Raft: - Snapshot Interval: 10000 - Number of Old Snapshots to Retain: 0 - Heartbeat Tick: 1 - Election Tick: 3 - Dispatcher: - Heartbeat Period: 5 seconds - CA Configuration: - Expiry Duration: 3 months - 
Node Address: 172.16.66.128 172.16.66.129 - Manager Addresses: - 172.16.66.128:2477 - Runtimes: runc - Default Runtime: runc - Init Binary: docker-init - containerd version: 8517738ba4b82aff5662c97ca4627e7e4d03b531 - runc version: ac031b5bf1cc92239461125f4c1ffb760522bbf2 - init version: N/A (expected: v0.13.0) - Security Options: - apparmor - seccomp - Profile: default - Kernel Version: 4.4.0-31-generic - Operating System: Ubuntu 16.04.1 LTS - OSType: linux - Architecture: x86_64 - CPUs: 2 - Total Memory: 1.937 GiB - Name: ubuntu - ID: H52R:7ZR6:EIIA:76JG:ORIY:BVKF:GSFU:HNPG:B5MK:APSC:SZ3Q:N326 - Docker Root Dir: /var/lib/docker - Debug Mode (client): true - Debug Mode (server): true - File Descriptors: 30 - Goroutines: 123 - System Time: 2016-11-12T17:24:37.955404361-08:00 - EventsListeners: 0 - Http Proxy: http://test:test@proxy.example.com:8080 - Https Proxy: https://test:test@proxy.example.com:8080 - No Proxy: localhost,127.0.0.1,docker-registry.somecorporation.com - Registry: https://index.docker.io/v1/ - WARNING: No swap limit support - Labels: - storage=ssd - staging=true - Experimental: false - Insecure Registries: - 127.0.0.0/8 - Registry Mirrors: - http://192.168.1.2/ - http://registry-mirror.example.com:5000/ - Live Restore Enabled: false - - - -The global `-D` option tells all `docker` commands to output debug information. - -The example below shows the output for a daemon running on Red Hat Enterprise Linux, -using the devicemapper storage driver. 
As can be seen in the output, additional -information about the devicemapper storage driver is shown: - - $ docker info - Containers: 14 - Running: 3 - Paused: 1 - Stopped: 10 - Untagged Images: 52 - Server Version: 1.10.3 - Storage Driver: devicemapper - Pool Name: docker-202:2-25583803-pool - Pool Blocksize: 65.54 kB - Base Device Size: 10.74 GB - Backing Filesystem: xfs - Data file: /dev/loop0 - Metadata file: /dev/loop1 - Data Space Used: 1.68 GB - Data Space Total: 107.4 GB - Data Space Available: 7.548 GB - Metadata Space Used: 2.322 MB - Metadata Space Total: 2.147 GB - Metadata Space Available: 2.145 GB - Udev Sync Supported: true - Deferred Removal Enabled: false - Deferred Deletion Enabled: false - Deferred Deleted Device Count: 0 - Data loop file: /var/lib/docker/devicemapper/devicemapper/data - Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata - Library Version: 1.02.107-RHEL7 (2015-12-01) - Execution Driver: native-0.2 - Logging Driver: json-file - Plugins: - Volume: local - Network: null host bridge - Kernel Version: 3.10.0-327.el7.x86_64 - Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo) - OSType: linux - Architecture: x86_64 - CPUs: 1 - Total Memory: 991.7 MiB - Name: ip-172-30-0-91.ec2.internal - ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S - Docker Root Dir: /var/lib/docker - Debug mode (client): false - Debug mode (server): false - Username: gordontheturtle - Registry: https://index.docker.io/v1/ - Insecure registries: - myinsecurehost:5000 - 127.0.0.0/8 - -You can also specify the output format: - - {% raw %} - $ docker info --format '{{json .}}' - - {"ID":"I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S","Containers":14, ...} - {% endraw %} diff --git a/engine/reference/commandline/version.md b/engine/reference/commandline/version.md index 70aa20027e..992d4b6362 100644 --- a/engine/reference/commandline/version.md +++ b/engine/reference/commandline/version.md @@ -13,50 +13,3 @@ 
https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -### Display Docker version information - -The default output: - -```bash -$ docker version - Client: - Version: 1.8.0 - API version: 1.20 - Go version: go1.4.2 - Git commit: f5bae0a - Built: Tue Jun 23 17:56:00 UTC 2015 - OS/Arch: linux/amd64 - - Server: - Version: 1.8.0 - API version: 1.20 - Go version: go1.4.2 - Git commit: f5bae0a - Built: Tue Jun 23 17:56:00 UTC 2015 - OS/Arch: linux/amd64 -``` - -Get server version: - -```bash -{% raw %} -$ docker version --format '{{.Server.Version}}' - - 1.8.0 -{% endraw %} -``` - -Dump raw data: - -To view all available fields, you can use the format `{% raw %}{{json .}}{% endraw %}`. - -```bash -{% raw %} -$ docker version --format '{{json .}}' - -{"Client":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"amd64","BuildTime":"Tue Jun 23 17:56:00 UTC 2015"},"ServerOK":true,"Server":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"amd64","KernelVersion":"3.13.2-gentoo","BuildTime":"Tue Jun 23 17:56:00 UTC 2015"}} -{% endraw %} -``` diff --git a/engine/reference/commandline/volume_create.md b/engine/reference/commandline/volume_create.md index 5fb2ee19c4..1fac4851dd 100644 --- a/engine/reference/commandline/volume_create.md +++ b/engine/reference/commandline/volume_create.md @@ -13,37 +13,3 @@ https://www.github.com/docker/docker --> {% include cli.md %} - -## Examples - -$ docker volume create hello -hello -$ docker run -d -v hello:/world busybox ls /world - -The mount is created inside the container's `/src` directory. Docker doesn't -not support relative paths for mount points inside the container. - -Multiple containers can use the same volume in the same time period. This is -useful if two containers need access to shared data. For example, if one -container writes and the other reads the data. 
- -### Driver specific options - -Some volume drivers may take options to customize the volume creation. Use the -`-o` or `--opt` flags to pass driver options: - -$ docker volume create --driver fake --opt tardis=blue --opt timey=wimey - -These options are passed directly to the volume driver. Options for different -volume drivers may do different things (or nothing at all). - -The built-in `local` driver on Windows does not support any options. - -The built-in `local` driver on Linux accepts options similar to the linux -`mount` command: - -$ docker volume create --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=100m,uid=1000 - -Another example: - -$ docker volume create --driver local --opt type=btrfs --opt device=/dev/sda2 diff --git a/engine/security/certificates.md b/engine/security/certificates.md index 0b3c87a5d4..c04f8405e0 100644 --- a/engine/security/certificates.md +++ b/engine/security/certificates.md @@ -43,7 +43,7 @@ The following illustrates a configuration with custom certificates: └── localhost:5000 <-- Hostname:port ├── client.cert <-- Client certificate ├── client.key <-- Client key - └── localhost.crt <-- Certificate authority that signed + └── ca.crt <-- Certificate authority that signed the registry certificate ``` @@ -67,7 +67,7 @@ key and then use the key to create the certificate. ## Troubleshooting tips -The Docker daemon interprets ``.crt` files as CA certificates and `.cert` files +The Docker daemon interprets `.crt` files as CA certificates and `.cert` files as client certificates. If a CA certificate is accidentally given the extension `.cert` instead of the correct `.crt` extension, the Docker daemon logs the following error message: @@ -76,6 +76,16 @@ following error message: Missing key KEY_NAME for client certificate CERT_NAME. Note that CA certificates should use the extension .crt. ``` +If the Docker registry is accessed without a port number, do not add the port to the directory name. 
The following shows the configuration for a registry on default port 443 which is accessed with `docker login my-https.registry.example.com`: + +``` + /etc/docker/certs.d/ + └── my-https.registry.example.com <-- Hostname without port + ├── client.cert + ├── client.key + └── ca.crt +``` + ## Related Information * [Use trusted images](index.md) diff --git a/engine/security/seccomp.md b/engine/security/seccomp.md index de41d3d28e..6e58857043 100644 --- a/engine/security/seccomp.md +++ b/engine/security/seccomp.md @@ -28,7 +28,7 @@ CONFIG_SECCOMP=y The default seccomp profile provides a sane default for running containers with seccomp and disables around 44 system calls out of 300+. It is moderately protective while providing wide application -compatibility. The default Docker profile (found [here](https://github.com/docker/docker/blob/master/profiles/seccomp/default.json)) has a JSON layout in the following form: +compatibility. The [default Docker profile](https://github.com/docker/docker/blob/master/profiles/seccomp/default.json) has a JSON layout in the following form: ```json { @@ -114,12 +114,12 @@ the reason each syscall is blocked rather than white-listed. |---------------------|---------------------------------------------------------------------------------------------------------------------------------------| | `acct` | Accounting syscall which could let containers disable their own resource limits or process accounting. Also gated by `CAP_SYS_PACCT`. | | `add_key` | Prevent containers from using the kernel keyring, which is not namespaced. | -| `adjtimex` | Similar to `clock_settime` and `settimeofday`, time/date is not namespaced. Also gated by `CAP_SYS_TIME` | +| `adjtimex` | Similar to `clock_settime` and `settimeofday`, time/date is not namespaced. Also gated by `CAP_SYS_TIME`. | | `bpf` | Deny loading potentially persistent bpf programs into kernel, already gated by `CAP_SYS_ADMIN`. | | `clock_adjtime` | Time/date is not namespaced. 
Also gated by `CAP_SYS_TIME`. | | `clock_settime` | Time/date is not namespaced. Also gated by `CAP_SYS_TIME`. | | `clone` | Deny cloning new namespaces. Also gated by `CAP_SYS_ADMIN` for CLONE_* flags, except `CLONE_USERNS`. | -| `create_module` | Deny manipulation and functions on kernel modules. Obsolete. Also gated by `CAP_SYS_MODULE` | +| `create_module` | Deny manipulation and functions on kernel modules. Obsolete. Also gated by `CAP_SYS_MODULE`. | | `delete_module` | Deny manipulation and functions on kernel modules. Also gated by `CAP_SYS_MODULE`. | | `finit_module` | Deny manipulation and functions on kernel modules. Also gated by `CAP_SYS_MODULE`. | | `get_kernel_syms` | Deny retrieval of exported kernel and module symbols. Obsolete. | diff --git a/engine/security/security.md b/engine/security/security.md index 1972086f32..821bb69cfd 100644 --- a/engine/security/security.md +++ b/engine/security/security.md @@ -114,8 +114,8 @@ certificates. You can also secure them with [HTTPS and certificates](https.md). The daemon is also potentially vulnerable to other inputs, such as image -loading from either disk with 'docker load', or from the network with -'docker pull'. As of Docker 1.3.2, images are now extracted in a chrooted +loading from either disk with `docker load`, or from the network with +`docker pull`. As of Docker 1.3.2, images are now extracted in a chrooted subprocess on Linux/Unix platforms, being the first-step in a wider effort toward privilege separation. As of Docker 1.10.0, all images are stored and accessed by the cryptographic checksums of their contents, limiting the diff --git a/engine/security/trust/content_trust.md b/engine/security/trust/content_trust.md index 8b09ffd5ff..0118af0db1 100644 --- a/engine/security/trust/content_trust.md +++ b/engine/security/trust/content_trust.md @@ -66,7 +66,7 @@ the unsigned version of an image before officially signing it. 
Image consumers can enable content trust to ensure that images they use were signed. If a consumer enables content trust, they can only pull, run, or build with trusted images. Enabling content trust is like wearing a pair of -rose-colored glasses. Consumers "see" only signed images tags and the less +rose-colored glasses. Consumers "see" only signed image tags and the less desirable, unsigned image tags are "invisible" to them. ![Trust view](images/trust_view.png) @@ -219,7 +219,7 @@ client recognizes this is your first push and: - requests a passphrase for the root key - generates a root key in the `~/.docker/trust` directory - requests a passphrase for the repository key - - generates a repository key for in the `~/.docker/trust` directory + - generates a repository key in the `~/.docker/trust` directory The passphrase you chose for both the root key and your repository key-pair should be randomly generated and stored in a *password manager*. diff --git a/engine/security/trust/deploying_notary.md b/engine/security/trust/deploying_notary.md index 8bedce0f8a..fa6f7a57b5 100644 --- a/engine/security/trust/deploying_notary.md +++ b/engine/security/trust/deploying_notary.md @@ -6,7 +6,7 @@ title: Deploying Notary Server with Compose The easiest way to deploy Notary Server is by using Docker Compose. To follow the procedure on this page, you must have already [installed Docker Compose](/compose/install.md). -1. Clone the Notary repository +1. Clone the Notary repository. git clone git@github.com:docker/notary.git diff --git a/engine/security/trust/trust_delegation.md b/engine/security/trust/trust_delegation.md index 9f158e3eaf..fd35a8aad5 100644 --- a/engine/security/trust/trust_delegation.md +++ b/engine/security/trust/trust_delegation.md @@ -8,9 +8,9 @@ Docker Engine supports the usage of the `targets/releases` delegation as the canonical source of a trusted image tag. 
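On the consumer side of these delegations, content trust is switched on per shell session with an environment variable. A minimal, self-contained sketch (plain shell; setting the variable alone does not contact a Docker daemon):

```shell
# Enable Docker Content Trust for this shell session. While this is set,
# docker pull/push/build/run operate only on signed image tags.
export DOCKER_CONTENT_TRUST=1

# Confirm the switch is set; prints "1".
echo "$DOCKER_CONTENT_TRUST"
```

Unset the variable to return the session to untrusted operation.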
Using this delegation allows you to collaborate with other publishers without
-sharing your repository key (a combination of your targets and snapshot keys -
-please see "[Manage keys for content trust](trust_key_mng.md)" for more information).
-A collaborator can keep their own delegation key private.
+sharing your repository key, which is a combination of your targets and snapshot keys.
+See [Manage keys for content trust](trust_key_mng.md) for more information.
+Collaborators can keep their own delegation keys private.

The `targets/releases` delegation is currently an optional feature - in order
to set up delegations, you must use the Notary CLI:
@@ -34,7 +34,7 @@ available on your path

For more detailed information about how to use Notary outside of the default
Docker Content Trust use cases, please refer to the
-[the Notary CLI documentation](/notary/getting_started.md).
+[Notary CLI documentation](/notary/getting_started.md).

Note that when publishing and listing delegation changes using the Notary client,
your Docker Hub credentials are required.
@@ -45,7 +45,7 @@ Your collaborator needs to generate a private key (either RSA or ECDSA)
and give you the public key so that you can add it to the `targets/releases`
delegation.

-The easiest way to for them to generate these keys is with OpenSSL.
+The easiest way for them to generate these keys is with OpenSSL.
Here is an example of how to generate a 2048-bit RSA portion key (all RSA keys
must be at least 2048 bits):
@@ -115,7 +115,7 @@ supports reading only from `targets/releases`.

It also adds the collaborator's public key to the delegation, enabling them to sign
the `targets/releases` delegation so long as they have the private key corresponding
-to this public key. The `--all-paths` flags tells Notary not to restrict the tag
+to this public key. The `--all-paths` flag tells Notary not to restrict the tag
names that can be signed into `targets/releases`, which we highly recommend for
`targets/releases`.
@@ -141,11 +141,11 @@ IDs to collaborators yourself should you need to remove a collaborator. ## Removing a delegation key from an existing repository -To revoke a collaborator's permission to sign tags for your image repository, you must -know the IDs of their keys, because you need to remove their keys from the -`targets/releases` delegation. +To revoke a collaborator's ability to sign tags for your image repository, you +need to remove their keys from the `targets/releases` delegation. To do this, +you need the IDs of their keys. -``` +```bash $ notary delegation remove docker.io// targets/releases 729c7094a8210fd1e780e7b17b7bb55c9a28a48b871b07f65d97baf93898523a Removal of delegation role targets/releases with keys [729c7094a8210fd1e780e7b17b7bb55c9a28a48b871b07f65d97baf93898523a], to repository "docker.io//" staged for next publish. diff --git a/engine/security/trust/trust_key_mng.md b/engine/security/trust/trust_key_mng.md index e6decbf8ad..825c925f3b 100644 --- a/engine/security/trust/trust_key_mng.md +++ b/engine/security/trust/trust_key_mng.md @@ -29,7 +29,7 @@ Delegation keys are optional, and not generated as part of the normal `docker` workflow. They need to be [manually generated and added to the repository](trust_delegation.md#generating-delegation-keys). -Note: Prior to Docker Engine 1.11, the snapshot key was also generated and stored +**Note:** Prior to Docker Engine 1.11, the snapshot key was also generated and stored locally client-side. [Use the Notary CLI to manage your snapshot key locally again](/notary/advanced_usage.md#rotate-keys) for repositories created with newer versions of Docker. @@ -38,7 +38,7 @@ repositories created with newer versions of Docker. The passphrases you chose for both the root key and your repository key should be randomly generated and stored in a password manager. Having the repository key -allow users to sign image tags on a repository. Passphrases are used to encrypt +allows users to sign image tags on a repository. 
Passphrases are used to encrypt your keys at rest and ensures that a lost laptop or an unintended backup doesn't put the private key material at risk. @@ -48,7 +48,7 @@ All the Docker trust keys are stored encrypted using the passphrase you provide on creation. Even so, you should still take care of the location where you back them up. Good practice is to create two encrypted USB keys. -It is very important that you backup your keys to a safe, secure location. Loss +It is very important that you back up your keys to a safe, secure location. Loss of the repository key is recoverable; loss of the root key is not. The Docker client stores the keys in the `~/.docker/trust/private` directory. @@ -83,7 +83,7 @@ content that they already downloaded: Warning: potential malicious behavior - trust data has insufficient signatures for remote repository docker.io/my/image: valid signatures did not meet threshold ``` -To correct this, they need to download a new image tag with that is signed with +To correct this, they need to download a new image tag that is signed with the new key. ## Related information diff --git a/engine/security/trust/trust_sandbox.md b/engine/security/trust/trust_sandbox.md index 7a9ee3700f..97307cf376 100644 --- a/engine/security/trust/trust_sandbox.md +++ b/engine/security/trust/trust_sandbox.md @@ -69,7 +69,7 @@ the `trustsandbox` container, the Notary server, and the Registry server. $ mkdir trustsandbox $ cd trustsandbox -2. Create a filed called `docker-compose.yml` with your favorite editor. For example, using vim: +2. Create a file called `docker-compose.yml` with your favorite editor. For example, using vim: $ touch docker-compose.yml $ vim docker-compose.yml @@ -211,7 +211,7 @@ What happens when data is corrupted and you try to pull it when trust is enabled? In this section, you go into the `sandboxregistry` and tamper with some data. Then, you try and pull it. -1. Leave the `trustsandbox` shell and and container running. +1. 
Leave the `trustsandbox` shell and container running. 2. Open a new interactive terminal from your host, and obtain a shell into the `sandboxregistry` container. @@ -227,17 +227,17 @@ data. Then, you try and pull it. drwxr-xr-x 2 root root 4096 Jun 10 17:26 aac0c133338db2b18ff054943cee3267fe50c75cdee969aed88b1992539ed042 drwxr-xr-x 2 root root 4096 Jun 10 17:26 cc7629d1331a7362b5e5126beb5bf15ca0bf67eb41eab994c719a45de53255cd -4. Change into the registry storage for one of those layers (note that this is in a different directory) +4. Change into the registry storage for one of those layers (note that this is in a different directory): root@65084fc6f047:/# cd /var/lib/registry/docker/registry/v2/blobs/sha256/aa/aac0c133338db2b18ff054943cee3267fe50c75cdee969aed88b1992539ed042 -5. Add malicious data to one of the trusttest layers: +5. Add malicious data to one of the `trusttest` layers: root@65084fc6f047:/# echo "Malicious data" > data 6. Go back to your `trustsandbox` terminal. -7. List the trusttest image. +7. List the `trusttest` image. / # docker images | grep trusttest REPOSITORY TAG IMAGE ID CREATED SIZE @@ -275,7 +275,7 @@ data. Then, you try and pull it. ## More play in the sandbox -Now, that you have a full Docker content trust sandbox on your local system, +Now, you have a full Docker content trust sandbox on your local system, feel free to play with it and see how it behaves. If you find any security issues with Docker, feel free to send us an email at . diff --git a/engine/swarm/admin_guide.md b/engine/swarm/admin_guide.md index 8e6e7a38a2..9cc61659bc 100644 --- a/engine/swarm/admin_guide.md +++ b/engine/swarm/admin_guide.md @@ -235,7 +235,7 @@ Node node9 removed from swarm Before you forcefully remove a manager node, you must first demote it to the worker role. Make sure that you always have an odd number of manager nodes if -you demote or remove a manager +you demote or remove a manager. 
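The demote-before-remove sequence described above looks like this in practice. This is a sketch, not a transcript: `node9` reuses the node name from the example output above, and the commands assume they are run on a live swarm from a manager node, so no expected output is shown.

```shell
# Demote the manager to a worker first, so the manager quorum shrinks cleanly.
docker node demote node9

# Then remove the (now worker) node from the swarm.
docker node rm node9

# Only if the node is unreachable and cannot be demoted first:
# docker node rm --force node9
```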
## Back up the swarm @@ -276,7 +276,7 @@ To restore, see [Restore from a backup](#restore-from-a-backup). ### Restore from a backup After backing up the swarm as described in -[Backing up the swarm](#backing-up-the-swarm), use the following procedure to +[Back up the swarm](#back-up-the-swarm), use the following procedure to restore the data to a new swarm. 1. Shut down Docker on the target host machine where the swarm will be restored. @@ -292,7 +292,7 @@ restore the data to a new swarm. > encryption keys at this time. > > In the case of a swarm with auto-lock enabled, the unlock key is also the - > same as on the the old swarm, and the unlock key will be needed to + > same as on the old swarm, and the unlock key will be needed to > restore. 5. Start Docker on the new node. Unlock the swarm if necessary. Re-initialize diff --git a/engine/swarm/how-swarm-mode-works/nodes.md b/engine/swarm/how-swarm-mode-works/nodes.md index 8371590e0c..e3171cd9af 100644 --- a/engine/swarm/how-swarm-mode-works/nodes.md +++ b/engine/swarm/how-swarm-mode-works/nodes.md @@ -82,4 +82,4 @@ You can also demote a manager node to a worker node. See ## Learn More * Read about how swarm mode [services](services.md) work. -* Learn how [PKI](pki.md) works in swarm mode +* Learn how [PKI](pki.md) works in swarm mode. diff --git a/engine/swarm/index.md b/engine/swarm/index.md index 276db4771a..9e11e0421b 100644 --- a/engine/swarm/index.md +++ b/engine/swarm/index.md @@ -40,7 +40,7 @@ run. When you scale up or down, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state. * **Desired state reconciliation:** The swarm manager node constantly monitors -the cluster state and reconciles any differences between the actual state your +the cluster state and reconciles any differences between the actual state and your expressed desired state. 
For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager will create two new replicas to replace the replicas that diff --git a/engine/swarm/join-nodes.md b/engine/swarm/join-nodes.md index 2750b8dc42..e23053e91e 100644 --- a/engine/swarm/join-nodes.md +++ b/engine/swarm/join-nodes.md @@ -61,7 +61,7 @@ The `docker swarm join` command does the following: from the scheduler. * extends the `ingress` overlay network to the current node. -### Join as a manager node +## Join as a manager node When you run `docker swarm join` and pass the manager token, the Docker Engine switches into swarm mode the same as for workers. Manager nodes also participate @@ -101,5 +101,5 @@ This node joined a swarm as a manager. ## Learn More -* `swarm join`[command line reference](../reference/commandline/swarm_join.md) +* `swarm join` [command line reference](../reference/commandline/swarm_join.md) * [Swarm mode tutorial](swarm-tutorial/index.md) diff --git a/engine/swarm/manage-nodes.md b/engine/swarm/manage-nodes.md index 2a9cea777a..eef4af30fa 100644 --- a/engine/swarm/manage-nodes.md +++ b/engine/swarm/manage-nodes.md @@ -29,26 +29,26 @@ ehkv3bcimagdese79dn78otj5 * node-1 Ready Active Leader The `AVAILABILITY` column shows whether or not the scheduler can assign tasks to the node: -* `Active` means that the scheduler can assign tasks to a node. +* `Active` means that the scheduler can assign tasks to the node. * `Pause` means the scheduler doesn't assign new tasks to the node, but existing -tasks remain running. + tasks remain running. * `Drain` means the scheduler doesn't assign new tasks to the node. The -scheduler shuts down any existing tasks and schedules them on an available -node. + scheduler shuts down any existing tasks and schedules them on an available + node. 
The `MANAGER STATUS` column shows node participation in the Raft consensus: * No value indicates a worker node that does not participate in swarm -management. + management. * `Leader` means the node is the primary manager node that makes all swarm -management and orchestration decisions for the swarm. -* `Reachable` means the node is a manager node is participating in the Raft -consensus. If the leader node becomes unavailable, the node is eligible for -election as the new leader. + management and orchestration decisions for the swarm. +* `Reachable` means the node is a manager node participating in the Raft + consensus quorum. If the leader node becomes unavailable, the node is eligible for + election as the new leader. * `Unavailable` means the node is a manager that is not able to communicate with -other managers. If a manager node becomes unavailable, you should either join a -new manager node to the swarm or promote a worker node to be a -manager. + other managers. If a manager node becomes unavailable, you should either join a + new manager node to the swarm or promote a worker node to be a + manager. For more information on swarm administration refer to the [Swarm administration guide](admin_guide.md). @@ -59,7 +59,7 @@ details for an individual node. The output defaults to JSON format, but you can pass the `--pretty` flag to print the results in human-readable format. For example: ```bash -docker node inspect self --pretty +$ docker node inspect self --pretty ID: ehkv3bcimagdese79dn78otj5 Hostname: node-1 @@ -96,7 +96,7 @@ You can modify node attributes as follows: Changing node availability lets you: * drain a manager node so that only performs swarm management tasks and is -unavailable for task assignment. + unavailable for task assignment. * drain a node so you can take it down for maintenance. * pause a node so it is unavailable to receive new tasks. * restore unavailable or paused nodes available status. 
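The availability states listed above are set with `docker node update`. A sketch against a running swarm, using the node name `node-1` from the listing above; since these need a live swarm, no expected output is shown:

```shell
# Drain a node before maintenance: its tasks are rescheduled elsewhere.
docker node update --availability drain node-1

# Pause: existing tasks keep running, but no new tasks are assigned.
docker node update --availability pause node-1

# Return the node to normal scheduling.
docker node update --availability active node-1
```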
@@ -157,9 +157,9 @@ You can promote a worker node to the manager role. This is useful when a
manager node becomes unavailable or if you want to take a manager offline for
maintenance. Similarly, you can demote a manager node to the worker role.

->**Note: Maintaining a quorum** Regardless of your reason to promote or demote
-a node, you must always maintain a quorum of manager nodes in the
-swarm. For more information refer to the [Swarm administration guide](admin_guide.md).
+> **Note: Maintaining a quorum** Regardless of your reason to promote or demote
+> a node, you must always maintain a quorum of manager nodes in the
+> swarm. For more information refer to the [Swarm administration guide](admin_guide.md).

To promote a node or set of nodes, run `docker node promote` from a manager
node:
@@ -214,9 +214,7 @@ manager node to remove the node from the node list.
For instance:

```bash
-docker node rm node-2
-
-node-2
+$ docker node rm node-2
```

## Learn More
diff --git a/engine/swarm/secrets.md b/engine/swarm/secrets.md
index b8eae4f410..29d50817ea 100644
--- a/engine/swarm/secrets.md
+++ b/engine/swarm/secrets.md
@@ -757,7 +757,7 @@ Docker.
First, find the ID of the `mysql` container task.

    ```bash
-    $ docker ps --filter --name=mysql -q
+    $ docker ps --filter name=mysql -q

    c7705cf6176f
    ```
diff --git a/engine/swarm/services.md b/engine/swarm/services.md
index 5d94168e07..fc8b98fe32 100644
--- a/engine/swarm/services.md
+++ b/engine/swarm/services.md
@@ -553,6 +553,20 @@ $ docker service create \
    --name myservice \
```
+
+> **Important:** If your volume driver accepts a comma-separated list as an option,
+> you must escape the value from the outer CSV parser. To escape a `volume-opt`,
+> surround it with double quotes (`"`) and surround the entire mount parameter
+> with single quotes (`'`).
+>
+> For example, the `local` driver accepts mount options as a comma-separated
+> list in the `o` parameter. This example shows the correct way to escape the list.
+>
+>     $ docker service create \
+>     --mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"'
+>     --name myservice \
+>     <IMAGE>
+
* Bind mounts are file system paths from the host where the scheduler deploys
the container for the task. Docker mounts the path into the container. The
diff --git a/engine/swarm/swarm-tutorial/add-nodes.md b/engine/swarm/swarm-tutorial/add-nodes.md
index e5626edac6..4a67b137d0 100644
--- a/engine/swarm/swarm-tutorial/add-nodes.md
+++ b/engine/swarm/swarm-tutorial/add-nodes.md
@@ -2,6 +2,7 @@ description: Add nodes to the swarm
keywords: tutorial, cluster management, swarm
title: Add nodes to the swarm
+notoc: true
---

Once you've [created a swarm](create-swarm.md) with a manager node, you're ready
@@ -39,7 +40,7 @@ to add worker nodes.
worker node. This tutorial uses the name `worker2`.

4. Run the command produced by the `docker swarm init` output from the
-   [Create a swarm](create-swarm.md) tutorial step to create a second worker
+   [Create a swarm](create-swarm.md) tutorial step to create a second worker
node joined to the existing swarm:

```bash
diff --git a/engine/swarm/swarm-tutorial/create-swarm.md b/engine/swarm/swarm-tutorial/create-swarm.md
index 99e0a3b536..f378aa5752 100644
--- a/engine/swarm/swarm-tutorial/create-swarm.md
+++ b/engine/swarm/swarm-tutorial/create-swarm.md
@@ -2,6 +2,7 @@ description: Initialize the swarm
keywords: tutorial, cluster management, swarm mode
title: Create a swarm
+notoc: true
---

After you complete the [tutorial setup](index.md) steps, you're ready
@@ -24,7 +25,7 @@ machines.

>**Note:** If you are using Docker for Mac or Docker for Windows to test
single-node swarm, simply run `docker swarm init` with no arguments. There is no
-need to specify ` --advertise-addr` in this case. To learn more, see the topic
+need to specify `--advertise-addr` in this case.
To learn more, see the topic on how to [Use Docker for Mac or Docker for Windows](index.md#use-docker-for-mac-or-docker-for-windows) with Swarm. @@ -80,7 +81,7 @@ Windows](index.md#use-docker-for-mac-or-docker-for-windows) with Swarm. ``` - The `*` next to the node id indicates that you're currently connected on + The `*` next to the node ID indicates that you're currently connected on this node. Docker Engine swarm mode automatically names the node for the machine host diff --git a/engine/swarm/swarm-tutorial/delete-service.md b/engine/swarm/swarm-tutorial/delete-service.md index 8a95854338..b079fed0a5 100644 --- a/engine/swarm/swarm-tutorial/delete-service.md +++ b/engine/swarm/swarm-tutorial/delete-service.md @@ -2,6 +2,7 @@ description: Remove the service from the swarm keywords: tutorial, cluster management, swarm, service title: Delete the service running on the swarm +notoc: true --- The remaining steps in the tutorial don't use the `helloworld` service, so now @@ -31,17 +32,17 @@ you can delete the service from the swarm. 4. Even though the service no longer exists, the task containers take a few seconds to clean up. You can use `docker ps` to verify when they are gone. 
- + ```bash $ docker ps - + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES db1651f50347 alpine:latest "ping docker.com" 44 minutes ago Up 46 seconds helloworld.5.9lkmos2beppihw95vdwxy1j3w 43bf6e532a92 alpine:latest "ping docker.com" 44 minutes ago Up 46 seconds helloworld.3.a71i8rp6fua79ad43ycocl4t2 5a0fb65d8fa7 alpine:latest "ping docker.com" 44 minutes ago Up 45 seconds helloworld.2.2jpgensh7d935qdc857pxulfr afb0ba67076f alpine:latest "ping docker.com" 44 minutes ago Up 46 seconds helloworld.4.1c47o7tluz7drve4vkm2m5olx 688172d3bfaa alpine:latest "ping docker.com" 45 minutes ago Up About a minute helloworld.1.74nbhb3fhud8jfrhigd7s29we - + $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS diff --git a/engine/swarm/swarm-tutorial/deploy-service.md b/engine/swarm/swarm-tutorial/deploy-service.md index ca77a1dd5a..9cc3990c6f 100644 --- a/engine/swarm/swarm-tutorial/deploy-service.md +++ b/engine/swarm/swarm-tutorial/deploy-service.md @@ -2,6 +2,7 @@ description: Deploy a service to the swarm keywords: tutorial, cluster management, swarm mode title: Deploy a service to the swarm +notoc: true --- After you [create a swarm](create-swarm.md), you can deploy a service to the diff --git a/engine/swarm/swarm-tutorial/drain-node.md b/engine/swarm/swarm-tutorial/drain-node.md index 7fe7f6488f..a3731edab5 100644 --- a/engine/swarm/swarm-tutorial/drain-node.md +++ b/engine/swarm/swarm-tutorial/drain-node.md @@ -2,6 +2,7 @@ description: Drain nodes on the swarm keywords: tutorial, cluster management, swarm, service, drain title: Drain a node on the swarm +notoc: true --- In earlier steps of the tutorial, all the nodes have been running with `ACTIVE` @@ -121,3 +122,7 @@ drained node to an active state: * during a rolling update * when you set another node to `Drain` availability * when a task fails on another active node + +## What's next? + +Learn how to [use a swarm mode routing mesh](/engine/swarm/ingress.md). 
diff --git a/engine/swarm/swarm-tutorial/index.md b/engine/swarm/swarm-tutorial/index.md index 5bac15ec3a..b5c70e2922 100644 --- a/engine/swarm/swarm-tutorial/index.md +++ b/engine/swarm/swarm-tutorial/index.md @@ -2,6 +2,7 @@ description: Getting Started tutorial for Docker Engine swarm mode keywords: tutorial, cluster management, swarm mode title: Getting started with swarm mode +toc_max: 4 --- This tutorial introduces you to the features of Docker Engine Swarm mode. You @@ -66,7 +67,7 @@ single-node and multi-node swarm scenarios on Linux machines. #### Use Docker for Mac or Docker for Windows Alternatively, install the latest [Docker for Mac](/docker-for-mac/index.md) or -[Docker for Windows](/docker-for-windows/index.md) application on a one +[Docker for Windows](/docker-for-windows/index.md) application on one computer. You can test both single-node and multi-node swarm from this computer, but you will need to use Docker Machine to test the multi-node scenarios. diff --git a/engine/swarm/swarm-tutorial/inspect-service.md b/engine/swarm/swarm-tutorial/inspect-service.md index 0a5d95b3b1..a72457df54 100644 --- a/engine/swarm/swarm-tutorial/inspect-service.md +++ b/engine/swarm/swarm-tutorial/inspect-service.md @@ -2,6 +2,7 @@ description: Inspect the application keywords: tutorial, cluster management, swarm mode title: Inspect a service on the swarm +notoc: true --- When you have [deployed a service](deploy-service.md) to your swarm, you can use diff --git a/engine/swarm/swarm-tutorial/rolling-update.md b/engine/swarm/swarm-tutorial/rolling-update.md index 3bfc8aa8ba..2db2bf8d28 100644 --- a/engine/swarm/swarm-tutorial/rolling-update.md +++ b/engine/swarm/swarm-tutorial/rolling-update.md @@ -2,6 +2,7 @@ description: Apply rolling updates to a service on the swarm keywords: tutorial, cluster management, swarm, service, rolling-update title: Apply rolling updates to a service +notoc: true --- In a previous step of the tutorial, you [scaled](scale-service.md) the 
number of @@ -65,7 +66,7 @@ Redis 3.0.7 container image using rolling updates. ``` 4. Now you can update the container image for `redis`. The swarm manager -applies the update to nodes according to the `UpdateConfig` policy: + applies the update to nodes according to the `UpdateConfig` policy: ```bash $ docker service update --image redis:3.0.7 redis @@ -78,12 +79,12 @@ applies the update to nodes according to the `UpdateConfig` policy: * Schedule update for the stopped task. * Start the container for the updated task. * If the update to a task returns `RUNNING`, wait for the - specified delay period then stop the next task. + specified delay period then start the next task. * If, at any time during the update, a task returns `FAILED`, pause the - update. + update. 5. Run `docker service inspect --pretty redis` to see the new image in the -desired state: + desired state: ```bash $ docker service inspect --pretty redis @@ -145,4 +146,6 @@ desired state: `redis:3.0.6` while others are running `redis:3.0.7`. The output above shows the state once the rolling updates are done. +## What's next? + Next, learn about how to [drain a node](drain-node.md) in the swarm. diff --git a/engine/swarm/swarm-tutorial/scale-service.md b/engine/swarm/swarm-tutorial/scale-service.md index 78d3135479..805531cbf4 100644 --- a/engine/swarm/swarm-tutorial/scale-service.md +++ b/engine/swarm/swarm-tutorial/scale-service.md @@ -2,6 +2,7 @@ description: Scale the service running in the swarm keywords: tutorial, cluster management, swarm mode, scale title: Scale the service in the swarm +notoc: true --- Once you have [deployed a service](deploy-service.md) to a swarm, you are ready @@ -54,7 +55,7 @@ the swarm. 
528d68040f95 alpine:latest "ping docker.com" About a minute ago Up About a minute helloworld.4.auky6trawmdlcne8ad8phb0f1 ``` - If you want to see the containers running on other nodes, you can ssh into + If you want to see the containers running on other nodes, ssh into those nodes and run the `docker ps` command. ## What's next? diff --git a/engine/swarm/swarm_manager_locking.md index 7b89bd1460..f842c4b002 100644 --- a/engine/swarm/swarm_manager_locking.md +++ b/engine/swarm/swarm_manager_locking.md @@ -22,12 +22,12 @@ When Docker restarts, you must _key encryption key_ generated by Docker when the swarm was locked. You can rotate this key encryption key at any time. ->**Note**: You don't need to unlock the swarm when a new node joins the swarm, -because the key is propagated to it over mutual TLS. +> **Note:** You don't need to unlock the swarm when a new node joins the swarm, +> because the key is propagated to it over mutual TLS. ## Initialize a swarm with autolocking enabled -When you initialize a new swarm, you you can use the `--autolock` flag to +When you initialize a new swarm, you can use the `--autolock` flag to enable autolocking of swarm manager nodes when Docker restarts. ```bash @@ -151,6 +151,6 @@ Please remember to store this key in a password manager, since without it you will not be able to restart the manager. ``` -**Warning**: When you rotate the unlock key, keep a record of the old key -around for a few minutes, so that if a manager goes down before it gets the new -key, it may still be locked with the old one. +> **Warning:** When you rotate the unlock key, keep a record of the old key +> around for a few minutes, so that if a manager goes down before it gets the new +> key, it may still be unlocked with the old one. 
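The autolock workflow documented in the hunk above can be sketched as the following CLI session. This is an illustrative sequence, not captured output — it assumes a host running a swarm-capable Docker Engine:

```bash
# Create a swarm whose managers stay locked after a daemon restart.
$ docker swarm init --autolock

# After the daemon restarts, unlock the manager with the key encryption key.
$ docker swarm unlock

# View the current unlock key, or rotate it. Keep the old key around for a
# few minutes in case a manager restarts before it receives the new key.
$ docker swarm unlock-key
$ docker swarm unlock-key --rotate
```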
diff --git a/engine/userguide/eng-image/baseimages.md b/engine/userguide/eng-image/baseimages.md index eec5d82657..412f48c59d 100644 --- a/engine/userguide/eng-image/baseimages.md +++ b/engine/userguide/eng-image/baseimages.md @@ -67,14 +67,14 @@ NOTE: Because Docker for Mac and Docker for Windows use a Linux VM, you must com Then you can run it (on Linux, Mac, or Windows) using: `docker run --rm hello` This example creates the hello-world image used in the tutorials. -If you want to test it out, you can clone [the image repo](https://github.com/docker-library/hello-world) +If you want to test it out, you can clone [the image repo](https://github.com/docker-library/hello-world). ## More resources -There are lots more resources available to help you write your 'Dockerfile`. +There are lots more resources available to help you write your `Dockerfile`. * There's a [complete guide to all the instructions](../../reference/builder.md) available for use in a `Dockerfile` in the reference section. * To help you write a clear, readable, maintainable `Dockerfile`, we've also -written a [`Dockerfile` Best Practices guide](dockerfile_best-practices.md). +written a [`Dockerfile` best practices guide](dockerfile_best-practices.md). * If your goal is to create a new Official Repository, be sure to read up on Docker's [Official Repositories](/docker-hub/official_repos/). diff --git a/engine/userguide/eng-image/dockerfile_best-practices.md b/engine/userguide/eng-image/dockerfile_best-practices.md index 3f59f58cf2..3c249cef12 100644 --- a/engine/userguide/eng-image/dockerfile_best-practices.md +++ b/engine/userguide/eng-image/dockerfile_best-practices.md @@ -275,10 +275,10 @@ used an older version, specifying the new one causes a cache bust of `apt-get update` and ensure the installation of the new version. Listing packages on each line can also prevent mistakes in package duplication. 
-In addition, cleaning up the apt cache and removing `/var/lib/apt/lists` helps -keep the image size down. Since the `RUN` statement starts with -`apt-get update`, the package cache will always be refreshed prior to -`apt-get install`. +In addition, cleaning up the apt cache by removing `/var/lib/apt/lists` +reduces the image size, since the apt cache is not stored in a layer. Since the +`RUN` statement starts with `apt-get update`, the package cache will always be +refreshed prior to `apt-get install`. > **Note**: The official Debian and Ubuntu images [automatically run `apt-get clean`](https://github.com/docker/docker/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105), > so explicit invocation is not required. diff --git a/engine/userguide/networking/index.md index ee6e8f183b..3d235c100e 100644 --- a/engine/userguide/networking/index.md +++ b/engine/userguide/networking/index.md @@ -555,7 +555,7 @@ built-in network drivers. For example: $ docker network create --driver weave mynet -You can inspect it, add containers to and from it, and so forth. Of course, +You can inspect it, add containers to it, remove containers from it, and so forth. Of course, different plugins may make use of different technologies or frameworks. Custom networks can include features not present in Docker's default networks. For more information on writing plugins, see [Extending Docker](../../extend/legacy_plugins.md) and diff --git a/engine/userguide/networking/work-with-networks.md index dc35000cc1..76366ecdbd 100644 --- a/engine/userguide/networking/work-with-networks.md +++ b/engine/userguide/networking/work-with-networks.md @@ -17,9 +17,9 @@ available through the Docker Engine CLI. These commands are: While not required, it is a good idea to read [Understanding Docker network](index.md) before trying the examples in this section. 
The -examples for the rely on a `bridge` network so that you can try them -immediately. If you would prefer to experiment with an `overlay` network see -the [Getting started with multi-host networks](get-started-overlay.md) instead. +examples use the default `bridge` network so that you can try them +immediately. To experiment with an `overlay` network, check out +the [Getting started with multi-host networks](get-started-overlay.md) guide instead. ## Create networks @@ -983,6 +983,7 @@ disconnect` command. ``` 4. Remove `container4`, `container5`, `container6`, and `container7`. + ```bash $ docker stop container4 container5 container6 container7 @@ -1056,6 +1057,7 @@ remove a network. If a network has connected endpoints, an error occurs. ``` 3. Remove the `isolated_nw` network. + ```bash $ docker network rm isolated_nw ``` diff --git a/engine/userguide/storagedriver/btrfs-driver.md b/engine/userguide/storagedriver/btrfs-driver.md index 840f662b6e..798342ce3f 100644 --- a/engine/userguide/storagedriver/btrfs-driver.md +++ b/engine/userguide/storagedriver/btrfs-driver.md @@ -17,7 +17,7 @@ copy-on-write, and snapshotting. This article refers to Docker's Btrfs storage driver as `btrfs` and the overall Btrfs Filesystem as Btrfs. ->**Note**: The [Commercially Supported Docker Engine (CS-Engine)](https://www.docker.com/compatibility-maintenance) does not currently support the `btrfs` storage driver. +> **Note:** The [Commercially Supported Docker Engine (CS-Engine)](https://www.docker.com/compatibility-maintenance) does not currently support the `btrfs` storage driver. ## The future of Btrfs @@ -102,7 +102,7 @@ that as Btrfs filesystem objects and not as individual mounts. Because Btrfs works at the filesystem level and not the block level, each image and container layer can be browsed in the filesystem using normal Unix -commands. The example below shows a truncated output of an `ls -l` command an +commands. 
The example below shows a truncated output of an `ls -l` command for an image layer: $ ls -l /var/lib/docker/btrfs/subvolumes/0a17decee4139b0de68478f149cc16346f5e711c5ae3bb969895f22dd6723751/ @@ -194,7 +194,7 @@ multiple devices to the `mkfs.btrfs` command creates a pool across all of those Be sure to substitute `/dev/xvdb` with the appropriate device(s) on your system. - > **Warning**: Take note of the warning about Btrfs being experimental. As + > **Warning:** Take note of the warning about Btrfs being experimental. As noted earlier, Btrfs is not currently recommended for production deployments unless you already have extensive experience. diff --git a/engine/userguide/storagedriver/device-mapper-driver.md index 6e1d50a34d..e3b0046dc1 100644 --- a/engine/userguide/storagedriver/device-mapper-driver.md +++ b/engine/userguide/storagedriver/device-mapper-driver.md @@ -224,6 +224,10 @@ assumes that the Docker daemon is in the `stopped` state. The `thin-provisioning-tools` package allows you to activate and manage your pool. + + ```bash + $ sudo yum install -y lvm2 + ``` 3. Create a physical volume replacing `/dev/xvdf` with your block device. @@ -328,7 +332,9 @@ assumes that the Docker daemon is in the `stopped` state. --storage-opt=dm.use_deferred_deletion=true ``` - You can also set them for startup in the `daemon.json` configuration, for example: + You can also set them for startup in the + [daemon configuration file](/engine/reference/commandline/dockerd/#daemon-configuration-file), + which defaults to `/etc/docker/daemon.json`, for example: ```none { diff --git a/index.md index 25402589c1..c7aaec2d9d 100644 --- a/index.md +++ b/index.md @@ -6,17 +6,17 @@ title: Docker Documentation notoc: true --- -Docker packages your app with its dependencies, freeing you from worrying about your -system configuration, and making your app more portable. 
+Docker packages your app with its dependencies, freeing you from worrying about +your system configuration, and making your app more portable.

{% capture basics %} ### Learn the basics of Docker -The basic tutorial introduces Docker concepts, tools, and commands. The examples show you how to build, push, -and pull Docker images, and run them as containers. This -tutorial stops short of teaching you how to deploy applications. +The basic tutorial introduces Docker concepts, tools, and commands. The examples +show you how to build, push, and pull Docker images, and run them as containers. +This tutorial stops short of teaching you how to deploy applications. {% endcapture %}{{ basics | markdownify }} {% capture basics %}[Start the basic tutorial](/engine/getstarted/){: class="button outline-btn"}{% endcapture %}{{ basics | markdownify }}
@@ -25,9 +25,10 @@ tutorial stops short of teaching you how to deploy applications. {% capture apps %} ### Define and deploy applications -The define-and-deploy tutorial shows how to relate -containers to each other and define them as services in an application that is ready to deploy at scale in a -production environment. Highlights [Compose Version 3 new features](/engine/getstarted-voting-app/index.md#compose-version-3-features-and-compatibility) and swarm mode. +The define-and-deploy tutorial shows how to relate containers to each other and +define them as services in an application that is ready to deploy at scale in a +production environment. Highlights Compose Version 3 new features and swarm +mode. {% endcapture %}{{ apps | markdownify }} {% capture apps %}[Start the application tutorial](/engine/getstarted-voting-app/){: class="button outline-btn"}{% endcapture %}{{ apps | markdownify }}
diff --git a/learn.md b/learn.md index b33a1990ce..d0c9e1e402 100644 --- a/learn.md +++ b/learn.md @@ -25,7 +25,7 @@ tutorial stops short of teaching you how to deploy applications. The define-and-deploy tutorial shows how to relate containers to each other and define them as services in an application that is ready to deploy at scale in a -production environment. Highlights [Compose Version 3 new features](/engine/getstarted-voting-app/index.md#compose-version-3-features-and-compatibility) and swarm mode. +production environment. Highlights Compose Version 3 new features and swarm mode. {% endcapture %}{{ apps | markdownify }} diff --git a/machine/drivers/hyper-v.md b/machine/drivers/hyper-v.md index 3dcdc090cb..fe196c3f8f 100644 --- a/machine/drivers/hyper-v.md +++ b/machine/drivers/hyper-v.md @@ -2,6 +2,7 @@ description: Microsoft Hyper-V driver for machine keywords: machine, Microsoft Hyper-V, driver title: Microsoft Hyper-V +toc_max: 4 --- Creates a Boot2Docker virtual machine locally on your Windows machine @@ -66,7 +67,7 @@ Select the Virtual Switch Manager on the left-side **Actions** panel. ![Hyper-V manager](../img/hyperv-manager.png) -Set up a new external network switch to use instad of DockerNAT network switch (for Moby), which is set up by default when you install Docker for Windows. (Or if you already have another network switch set up, you can use that one.) +Set up a new external network switch to use instead of DockerNAT network switch (for Moby), which is set up by default when you install Docker for Windows. (Or if you already have another network switch set up, you can use that one.) For this example, we created a virtual switch called `Primary Virtual Switch`. diff --git a/manuals.md b/manuals.md index 23ca33d6d6..d4d2f3b629 100644 --- a/manuals.md +++ b/manuals.md @@ -15,8 +15,8 @@ production-ready application. 
| Product | Description | | ------- | ----------- | | [Docker Cloud](/docker-cloud/) | Manages multi-container applications and host resources running on a cloud provider (such as Amazon Web Services) | -| [Universal Control Plane (UCP)](/ucp/overview/) | Manages multi-container applications on a custom host installation (on-premise, on a cloud provider) | -| [Docker Trusted Registry (DTR)](/docker-trusted-registry/) | Runs a private repository of container images and makes them available to a UCP instance | +| [Universal Control Plane (UCP)](/datacenter/ucp/2.1/guides/) | Manages multi-container applications on a custom host installation (on-premise, on a cloud provider) | +| [Docker Trusted Registry (DTR)](/datacenter/dtr/2.2/guides/) | Runs a private repository of container images and makes them available to a UCP instance | | [Docker Store](/docker-store/) | Public, Docker-hosted registry that distributes free and paid images from various publishers | | [CS Docker Engine](/cs-engine/install) | The commercially-supported version of Docker that excludes experimental features and includes customer support | @@ -29,13 +29,14 @@ Free downloadables that help your device use Docker containers. | [Docker for Mac](/docker-for-mac/) | Docker desktop solution that includes everything a developer needs to create and test applications on a Mac | | [Docker for Windows](/docker-for-windows) | Docker desktop solution that includes everything a developer needs to create and test applications on a Windows system| | [Docker for Linux](/engine/installation/#on-linux) | Installation guides for running Docker on all supported Linux distros. 
| -| [Docker Compose](/compose/) | Enables you to define, build, and run multi-container applications | -| [Docker Notary](/notary/) | Allows the signing of container images to enable Docker Content Trust | +| [Docker Compose](/compose/overview/) | Enables you to define, build, and run multi-container applications | +| [Docker Machine](/machine/overview/) | Enables you to provision and manage Dockerized hosts.| +| [Docker Notary](/notary/getting_started/) | Allows the signing of container images to enable Docker Content Trust | | [Docker Registry](/registry/) | The software that powers Docker Hub and Docker Store, Registry stores and distributes container images | ## Superseded products and tools * [Docker Hub](/docker-hub/) - Superseded by Docker Store and Docker Cloud -* [Docker Swarm](/swarm/) - Functionality folded directly into native Docker, no longer a standalone tool +* [Docker Swarm](/swarm/overview/) - Functionality folded directly into native Docker, no longer a standalone tool * [Docker Toolbox](/toolbox/overview/) - Superseded by Docker for Mac and Windows diff --git a/notary/index.md b/notary/index.md index 227dd6a606..6a1b972f51 100644 --- a/notary/index.md +++ b/notary/index.md @@ -2,6 +2,7 @@ description: List of Notary Documentation keywords: docker, notary, trust, image, signing, repository, tuf title: Docker Notary +notoc: true --- * [Getting Started](getting_started.md) @@ -9,4 +10,4 @@ title: Docker Notary * [Service Architecture](service_architecture.md) * [Running a Service](running_a_service.md) * [Configuration files](reference/index.md) -* [Changelog](changelog.md) \ No newline at end of file +* [Changelog](changelog.md) diff --git a/opensource/FAQ.md b/opensource/FAQ.md index 77cbce1233..35aea96d2f 100644 --- a/opensource/FAQ.md +++ b/opensource/FAQ.md @@ -44,8 +44,8 @@ Set your local repo to track changes upstream, on the `docker` repository. $ cd docker-fork ``` -2. Add a remote called `upstream` that points to `docker/docker` - +2. 
Add a remote called `upstream` that points to `docker/docker`. + ``` $ git remote add upstream https://github.com/docker/docker.git ``` @@ -63,7 +63,7 @@ Most [editors have plug-ins](https://github.com/golang/go/wiki/IDEsAndTextEditor * Squash your commits into logical units of work using `git rebase -i` and `git push -f`. -* If your code requires a change to tests or documentation, include code,test, +* If your code requires a change to tests or documentation, include code, test, and documentation changes in the same commit as your code; this ensures a revert would remove all traces of the feature or fix. diff --git a/opensource/get-help.md b/opensource/get-help.md index da9771c786..d2bd636af4 100644 --- a/opensource/get-help.md +++ b/opensource/get-help.md @@ -69,26 +69,18 @@ platforms. Using Webchat from Freenode.net is a quick and easy way to get chatting. To register: -1. In your browser open https://webchat.freenode.net +1. In your browser open + [https://webchat.freenode.net](https://webchat.freenode.net){: target="_blank" class="_" }. ![Login to webchat screen](images/irc_connect.png) 2. Fill out the form. - - - - - - - - - - - - - -
NicknameThe short name you want to be known as on IRC chat channels.
Channels#docker
reCAPTCHAUse the value provided.
+ | Field | Value | + |-----------|--------------------------------------------------------------| + | Nickname | The short name you want to be known as on IRC chat channels. | + | Channels | The list of channels to join. Start with `#docker`. | + | reCAPTCHA | A field to protect against spammers. Enter the value shown. | 3. Click on the "Connect" button. @@ -99,9 +91,9 @@ register: ![Registration needed screen](images/irc_after_login.png) 4. Register your nickname by entering the following command in the -command line bar: + command line bar: - ``` + ```none /msg NickServ REGISTER yourpassword youremail@example.com ``` @@ -111,7 +103,7 @@ command line bar: chat messages into IRC chat channels after you have registered and joined a chat channel. - After entering the REGISTER command, an email is sent to the email address + After entering the `REGISTER` command, an email is sent to the email address that you provided. This email will contain instructions for completing your registration. @@ -119,15 +111,16 @@ command line bar: ![Login screen](images/register_email.png) -6. Back in the browser, complete the registration according to the email by entering the following command into the webchat command line bar: +6. Back in the browser, complete the registration according to the email by entering + the following command into the webchat command line bar: - ``` + ```none /msg NickServ VERIFY REGISTER yournickname somecode ``` Your nickname is now registered to chat on freenode.net. -[Jump ahead to tips to join a docker channel and start chatting](get-help.md#tips) +[Jump ahead to tips to join a docker channel and start chatting](get-help.md#tips). ## IRCCloud @@ -135,47 +128,50 @@ IRCCloud is a web-based IRC client service that is hosted in the cloud. This is a Freemium product, meaning the free version is limited and you can pay for more features. To use IRCCloud: -1. Select the following link: - Join the #docker channel on chat.freenode.net +1. 
Click + [here](https://www.irccloud.com/invite?channel=%23docker&hostname=chat.freenode.net&port=6697){: target="_blank" class="_" } + to join the `#docker` channel on chat.freenode.net. The following web page is displayed in your browser: ![IRCCloud Register screen](images/irccloud-join.png) -2. If this is your first time using IRCCloud enter a valid email address in the -form. People who have already registered with IRCCloud can select the "sign in -here" link. Additionally, people who are already registered with IRCCloud may -have a cookie stored on their web browser that enables a quick start "let's go" -link to be shown instead of the above form. In this case just select the -"let's go" link and [jump ahead to start chatting](get-help.md#start-chatting) +2. If this is your first time using IRCCloud, enter a valid email address in the + form. + + If you already registered with IRCCloud, select the **sign in here** link. + + You may have a cookie stored on your web browser that enables a quick + start **let's go** link instead of the above form. In this case, select the + **let's go** link and [jump ahead to start chatting](get-help.md#start-chatting). 3. After entering your email address in the form, check your email for an invite -from IRCCloud and follow the instructions provided in the email. + from IRCCloud and follow the instructions provided in the email. -4. After following the instructions in your email you should have an IRCCloud -Client web page in your browser: +4. The instructions in the invite email take you to an IRCCloud Client web page in + your browser: ![IRCCloud](images/irccloud-register-nick.png) - The message shown above may appear indicating that you need to register your + The message shown above indicates that you need to register your nickname. -5. To register your nickname enter the following message into the command line bar -at the bottom of the IRCCloud Client: +5. 
To register your nickname, enter the following command into the command line bar + at the bottom of the IRCCloud client: - ``` + ```none /msg NickServ REGISTER yourpassword youremail@example.com ``` This command line bar is for chatting and entering in IRC commands. -6. Check your email for an invite to freenode.net: +6. Check your email for an invite to `freenode.net`: ![Login screen](images/register_email.png) -7. Back in the browser, complete the registration according to the email. +7. Complete the registration according to the email. - ``` + ```none /msg NickServ VERIFY REGISTER yournickname somecode ``` @@ -188,7 +184,9 @@ The procedures in this section apply to both IRC clients. Next time you return to log into chat, you may need to re-enter your password on the command line using this command: - /msg NickServ identify +```none +/msg NickServ identify +``` With Webchat if you forget or lose your password you'll need to join the `#freenode` channel and request them to reset it for you. @@ -198,22 +196,22 @@ With Webchat if you forget or lose your password you'll need to join the Join the `#docker` group using the following command in the command line bar of your IRC Client: - /j #docker +```none +/j #docker +``` -You can also join the `#docker-dev` group: - - /j #docker-dev +You can also join the `#docker-dev` group. ### Start chatting -To ask questions to the group just type messages in the command line bar: +To ask questions to the group, type messages in the command line bar: ![Web Chat Screen](images/irc_chat.png) -## Learning more about IRC +## Learn more about IRC -This quickstart was meant to get you up and into IRC very quickly. If you find -IRC useful there is more to learn. Drupal, another open source project, -has -written some documentation about using IRC for their project -(thanks Drupal!). \ No newline at end of file +This quickstart gets you started with IRC. If you find +IRC useful, there is more to learn. 
Drupal, another open source project, +has +[a great IRC guide](https://www.drupal.org/irc/setting-up){: target="_blank" class="_" } +(thanks, Drupal!). diff --git a/opensource/project/test-and-docs.md index c922b28519..2e1bbe95d2 100644 --- a/opensource/project/test-and-docs.md +++ b/opensource/project/test-and-docs.md @@ -64,7 +64,7 @@ hour. To run the test suite, do the following: 1. Open a terminal on your local host. -2. Change to the root your Docker repository. +2. Change to the root of your Docker repository. ```bash $ cd docker-fork ``` @@ -91,7 +91,7 @@ hour. To run the test suite, do the following: It can take approximate one hour to run all the tests. The time depends on your host performance. The default timeout is 60 minutes, which is - defined in hack/make.sh(${TIMEOUT:=60m}). You can modify the timeout + defined in `hack/make.sh` (`${TIMEOUT:=60m}`). You can modify the timeout value on the basis of your host performance. When they complete successfully, you see the output concludes with something like this: @@ -242,7 +242,7 @@ make any changes, just run these commands again. The Docker documentation source files are in a centralized repository at [https://github.com/docker/docker.github.io](https://github.com/docker/docker.github.io). The content is -written using extended Markdown, which you can edit in a plain text editor such +written using extended Markdown, which you can edit in a plain text editor such as Atom or Notepad. The docs are built using [Jekyll](https://jekyllrb.com/). Most documentation is developed in the centralized repository. The exceptions are @@ -361,4 +361,4 @@ docs.docker.com, but you will be able to preview the changes. Congratulations, you have successfully completed the basics you need to understand the Docker test framework. In the next steps, you use what you have learned so far to [contribute to Docker by working on an -issue](../workflow/make-a-contribution.md). 
\ No newline at end of file +issue](../workflow/make-a-contribution.md). diff --git a/opensource/ways/community.md b/opensource/ways/community.md index bdea7005cd..e7d08dfd12 100644 --- a/opensource/ways/community.md +++ b/opensource/ways/community.md @@ -36,10 +36,10 @@ open user questions on the Docker project. Docker contributors are people like you contributing to Docker open source. Contributors may need help with IRC, Go programming, Markdown, or with other -aspects of contributing. To help Docker contributors: +aspects of contributing. To help Docker contributors, visit: * the Docker Gitter IM room * the docker-dev Google group -* the `#docker-dev` channel on Freenode IRC \ No newline at end of file +target="_blank">docker-dev Google group +* the `#docker-dev` channel on Freenode IRC diff --git a/opensource/ways/issues.md b/opensource/ways/issues.md index 9d1150a1b1..220dca683c 100644 --- a/opensource/ways/issues.md +++ b/opensource/ways/issues.md @@ -44,7 +44,7 @@ Follow these steps: 3. Choose an issue from the [list of untriaged issues](https://github.com/docker/docker/issues?q=is%3Aopen+is%3Aissue+-label%3Akind%2Fproposal+-label%3Akind%2Fenhancement+-label%3Akind%2Fbug+-label%3Akind%2Fcleanup+-label%3Akind%2Fgraphics+-label%3Akind%2Fwriting+-label%3Akind%2Fsecurity+-label%3Akind%2Fquestion+-label%3Akind%2Fimprovement+-label%3Akind%2Ffeature). -4. Follow the the triage process to triage the issue. +4. Follow the triage process to triage the issue. The triage process asks you to add both a `kind/` and a `exp/` label to each issue. Because you are not a Docker maintainer, you add these through comments. @@ -53,7 +53,7 @@ Follow these steps: ![Example](../images/triage-label.png) For example, the `+exp/beginner` and `+kind/writing` labels would triage an issue as - beginner writing task. For descriptions of valid labels, see the the triage process + beginner writing task. For descriptions of valid labels, see the triage process. -5. Triage another issue. 
\ No newline at end of file +5. Triage another issue. diff --git a/opensource/ways/meetups.md b/opensource/ways/meetups.md index ea776d31fe..bfbdb7fe0a 100644 --- a/opensource/ways/meetups.md +++ b/opensource/ways/meetups.md @@ -40,8 +40,8 @@ We can support the co-organizers of the Docker Meetup Groups based on their spec * Put you in contact with other people interested in being a co-organizer of a Docker Meetup group, and which are in the same area * Put you in contact with companies willing to host a Docker Meetup in your area * Introduce you to people willing to give a lightning talk about Docker -Promote your Docker Group on Docker.com, Docker Weekly and Social Media -Hackday Picture +* Promote your Docker Group on Docker.com, Docker Weekly and Social Media + Hackday Picture ## Host a Docker meetup at your location @@ -52,8 +52,7 @@ Hackday Picture We are always looking for new office space to host Docker Meetups. If your company is willing to host a Docker Meetup, please contact us by email at meetup@docker.com. Previous Docker Meetups have been hosted by companies such -as Rackspace, Twitter, MongoDB, BrightCove, DigitalOcean, Viadeo and Edmodo - +as Rackspace, Twitter, MongoDB, BrightCove, DigitalOcean, Viadeo and Edmodo. ### How many attendees? The company hosting the event fixes the number of attendees depending on their @@ -63,4 +62,4 @@ office size and availability. This number usually varies between 30 and 200. Once again, each company hosting the event decides when does the meetup start, and how long it lasts. Usual meetups tend to last 2 hours, and start between -4pm and 6pm. \ No newline at end of file +4pm and 6pm. diff --git a/opensource/ways/test.md b/opensource/ways/test.md index 80ec316d62..b516175aaf 100644 --- a/opensource/ways/test.md +++ b/opensource/ways/test.md @@ -11,15 +11,15 @@ time to do everything. Choose to contribute testing if you want to improve Docker software and processes. 
Testing is a good choice for contributors that have experience -software testing, usability testing, or who are otherwise great at spotting +in software testing, usability testing, or who are otherwise great at spotting problems. # What can you contribute to testing? * Write a blog about how your company uses Docker its test infrastructure. * Take an online usability test or create a usability test about Docker. -* Test one of Docker's official images -* Test the Docker documentation +* Test one of Docker's official images. +* Test the Docker documentation. # Test the Docker documentation diff --git a/opensource/workflow/advanced-contributing.md b/opensource/workflow/advanced-contributing.md index e01810c3de..4e56e77250 100644 --- a/opensource/workflow/advanced-contributing.md +++ b/opensource/workflow/advanced-contributing.md @@ -66,7 +66,7 @@ The following provides greater detail on the process: 2. Review existing issues and proposals to make sure no other user is proposing a similar idea. The design proposals are - [all online in our GitHub pull requests](https://github.com/docker/docker/pulls?q=is%3Aopen+is%3Apr+label%3Akind%2Fproposal){: target="_blank" class="_"} + [all online in our GitHub pull requests](https://github.com/docker/docker/pulls?q=is%3Aopen+is%3Apr+label%3Akind%2Fproposal){: target="_blank" class="_"}. 3. Talk to the community about your idea. @@ -143,6 +143,7 @@ Design proposal discussions can span days, weeks, and longer. The number of comm In that situation, following the discussion flow and the decisions reached is crucial. 
Making a pull request with a design proposal simplifies this process: + * you can leave comments on specific design proposal line * replies around line are easy to track * as a proposal changes and is updated, pages reset as line items resolve diff --git a/opensource/workflow/coding-style.md b/opensource/workflow/coding-style.md index a09e434ffe..cb74162d88 100644 --- a/opensource/workflow/coding-style.md +++ b/opensource/workflow/coding-style.md @@ -65,7 +65,7 @@ program code and documentation code. * Include documentation changes in the same commit so that a revert would remove all traces of the feature or fix. -* Reference each issue in your pull request description (`#XXXX`) +* Reference each issue in your pull request description (`#XXXX`). ## Respond to pull requests reviews @@ -93,4 +93,4 @@ program code and documentation code. available almost immediately. * If you made a documentation change, you can see it at - [docs.master.dockerproject.org](http://docs.master.dockerproject.org/). \ No newline at end of file + [docs.master.dockerproject.org](http://docs.master.dockerproject.org/). diff --git a/reference.md b/reference.md index 95c0b18574..93fbed009f 100644 --- a/reference.md +++ b/reference.md @@ -8,39 +8,39 @@ various APIs, CLIs, and file formats. 
## File formats -| File format | Description | -| ----------- | ----------- | -| [Dockerfile](/engine/reference/builder/) | Defines the contents and startup behavior of a single container | -| [Compose file](/compose/compose-file/) | Defines a multi-container application | -| [Stack file](/docker-cloud/apps/stack-yaml-reference/) | Defines a multi-container application for Docker Cloud | +| File format | Description | +|:-------------------------------------------------------|:----------------------------------------------------------------| +| [Dockerfile](/engine/reference/builder/) | Defines the contents and startup behavior of a single container | +| [Compose file](/compose/compose-file/) | Defines a multi-container application | +| [Stack file](/docker-cloud/apps/stack-yaml-reference/) | Defines a multi-container application for Docker Cloud | ## Command-line interfaces (CLIs) -| CLI | Description | -| --- | ----------- | -| [Engine CLI](/engine/reference/commandline/) | The main CLI for Docker, includes all `docker` and [`dockerd`](/engine/reference/commandline/dockerd/) commands. | -| [Compose CLI](/compose/reference/overview/) | The CLI for Docker Compose, which allows you to build and run multi-container applications | -| [Machine CLI](/machine/reference/) | Manages virtual machines that are pre-configured to run Docker | -| [UCP tool](/datacenter/ucp/2.0/reference/cli/) | Manages a Universal Control Plane instance | -| [Trusted Registry CLI](/docker-trusted-registry/reference/) | Manages a trusted registry | +| CLI | Description | +|:------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------| +| [Engine CLI](/engine/reference/commandline/) | The main CLI for Docker, includes all `docker` and [`dockerd`](/engine/reference/commandline/dockerd/) commands. 
| +| [Compose CLI](/compose/reference/overview/) | The CLI for Docker Compose, which allows you to build and run multi-container applications | +| [Machine CLI](/machine/reference/) | Manages virtual machines that are pre-configured to run Docker | +| [UCP CLI](/datacenter/ucp/2.1/reference/cli/index.md) | Manages a Universal Control Plane instance | +| [DTR CLI](/datacenter/dtr/2.2/reference/cli/index.md) | Manages a trusted registry | ## Application programming interfaces (APIs) -| API | Description | -| --- | ----------- | -| [Cloud API](/apidocs/docker-cloud/) | Enables programmatic management of your Docker application running on a cloud provider | -| [Docker ID accounts API](/docker-id/api-reference/) | An API for accessing and updating Docker ID accounts | -| [Engine API](/engine/api/) | The main API for Docker, provides programmatic access to a [daemon](/glossary/#daemon) | -| [Registry API](/registry/spec/api/) | Facilitates distribution of images to the engine | -| [Trusted Registry API](/apidocs/overview/) | Provides programmatic access to a trusted registry | +| API | Description | +|:-----------------------------------------------------------|:---------------------------------------------------------------------------------------| +| [Cloud API](/apidocs/docker-cloud/) | Enables programmatic management of your Docker application running on a cloud provider | +| [Docker ID accounts API](/docker-id/api-reference/) | An API for accessing and updating Docker ID accounts | +| [Engine API](/engine/api/) | The main API for Docker, provides programmatic access to a [daemon](/glossary/#daemon) | +| [Registry API](/registry/spec/api/) | Facilitates distribution of images to the engine | +| [Trusted Registry API](/datacenter/dtr/2.2/reference/api/) | Provides programmatic access to a trusted registry | ## Drivers and specifications -| Driver | Description | -| ------ | ----------- | -| [Image specification](/registry/spec/manifest-v2-2/) | Describes the various 
components of a Docker image | -| [Machine drivers](/machine/drivers/os-base/) | Enables support for given cloud providers when provisioning resources with Machine | -| [Registry token authentication](/registry/spec/auth/) | Outlines the Docker registry authentication scheme | -| [Registry storage drivers](/registry/storage-drivers/) | Enables support for given cloud providers when storing images with Registry | +| Driver | Description | +|:-------------------------------------------------------|:-----------------------------------------------------------------------------------| +| [Image specification](/registry/spec/manifest-v2-2/) | Describes the various components of a Docker image | +| [Machine drivers](/machine/drivers/os-base/) | Enables support for given cloud providers when provisioning resources with Machine | +| [Registry token authentication](/registry/spec/auth/) | Outlines the Docker registry authentication scheme | +| [Registry storage drivers](/registry/storage-drivers/) | Enables support for given cloud providers when storing images with Registry | diff --git a/registry/storage-drivers/s3.md b/registry/storage-drivers/s3.md index 7e00e878e0..16b5279f0f 100644 --- a/registry/storage-drivers/s3.md +++ b/registry/storage-drivers/s3.md @@ -20,10 +20,10 @@ Amazon S3 or S3 compatible services for object storage. accesskey - yes + no - Your AWS Access Key. + Your AWS Access Key. If you use [IAM roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), omit to fetch temporary credentials from IAM. @@ -31,10 +31,10 @@ Amazon S3 or S3 compatible services for object storage. secretkey - yes + no - Your AWS Secret Key. + Your AWS Secret Key. If you use [IAM roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), omit to fetch temporary credentials from IAM. @@ -160,7 +160,10 @@ Amazon S3 or S3 compatible services for object storage. `secretkey`: Your aws secret key. 
-**Note** You can provide empty strings for your access and secret keys if you plan on running the driver on an ec2 instance and will handle authentication with the instance's credentials. +> **Note**: You can provide empty strings for your access and secret keys if you run the driver +> on an EC2 instance and handle authentication with the instance's credentials. If you +> use [IAM roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), +> omit these keys to fetch temporary credentials from IAM. `region`: The name of the aws region in which you would like to store objects (for example `us-east-1`). For a list of regions, you can look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html diff --git a/swarm/index.md b/swarm/index.md index 67ccf4824e..04f795026a 100644 --- a/swarm/index.md +++ b/swarm/index.md @@ -4,6 +4,7 @@ hide_from_sitemap: true description: 'Swarm: a Docker-native clustering system' keywords: docker, swarm, clustering title: Docker Swarm +notoc: true --- If you decide to use standalone Docker Swarm, use these links to get started.
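The S3 storage-driver hunk above makes `accesskey` and `secretkey` optional so that a registry running on EC2 can rely on an IAM role instead of static credentials. As a minimal sketch of what that looks like in practice (the `region` and `bucket` values here are placeholders, not values taken from this patch), the `storage` section of a registry `config.yml` might read:

```yaml
version: 0.1
storage:
  s3:
    # accesskey and secretkey are intentionally omitted: when the registry
    # runs on an EC2 instance with an IAM role attached, the driver fetches
    # temporary credentials from IAM instead of using static keys.
    region: us-east-1            # placeholder region
    bucket: my-registry-bucket   # placeholder bucket name
```

With static credentials you would instead set `accesskey` and `secretkey` (or pass empty strings, as the note describes); the IAM-role form is preferable on EC2 because the credentials rotate automatically.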